### This is an electronic reprint of the original article. This reprint may differ from the original in pagination and typographic detail.
## Paavolainen, Santeri; Nikander, Pekka
Security and Privacy Challenges and Potential Solutions for DLT based IoT Systems
_Published in:_
2018 Global Internet of Things Summit (GIoTS)
_DOI:_
[10.1109/GIOTS.2018.8534527](https://doi.org/10.1109/GIOTS.2018.8534527)
Published: 15/11/2018
_Document Version_
Peer reviewed version
_Please cite the original version:_
Paavolainen, S., & Nikander, P. (2018). Security and Privacy Challenges and Potential Solutions for DLT based
IoT Systems. In 2018 Global Internet of Things Summit (GIoTS) IEEE.
[https://doi.org/10.1109/GIOTS.2018.8534527](https://doi.org/10.1109/GIOTS.2018.8534527)
This material is protected by copyright and other intellectual property rights, and duplication or sale of all or
part of any of the repository collections is not permitted, except that material may be duplicated by you for
your research use or educational purposes in electronic or print form. You must obtain permission for any
other use. Electronic or print copies may not be offered, whether for sale or otherwise to anyone who is not
an authorised user.
# Security and Privacy Challenges and Potential Solutions for DLT based IoT Systems
### Santeri Paavolainen and Pekka Nikander
Department of Communications and Networking
Aalto University
santeri.paavolainen@aalto.fi, pekka.nikander@aalto.fi
**_Abstract—The use of distributed ledger technologies introduces_**
**new security and privacy challenges. These challenges are dependent**
**on properties of the ledgers, such as transaction latency and**
**throughput. Some use cases may be outright impossible to implement**
**securely, or in a privacy-retaining manner. Consequently, it is**
**important that these concerns are taken into account when distributed**
**ledger technologies are evaluated and selected as building blocks**
**for higher-level systems. In this paper, we illustrate these concerns**
**through use case examples. We discuss the implications of these**
**concerns for the use of distributed ledgers within higher-level**
**systems, such as in SOFIE, a DLT-based approach to securely and**
**openly federate IoT systems.**
**_Index Terms_—Internet of Things; distributed ledgers;**
**blockchain; security; privacy.**
### I. Introduction
Research and innovation on blockchains and other distributed ledger technologies (DLT) has proliferated after the
success of Bitcoin. This initial popularity led Gartner to place
blockchains at the top of the hype cycle in 2016 [1]. As with
most highly hyped technologies, many mundane concerns,
such as security and privacy, play a catch-up game. In this
paper, we outline some major security and privacy challenges
related to DLT technologies and discuss how they relate to
Internet of Things (IoT) systems. We do this in the light of
three simplified use cases, thereby illustrating the challenges
and potential solutions involved.
In practical terms, a distributed ledger is a massively
replicated append-only data structure. Data can be added to it,
typically by anyone. Once data has been added to the ledger, it
_can never be removed._ This inability to remove data is perhaps
the most important feature of distributed ledgers. If data
could be removed, it would be questionable whether the system
could still be called a ledger at all.
The other essential feature of a distributed ledger is that
the data is massively replicated. In present systems, such as
Bitcoin and Ethereum, all maintainers keep an identical copy
of all the data. Open ledgers allow anyone to join the network
and download a copy of the data, at any time. Hence, there
are thousands of copies of the data, stored all over the world.
While the ledgers may become more efficient in the future in
the sense that not all maintainers keep all data, we surmise
that in order to retain the massive replication, even the future
systems will store hundreds if not thousands of copies for each
datum.
We focus on the security and privacy challenges related to
the use of distributed ledgers in the context of IoT devices.
More specifically, we leave beyond the scope of this paper
any security problems in the ledgers themselves.[1] Furthermore,
security and privacy risks that are merely related to the
_payment aspect of blockchains are beyond the scope of this_
paper, unless they are directly related to IoT applications.
To our knowledge, this paper is among the first systematic
reviews of the security and privacy risks related to combining
DLT and IoT.
The rest of this paper is organised as follows. First, in
Section II we outline three distinct use cases that we use
to illustrate some of our observations. Then, in Section III,
we discuss the main security and privacy challenges we have
observed. In Section IV we briefly outline some tentative
solutions. In Section V, which is very brief, we discuss the
related work. Finally, Section VI summarizes this paper and
discusses potential future work.
### II. Sample use cases
We focus on three illustrative use cases: a door lock, a
transportation container as a part of a larger IoT system,
and a smart building with multiple sensors and actuators.
Starting with the lock, it should provide the following essential
features:
_•_ The lock shall open when an authorized “key” is
present, and otherwise not.
_•_ All attempts to open the lock, whether successful or
not, must be duly recorded.
_•_ The lock shall work also when there is no Internet
connectivity, potentially with reduced functionality.
A trivial approach would store into the DLT an up-to-date list of the identities[2] of the authorized “keys”. In a
similar manner, all accesses may be recorded to the DLT
as separate transactions. Limited offline functionality could
be implemented by caching the latest known valid list of
authorized keys in the device memory.
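The trivial approach described above can be sketched in a few lines. This is a minimal illustration of the caching and audit-recording behaviour only, not a reference implementation; all names (`OfflineCapableLock`, `try_open`, `refresh`) are hypothetical, and a real lock would sign its audit records and verify the ledger's responses.

```python
import time


class OfflineCapableLock:
    """Illustrative lock controller: authorizes against a cached key
    list last fetched from the ledger, and queues audit records for
    later submission as DLT transactions when connectivity returns."""

    def __init__(self, cached_keys):
        # Last known valid list of authorized key fingerprints.
        self.cached_keys = set(cached_keys)
        self.pending_audit = []  # records awaiting DLT submission

    def refresh(self, keys_from_ledger):
        # Called whenever the device can reach a trusted ledger node.
        self.cached_keys = set(keys_from_ledger)

    def try_open(self, key_fingerprint):
        granted = key_fingerprint in self.cached_keys
        # Every attempt, successful or not, is duly recorded.
        self.pending_audit.append(
            {"key": key_fingerprint, "granted": granted, "ts": time.time()}
        )
        return granted
```

Note that between `refresh` calls the lock operates entirely on stale data, which is exactly the revocation-latency concern discussed later in the paper.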
1See the literature review by Conoscenti et al. [2] for a summary of various
security threats specific to distributed ledgers.
2Not really identities in the strict sense, but e.g. public cryptographic keys
or their fingerprints.
The case of a transportation container is more complex.
We focus on a container during transit. A number of IoT
devices are relevant: the container itself, the vessels or vehicles
used to transport it, any lifts or cranes used to handle it,
and potentially also any storage spaces where the container
may need to wait. All of these devices belong to potentially
different parties, with partially conflicting interests, especially
in view of liabilities, if a container gets lost, damaged, or
compromised.
The essential features for the IoT system appear to be the
following:
_•_ The system shall always know the whereabouts of
and the currently responsible party for all containers.
_•_ When a container arrives at a transit terminal, the
responsibility for the container shall be transferred
from the arriving vessel or vehicle to the appropriate
terminal operator.
_•_ When a container leaves a transit terminal, the
responsibility for the container shall be transferred
from the terminal operator to the departing vessel or
vehicle.
_•_ All transfers of responsibility shall be stored in a non-revocable and non-repudiable record.
Again, there appears to be a trivial solution: Simply use a
DLT to record all events on all containers. And, again, there
are a number of emerging challenges, discussed below.
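As a sketch of what such an event record might contain, the following hash-chains each responsibility transfer to its predecessor, making the log tamper-evident in the way a ledger is. This is an assumption-laden illustration: the function name and field layout are hypothetical, and a real deployment would add digital signatures from both the releasing and the accepting party.

```python
import hashlib
import json


def transfer_record(container_id, from_party, to_party, prev_digest):
    """Build one responsibility-transfer event and its digest.
    Including the previous record's digest chains the events together,
    so altering any past record invalidates every later digest."""
    body = {
        "container": container_id,
        "from": from_party,
        "to": to_party,
        "prev": prev_digest,
    }
    # Canonical encoding so every party computes the same digest.
    encoded = json.dumps(body, sort_keys=True).encode()
    return body, hashlib.sha256(encoded).hexdigest()
```

Each party can recompute the chain independently and compare digests, which mirrors how the DLT lets the parties cross-check each other's views without direct trust relationships.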
Our final example is a smart building, with a number of
sensors. In this case, we assume that the building and all the
sensors and actuators are owned by a single party.
Here the essential features appear to be quite similar to the
cases above:
_•_ The lights, ventilation, etc, shall be adjusted based
on human presence and action.
_•_ The sensor data and actuator adjustments shall be
recorded.
_•_ The adjustment system must work (at some level)
even if there is no Internet connectivity.
As in the two other use cases, there are a couple of simplistic
ways to apply DLTs, both with their problems. Firstly, of
course, a DLT can be used to record sensor data and actuation
events. Secondly, it may appear clever to use the so-called
smart contracts to process sensor data and generate actuation
commands. However, in this case we have to question even
the generic applicability of DLTs; their benefits seem meagre
compared to the problems related with them.
### III. Challenges
Given our three example cases — lock, container, and building — we now consider the security, privacy, and some other
challenges emerging from the proposed simplistic approaches.
We cover the various aspects one at a time, and briefly note
current problems in the light of the examples.
_A. Security challenges_
The usually considered computer security aspects include
integrity, confidentiality, and availability. To achieve those,
authentication, authorization, key management and timely
revocation of access rights are needed. Furthermore, in the
case of IoT we have to consider also physical security and
safety as well as the storage and backup of private keys.
**Integrity. One of the main benefits of DLT systems is the**
(near) impossibility of changing the data in the ledger. However, in today’s DLT systems this comes with a high cost: the
so-called full nodes must store the whole transaction history,
which is easily gigabytes or terabytes, typically preventing
IoT devices from acting as full nodes.[3] Even in situations
where memory saving techniques would enable some larger
IoT devices to participate as essentially full nodes, care must
be taken to ensure they will also meet future storage needs.
In quite practical terms, any individual IoT device must
either have access to a trusted full node, or be one, in order
to achieve the full security benefits. Furthermore, to cover
situations with expected intermittent connectivity, the full node
must be available locally, e.g. in the same local network with
our lock, container, or building. From the devices’ point of
view, this is similar to trusting a centralized server. Some IoT
systems may be able to avoid the use of fully trusted full nodes
by either accepting increased latencies during transaction
verification, or by accepting decreased security guarantees.
The container use case appears to have the most to gain from
the integrity guarantees of DLTs. In the container case, there
are multiple parties that must record the movements of the
container and refer to the recordings. For these parties, it is
in their interest to accept transfers of responsibility only when
the integrity of the ledger can be confirmed. Without going
into the details, the ability of the individual parties to each
record their view independently and then review the views of
the other parties appears beneficial. Here the DLT facilitates
the situation by providing an integrity-protected storage
without requiring any direct trust relationships. In the other
two cases, the integrity of the historical record may not be as
critical as in the container case, especially if the current state
is secure and valid.
**Availability. Another major purported benefit of DLTs is**
availability. With the thousands of replicated nodes, the DLTs
are assumed to provide unprecedented availability. Unfortunately, this benefit is difficult to achieve with resourceconstrained IoT devices, such as those used in the lock or
building use cases. Their limited storage capacity prevent from
keeping a full copy of the ledger, requiring the devices to rely
on either remote full nodes, or a local trusted node. Dealing
with intermittent connectivity can also affect availability due
to the time required for DLT synchronization. The building
case is probably the easiest to engineer for having high
availability of DLT access, with the lock being the hardest
and the container somewhere in between.
Hence, in the light of our example cases, the two main
benefits of DLTs — integrity and availability — do not appear
_to provide much benefit to many IoT systems, at least if the_
3A typical IoT device today has at most a few megabytes of memory, often
less, e.g. 64–512 kB.
DLTs are applied in a simplistic and straightforward manner.
We surmise that this is a general property of so-called siloed
IoT systems.
**Authentication and authorization. For authentication and**
authorization, IoT devices could use the DLT as a repository of
trust-related information and the IoT device would rely on the
timeliness and security properties of the ledger to ensure that
most recent and correct configuration was used. Alternatively,
a smart contract in a DLT could be used to actively verify
access and authorization by sending a transaction with suitably
protected parameters to the smart contract, and then reading
the response of the transaction from the ledger. In either case,
the IoT device must be configured with cryptographic keys,
smart contract addresses, etc. to provide security and integrity.
The first method can offer higher availability of up-to-date
authentication and authorization information than centralized
systems, although this only applies to IoT devices with reliable
and timely DLT access either directly or via trusted nodes.
For devices with intermittent or easily disrupted connectivity,
the first method carries a possibility of using stale data,
especially if timely operation is required (e.g. the lock case).
The second method may allow for higher flexibility, but it
suffers even more from intermittent connectivity, and unless
some secondary authentication mechanism is used, it suffers
greatly from DLT transaction latencies.
**Revocation. In a situation where access or other rights are**
revoked from a party, it is often crucial that the revocation
event is distributed in a timely and predictable manner. However, the large majority of today’s DLTs are relatively slow.
In Bitcoin it may take several tens of minutes before a new
transaction gets validated and recorded. While Ethereum is
faster, writing new information may still take in the order of a
minute. Here Iota [3] and Corda [4] appear to be substantially
better, with the average recording time being in the order
of seconds. However, it can be conceived that a resourceful
adversary could arbitrarily delay a revocation from being
accepted to the ledger by incentivising the individual nodes
into not accepting the transaction.
Hence, while DLTs appear as a great mechanism for storing and revoking authorisation data, the long confirmation
_latencies may make present-day DLTs unusable in practice_
_for IoT use cases with short to moderate timeliness_
_requirements._
**Confidentiality. All the information in a DLT is replicated[4]**
and therefore public by definition. Of course, some of the
data in the DLT may be encrypted. However, given the
permanent nature of the data and the continuous development
of cryptanalysis, there is a non-negligible probability that
any encrypted public data will become decryptable at some
point in the future. Therefore it is highly inadvisable to store
confidential data into a DLT even in an encrypted format.
4We surmise that even in the future DLT systems where the nodes do
not need to store the full data, the replicas of each datum must still be
stored at essentially random nodes. Doing otherwise is likely to unnecessarily
complicate the system and may easily lead to new security problems, e.g.
opening avenues for new types of denial-of-service attacks.
This applies especially to private or symmetric cryptographic
_keys, which should never be stored into a DLT or any other_
publicly available storage. In other words, the management of
such keys must take place outside of the DLT. This also means
that DLTs shall not be used to backup private keys.
Thus, if confidential information needs to be transferred to
or from an IoT device, this requires alternate information paths
to exist, which in turn may reduce the overall availability of
the IoT system. Alternatively, a hybrid or multi-ledger system
may be employed, with the confidential information stored in
a private DLT, accessible only to the participating IoT devices,
and only the public portion of operations (user identification,
payments etc.) performed on the public DLT.
**Using public keys as identifiers. In the IoT world, storing**
and backing up private keys may present a major problem,
if the keys are associated with value or other key-specific
semantic meaning. In general, while the IoT devices are small,
they may still contain a handful of private keys that are
specific to the device. In most cases, these keys cannot be
stored or backed up anywhere else, or storing them elsewhere
is cumbersome and adds additional security vulnerabilities
into the system. Furthermore, many IoT devices operate in
uncontrolled environments, and may be physically accessible
by adversaries. Hence, a common practice is to keep the device
specific keys as such, associating them just with the specific
device and nothing else.
_B. Privacy-related challenges_
From the privacy point of view, both of the main DLT
properties — immutability and availability — may give rise to
privacy challenges. Furthermore, there are challenges related
to privacy laws, including the European GDPR and the right
to be forgotten.
**Immutability of the data stored in blockchains can easily**
cause problems with privacy. The increasing pool of data
available in the ledger can be mined for insights, and dedicated techniques, such as correlation attacks, can reveal even
obfuscated information. Therefore, careful analysis is needed
to determine what information should and should not be stored
in any DLTs and what methods should be used to protect the
information. In most situations only hashes of the actual data
(e.g., the root of a Merkle tree) will be stored to blockchains.
Extremely sensitive data, such as cryptographic keys, must
only be stored privately.
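The hashing pattern mentioned above can be made concrete with a short Merkle-root sketch: only the single root hash needs to go on-chain, while the data items themselves stay off-chain. This is a minimal illustration assuming SHA-256 and the common convention of duplicating the last node on odd-sized levels; production systems would also need the proof-of-inclusion side.

```python
import hashlib


def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()


def merkle_root(leaves):
    """Compute the Merkle root over raw data items. Any change to any
    leaf changes the root, so committing just the root to a ledger
    protects the integrity of all off-chain items."""
    if not leaves:
        return sha256(b"")
    level = [sha256(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

A verifier holding the off-chain items recomputes the root and compares it with the on-chain value; nothing confidential ever touches the ledger.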
Transactions are always traceable in a DLT, by the very
definition of a distributed ledger. While transactions cannot be
directly tied to individuals — unless they contain unencrypted
personally identifiable information — any leakage of identifiable information will allow the tracing of all past and future
transactions made by the entity tied to a transaction. While
information-hiding techniques such as the use of tumblers
make it possible to obfuscate the transacting parties in some
cases, the success of obfuscation depends on the properties
(and security) of the tumbler service used and the type of
transaction attempted. It should also be noted that
developments in identifying transaction patterns may in the
future lead to previously obfuscated transactions becoming
traceable.
_C. Internet of Things point of view_
In addition to the traditional security and privacy concerns,
IoT devices pose a number of challenges that are specific to
the very nature of the IoT devices. That is, contrary to most
other ICT systems used widely, IoT systems tend to be used
long after their installation, from several years to even half a
century. Furthermore, most of today’s IoT devices do not have
any practical means of upgrading them, other than physical
replacement, which may be prohibitively expensive. In this
section, we have a brief look at some of these aspects.
To start with, IoT devices often have limited **reconfigurability** or none at all. They may become “stuck in the past”
regarding newer technological developments. Changes in the
DLT infrastructure and protocols may cause IoT devices to
either become isolated from the DLT, or to only have limited
functionality available. While it may be feasible to run a
deprecated or backward-compatible version of IoT backend
systems, it would seem unlikely that it is possible to run an
alternate DLT network for the purpose of supporting old IoT
devices. In this manner an IoT system trying to gain reliability
and security advantages of a public DLT system is also at
the mercy of that DLT system’s later developments. Long-term changes in a DLT’s development may also be difficult to
predict, as even an open ledger has an implicit governing body
subject to potentially diverging interests and incentives [5].
Considering the use cases, locks and buildings are relatively
accessible for upgrades, while containers would be likely to
be upgradeable only during select time windows during their
travels.
**Power requirements are often critical for IoT devices,**
and a large portion of IoT protocol concerns are related
to the power requirements of transmission of data over the
network. As noted before, devices with constrained CPU,
storage, and/or power capacities cannot participate in DLT
networks as full nodes, and are unable to store or process
the full ledger. Furthermore, integrating DLTs is likely to
increase network traffic, which in turn impacts the power
usage of the devices. Thus, it becomes important that any
use of DLTs takes the limited power budget of IoT devices
into account, for example, through the use of protocols that
allow tradeoffs between DLT security, latency guarantees,
and power requirements. Power requirements for the lock case
are especially problematic, as locks need to operate on
battery power for extended periods of time. The same logic
applies to unpowered containers, but is not so relevant for
powered containers.
Another difference between many IoT systems and typical
ICT systems is that the IoT systems directly control real life
utilities or other functions whose failure may have severe or
even fatal consequences to humans. Hence, their **resilience**
**and robustness** requirements may be far more stringent
than even e.g. for financial systems.
### IV. Tentative approaches
Given the considerations above, we conjecture that in most
_cases individual IoT devices should not be directly connected_
_to any DLT. Instead, typical well engineered approaches will_
be hybrid systems where the individual, resource constrained,
IoT devices talk only to a handful of local, trusted “gateways.”
These gateway nodes will then have more resources, be better
protected and upgradeable, and — perhaps most importantly
— any mission critical functionality will not depend on any
DLTs being continuously available.
Hence, in the rest of this section, we focus on the consequences of this conjecture, discussing how the security
and privacy challenges might be addressed within the IoT
_platforms, i.e. infrastructure nodes (including above mentioned_
local “gateways”) that process IoT data and take part in
the coarse-grained control of the missions that the IoT devices
are implementing through actuation. This approach may be
considered as an example of the so-called hybrid DLT systems,
where a part of the system is an “open” blockchain while other
parts of the system are “closed” or permissioned.
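The gateway conjecture can be illustrated with a small sketch: constrained devices report events over the local network only, and the gateway periodically anchors a digest of the accumulated batch to the open ledger. The `Gateway` class and the `ledger_append` callback are hypothetical names for this illustration; a real gateway would additionally handle signing, retries, and device authentication.

```python
import hashlib


class Gateway:
    """Hypothetical local gateway. Devices never talk to the DLT
    directly; the gateway buffers their events and commits only a
    fingerprint of each batch, so mission-critical functionality does
    not depend on continuous ledger availability."""

    def __init__(self, ledger_append):
        self.ledger_append = ledger_append  # e.g. a DLT client call
        self.batch = []

    def report(self, device_id, event):
        # Local-network call from a constrained device.
        self.batch.append(f"{device_id}:{event}")

    def anchor(self):
        # Only the digest of the batch goes on-chain; the events
        # themselves stay in the (better protected) gateway.
        digest = hashlib.sha256("\n".join(self.batch).encode()).hexdigest()
        self.ledger_append(digest)
        self.batch = []
        return digest
```

The design choice here is the one argued for in the text: the ledger's role shrinks to providing distributed trust anchors, while availability for the devices depends only on the local gateway.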
From the integrity, confidentiality, authentication and
**authorisation point of view, the baseline approaches are well**
progressing in some of the ongoing work, e.g. in the Sovrin
Foundation “identity” blockchain [6]. One basic idea is to
separate all identity information into individual attributes,
such as birth date, first name, and present only the attributes
necessary and nothing else. From the DLT point of view, this
means that the DLT itself works in a role somewhat similar to a
traditional certificate authority (CA) or certificate revocation
list (CRL). In other words, the DLT stores data about trust
_anchors and their relationships, while the actual data relating_
to privacy sensitive identities is stored by the parties themselves. Hence, the DLT is used to maintain the integrity of the
trust anchors while session and data confidentiality and any
decisions requiring authentication and/or authorisation take
place outside of the open DLT.
Considering availability and blockchain latency, one approach is to combine several blockchains, cf. e.g. Polkadot [7]
and SOFIE [8]. Polkadot outlines a scalable, heterogeneous
multi-chain protocol aimed to be backwards compatible with
existing blockchain networks. The goal is an extensible system
that has a lower cost structure than a standard blockchain
design. The SOFIE project attempts to take the approach one
step further in the IoT space, by federating IoT systems
with an inter-ledger transaction layer.
From the privacy point of view, an _attribute-oriented approach_, promoted e.g. by the Sovrin blockchain, may completely
dismiss the use of permanent identifiers, replacing them
with secure but ephemeral peer-to-peer connections that are
associated with security-related attributes. Especially when the
attributes are combined with zero knowledge protocols, a party
may prove that it has certain rights or possesses certain attributes
without revealing anything about its identity.
Another approach, promoted by e.g. Sovrin, MyData [9],
SOFIE, and many others, is storing all privacy critical data
_off-chain and only referring to the data from the chain, if_
so desired. In general, strictly confidential data must not be
directly stored in a DLT, not even in an encrypted form,
due to the high probability that all encryption algorithms will
become weak sooner or later. Hence, for example, the Sovrin
approach is that the parties themselves store their privacy-critical attributes and may use zero-knowledge proofs to show
that they possess certain attributes without revealing any non-ephemeral identifiers or other knowledge that would allow
their “identities” to be linked.
A variant of this would be storing only partial data in a
DLT. In such an approach, the data would be cryptographically
split (or “shared”) [10]. An almost opposite approach would be
_storing the whole state in a DLT [11]._ With respect to our use cases, such
hybrid approaches would most probably be very useful for the
lock and building cases, while the container use case could
possibly be based on a more direct DLT approach, depending
on the latency requirements.
### V. Related work
There appear to be very few peer-reviewed papers in the
domain of applying blockchains to IoT. Furthermore, those
published seem to err more to the side of proposing how
blockchains could be used with IoT rather than systematically
analysing the potential problems. In this section, we briefly
summarize the few papers and about a dozen newsletters
and blog posts covering the security and privacy issues relating
to DLT and IoT integration.
To our knowledge, Fremantle and Scott [12] were the
first authors that discussed IoT security and privacy, also
considering blockchains. However, they merely remarked that
blockchains may have potential in solving the cloud integrity
and authentication problems for IoT, not considering the
potential challenges. Conoscenti et al. [2] gave a systematic
literature review on blockchains and IoT, finding only four
use cases explicitly designed for IoT. They briefly considered
blockchain security and noted that user-related privacy issues
may arise, without really going much deeper. Dorri et al. [13],
[14] have proposed a solution where each smart building has
a separate local blockchain, though without proof-of-work
mining and with a hierarchical structure. Their approach to
privacy issues related to the use of blockchains is to store the
private information primarily on the user-controlled private
blockchain. While this approach is suitable for environments
entirely under user’s control, it cannot be extended directly to
situations where separated IoT systems need to communicate.
Kshetri [15] discussed the applicability of blockchains to IoT
security in the light of a number of IoT security incidents,
most of the time suggesting straightforward solutions, and
therefore probably suffering from many of the problems we
have outlined above. Laszka et al. [16] considered an electricity
trading use case, where public trading transactions in a DLT
have a potential to expose personal information (e.g. electricity
usage patterns) through automated trading by the IoT devices
comprising the smart grid. However, they consider a narrow
concern and do not discuss more general problems related to
the use of DLTs by IoT devices. Khan et al. [17] perform a
systematic review of possible attacks specific to IoT systems,
and discuss potential benefits of using blockchains regarding
the discussed attack categories. Many of the attacks described
by Khan et al. can also be used to disrupt IoT devices’ access
to DLTs; however, to us their approach of using blockchains
appears optimistic and glosses over a large portion of the
practical problems discussed in this paper.
In a BBVA Open Mind blog post, Banafa [18] claimed
that “Blockchain technology is the missing link to settle
scalability, privacy, and reliability concerns in the Internet
of Things.” As should be clear by now from the above, we by and large disagree. His second article in the IEEE Internet
of Things newsletter [19] appears to be somewhat more
balanced, but still claimed that the “Blockchain technology
is the missing link to settle privacy and reliability concerns
in the Internet of Things.” However, in addition to heavily
promoting blockchains as the “perhaps [being] the silver bullet
needed by the IoT industry”, he acknowledges that there are
challenges related to blockchain scalability, power and storage
consumption, confirmation latencies, general lack of human
skill, and legal and compliance issues.
More often than not, the media and industry analysts focus on the apparent benefits of IoT and DLTs. Consider, for example, reports from Accenture [20] and Forbes [21], which are quite uncritical in their portrayal of IoT and DLTs. Even where potential problems are highlighted, the focus tends to remain on technology, operational, legal, and compliance issues [22].
### VI. Conclusions
Having topped the Gartner hype cycle in 2016, blockchains and other DLTs have been suggested as a security solution for numerous areas, with some even claiming they are perhaps "the silver bullet needed by the IoT industry" [19]. We have briefly but systematically discussed a number of security and privacy challenges related to using DLTs in the context of IoT systems. Based on our admittedly early analysis, and while admitting that DLTs may have a role in securing some IoT system use cases, it appears unwise to us to use (open) DLTs _directly_ with IoT devices or for storing IoT-related data as such. On the other hand, more advanced solutions, where the DLT's role is reduced to that of a traditional trusted third party and/or to storing fingerprints of data, possibly with smart-contract oracles, may well prove quite useful. In such solutions the security- and privacy-critical data is stored off-chain, in more traditional and separately protected systems, with open DLTs used only to facilitate interoperability by providing distributed trust anchors.
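As a minimal illustration of this pattern — the data itself kept off-chain and only a fingerprint anchored in a ledger — the following Python sketch (our own hypothetical illustration, not part of any cited system) computes a SHA-256 digest over a canonically serialized batch of IoT readings; only the digest would be submitted to the DLT, while the readings remain in a conventionally protected store.

```python
import hashlib
import json

def fingerprint(batch):
    """Deterministically serialize a batch of IoT readings and hash it.

    Only this hex digest would be anchored on an open ledger; the
    readings themselves remain in an off-chain, access-controlled store.
    """
    canonical = json.dumps(batch, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(batch, anchored_digest):
    """Re-hash the off-chain data and compare it with the on-chain anchor."""
    return fingerprint(batch) == anchored_digest

readings = [{"sensor": "temp-01", "t": 1528848000, "value": 21.4}]
digest = fingerprint(readings)   # 64 hex characters, small enough to anchor on-chain
assert verify(readings, digest)  # untampered off-chain data matches the anchor
assert not verify(readings + [{"sensor": "temp-01", "t": 0, "value": 0}], digest)
```

Note that the canonical JSON serialization is what makes verification reproducible across parties; any party holding the off-chain data can independently recompute the digest and compare it with the ledger anchor.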
Hence, it appears to us that more work is needed before we can integrate open DLTs into IoT systems in such a way that the business benefits clearly outweigh the potential security and privacy problems. Firstly, we believe that a viable inter-ledger approach needs to be developed, allowing multiple ledgers to be used at the same time. Secondly, we need to identify typical patterns for which data should be stored in a public ledger, which is better left in a private ledger, and what should be left outside of ledgers altogether. In general, we expect various hybrid approaches to emerge, wherein the DLTs will typically have a relatively minor but important role.
### Acknowledgments
This project has received funding from the European
Union’s Horizon 2020 research and innovation programme
under grant agreement No 779984.
### References
[1] K. Panetta. (2017-08-15). Top Trends in the Gartner
Hype Cycle for Emerging Technologies, 2017, [Online].
Available: https://blogs.gartner.com/smarterwithgartner/
top-trends-in-the-gartner-hype-cycle-for-emergingtechnologies-2017/ (visited on 2017-12-11).
[2] M. Conoscenti, A. Vetrò, and J. C. D. Martin,
“Blockchain for the Internet of Things: A systematic
literature review”, in 2016 IEEE/ACS 13th Interna_tional Conference of Computer Systems and Applica-_
_tions (AICCSA), 2016-11, pp. 1–6. DOI: 10 . 1109 /_
AICCSA.2016.7945805.
[3] S. Popov, “The tangle”, Version 1.3, 2017-10-01. [Online]. Available: https://iota.org/IOTA_Whitepaper.pdf.
[4] R. G. Brown. (2016-04-05). Introducing R3 Corda: A
Distributed Ledger Designed for Financial Services,
[Online]. Available: http://www.r3cev.com/blog/2016/
4/4/introducing-r3-corda-a-distributed-ledger-designedfor-financial-services (visited on 2017-12-11).
[5] J. Mattila and T. Seppälä, “Distributed Governance in
Multi-Sided Platforms,” Washington DC, United States:
Industry Studies Association Conference, 2017.
[6] D. Reed, J. Law, and D. Hardman, “The Technical
Foundations of Sovrin”, 2016-09-29. [Online]. Available: https://sovrin.org/wp-content/uploads/2017/04/
The-Technical-Foundations-of-Sovrin.pdf.
[7] G. Wood, “Polkadot: Vision for a heterogeneous multichain framework”, 2016. [Online]. Available: https://
github . com / w3f / polkadot - white - paper / raw / master /
PolkaDotPaper.pdf.
[8] A. Karila, Y. Kortesniemi, D. Lagutin, P. Nikander, N.
Fotiou, G. Polyzos, V. Siris, and T. Zahariadis, “SOFIE
- Secure Open Federation”, Draft version 0.3, 2017-08.
[9] A. Poikola, K. Kuikkaniemi, and H. Honko, Mydata
_a Nordic Model for Human-Centered Personal Data_
_Management and Processing. 2015, ISBN: 978-952-_
243-455-5.
[10] A. Shamir, “How to Share a Secret”, Commun.
_ACM, vol. 22, no. 11, pp. 612–613, 1979-11, ISSN:_
0001-0782. DOI: 10.1145/359168.359176.
[11] T. Aura and P. Nikander, “Stateless connections”, in
_Information and Communications Security, ser. Lecture_
Notes in Computer Science, Springer, Berlin, Heidelberg, 1997-11-11, pp. 87–97, ISBN: 978-3-540-636960. DOI: 10.1007/BFb0028465.
[12] P. Fremantle and P. Scott, “A survey of secure middleware for the Internet of Things”, PeerJ Computer
_Science, vol. 3, e114, 2017-05-08, ISSN: 2376-5992._
DOI: 10.7717/peerj-cs.114.
[13] A. Dorri, S. S. Kanhere, and R. Jurdak, “Blockchain
in internet of things: Challenges and Solutions”,
2016-08-18. arXiv: 1608.05187 [cs]. [Online]. Available: http : / / arxiv . org / abs / 1608 . 05187 (visited on
2017-12-11).
[14] A. Dorri, S. S. Kanhere, R. Jurdak, and P. Gauravaram, “Blockchain for IoT security and privacy: The
case study of a smart home,” in 2017 IEEE Interna_tional Conference on Pervasive Computing and Com-_
_munications Workshops (PerCom Workshops), 2017-03,_
pp. 618–623. DOI: 10 . 1109 / PERCOMW . 2017 .
7917634.
[15] N. Kshetri, “Can Blockchain Strengthen the Internet of
Things?”, IT Professional, vol. 19, no. 4, pp. 68–72,
2017, ISSN: 1520-9202. DOI: 10.1109/MITP.2017.
3051335.
[16] A. Laszka, A. Dubey, M. Walker, and D. Schmidt, “Providing Privacy, Safety, and Security in IoT-Based Transactive Energy Systems using Distributed Ledgers”,
2017-09-27. arXiv: 1709.09614 [cs]. [Online]. Available: http : / / arxiv . org / abs / 1709 . 09614 (visited on
2017-12-05).
[17] M. A. Khan and K. Salah, “IoT security: Review,
blockchain solutions, and open challenges,” Future
_Generation Computer Systems, vol. 82, pp. 395–411,_
2018-05-01, ISSN: 0167-739X. DOI: 10.1016/j.future.
2017.11.022.
[18] A. Banafa. (2016-10-24). Securing the Internet of
Things (IoT) with Blockchain, [Online]. Available:
https : / / www . bbvaopenmind . com / en / securing - the internet - of - things - iot - with - blockchain/ (visited on
2017-12-11).
[19] ——, “IoT and Blockchain Convergence: Benefits and
Challenges”, IEEE IoT Newsletter, 2017-01-10. [Online]. Available: https://iot.ieee.org/newsletter/january2017/iot-and-blockchain-convergence-benefits-andchallenges.html (visited on 2017-12-11).
[20] F. Papleux. (2016-05-24). Blockchain Technology to
solve Internet of Things problems, [Online]. Available:
https://www.accenture.com/us-en/blogs/blogs-usingblockchain-solve-internet-things-problems (visited on
2017-12-11).
[21] J. Chester. (2017-04-28). How Blockchain Startups Will
Solve The Identity Crisis For The Internet Of Things,
[Online]. Available: https : / / www . forbes . com / sites /
jonathanchester/2017/04/28/how-blockchain-startupswill-solve-the-identity-crisis-for-the-internet-of-things/
(visited on 2017-12-11).
[22] i-scoop. (2017-09). Blockchain and the Internet of
Things: The IoT blockchain picture, [Online]. Available:
https://www.i-scoop.eu/blockchain-distributed-ledgertechnology/blockchain-iot/ (visited on 2017-12-11).
Islamic Azad University, Eslamshahr Branch. Posted August 30th, 2022. https://doi.org/10.21203/rs.3.rs-1341589/v1. This work is licensed under a Creative Commons Attribution 4.0 International License.
## **Designing a forecasting assistant of the Bitcoin price based on deep learning using market sentiment analysis and multiple feature extraction**

*Sina Fakharchian* [1][*]

[1] *Islamic Azad University, Eslamshahr Branch, Department of Computer Engineering, Iran*

[*] *Corresponding author email address:* [Sina.cbar@gmail.com](mailto:Sina.cbar@gmail.com)
### **Abstract**
Nowadays, fluctuations in the price of the digital currency Bitcoin have a striking impact on the profit or loss of people, on international relations, and on trade. Accordingly, designing a model that can take the various significant factors into account to predict the Bitcoin price with the highest accuracy is essential. Hence, the current paper presents several Bitcoin price prediction models based on Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) using market sentiment and multiple feature extraction. In the proposed models, several inputs, including Twitter data, news headlines, news content, Google Trends, Bitcoin-based stock, and financial data, are employed with deep learning to make a more accurate prediction. Besides, the proposed model applies Valence Aware Dictionary and Sentiment Reasoner (VADER) sentiment analysis to examine the latest news of the market and cryptocurrencies. According to the various inputs and analyses of this study, several effective feature selection methods, including mutual information regression, linear regression, correlation-based selection, and a combination of these feature selection models, are exploited to predict the price of Bitcoin. Finally, a careful comparison is made between the proposed models in terms of performance criteria such as Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), and the coefficient of determination (R²). The obtained results indicate that the proposed hybrid model based on sentiment analysis and combined feature selection, with an MSE value of 0.001 and an R² value of 0.98, provides better estimates of the Bitcoin price with smaller errors. The proposed model can also be employed as a personal assistant for more informed trading decisions associated with Bitcoin.

**Keywords:** Artificial intelligence; Price prediction assistant; Deep learning; Feature selection; Sentiment analysis
### **1. Introduction**
The last decades have witnessed remarkable growth in the use of digital currencies by people and organizations. Nowadays, the topic of cryptocurrencies has received much attention and is being widely examined in the literature (Chaudhari and Crane, 2020; Dai et al., 2021; ElRahman and Alluhaidan, 2021; Li et al., 2021; Zuiderwijk et al., 2021). In the modern world, cryptocurrency has been introduced as a novel and emerging concept which is governed by cryptographic protocols using a blockchain (Chohan, 2017). This concept of cryptocurrency has revolutionized the way people think about money (Pant et al., 2018). Also, the value of cryptocurrencies has risen significantly due to their continuously growing adoption and widespread usage in the real world. Given the striking value of
cryptocurrencies, some people consider them equal to real or fiat currencies, while others regard them as a good investment opportunity. Starting from $863 on January 9, 2017, the value of a Bitcoin increased by roughly 2000% and reached its then-highest price level of $17,550 on December 11, 2017. Eight weeks later, on February 5, 2018, the price of Bitcoin had fallen to less than half of that, i.e., about $7,900. Nevertheless, the promising technology behind cryptocurrencies, namely the blockchain, is set to further increase the use of cryptocurrencies. Kristoufek stated that Bitcoin is a unique asset and that its price acts like that of a standard financial asset (Kristoufek, 2015). Bitcoin is regarded as the first decentralized digital currency, in which transactions are conducted directly between users with no intermediary (Matta et al., 2015; Naimy and Hayek, 2018). This type of currency is fundamentally different from what is typically employed in a prevalent monetary system. Cryptocurrency is created through mining, which has led to considerable variations in the online economic activities of users worldwide (Jain et al., 2018). Because the price of cryptocurrencies does not behave as in the past, it is very difficult to predict. Additionally, the large fluctuations in the price of cryptocurrencies, random effects in the market, and the influence of various factors on the price of Bitcoin have become a globally novel challenge. Hence, predicting the variations in the price of the cryptocurrency Bitcoin is of great importance. On the other hand, there are many opportunities for better understanding the drivers of the Bitcoin price (Karalevicius et al., 2018).
Moreover, since no central governing authority controls the digital currency and it is mainly affected by the general public, Bitcoin is regarded as a volatile currency that changes based on socially constructed ideas. Therefore, sentiment analysis is of great importance in the prediction of Bitcoin, and many authors have studied it in this regard. The work of economists such as Daniel Kahneman and Amos Tversky has shown that decisions in this field are influenced by sentiments (Kahneman and Tversky, 1979). The study of R. J. Dolan regarding "Emotions, Cognition, and Behavior" further confirms that decision-making is strongly affected by sentiments (Dolan, 2002). In fact, sentiment analysis indicates that demand for a product, and consequently its price, may be influenced by more than its economic fundamentals. In recent years, researchers have specifically found that people's purchasing decisions are affected by the data they collect online (Mittal et al., 2019). Galen Thomas Panger stated that Twitter sentiments are related to people's overall sentimental state. In addition, it was revealed that social media such as Twitter has a calming effect rather than reinforcing the user's sentimental state (Panger, 2017). Based on a textual analysis of "Seeking Alpha", a social platform aimed at investors, Chen et al. stated that the comments on submitted "Seeking Alpha" articles were highly informative and could even predict earnings surprises (Chen et al., 2013). In a similar study, Tetlock demonstrated that high levels of media pessimism in the stock market directly affect trading volume (Tetlock, 2007). Finally, in another study, Gartner pointed out that most users use social media to make their final purchasing decisions (Pettey, 2010). Over time, extensive literature has developed on the effectiveness of tweet sentiments. Kouloumpis et al. showed that standard methods of natural language processing, such as sentence scoring, were ineffective due to the short nature of tweets and the uniqueness of this writing style
(Kouloumpis et al., 2011). Pak and Paroubek divided individual tweets into positive, negative, or neutral categories so that a computer could better understand their sentiment (Pak and Paroubek, 2010). O'Connor et al. indicated that the sentiments in tweets reflect public opinion on various topics covered in opinion surveys (O'Connor et al., 2010). That study identified sentiment analysis as a more cost-effective option than public opinion surveys. Moreover, tweet-derived sentiments accurately reflect the sentiments of the majority of people on a topic; hence, they can be considered for predicting demand and, in turn, variations in product prices. In another study, researchers found that employment-related searches were related to the unemployment rate (Ettredge et al., 2005). A relationship between search query volume and the volume of stock trading on NASDAQ was observed in the study of Bordino et al. (Bordino et al., 2012). Choi and Varian have also conducted specific studies on Google Trends and presented remarkable results (Choi and Varian, 2012). According to the results of that study, simple seasonal models that take trend data as input outperform models that do not use Google Trends. Also, Asur et al. found that how strongly the keywords of newly released films were trending accurately predicted their box-office revenue (Asur and Huberman, 2010). Overall, sentiment data can be used to predict variations in macroeconomic statistics, and many studies have been performed in this field. Several researchers, including Choi and Varian (Choi and Varian, 2012) and Ettredge et al. (Ettredge et al., 2005), have claimed that web-based search data, i.e., Google Trends data, can be particularly useful for predicting the price of Bitcoin.
Dennis and Yuan collected capacity scores in tweets associated with the S&P 500 companies and found a correlation between them and stock prices (Sul et al., 2014). De Jong et al. analyzed minute-by-minute stock prices and tweet data for 30 stocks in the Dow Jones Industrial Average (de Jong et al., 2017). Accordingly, it was revealed that 87% of stock returns were affected by such tweets; however, the authors also observed the reverse effect, in that the prices affected tweets. Bollen et al. used a self-organizing fuzzy neural network to predict price changes in the Dow Jones Industrial Average and obtained 86.7% accuracy using Twitter sentiments (Bollen et al., 2011). Evita Stenqvist and Jacob Lönnö presented a study, "Predicting Bitcoin price fluctuation by analyzing Twitter sentiments", and obtained striking results (Stenqvist and Lönnö, 2017). The authors collected and processed tweets regarding Bitcoin and Bitcoin prices from May 11 to June 11, eliminating unrelated or uninfluential tweets from the analysis. They then used the VADER (Valence Aware Dictionary and Sentiment Reasoner) method to analyze the tweets' text, categorizing the sentiment of each tweet and labeling it as negative, neutral, or positive. Lamon et al. employed the sentiment of news headlines and tweets to predict price changes in Bitcoin, Litecoin, and Ethereum (Lamon et al., 2017). The results of that study show the remarkable performance of logistic regression for classifying these tweets; the authors accurately predicted price increases 43.9% and price decreases 61.9% of the time. Colianni et al. collected tweets from November 15, 2015, to December 3, 2015, used Naive Bayes and Support Vector Machines to classify them, and reached higher accuracy for predicting price (Colianni et al., 2015). Finally, Shah et al. successfully presented a strategy using historical prices and Bayesian regression analysis (Shah and Zhang, 2014).
Traditional time series prediction techniques like Holt-Winters exponential smoothing rely on linear assumptions and, to be effective, require data that can be decomposed into trend, seasonal, and noise components (Chatfield and Yar, 1988). Since the Bitcoin market largely lacks seasonality and exhibits high volatility, the traditional methods are not useful. To tackle this drawback, deep learning (DL) technology has been introduced as a novel technique that reduces the cost and complexity of the calculations (McNally et al., 2018). Unlike traditional linear statistical models, artificial intelligence (AI) methods are able to capture nonlinear properties. Notably, artificial neural networks (ANNs) with deep learning algorithms are regarded as the most successful methods due to their remarkable predictive capabilities (Nakano et al., 2018). In a cutting-edge paper of 2017, A. Radityo et al. employed ANNs to forecast the next-day price of Bitcoin (Radityo et al., 2017). Four types of ANN algorithms were considered in that study, namely Neuro Evolution of Augmenting Topologies (NEAT), Genetic Algorithm Neural Network (GANN), Genetic Algorithm Backpropagation Neural Network (GABPNN), and Backpropagation Neural Network (BPNN). Using machine learning algorithms such as generalized linear models and random forests, Madan et al. modeled Bitcoin price prediction as a binomial classification problem (Madan et al., 2015). In 2008, Zhu et al. used the volume of stock transactions as a neural network input to improve forecasting performance in the medium and long term and presented acceptable results (Zhu et al., 2008). A modular neural network was employed by Kimoto et al. for predicting the best buying point (Kimoto et al., 1990). Guresen et al. compared the performance of different neural networks in stock market prediction and showed that a multilayer perceptron (MLP) neural network outperforms the others (Guresen et al., 2011). In contrast, S. McNally stated that the capabilities of recurrent neural networks (RNN) and long short-term memory (LSTM) outweigh the benefits of MLP due to the temporal nature of Bitcoin data (McNally et al., 2018). Similarly, in 2019 S. Tandon et al. presented a price prediction model to forecast the Bitcoin price using RNN and LSTM with 10-fold cross-validation. A careful comparison was made between that model and other available models, including RNN with LSTM, linear regression, and random forest, and its benefits were demonstrated with remarkable results. In a major advance in 2020, Dutta et al. used a gated recurrent unit method to forecast the Bitcoin price and obtained acceptable results (Dutta et al., 2020). In 2021, Ramadhan et al. also used LSTM-RNN for predicting the Bitcoin price (Ramadhan et al., 2021). A hybrid Bitcoin price prediction method based on ANNs using Bi-LSTM and Bi-RNN was also presented by Das et al. in 2021, and its benefits were revealed (Das et al., 2021). Despite this interest, no one, as far as we know, has studied Bitcoin price prediction considering Twitter data, news headlines, news content, Google Trends, Bitcoin-based stock, and financial data using CNN and LSTM. Using CNN and LSTM, the current paper aims to propose a model for forecasting the variations in the price of the cryptocurrency Bitcoin. For this purpose, a variety of textual sentiment sources, such as news headlines, news content, and tweets, are considered. The methods include using the Twitter API through the Python library 'Tweepy', extracting text and news content from the Telegram channel of the cryptocurrency reference site Cointelegraph, and receiving and extracting Google Trends data. First, tweets mentioning Bitcoin are collected from storage. Then, the tweets are analyzed to calculate a sentiment score and compare it with that of other days. After that, that day's price is examined
to determine if there is a relationship between tweets and price variations. As a result, variations in the price of cryptocurrencies can be determined using sentiment. A careful comparison is made between the proposed models in terms of performance criteria such as Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), and the coefficient of determination (R²). The major contributions of this paper are summarized as follows:

- Presenting Bitcoin price prediction models based on Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) using market sentiment and multiple feature extraction.
- Analyzing VADER sentiments to examine the latest news of the market and cryptocurrencies.
- Predicting the Bitcoin price using feature selection methods, including mutual information regression, linear regression, correlation-based selection, and a combination of these feature selection models.

The remainder of this paper is organized as follows: The fundamental concepts underlying the present study, including an overview of cryptocurrencies, Twitter, sentiment analysis, and Google Trends, are explained in the second section. Data collection is illustrated in the third section. The research method and the selection of model inputs are examined in the fourth section. Finally, a summary of the present study, the main conclusions, and suggestions for future studies are presented in the fifth section.
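For reference, the error criteria used throughout the comparisons can be computed as in the pure-Python sketch below. This is our own illustration (not the authors' code, which would typically rely on a library such as scikit-learn); the variable names and the toy data are invented for demonstration.

```python
import math
import statistics

def regression_metrics(y_true, y_pred):
    """Compute MSE, RMSE, MAE, MedAE, and R² for one prediction series."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n               # Mean Square Error
    rmse = math.sqrt(mse)                              # Root Mean Square Error
    mae = sum(abs(e) for e in errors) / n              # Mean Absolute Error
    medae = statistics.median(abs(e) for e in errors)  # Median Absolute Error
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot                         # coefficient of determination
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MedAE": medae, "R2": r2}

# Toy example: true prices vs. model predictions (hypothetical numbers).
m = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```

A model with lower MSE/RMSE/MAE/MedAE and an R² closer to 1 fits the price series better, which is the sense in which the hybrid model's reported MSE of 0.001 and R² of 0.98 are interpreted.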
### **2. Preliminaries**
The analysis presented in this paper needs an understanding of why and how cryptocurrencies differ from official currencies or from shares of companies on the traditional stock market. This section provides more information regarding such reasons and clarifies why these cryptocurrencies are used. Since cryptocurrencies are part of a more extensive technology (the blockchain), Twitter activity around them can be considered very informative. It should be noted that Google Trends data and the volume of tweets represent the overall interest in holding cryptocurrencies. Hence, the basic concepts concerning cryptocurrencies, Twitter, sentiment analysis, and Google Trends are described here.

*2.1 Blockchain and cryptocurrencies*

The data of the first cryptocurrency in the world are analyzed in this paper. Bitcoin is the largest cryptocurrency in terms of market size, followed by Ethereum, and was the first cryptocurrency to be created. The creation of Bitcoin is shrouded in mystery, since it was created in 2009 by a person or group of people using the name "Satoshi Nakamoto". At the same time as launching Bitcoin, Satoshi Nakamoto presented a paper entitled "Bitcoin: A Peer-to-Peer Electronic Cash System" (Nakamoto, 2008). In contrast to cash, the system outlined there is a peer-to-peer electronic payment method: cryptocurrency can be sent directly from one party to another without a third party to verify the transaction between them. This innovation is realized by the "blockchain", which acts as a shared ledger of all transactions, maintained by a peer-to-peer network in which the network verifies every transaction to prevent forgery.
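The shared-ledger idea can be made concrete with the toy sketch below. This is our own simplified illustration (real Bitcoin blocks additionally contain Merkle roots, timestamps, and proof-of-work): each block commits to the hash of its predecessor, so tampering with any earlier transaction breaks every later link.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "txs": transactions})

def chain_valid(chain):
    """Verify every block's 'prev' field against its actual predecessor's hash."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 1.5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 0.5}])
assert chain_valid(chain)
chain[0]["txs"][0]["amount"] = 999  # tamper with history...
assert not chain_valid(chain)       # ...and the link to the next block breaks
```

It is this tamper-evidence, replicated across the peer-to-peer network, that removes the need for a trusted third party to verify transactions.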
Since the applications of blockchain technology go far beyond peer-to-peer payments, this technology provides security, privacy, and decentralization. Decentralized ledgers are exploited for IoT applications, distributed storage systems, healthcare, and more (Xu and Croft, 1998). The range of blockchain applications has led to the creation of more blockchains and cryptocurrencies. Furthermore, using the blockchain increases the usage of cryptocurrencies and gives them intrinsic value, whose amount depends on many factors, chiefly because it is a new and still-debated technology. Notably, information about the type of currency and how it stores its value is useful for understanding what can lead to price changes.

*2.2 Twitter*

Twitter was created in July 2006 as a microblogging application alongside other applications and websites (such as Instagram, Facebook, LinkedIn, etc.). A microblog is a medium that allows smaller and more frequent updates than blogging. Twitter allows users to send public messages (called "tweets") up to 140 characters long, a limit doubled on November 6, 2017, to 280 characters per tweet. Users can add a "hashtag" to a tweet, denoted by the symbol "#". This symbol is followed by a sequence of characters used to identify the subject of a tweet and to search for it; hashtags are used later when collecting tweets in the data section. Twitter has gained popularity rapidly since its launch in 2006. Evidence of how essential Twitter has become dates back to January 15, 2009, when a US Airways plane crashed in the Hudson River in the United States; an image of that incident posted on Twitter broke viewing records. Some 83% of the world's leaders have Twitter accounts, and Twitter earns nearly $330 million a month with 1.3 billion users. Given such statistics, the Twitter database can be significantly rich and efficient. It is a great source of information showing how people feel about almost anything, and since each tweet is timestamped, one can observe how these feelings change over time. Hence, Twitter is a remarkable resource for collecting textual data on a topic such as cryptocurrencies and for exploring possible relationships with their prices.

*2.3 Sentiment analysis*

It is estimated that 90% of the global data has been generated in the last two years. Most of this data is unstructured textual data, in the form of tweets, articles posted on the Internet, text messages, emails, or other sources. "Natural language processing" (NLP), a field that is still being actively developed, offers a set of methods for computers to analyze and understand text. In this paper, a family of natural language processing tools known as "sentiment analysis" is employed. Sentiment analysis is conducted to extract and measure the sentiments or subjective opinions expressed in text. There are several methods to do this, but the VADER (Valence Aware Dictionary and Sentiment Reasoner) method is selected in this study (Manning and Schutze, 1999). The aim here is to apply sentiment analysis to the collected tweets to determine which tweets express positive or negative opinions regarding cryptocurrencies.
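VADER itself ships as the Python package `vaderSentiment` with a curated lexicon of valence intensities. Purely to illustrate the idea of lexicon-based scoring, the toy scorer below (our own, far simpler than VADER; the lexicon, negation handling, and thresholds are invented for this sketch) assigns each tweet a compound-style score in [-1, 1] and a positive/neutral/negative label.

```python
# Toy lexicon-based sentiment scorer, loosely in the spirit of VADER.
# The word valences and thresholds here are illustrative only.
LEXICON = {"surge": 2.0, "moon": 1.5, "gain": 1.0, "bullish": 2.5,
           "crash": -2.5, "scam": -3.0, "drop": -1.5, "bearish": -2.0}
NEGATIONS = {"not", "no", "never"}

def score_tweet(text):
    tokens = text.lower().split()
    total = 0.0
    for i, tok in enumerate(tokens):
        valence = LEXICON.get(tok.strip(".,!?#"), 0.0)
        # Flip valence when the immediately preceding token is a negation.
        if valence and i > 0 and tokens[i - 1] in NEGATIONS:
            valence = -valence
        total += valence
    # Squash into [-1, 1], vaguely analogous to VADER's compound normalization.
    compound = total / (abs(total) + 4.0) if total else 0.0
    label = ("positive" if compound > 0.05
             else "negative" if compound < -0.05 else "neutral")
    return compound, label

c1, l1 = score_tweet("Bitcoin will surge, very bullish!")  # label: 'positive'
c2, l2 = score_tweet("Traders are not bearish")            # negation flips to 'positive'
```

In the paper's pipeline, such per-tweet scores would then be aggregated per day and compared against the day's Bitcoin price.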
*2.4 Google Trends*

In many parts of the world, almost every aspect of daily life involves the Internet. Browsing the Internet is largely conducted through search engines, and Google is nowadays the most popular search engine in the world, handling 74.52% of searches. Therefore, Google search data can provide credible insights into what the world is interested in and the extent of that interest in anything. Google makes this data available through Google Trends, which provides information on the popularity of searched terms relative to other terms. The ranking of cryptocurrencies in Google Trends data varies over time, which can be related to increases and decreases in public interest and in the price of cryptocurrencies.

*2.5 Headline and main text of the day's news*

Since the price of cryptocurrencies depends significantly on positive and negative news, and the cryptocurrency market is largely driven by fundamental analysis, we decided to extract news from one of the most globally reputable news sites in the field of cryptocurrency, Cointelegraph, to increase accuracy. In the period from 2021/02/05 to 2021/09/10, sentiment extraction and analysis were conducted on the news, in addition to the Twitter data, to see how effective news is for determining the Bitcoin price.
### **3. Methodology **
This section presents the proposed method for predicting the Bitcoin price, together with the neural networks used to reach the final results.

#### *3.1 The proposed model*

In this section, a price-forecasting model based on CNN and long short-term memory (LSTM) networks is analyzed using market sentiment and multiple feature extraction. The proposed model consists of several parts, each described separately below; the flowchart of the proposed method is shown in Figure 1. VADER sentiment analysis is exploited to examine the latest cryptocurrency market news. Twitter data, news headlines, news content, Google Trends, and Bitcoin stock and financial data are combined with deep learning to forecast the Bitcoin price more accurately. Moreover, because many features are extracted from the different input data, three feature selection methods are applied: mutual information regression, Linear Regression, and correlation-based selection. A combination of these three methods is also considered in a separate model to benefit from their individual advantages. Based on the various input data, nine different models are developed using CNN and LSTM layers to forecast Bitcoin prices. Each proposed model has different layers and separate input data, so the effect of each input on the Bitcoin price prediction can be examined. Finally, the proposed models are compared with each other using criteria such as Mean Square Error (MSE), Root Mean Square Error
(RMSE), Mean Absolute Error (MAE), Median Absolute Error (MedAE), and the coefficient of determination (R²). Following the flowchart in Figure 1, this section consists of several main subsections: data collection and data set, text preprocessing and textual feature extraction, data normalization, VADER-based sentiment analysis, feature selection, the proposed deep learning models, and the performance evaluation criteria. Each of these is described below.
**Figure 1.** Flowchart of the proposed method based on multiple feature extraction and deep learning
#### *3.2 Deep neural networks*

A deep neural network (DNN) is an artificial neural network (ANN) with multiple layers between the input and output layers. DNNs come in various types but share the same components: neurons, synapses, weights, biases, and activation functions. DNNs have been widely used in related work due to their remarkable performance (P and M, 2021; Soni et al., 2021). In short, the DNN literature (Awoke et al., 2021; Liu et al., 2021) strongly suggests that this technology is highly beneficial for developing Bitcoin price-prediction models.
#### *3.2.1 LSTM Networks*

Long Short-Term Memory (LSTM) networks are a special type of recurrent neural network able to learn long-term dependencies. They were first proposed by Hochreiter and Schmidhuber in 1997, and many researchers have since contributed to improving them. The major aim of the LSTM design was to deal with the long-term dependency problem: memorizing information over long periods is the default behavior of LSTM networks, and their structure allows them to learn from very distant information, which is a striking characteristic. All recurrent neural networks consist of repeating chains of neural network modules. In standard recurrent networks, the repeating module has a simple structure, for instance a single hyperbolic tangent (tanh) layer. LSTM networks share the same chain structure, but the repeating module is different: it contains four interacting layers arranged in a special structure rather than a single neural network layer.

#### *3.2.2 CNN neural networks*

A convolutional neural network (CNN) is similar to other neural networks (e.g., the MLP) in that it is composed of layers of neurons with weights and biases and the ability to learn. In each neuron, the following occurs: the neuron receives a set of inputs; the inner product of the neuron's weights and the inputs is computed; the result is added to the bias; and finally the sum is passed through a nonlinear (activation) function. This process proceeds layer by layer until the output layer, which produces the network's forecast.
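The per-neuron computation described above (inner product, plus bias, through an activation) can be sketched for a single Conv1D channel sliding over a sequence; the kernel, bias, and input values below are arbitrary illustrations:

```python
import numpy as np

def conv1d_relu(x, kernel, bias):
    """One Conv1D channel as described above: slide the kernel over the
    sequence, take the inner product plus bias, then apply ReLU."""
    k = len(kernel)
    out = np.array([x[i:i + k] @ kernel + bias for i in range(len(x) - k + 1)])
    return np.maximum(out, 0.0)   # ReLU activation

x = np.array([1.0, 2.0, 3.0, 4.0])
y = conv1d_relu(x, kernel=np.array([-1.0, 1.0]), bias=0.5)
# each output is (next value - current value) + 0.5, i.e. [1.5, 1.5, 1.5]
```

In a full network, many such channels run in parallel and their outputs feed the next layer (e.g., max-pooling or an LSTM layer).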
#### *3.3 Feature selection methods*

Feature selection is the process of identifying the smallest possible set of features in a data set that can still describe the set and its main characteristics (Alweshah et al., 2021). Its aim is to eliminate unnecessary features and select important ones according to the data set and its target class (Şahin et al., 2021). In the proposed model, three selection methods are exploited: mutual information regression, Linear Regression feature importance, and correlation-based selection. The correlation-based method is considered because it can accurately identify the correlation between the Bitcoin value and the features; it is therefore useful for identifying important features based on their correlation with the target (the Bitcoin price). The mutual information feature selection method is one of the effective
feature selection methods used in the proposed model (Kraskov et al., 2004). Mutual information is a vital criterion of interdependence between features and is widely used in feature selection (Vergara and Estévez, 2014). In the third method, the feature importances of a Linear Regression model are exploited to rank features according to a regression model. The three feature selection methods, and their combination, are incorporated into the deep models described in Sections 5 and 6; the results in Section 6 highlight the benefit of these methods.

#### *3.3.1 Correlation-based feature selection method*

In this method, a subset of features is considered good when its features are highly correlated with the target (class) feature while being uncorrelated with each other. The "merit" of a subset of features is calculated by the following equation:
Merit_{S_k} = (k · r̄_cf) / √(k + k(k − 1) · r̄_ff)    (1)

where k is the number of features in the subset S, r̄_cf is the mean feature-class correlation, and r̄_ff is the mean feature-feature correlation.

#### *3.3.2 Feature selection with mutual information*
Features provide information about the output to the model, and in classification and regression tasks the model estimates the output based on this information. The mutual information method takes a completely different approach from the previous methods: instead of analyzing means and variances, it examines the relationship between a feature and the output, and scores each feature by the amount of mutual information it shares with the output. This approach is both interesting and important, as it can accurately determine how appropriate a feature is for estimating the output.

#### *3.4 Statistical Analysis*

This section presents the criteria used to examine the proposed method. Most authors employ the MSE error to compare different models; in this paper, several prediction criteria are considered: mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), median absolute error (MedAE), and the coefficient of determination (R²). Their formulas and explanations are presented below (Bui et al., 2018; Chou and Bui, 2014; Chou et al., 2016), and Table 1 summarizes them. Mean Square Error (MSE): this criterion is the mean squared distance between the predicted and actual Bitcoin values; the smaller the MSE, the more accurate the prediction. It is calculated through Equation 2.
MSE = (1/n) ∑_{i=1}^{n} (y_i − ŷ_i)²    (2)
where n denotes the number of samples, y_i the actual (experimental) Bitcoin values, and ŷ_i the values predicted by the proposed method. Root Mean Square Error (RMSE): taking the square root of the MSE yields the RMSE. Because the MSE is expressed on a squared scale, it cannot be compared directly with the MAE; the RMSE, which restores the original scale, is therefore also defined. This criterion is given in Equation 3.
RMSE = √( (1/n) ∑_{i=1}^{n} (y_i − ŷ_i)² )    (3)
where n is the number of samples, y_i the actual Bitcoin values, and ŷ_i the predicted values. Notably, the larger the variance of the individual errors, the larger the gap between the MAE and RMSE criteria becomes. Mean Absolute Error (MAE): this criterion is the mean absolute difference between the predicted and actual Bitcoin values; the smaller the MAE, the more accurate the prediction. It is calculated through Equation 4.
MAE = (1/n) ∑_{i=1}^{n} |y_i − ŷ_i|    (4)
According to Equation 4, n is the number of samples, y_i the actual Bitcoin values, and ŷ_i the predicted values. Median Absolute Error (MedAE): this criterion is the median of the absolute differences between the predicted and actual Bitcoin values, as shown in Equation 5.
MedAE = median(|y_1 − ŷ_1|, …, |y_n − ŷ_n|)    (5)
According to Equation 5, n is the number of samples, y_i the actual Bitcoin values, and ŷ_i the predicted values. Coefficient of determination (R²): this criterion measures how well the predicted values agree with the actual Bitcoin values. In contrast to the error criteria, values closer to one are better. It is calculated through Equation 6.
R² = 1 − [∑_{i=1}^{n} (y_i − ŷ_i)²] / [∑_{i=1}^{n} (y_i − ȳ)²]    (6)
Accordingly, n denotes the number of samples, y_i the actual (experimental) Bitcoin values, ŷ_i the predicted values of the proposed method, and ȳ the mean of the actual values.
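Equations (2)-(6) can be computed directly from the actual and predicted series; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Compute the evaluation criteria of Eqs. (2)-(6) for predicted vs.
    actual Bitcoin prices."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    return {
        "MSE": mse,                                     # Eq. (2)
        "RMSE": np.sqrt(mse),                           # Eq. (3)
        "MAE": np.mean(np.abs(err)),                    # Eq. (4)
        "MedAE": np.median(np.abs(err)),                # Eq. (5)
        "R2": 1 - np.sum(err ** 2)                      # Eq. (6)
              / np.sum((y_true - y_true.mean()) ** 2),
    }
```

Note that R² can be negative when a model predicts worse than the constant mean of the actual values.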
**Table 1.** Summary of the criteria for evaluating the proposed method
#### *3.5 VADER-based sentiment analysis*

Sentiment analysis aims to identify and extract users' opinions (Cambria et al., 2018); in the proposed method it is applied to the text of users' tweets. Among the many methods proposed for sentiment analysis, VADER is one of the most successful. VADER is a lexicon- and rule-based tool that can accurately extract sentiment from text, emoticons, emojis, abbreviations, and slang terms (Hota et al., 2021; Hutto and Gilbert, 2014). Thanks to its lexicon- and rule-based design, it is also fast. Its output is a four-dimensional vector of positive, negative, neutral, and compound values for each input text; the positive, negative, and neutral values lie between zero and one. In the proposed method, the positive, negative, neutral, and compound values of each tweet text are extracted as shown in Table 2.
**Table 2.** The way of analyzing VADER-based sentiments in the proposed method
| Tweet id | Negative | Neutral | Positive | Compound polarity |
|---|---|---|---|---|
| 1 | 0.10 | 0.71 | 0.19 | 0.88 |
| 2 | 0.04 | 0.830 | 0.130 | 1 |
| 3 | 0.3 | 0.1 | 0.6 | 0.99 |
| … | … | … | … | … |
| N-1 | 0.6 | 0.3 | 0.1 | 1 |
| N | 0.1 | 0.5 | 0.4 | 0.99 |
VADER-based sentiment analysis is performed in the proposed method according to Table 2, and each of the positive, negative, neutral, and compound values is finally taken as a candidate feature. These features are then examined by the feature selection methods in terms of
effectiveness; those found to be important are kept in the final feature selection list.
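The correlation-based screening of Section 3.3.1 can be sketched as ranking features by their absolute Pearson correlation with the target; this is a simplified illustration (the full CFS merit of Eq. (1) also penalizes feature-feature correlation), and the function name and 20% keep-fraction are assumptions for demonstration:

```python
import numpy as np

def correlation_rank(X, y, keep_fraction=0.2):
    """Rank features by |Pearson correlation| with the target and keep the
    top fraction. X has shape (n_samples, n_features); y is the target."""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    k = max(1, int(keep_fraction * X.shape[1]))
    return np.argsort(scores)[::-1][:k]          # indices of kept features
```

Mutual-information-based selection follows the same keep-top-k pattern but scores each feature by its estimated mutual information with the target instead of linear correlation.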
### **4. Data collection **
This section provides the necessary information regarding the data used in the proposed models.

#### *4.1 Data set*

Four types of data, covering news, tweets, Google Trends, and Bitcoin stock information, were collected through different methods and APIs, as described below. All data were obtained daily from 05/02/2021 to 10/09/2021. Bitcoin information: the yfinance library is exploited to extract Bitcoin stock features, including open, close, high, low, and volume. Four Bitcoin features are used as inputs, and the close price is taken as the real value to be forecast. Table 3 gives an overview of these features. Tweet information: the Twitter API was employed to collect tweets daily over the same period; the collected data comprises 1.2 million tweets containing the words BTC or Bitcoin, grouped by day. In addition to the tweet texts, meta features of the users are also collected at this step, including total followers, average followers, and so on; the exact information is given in Table 3.
**Table 3.** Different extraction features in the proposed method
| No. | Group | Feature name | Method / final feature count | Description |
|---|---|---|---|---|
| 1 | Bitcoin | Open | Direct, 1 feature | These features relate to Bitcoin and are extracted directly from the yfinance library. |
| 2 | Bitcoin | High | Direct, 1 feature | |
| 3 | Bitcoin | Low | Direct, 1 feature | |
| 4 | Bitcoin | Volume | Direct, 1 feature | |
| 5 | Tweet | Text_tweet | VADER, 3 features | Tweet features are divided into textual and meta categories. Textual information is extracted with VADER-based sentiment analysis (negative, positive, and neutral features). |
| 6 | Tweet | user_followers_sum | Sum, 1 feature | |
| 7 | Tweet | user_followers_mean | Mean, 1 feature | |
| 8 | Tweet | user_friend_sum | Sum, 1 feature | |
| 9 | Tweet | user_friend_mean | Mean, 1 feature | |
| 10 | Tweet | user_favourites_mean | Mean, 1 feature | |
| 11 | Tweet | user_verified_most | Most, 1 feature | |
| 12 | Tweet | user_verified_mean | Mean, 1 feature | |
| 13 | Google Trends | Bitcoin_rank | Count, 1 feature | The ranking is based on the two words "Bitcoin" and "BTC". |
| 14 | Google Trends | BTC_rank | Count, 1 feature | |
| 15 | NEWS | Title news | TFIDF, 50:N features | Features are extracted from the headline and the news content with the TFIDF method; at least 50 effective words are considered for each of the headline and the news content. |
| 16 | NEWS | Body news | TFIDF, 50:N features | |
Google Trends information: the rankings of the two words "Bitcoin" and "BTC" were extracted using the pytrends library, again daily from 05/02/2021 to 10/09/2021. More detail on these features is given in Table 3. News headline and text information: the text of Bitcoin-related news was extracted from reputable sites such as Cointelegraph using the BeautifulSoup and urllib libraries, with headlines and bodies extracted separately. They were then preprocessed, and the TFIDF method was used to extract the effective features (words).

#### *4.2 Text preprocessing and textual feature extraction*

At this step of the proposed method, a series of preprocessing operations is applied to every tweet and news text: data cleaning, tokenization, stop-word removal, and stemming. In natural language processing, algorithms have no inherent understanding of text; the first and most important step is therefore to identify and separate the words and symbols, which is the task of tokenization (Jurafsky, 2000; Manning et al., 2014). The next step is to eliminate stop words, repetitive words that carry no information and only connect the other words in a sentence (Rani and Lobiyal, 2018). Stemming is the last preprocessing step. The stem carries the main meaning of a word; natural language contains a limited number of stems from which the remaining words are derived (Porter, 1980; Xu and Croft, 1998). Stemming's major aim is to extract the stem and remove the affixes attached to the word (Manning and Schutze, 1999; Porter, 2001), making it one of the essential steps in natural language processing.
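The cleaning, tokenization, stop-word removal, and stemming stages just described can be sketched in pure Python; the tiny stop-word set and naive suffix stripping below are illustrative stand-ins for nltk's stop-word list and the Porter stemmer used in the paper:

```python
import re

STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of", "in"}  # tiny subset
SUFFIXES = ("ing", "ed", "s")                                   # naive stemming

def preprocess(text):
    """Minimal sketch of the pipeline: cleaning, tokenization, stop-word
    removal, and (very naive) suffix-stripping stemming."""
    text = re.sub(r"https?://\S+|\d+", " ", text.lower())   # clean links/numbers
    tokens = re.findall(r"[a-z]+", text)                    # tokenize
    tokens = [t for t in tokens if t not in STOP_WORDS]     # stop-word removal
    stems = []
    for t in tokens:                                        # crude stemming
        for suf in SUFFIXES:
            if t.endswith(suf) and len(t) > len(suf) + 2:
                t = t[: -len(suf)]
                break
        stems.append(t)
    return stems
```

A production pipeline would instead call `nltk`'s tokenizer, stop-word corpus, and `PorterStemmer`, which handle far more linguistic cases than this sketch.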
The word-processing steps are therefore as follows. Data cleaning step: blank textual entries, numerical data, link addresses, and similar content are removed from the news and tweet texts to prepare them for the next steps. Tokenization step: the sentences of each tweet are split into tokens. Stop-word removal step: stop words are removed using the nltk library. Stemming step: word stemming is conducted using the nltk library's English Porter Stemmer. After preparation, the tweet data is sent to VADER for sentiment analysis, while the textual news data is characterized with the TFIDF extraction method. TF-IDF is a method for converting text to numerical values based on the importance of the
words. This weighting is based on the belief that the words that distinguish a document from other headlines and news contents are important and should therefore receive more weight (Salton and Buckley, 1988). According to Equation 7, the importance of a word is measured from its number of repetitions within the headline and news content and across all documents in the data set.
TFIDF(t_i, d_j) = TF(t_i, d_j) × IDF(t_i)    (7)
where TF(t_i, d_j) measures the frequency of word t_i within headline/news document d_j, and IDF(t_i) measures how rare the word is across all documents, so that TFIDF(t_i, d_j) captures the overall significance of the word.

#### *4.3 Data normalization*

One of the crucial preprocessing steps for machine learning and deep learning algorithms is normalization (standardization). Normalization scales the data values into a specific range, and most machine learning and deep learning algorithms predict prices more accurately on normalized data. The Min-Max method is one of the most popular scaling methods; it maps the data into the range [0, 1] and is defined as follows:
X_norm = (X − X_min) / (X_max − X_min)    (8)
According to Equation 8, X is the value of a feature in the Bitcoin data, X_min is the lowest value of that feature, and X_max is its maximum value.
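Equation 8 translates directly into code; the guard for constant-valued features is an added assumption not discussed in the text:

```python
import numpy as np

def min_max_normalize(column):
    """Eq. (8): scale a feature column of the Bitcoin data into [0, 1]."""
    x = np.asarray(column, dtype=float)
    x_min, x_max = x.min(), x.max()
    if x_max == x_min:                 # constant feature: avoid division by zero
        return np.zeros_like(x)
    return (x - x_min) / (x_max - x_min)
```

In practice the minimum and maximum should be computed on the training split only and reused for the test split, so that no test-set information leaks into training.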
### **5. Proposed models based on deep learning **
The first part of the proposed model showed which features each type of data provides: meta-data tweets, sentiment tweet data (VS-data), news title data, news content data, Bitcoin data, and Google Trends data, as listed in Table 3. Owing to the variety of data collected, nine different deep learning models are designed in this part of the proposed method according to the input data type; each model may have different layers depending on its input (for example, textual input). Several models are designed on the whole feature set with selection of the important features, based on mutual-info-regression, Linear Regression, and correlation; a final model is designed on the combination of these three selectors. Most models in this section are designed to indicate the impact of each data type separately on the Bitcoin prediction, while combining all features across all data makes it possible to determine how effective those features are. Notably, the comparison of criteria in Section 6 shows which models, with which features, predict Bitcoin more accurately.
#### *5.1 The proposed Model-1 with Bitcoin data*

In this model, a deep network based on convolutional and LSTM layers is designed with Bitcoin data as input, as shown in Figure 1; it is called Model-1 in this paper. Only the Bitcoin stock data, including open, close, high, low, volume, and price, is considered to predict the Bitcoin price. The first model is composed of three Conv1D layers, two max-pooling layers, a flatten layer, a dense layer, and an LSTM layer.
**Figure 1.** The proposed Model-1 architecture based on convolutional layer and LSTM layer with Bitcoin data
The Model-1 architecture in Figure 2 shows that the convolutional layers are used to extract better features, while the LSTM layer maintains the temporal state of the data. In this model, three Conv1D layers with 500, 200, and 100 filters, two max-pooling layers with kernel size 2, one LSTM layer with 32 units, and one dense layer with 20 units are set. The activation function is ReLU for all layers except the last, whose Sigmoid activation is chosen according to the data type. Details of the loss function, number of epochs, and other hyper-parameters of Model-1 are given in Section 6.

#### *5.2 The proposed Model-2 with Metadata*

In this model, a deep network based on convolutional and Dense layers is presented with tweet metadata as input; according to Figure 2, this model is called Model-2 in this paper. The metadata tweet features user-followers-sum, user-followers-mean, user-friend-sum, user-friend-mean, user-favorites-mean, user-verified-most, and user-verified-mean are considered to predict Bitcoin prices. The second model consists of three Conv1D layers, two max-pooling layers, a flatten layer, a dense layer, and a dropout layer.
**Figure 2.** The proposed Model-2 architecture based on convolutional layers with Metadata
The Model-2 architecture based on convolutional layers and metadata in Figure 2 shows that convolutional layers are used to extract better features, while Dense and Dropout layers are included to counter overfitting and improve network training. In this model, three Conv1D layers with 500, 200, and 100 filters, two max-pooling layers with kernel size 2, a dropout layer with rate 0.1, and two Dense layers with 100 and 20 units are set. The activation function is ReLU for all layers except the last, whose Sigmoid activation is chosen according to the data type. More details on the loss function, number of epochs, and other hyper-parameters of Model-2 are presented in Section 6.

#### *5.3 The proposed Model-3 with VADER data*

In this model, a deep network based on convolutional and Dense layers is designed with the sentiment analysis of the tweet texts as input alongside Bitcoin data. As shown in Figure 3, this model is called Model-3 in this paper; only the sentiment values of the tweet texts (positive, negative, neutral, and compound, obtained from the VADER tool) are used to predict Bitcoin prices. The third model consists of three Conv1D layers, two max-pooling layers, a flatten layer, and a dense layer.
**Figure 3.** Proposed Model-3 architecture based on convolutional layers with sentiment analysis data of tweet text
The Model-3 architecture in Figure 3 demonstrates that convolutional layers are used to extract better features, with a Dense layer for the linear stage. In this model, three Conv1D layers with 500, 200, and 100 filters, two max-pooling layers with kernel size 2, and a Dense layer with 100 units are set. ReLU activations are used, except that two LeakyReLU activation functions follow the max-pooling layers, and the last layer's Sigmoid activation is chosen according to the data type. More details on the loss function, number of epochs, and other hyper-parameters of Model-3 are presented in Section 6.

#### *5.4 The proposed Model-4 with Meta + Bitcoin Data*

This model is a deep two-channel fully-connected (Dense) network with Bitcoin data and tweet metadata as inputs. As shown in Figure 4, it is named Model-4 in this paper. In contrast to the other models, it has two input channels: the first takes the tweet metadata (user-followers-sum, user-followers-mean, user-friend-sum, user-friend-mean, user-favorites-mean, user-verified-most, and user-verified-mean), and the second takes the stock data (open, close, high, low, volume, and price) to predict the Bitcoin price. Finally, the two channels are combined by a concatenate layer.
**Figure 4.** The proposed architecture of deep two-channel Model-4 with Meta + Bitcoin Data
The Model-4 architecture is based on a deep two-channel model with Bitcoin data and tweet metadata. Figure 4 indicates that Dense layers and the two-channel design are used to better predict the Bitcoin price. In this model, Dense layers with 500, 300, 200, and 100 units are set in the first channel and Dense layers with 200 and 100 units in the second; the two channels meet in a concatenate layer, which is followed by a dropout layer with rate 0.1 and a Dense layer with 20 units. The activation function is ReLU for all layers except the last, whose Sigmoid activation is chosen according to the data type. More details on the loss function, number of epochs, and other hyper-parameters of Model-4 are given in Section 6.
#### *5.5 The proposed Model-5 with textual tweet data and an Embedding layer*

In this model, shown in Figure 5, a deep network based on convolutional layers is built on the tweet texts with an Embedding layer; it is named Model-5 in this paper. The tweet texts are used directly: after the preprocessing operations, they enter the network through an Embedding input layer. The fifth model consists of this Embedding input layer followed by three Conv1D layers, two max-pooling layers, a flatten layer, two dense layers, and a dropout layer. The major aim of this model is to use the words and sentences of the tweet texts directly to predict the Bitcoin price; in contrast to the third model, no sentiment analysis is applied.
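Conceptually, an Embedding layer is a trainable lookup table mapping token ids to dense vectors. The toy sketch below uses small, illustrative dimensions (vocab 10, dim 4) rather than the 2000 × 500 layer used in Model-5, and random weights in place of trained ones:

```python
import numpy as np

# An Embedding layer is a lookup table: token id -> dense vector.
rng = np.random.default_rng(42)
embedding_matrix = rng.normal(size=(10, 4))    # vocab_size x embedding_dim

def embed(token_ids):
    """Map a tokenized tweet (list of integer ids) to its vector sequence."""
    return embedding_matrix[np.asarray(token_ids)]

vectors = embed([3, 1, 7])        # a three-token tweet -> shape (3, 4)
```

During training, the rows of the matrix are updated by backpropagation just like any other layer's weights, so semantically similar tokens can end up with similar vectors.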
**Figure 5.** The proposed Model-5 architecture with textual tweet data and an Embedding layer
The Model-5 architecture based on convolutional layers, textual tweet data, and an Embedding layer is demonstrated in Figure 5, in which the convolutional layers and the Embedding layer are used to extract better features, while Dense and Dropout layers counter overfitting and improve learning. The Embedding layer in this model has dimensions 2000 × 500. After it, three Conv1D layers with 500, 200, and 100 filters, two max-pooling layers with kernel size 2, a dropout layer with rate 0.1, and two Dense layers with 100 and 20 units are set. The activation function is ReLU for all layers except the last, whose Sigmoid activation is chosen according to the data type. More details on the loss function, number of epochs, and other hyper-parameters of Model-5 are illustrated in Section 6.

#### *5.6 The proposed Model-6, Model-7, and Model-8 with the whole data and three feature selection models*

This section presents three deep network models based on convolutional layers that take the whole data together with one of three feature selection models. As shown in Figure 6, these models use the mutual-info-regression, Linear Regression, and correlation feature selection methods, and are called Model-6, Model-7, and Model-8, respectively. In these models, the important features are extracted from
the whole data, based on the training data, and given to the model. According to Table 3, when the minimum number of text features for the headline and the news body is 100 in total, the combined feature set contains at least 115 features; only the 20% most important of these are selected by the feature selection method and given to the final model.
**Figure 6.** The proposed architecture of Model-6, Model-7, and Model-8 with the various data and three feature
selection models
The architectures of Model-6, Model-7, and Model-8, with the various data and the three feature selection models, are considered in this section. As shown in Figure 6, all features are used for better prediction, and in each proposed model a feature selection method selects the important features and eliminates the redundant ones. Three Conv1D layers with 500, 200, and 100 filters, two max-pooling layers with kernel size 2, and a Dense layer with 100 units are set in these models. The activation function is ReLU for all layers except the last, whose Sigmoid activation is chosen according to the data type. The necessary details on the loss function, number of epochs, and other hyper-parameters of Model-6, Model-7, and Model-8 are illustrated in Section 6.

#### *5.7 The proposed Model-9 with the various data and a combination of three feature selection models*

In this model, a deep network based on convolutional layers is created with the whole data and a combination of the three feature selection models. As shown in Figure 7, it is called Model-9 in this paper. Unlike Models 6, 7, and 8, the features here are selected by combining the mutual-info-regression, Linear Regression, and correlation feature selection methods. Based on the training data, each feature selection method selects 15% of the features; the selections of the three methods are then merged, yielding about 45% of the total features. Since the same feature may be picked by more than one selector, about 15-20% of the features are duplicates and are removed, so that finally about 20-25% of the important features are selected by the combined feature selection method.
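The combination step for the three selectors' outputs amounts to a union with duplicate removal; the sketch below illustrates this with index lists (the function name and interface are assumptions for demonstration):

```python
def combine_selected(sel_a, sel_b, sel_c):
    """Union of three selectors' feature-index lists with duplicates removed,
    mirroring the combination step described for Model-9. Output order
    follows first appearance."""
    seen, combined = set(), []
    for idx in list(sel_a) + list(sel_b) + list(sel_c):
        if idx not in seen:           # drop features picked by >1 selector
            seen.add(idx)
            combined.append(idx)
    return combined
```

Because each selector contributes its own top 15%, the size of the union depends on how much the selectors agree, which is why the final fraction lands in a 20-25% range rather than a fixed 45%.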
**Figure 7.** The proposed model architecture of Model-9 with the various data and a combination of three feature selection models
#### The proposed architecture of Model-9, with the various data sources and a combination of the three feature selection methods, is presented here. All three feature selection methods are used in this model so that the full data can be predicted more accurately from the combined selected features; the model thus exploits the advantages of all three methods to retain important features and remove redundant ones. As before, Conv1D layers with 500, 200, and 100 filters, two max-pooling layers with a kernel size of two, and a Dense layer with 100 units are used. The activation function is ReLU in every layer except the last, whose activation is a Sigmoid chosen to match the data type. The loss function, number of epochs, and remaining hyper-parameters, shared with Model-3, are presented in Section 6.

### **6. Evaluation and validation**

#### In this section, the nine proposed models, based on convolutional neural networks and LSTM, are evaluated for predicting the Bitcoin price. The models were implemented in the Google Colab environment with 12 GB of RAM, using the TensorFlow and Keras libraries. TensorFlow is one of the most widely used neural network libraries for the Python programming language, employed by both researchers and companies to build a variety of neural network architectures. In all experiments, the Bitcoin price was predicted with a window length of 1, since all inputs were available for 78 days. Of the data, 80% was used for learning and 20% for testing. Some parameters of the proposed models were introduced in Section 3; Table 4 lists the hyperparameters used to implement each model.
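As a concrete reading of this setup, the following minimal sketch shows window length 1 (each day's feature vector predicting the next day's price) and the chronological 80/20 split. The helper names and the random stand-in data are assumptions for illustration, not the authors' code.

```python
import numpy as np

def make_supervised(features, price, window=1):
    # window length 1: today's feature vector predicts tomorrow's price
    X = np.stack([features[t - window + 1: t + 1].ravel()
                  for t in range(window - 1, len(price) - 1)])
    y = price[window:]
    return X, y

def chrono_split(X, y, train_frac=0.8):
    # chronological 80/20 split: no shuffling, so the test set is
    # strictly in the future relative to the training set
    cut = int(train_frac * len(X))
    return X[:cut], y[:cut], X[cut:], y[cut:]

days = 78                              # 78 days of inputs, as in the experiments
features = np.random.rand(days, 115)   # e.g. at least 115 combined features per day
price = np.random.rand(days)           # stand-in for the daily Bitcoin price

X, y = make_supervised(features, price, window=1)
X_tr, y_tr, X_te, y_te = chrono_split(X, y)
```

With 78 days and a window of 1 this yields 77 supervised samples, of which 61 land in the training set and 16 in the test set.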
**Table 4.** Hyperparameters of the proposed models

|Proposed model|Hyperparameters|
|---|---|
|Model-1|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-2|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-3|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-4|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-5|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-6|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-7|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-8|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|
|Model-9|epochs=100, batch_size=10, optimizer=Adam, loss=MSE|

#### As Table 4 shows, all models are trained with the same optimizer and loss function so that the comparison between them is fair. The nine proposed models, based on convolutional neural networks and LSTM, are compared for Bitcoin price prediction in terms of several criteria: mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), median absolute error (MedAE), and the coefficient of determination (R²). The first experiment examines the loss function of all nine proposed models on the learning and test data, shown in Figure 8 and Figure 9.
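The five evaluation criteria can be written directly in NumPy. This is a straightforward sketch; `regression_metrics` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    # the five criteria used to compare the proposed models
    err = y_true - y_pred
    mse = np.mean(err ** 2)
    return {
        "MSE": mse,
        "RMSE": np.sqrt(mse),
        "MAE": np.mean(np.abs(err)),
        "MedAE": np.median(np.abs(err)),
        # coefficient of determination: 1 - SS_res / SS_tot
        "R2": 1.0 - np.sum(err ** 2) / np.sum((y_true - np.mean(y_true)) ** 2),
    }

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
m = regression_metrics(y_true, y_pred)  # MSE 0.025, MAE 0.15, R2 0.98
```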
**Figure 8.** Comparison of proposed models in terms of loss function for the learning data
#### According to Figure 8, the comparison of the proposed models in terms of the loss function on the learning data shows that the eighth and ninth models achieve better results than the others, while the second and third models perform worst. Some models, such as the second, are included mainly to show the direct effect of words on the Bitcoin price. Note that most of the models can be optimized well on the learning data; the real question is which model performs best on the experimental data. For further evaluation, the proposed models are therefore compared next in terms of the loss function on the experimental data.
**Figure 9.** Comparison of proposed models in terms of loss function on the experimental data
#### As shown in Figure 9, the comparison in terms of the loss function on the experimental data confirms that the eighth and ninth models outperform the others. Some models, such as the fifth, which relies only on sentiment analysis, could not match this performance, although Models 2, 5, and 7 still achieve acceptable results on the experimental data. It is worth mentioning that in Model 9 the tweet metadata contributes directly to predicting the Bitcoin price, without interference from the other features. In the second experiment, the proposed models are compared in terms of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), median absolute error (MedAE), and the coefficient of determination (R²), as shown in Table 5.
**Table 5.** Comparison of different proposed models in terms of the various criteria
|Model|MSE|RMSE|MAE|MedAE|R²|
|---|---|---|---|---|---|
|Model-1|0.01029|0.10143|0.06812|0.05259|0.87864|
|Model-2|0.04925|0.22193|0.17161|0.12521|0.4190|
|Model-3|0.00698|0.08357|0.06006|0.04579|0.91762|
|Model-4|0.00362|0.06013|0.04170|0.03244|0.95735|
|Model-5|0.03420|0.185|0.1425|0.1145|0.5962|
|Model-6|0.00258|0.05082|0.02769|0.01285|0.96954|
|Model-7|0.01035|0.10176|0.07705|0.05846|0.87786|
|Model-8|0.00251|0.05013|0.03383|0.02759|0.9703|
|Model-9|0.00151|0.0388|0.02519|0.01747|0.98219|
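As a quick internal-consistency check on Table 5, the reported RMSE of every model should equal the square root of its reported MSE; the table passes to within rounding. (The dictionary below simply transcribes the (MSE, RMSE) pairs from Table 5.)

```python
import math

# (MSE, RMSE) pairs as reported in Table 5
table5 = {
    "Model-1": (0.01029, 0.10143),
    "Model-2": (0.04925, 0.22193),
    "Model-3": (0.00698, 0.08357),
    "Model-4": (0.00362, 0.06013),
    "Model-5": (0.03420, 0.185),
    "Model-6": (0.00258, 0.05082),
    "Model-7": (0.01035, 0.10176),
    "Model-8": (0.00251, 0.05013),
    "Model-9": (0.00151, 0.0388),
}
# RMSE = sqrt(MSE) should hold for every row, up to rounding of the table
consistent = all(abs(math.sqrt(mse) - rmse) < 5e-4
                 for mse, rmse in table5.values())
```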
#### Table 5 compares the proposed models in terms of the different criteria. The ninth model obtains the best MSE, about 0.0015, and also outperforms the other models on the MAE and R² criteria, with values of about 0.025 and 0.982, respectively. The second model obtains the worst MSE, about 0.049, and likewise performs worst on MAE and R², with values of 0.17161 and 0.4190, respectively. Among the sixth, seventh, and eighth models, each of which uses a single feature selection method, Model-6 and Model-8 perform much better, with MSE values of 0.00258 and 0.00251, respectively; this implies that the mutual-info-regression and Linear Regression feature selection methods identify the important features correctly. Additionally, the fourth model, which uses VADER sentiment analysis, improves on all criteria over the first model, which does not: the R² criterion improves by about 8%, confirming the effectiveness of VADER sentiment analysis for predicting the Bitcoin price. In the third experiment, the predictions of each proposed model are plotted for the whole Bitcoin data set in Figures 10-18.
**Figure 10.** Bitcoin price prediction based on the first proposed model
**Figure 11.** Bitcoin price prediction based on the second proposed model
**Figure 12.** Bitcoin price prediction based on the third proposed model
**Figure 13.** Bitcoin price prediction based on the fourth proposed model
**Figure 14.** Bitcoin price prediction based on the fifth proposed model
**Figure 15.** Bitcoin price prediction based on the sixth proposed model
**Figure 16.** Bitcoin price prediction based on the seventh proposed model
**Figure 17.** Bitcoin price prediction based on the eighth proposed model
**Figure 18.** Bitcoin price prediction based on the ninth proposed model
#### The prediction diagrams of the proposed models over the whole Bitcoin data set are presented above. The forecasts were trained against the loss function and, together with the comparison of the criteria in Table 5, show that the ninth model predicts the price more accurately than the other models, followed by the sixth and eighth models.
### **7. Conclusion **
#### In summary, the problem of Bitcoin price prediction using deep learning (DL) methods was considered in this research. Deep learning is an advanced form of neural network modeling that allows low-level and high-level features to be extracted from Bitcoin time-series data, and it can better accommodate the hard-to-predict nature of the price. Several Bitcoin price prediction models based on CNN and LSTM were designed. Sentiment analysis with the VADER tool and feature extraction from Bitcoin news were employed in the proposed models, and Twitter data, news headlines, news content, Google Trends, and Bitcoin stock and financial data were combined, using deep learning, to predict the Bitcoin price more accurately. Because a large number of features is extracted from the different input data, three feature selection methods were used in this study: mutual information regression, Linear Regression, and correlation-based feature selection. A separate model combining all three methods was also presented to exploit their complementary advantages. Finally, all proposed models were compared in terms of mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), median absolute error (MedAE), and the coefficient of determination (R²). The results of the various implementations and experiments demonstrated the strong performance of the proposed hybrid model based on sentiment analysis and combined feature selection, which, consistent with Table 5, achieved an MSE of about 0.0015, an MAE of about 0.025, and an R² of about 0.98, yielding the lowest error in predicting the Bitcoin price. Depending on its input features, each model can serve as an individual assistant for more informed Bitcoin trading decisions. In future work, experimenting with larger data sets across the various proposed models may prove valuable, and combining deep learning models with robust machine learning algorithms is another interesting topic for future study.
### **Acknowledgment **
#### The authors appreciate the anonymous reviewers' valuable and profound comments.
### **Conflict of interest **
#### The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
### **References **
ALWESHAH, M., ALKHALAILEH, S., ALBASHISH, D., MAFARJA, M., BSOUL, Q. and DORGHAM,
O., 2021. A hybrid mine blast algorithm for feature selection problems. *Soft Computing*, **25,** 517-534.
ASUR, S. and HUBERMAN, B.A., 2010. Predicting the Future with Social Media. *2010 IEEE/WIC/ACM*
*International Conference on Web Intelligence and Intelligent Agent Technology*, 492-499.
AWOKE, T., ROUT, M., MOHANTY, L. and SATAPATHY, S.C., 2021. Bitcoin Price Prediction and
Analysis Using Deep Learning Models, 631-640. Singapore.
BOLLEN, J., MAO, H. and ZENG, X., 2011. Twitter mood predicts the stock market. *Journal of*
*Computational Science*, **2,** 1-8.
BORDINO, I., BATTISTON, S., CALDARELLI, G., CRISTELLI, M., UKKONEN, A. and WEBER, I.,
2012. Web search queries can predict stock market volumes. *PloS one*, **7,** e40014.
BUI, D.-K., NGUYEN, T., CHOU, J.-S., NGUYEN-XUAN, H. and NGO, T.D., 2018. A modified firefly
algorithm-artificial neural network expert system for predicting compressive and tensile strength
of high-performance concrete. *Construction and Building Materials*, **180,** 320-333.
CAMBRIA, E., PORIA, S., HAZARIKA, D. and KWOK, K., 2018. SenticNet 5: Discovering conceptual
primitives for sentiment analysis by means of context embeddings. *Proceedings of the AAAI*
*conference on artificial intelligence* .
CHATFIELD, C. and YAR, M., 1988. Holt‐Winters forecasting: some practical issues. *Journal of the Royal*
*Statistical Society: Series D (The Statistician)*, **37,** 129-140.
CHAUDHARI, H. and CRANE, M., 2020. Cross-correlation dynamics and community structures of
cryptocurrencies. *Journal of Computational Science*, **44,** 101130.
CHEN, H., DE, P., HU, Y.J. and HWANG, B.-H., 2013. Customers as advisors: The role of social media
in financial markets. In.
CHOHAN, U.W., 2017. Cryptocurrencies: A brief thematic review. *Available at SSRN 3024330* .
CHOI, H. and VARIAN, H., 2012. Predicting the present with Google Trends. *Economic record*, **88,** 2-9.
CHOU, J.-S. and BUI, D.-K., 2014. Modeling heating and cooling loads by artificial intelligence for energy-efficient building design. *Energy and Buildings*, **82,** 437-446.
CHOU, J.-S., CHONG, W.K. and BUI, D.-K., 2016. Nature-inspired metaheuristic regression system:
programming and implementation for civil engineering applications. *Journal of Computing in Civil*
*Engineering*, **30,** 04016007.
COLIANNI, S., ROSALES, S. and SIGNOROTTI, M., 2015. Algorithmic trading of cryptocurrency based
on Twitter sentiment analysis. *CS229 Project* **,** 1-5.
DAI, B., JIANG, S., LI, C., ZHU, M. and WANG, S., 2021. A multi-hop cross-blockchain transaction
model based on improved hash-locking. *International Journal of Computational Science and*
*Engineering*, **24,** 610-620.
DAS, S., BILLAH, M. and MUMU, S.A., 2021. A Hybrid Approach for Predicting Bitcoin Price Using Bi-LSTM and Bi-RNN Based Neural Network, 223-233. Cham.
DE JONG, P., ELFAYOUMY, S. and SCHNUSENBERG, O., 2017. From returns to tweets and back: an
investigation of the stocks in the Dow Jones Industrial Average. *Journal of Behavioral Finance*,
**18,** 54-64.
DOLAN, R.J., 2002. Emotion, cognition, and behavior. *science*, **298,** 1191-1194.
DUTTA, A., KUMAR, S. and BASU, M., 2020. A Gated Recurrent Unit Approach to Bitcoin Price
Prediction. *Journal of Risk and Financial Management*, **13,** 23.
ELRAHMAN, S.A. and ALLUHAIDAN, A.S., 2021. Blockchain technology and IoT-edge framework for
sharing healthcare services. *Soft Computing*, **25,** 13753-13777.
ETTREDGE, M., GERDES, J. and KARUGA, G., 2005. Using web-based search data to predict
macroeconomic statistics. *Communications of the ACM*, **48,** 87-92.
GURESEN, E., KAYAKUTLU, G. and DAIM, T.U., 2011. Using artificial neural network models in stock
market index prediction. *Expert Systems with Applications*, **38,** 10389-10397.
HOTA, H.S., SHARMA, D.K. and VERMA, N., 2021. 14 - Lexicon-based sentiment analysis using Twitter
data: a case of COVID-19 outbreak in India and abroad. In: KOSE, U., GUPTA, D., DE
ALBUQUERQUE, V.H.C. and KHANNA, A.s (Eds.): Data Science for COVID-19. 275-295.
HUTTO, C. and GILBERT, E., 2014. Vader: A parsimonious rule-based model for sentiment analysis of
social media text. *Proceedings of the International AAAI Conference on Web and Social Media* .
JAIN, A., TRIPATHI, S., DWIVEDI, H.D. and SAXENA, P., 2018. Forecasting Price of Cryptocurrencies
Using Tweets Sentiment Analysis. *2018 Eleventh International Conference on Contemporary*
*Computing (IC3)*, 1-7.
JURAFSKY, D., 2000. Speech & language processing. Pearson Education India.
KAHNEMAN, D. and TVERSKY, A., 1979. Prospect theory: An analysis of decision under risk.
*Econometrica*, **47,** 363-391.
KARALEVICIUS, V., DEGRANDE, N. and DE WEERDT, J., 2018. Using sentiment analysis to predict
interday Bitcoin price movements. *The Journal of Risk Finance* .
KIMOTO, T., ASAKAWA, K., YODA, M. and TAKEOKA, M., 1990. Stock market prediction system
with modular neural networks. *1990 IJCNN International Joint Conference on Neural Networks*,
1-6 vol.1.
KOULOUMPIS, E., WILSON, T. and MOORE, J., 2011. Twitter sentiment analysis: The good the bad and
the omg! *Fifth International AAAI conference on weblogs and social media* .
KRASKOV, A., STöGBAUER, H. and GRASSBERGER, P., 2004. Estimating mutual information.
*Physical review E*, **69,** 066138.
KRISTOUFEK, L., 2015. What are the main drivers of the Bitcoin price? Evidence from wavelet coherence
analysis. *PloS one*, **10,** e0123923.
LAMON, C., NIELSEN, E. and REDONDO, E., 2017. Cryptocurrency price prediction using news and
social media sentiment. *SMU Data Sci. Rev*, **1,** 1-22.
LI, D., HAN, D., WENG, T.-H., ZHENG, Z., LI, H., LIU, H., CASTIGLIONE, A. and LI, K.-C., 2021.
Blockchain for federated learning toward secure distributed machine learning systems: a systemic
survey. *Soft Computing* .
LIU, M., LI, G., LI, J., ZHU, X. and YAO, Y., 2021. Forecasting the price of Bitcoin using deep learning.
*Finance Research Letters*, **40,** 101755.
MADAN, I., SALUJA, S. and ZHAO, A., 2015. Automated bitcoin trading via machine learning
algorithms. URL: http://cs229.stanford.edu/proj2014/Isaac%20Madan, **20**.
MANNING, C. and SCHUTZE, H., 1999. Foundations of statistical natural language processing. MIT
press.
MANNING, C.D., SURDEANU, M., BAUER, J., FINKEL, J.R., BETHARD, S. and MCCLOSKY, D.,
2014. The Stanford CoreNLP natural language processing toolkit. *Proceedings of 52nd annual*
*meeting of the association for computational linguistics: system demonstrations*, 55-60.
MATTA, M., LUNESU, I. and MARCHESI, M., 2015. Bitcoin Spread Prediction Using Social and Web
Search Media. *UMAP workshops*, 1-10.
MCNALLY, S., ROCHE, J. and CATON, S., 2018. Predicting the Price of Bitcoin Using Machine
Learning. *2018 26th Euromicro International Conference on Parallel, Distributed and Network-*
*based Processing (PDP)*, 339-343.
MITTAL, A., DHIMAN, V., SINGH, A. and PRAKASH, C., 2019. Short-Term Bitcoin Price Fluctuation
Prediction Using Social Media and Web Search Data. *2019 Twelfth International Conference on*
*Contemporary Computing (IC3)*, 1-6.
NAIMY, V.Y. and HAYEK, M.R., 2018. Modelling and predicting the Bitcoin volatility using GARCH
models. *International Journal of Mathematical Modelling and Numerical Optimisation*, **8,** 197-215.
NAKAMOTO, S., 2008. Bitcoin: A peer-to-peer electronic cash system. *Decentralized Business Review* **,**
21260.
NAKANO, M., TAKAHASHI, A. and TAKAHASHI, S., 2018. Bitcoin technical trading with artificial
neural network. *Physica A: Statistical Mechanics and its Applications*, **510,** 587-609.
O'CONNOR, B., BALASUBRAMANYAN, R., ROUTLEDGE, B.R. and SMITH, N.A., 2010. From
tweets to polls: Linking text sentiment to public opinion time series. *Fourth international AAAI*
*conference on weblogs and social media* .
P, S. and M, P.B., 2021. Diagnosis of lung cancer using hybrid deep neural network with adaptive sine
cosine crow search algorithm. *Journal of Computational Science*, **53,** 101374.
PAK, A. and PAROUBEK, P., 2010. Twitter as a corpus for sentiment analysis and opinion mining. *LREc*,
1320-1326.
PANGER, G.T., 2017. Emotion in social media. University of California, Berkeley.
PANT, D.R., NEUPANE, P., POUDEL, A., POKHREL, A.K. and LAMA, B.K., 2018. Recurrent Neural
Network Based Bitcoin Price Prediction by Twitter Sentiment Analysis. *2018 IEEE 3rd*
*International Conference on Computing, Communication and Security (ICCCS)*, 128-132.
PETTEY, C., 2010. Gartner Says Majority of Consumers Rely on Social Networks to Guide Purchase
Decisions. *Online im Internet: URL: http://www.gartner.com/it/page.jsp*.
PORTER, M.F., 1980. An algorithm for suffix stripping. *Program* .
—, 2001. Snowball: A language for stemming algorithms. In.
RADITYO, A., MUNAJAT, Q. and BUDI, I., 2017. Prediction of Bitcoin exchange rate to American dollar
using artificial neural network methods. *2017 International Conference on Advanced Computer*
*Science and Information Systems (ICACSIS)*, 433-438.
RAMADHAN, N.G., TANJUNG, N.A.F. and ADHINATA, F.D., 2021. Implementation of LSTM-RNN
for Bitcoin Prediction. *Indonesia Journal on Computing (Indo-JC)*, **6,** 17-24.
RANI, R. and LOBIYAL, D.K., 2018. Automatic Construction of Generic Stop Words List for Hindi Text.
*Procedia Computer Science*, **132,** 362-370.
ŞAHIN, D.Ö., KURAL, O.E., AKLEYLEK, S. and KıLıC, E., 2021. A novel permission-based Android
malware detection system using feature selection based on linear regression. *Neural Computing*
*and Applications* .
SALTON, G. and BUCKLEY, C., 1988. Term-weighting approaches in automatic text retrieval.
*Information Processing & Management*, **24,** 513-523.
SHAH, D. and ZHANG, K., 2014. Bayesian regression and Bitcoin. *2014 52nd Annual Allerton Conference*
*on Communication, Control, and Computing (Allerton)*, 409-414.
SONI, N., SHARMA, E.K. and KAPOOR, A., 2021. Hybrid meta-heuristic algorithm based deep neural
network for face recognition. *Journal of Computational Science*, **51,** 101352.
STENQVIST, E. and LöNNö, J., 2017. Predicting Bitcoin price fluctuation with Twitter sentiment analysis.
In.
SUL, H., DENNIS, A.R. and YUAN, L.I., 2014. Trading on Twitter: The Financial Information Content of
Emotion in Social Media. *2014 47th Hawaii International Conference on System Sciences*, 806-815.
TETLOCK, P.C., 2007. Giving content to investor sentiment: The role of media in the stock market. *The*
*Journal of finance*, **62,** 1139-1168.
VERGARA, J.R. and ESTEVEZ, P.A., 2014. A review of feature selection methods based on mutual
information. *Neural Computing and Applications*, **24,** 175-186.
XU, J. and CROFT, W.B., 1998. Corpus-based stemming using cooccurrence of word variants. *ACM*
*Transactions on Information Systems (TOIS)*, **16,** 61-81.
ZHU, X., WANG, H., XU, L. and LI, H., 2008. Predicting stock index increments by neural networks: The
role of trading volume under different horizons. *Expert Systems with Applications*, **34,** 3043-3054.
ZUIDERWIJK, A., CHEN, Y.-C. and SALEM, F., 2021. Implications of the use of artificial intelligence
in public governance: A systematic literature review and a research agenda. *Government*
*Information Quarterly*, **38,** 101577.
## Power-Aware Allocation of Graph Jobs in Geo-Distributed Cloud Networks
#### Seyyedali Hosseinalipour, Student Member, IEEE, Anuj Nayak, and Huaiyu Dai, Fellow, IEEE
**Abstract—In the era of big-data, the jobs submitted to the clouds exhibit complicated structures represented by graphs, where the nodes**
denote the sub-tasks each of which can be accommodated at a slot in a server, while the edges indicate the communication constraints
among the sub-tasks. We develop a framework for efficient allocation of graph jobs in geo-distributed cloud networks (GDCNs), explicitly
considering the power consumption of the datacenters (DCs). We address the following two challenges arising in graph job allocation: i)
the allocation problem belongs to NP-hard nonlinear integer programming; ii) the allocation requires solving the NP-complete sub-graph
isomorphism problem, which is particularly cumbersome in large-scale GDCNs. We develop a suite of efficient solutions for GDCNs of
various scales. For small-scale GDCNs, we propose an analytical approach based on convex programming. For medium-scale GDCNs,
we develop a distributed allocation algorithm exploiting the processing power of DCs in parallel. Afterward, we provide a novel
low-complexity (decentralized) sub-graph extraction method, based on which we introduce cloud crawlers aiming to extract allocations of
good potentials for large-scale GDCNs. Given these suggested strategies, we further investigate strategy selection under both fixed and
adaptive DC pricing schemes, and propose an online learning algorithm for each.
**Index Terms—Big-data, graph jobs, geo-distributed cloud networks, datacenter power consumption, job allocation, integer programming,**
convex optimization, online learning.
#### !
1 INTRODUCTION
Recently, the demand for big-data processing has promoted the popularity of cloud computing platforms due
to their reliability, scalability and security [1], [2], [3], [4], [5],
[6]. Handling big-data applications requires unique system-level design since these applications, more often than not, cannot
be processed via a single PC, server, or even a datacenter
(DC). To this end, modern parallel and distributed processing
systems (e.g., Apache/Twitter Storm [7], GraphLab [8], IBM
InfoSphere [9], MapReduce [10]) are developed. In this work,
we propose a framework for allocating big-data applications
represented via graph jobs in geo-distributed cloud networks
(GDCNs), explicitly considering the power consumption of
the DCs. In the graph job model, each node denotes a subtask of a big-data application while the edges impose the
required communication constraints among the sub-tasks.
One of the common examples of processing graph jobs is
receiving data from Twitter and counting the number of times
a hashtag is mentioned, to keep an ordered list of the most
commonly mentioned hashtags. Each step of the process is
carried out in a so-called processing element, and it is these
elements that enforce the separation of each logical step of the
process (e.g., receiving updates, extracting hashtags, counting
hashtags, ordering the hashtag count list) and allow the execution
of the process on a distributed platform [11]. In this context,
a graph job is formed by viewing each element as a node and
data exchange requirement between the elements as edges.
As the sizes of the problem and graph jobs increase, one can
imagine that a coalition of multiple DCs achieved through
GDCNs is required for the execution of the graph jobs.
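The hashtag-counting pipeline above can be written down as a small graph job, together with a brute-force feasibility check over a toy three-datacenter network. All names and the toy topology here are illustrative assumptions, not the paper's model or algorithms; exhaustive enumeration like this is exactly what becomes intractable at scale, which is what motivates the sub-graph extraction methods discussed later.

```python
from itertools import product

# The hashtag-counting pipeline as a graph job: nodes are sub-tasks,
# edges are data-exchange (communication) requirements between them.
job_nodes = ["receive", "extract", "count", "order"]
job_edges = [("receive", "extract"), ("extract", "count"), ("count", "order")]

# A toy GDCN: datacenters with free slots, plus links between them.
dc_slots = {"DC1": 2, "DC2": 1, "DC3": 1}
dc_links = {("DC1", "DC2"), ("DC2", "DC3")}

def connected(a, b):
    # two sub-tasks can communicate if co-located or if their DCs share a link
    return a == b or (a, b) in dc_links or (b, a) in dc_links

def feasible_allocations():
    # brute force over all placements (fine for tiny instances; the search
    # space explodes with network size, hence the NP-completeness discussion)
    found = []
    for placement in product(dc_slots, repeat=len(job_nodes)):
        if any(placement.count(dc) > cap for dc, cap in dc_slots.items()):
            continue  # slot capacities violated
        assign = dict(zip(job_nodes, placement))
        if all(connected(assign[u], assign[v]) for u, v in job_edges):
            found.append(assign)
    return found

allocations = feasible_allocations()
```

For this chain-shaped job and topology, only two placements satisfy both the slot capacities and the communication constraints: the chain laid out from DC1 toward DC3, and its mirror image.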
**1.1** **Related Work**
There is a body of literature devoted to task and resource
allocation in contemporary cloud networks, e.g., [12], [13],
[14], [15], [16], [17], [18], [19], [20], [21], where the topology
of the graph job is not explicitly considered into their model.
In [12], the task placement and resource allocation plan
for embarrassingly parallel jobs, which are composed of
a set of independent tasks, is addressed to minimize the
job completion time. To this end, three algorithms named
TaPRA, TaPRA-fast, and OnTaPRA are proposed, which
significantly reduce the job execution time as compared to
the state-of-the-art algorithms. In [13], the multi-resource
allocation problem in cloud computing systems is addressed
through a mechanism called DRFH, where the resource
pool is constructed from a large number of heterogeneous
servers containing various number of slots. It is shown that
DRFH leads to much higher resource utilization with considerably shorter job completion times. In [14], the authors
develop an online job scheduling algorithm to distribute
incoming workloads across multiple DCs targeting energy
cost minimization with fairness consideration subject to job
delay requirements. They demonstrate that the solution of
their online algorithm, which is solely based on current
job queue lengths, server availability and electricity prices,
is close to the offline optimal performance with future
information. In [15], distribution of the incoming workload
among multiple DCs and adjustment of the service rates
of the cloud servers are addressed aiming to reduce the
power consumption cost. In [16], the problem of directing
the client requests to an appropriate DC efficiently and
sending back the response packets to the client through
one of the available links in the network is formulated as
a workload management optimization problem. To tackle
_•_ _Seyyedali Hosseinalipour, Anuj Nayak, and Huaiyu Dai are with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC, USA. E-mail: shossei3, aknayak, hdai@ncsu.edu._
the problem, the authors propose a distributed algorithm
inspired by the alternating direction method of multipliers.
In [17], a resource allocation scheme is proposed resulting
in efficient utilization of the resources while increasing the
revenue of the mobile cloud service providers. One of the
pioneer works addressing resource allocation in GDCNs
considering the power consumption state of the DCs is [18],
where a distributed algorithm, called DGLB, is proposed
for real-time geographical load balancing. A good survey
of the current state of the art is given in [19]. Also, there
is a body of literature utilizing swarm-based algorithms to
perform the job allocation in cloud networks, e.g., [20], [21].
None of the above works has considered allocation of big-data jobs composed of multiple sub-tasks requiring certain
communication constraints among their sub-tasks.
Allocation of big-data jobs represented by graph structures is a complicated process entailing more delicate analysis. Among the limited literature, references [22], [23], [24] are
most relevant, which focus on minimizing the cost incurred
by utilizing the links among the adjacent DCs while neglecting the power consumption and the status of the utilized
DCs. In [22], a heuristic algorithm is developed to match the
vertices of graph jobs to the idle slots of the cloud servers considering the cost of using the communication infrastructure
of the network to handle the data flows among the subtasks. Using a similar system model in [23], [24], the authors
developed randomized algorithms for the same purpose.
As compared to the heuristic approach of [22], the authors
of [23], [24] also demonstrate the optimality of their proposed
algorithms through a theoretical approach. However, the
algorithms used in these references are developed for a fixed
network cost configuration, i.e., the cost of job execution
using the same allocation strategy is fixed over
time. Also, as mentioned in [25], the randomized algorithms
proposed in [23], [24] suffer from long convergence time. In
summary, the system model and the algorithms proposed
in [22], [23], [24] suffer from the following three limitations.
i) The proposed algorithms are impractical in scenarios that
the job allocation needs to be performed with respect to a
time varying network cost configuration. ii) The proposed
methods are impractical for large-scale networks. This is
due to the fact that efficient handling of the NP-complete
sub-graph isomorphism problem, which is a prerequisite to
identify feasible allocations for graph jobs, is not directly
addressed in these works (see Section 5). iii) The proposed
system models do not capture the power consumption of
the utilized DCs. This is despite the fact that in GDCNs, the
execution cost is mainly determined by the real-time power
consumption of the DCs [26]. Hence, an applicable allocation
framework should be capable of fast allocation of incoming
graph jobs to the GDCNs considering the effect of allocation
on the current DCs’ power consumption state. Also, with the
rapid growth in the size of cloud networks, adaptability to
large-scale GDCNs is a must for such a framework. These
are the main motivations behind this work.
**1.2** **Contributions**
The main goal of this paper is to provide a framework for
graph job allocation in GDCNs with various scales. Our main
contributions can be summarized as follows:
1) We formulate the problem of graph job allocation in
GDCNs considering the incurred power consumption on
the cloud network.
2) We propose a centralized approach to solve the problem
suitable for small-scale cloud networks.
3) We design a distributed algorithm for allocation of graph
jobs in medium-scale GDCNs, using the DCs’ processing
power in parallel.
4) For large-scale GDCNs, given the huge size of the strategy
set, and extremely slow convergence of the distributed
algorithm, we introduce the idea of cloud crawling. In
particular, we propose a fast method to address the NP-complete sub-graph isomorphism problem, which is one of
the major challenges for graph job allocation in cloud
networks. In this regard, we propose a novel low-complexity
(decentralized) sub-graph isomorphism extraction algorithm
for a cloud crawler to identify “potentially good” strategies
for customers while traversing a GDCN.
5) For large-scale GDCNs, considering the suggested strategies of cloud crawlers, we find the best suggested strategies
for the customers under adaptive and fixed pricing of the
DCs in a distributed fashion. To this end, we model proxy
agents’ behavior in a GDCN, based on which we propose
two online learning algorithms inspired by the concept of
“regret” in the bandit problem [27], [28].
This paper is organized as follows. Section 2 includes the system model. Section 3 contains a sub-optimal approach
for graph job allocation in small-scale GDCNs. A distributed
graph job allocation mechanism for medium-scale GDCNs
is presented in Section 4. Cloud crawling along with online
learning algorithms for large-scale GDCNs are presented in
Section 5. Simulation results are given in Section 6. Finally,
Section 7 concludes the paper.
#### 2 SYSTEM MODEL
A GDCN comprises various DCs connected through communication links. Inside each DC, there is a set of fully-connected cloud servers, each consisting of multiple fully-connected slots. Without loss of generality, we assume
that all the cloud servers have the same number of slots.
Each slot corresponds to the same bundle of processing
resources which can be utilized independently. Since all
the slots belonging to the same DC are fully-connected,
we consider a DC as a collection of slots directly in our
study.[1] It is assumed that a DC provider (DCP) is in charge
of DC management. Abstracting each DC to a node and a communication link between two DCs as an edge, a GDCN with $n_d$ DCs can be represented as a graph $\mathcal{G}_D = (\mathcal{D}, \mathcal{E}_D)$, where $\mathcal{D} = \{d^1, \cdots, d^{n_d}\}$ denotes the set of nodes and $\mathcal{E}_D$ represents the set of edges. Henceforth, $\mathcal{G}_D$ is assumed to be _connected_; however, due to the geographical constraints, $\mathcal{G}_D$ may not be a complete graph.
Let $\mathcal{S}^i = \{S^i_1, \cdots, S^i_{|\mathcal{S}^i|}\}$ denote the set of slots belonging to DC $d^i$. The existence of a connection between two DCs leads to the communication capability between all of their slots. Consequently, two slots are called adjacent if and only if both belong to the same DC or there exists a link between their corresponding DCs. Let $\mathcal{E}_S$ denote the set of edges between the adjacent slots, where $(S^i_k, S^j_m) \in \mathcal{E}_S$ if and only if $i = j, \forall k \neq m$, or $(d^i, d^j) \in \mathcal{E}_D, \forall k, m$. We define the aggregated network graph as $\mathcal{G} = (\mathcal{V}_S, \mathcal{E}_S)$, where $\mathcal{V}_S = \cup_{i=1}^{n_d} \mathcal{S}^i$ and $|\mathcal{V}_S| = \sum_{i=1}^{n_d} |\mathcal{S}^i|$.

1. The number of cloud servers does not play a major role in our study except in the energy consumption models.
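As an illustration of this construction, the aggregated graph $\mathcal{G}$ can be built from the DC graph with a few lines of Python (a minimal sketch; the pair-of-integers slot naming and the function name are our own, not from the paper):

```python
from itertools import combinations

def aggregated_graph(dc_edges, slots_per_dc):
    """Build G = (V_S, E_S): slots are nodes, and two slots are adjacent
    iff they belong to the same DC or their DCs share an edge in E_D."""
    slots = [(i, k) for i, n in slots_per_dc.items() for k in range(n)]
    adj = {s: set() for s in slots}
    for u, w in combinations(slots, 2):
        same_dc = u[0] == w[0]
        dc_link = (u[0], w[0]) in dc_edges or (w[0], u[0]) in dc_edges
        if same_dc or dc_link:
            adj[u].add(w)
            adj[w].add(u)
    return adj

# Two DCs with two slots each, joined by a single link in E_D:
adj = aggregated_graph({(0, 1)}, {0: 2, 1: 2})
print(len(adj))  # |V_S| = 4; here every slot is adjacent to the other 3
```

With two connected DCs, all four slots end up mutually adjacent, matching the adjacency rule above.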
Let $\mathcal{J} = \{Gjob_1, Gjob_2, \cdots, Gjob_J\}$ denote the set of all possible types of the graph jobs in the system, each of which is considered as a graph $Gjob_j = (V_j, E_j)$. Each node of a graph job requires one slot from a DC to get executed. It is assumed that $V_j = \{v^1_j, \cdots, v^{n_j}_j\}$, and $\forall (m, n): 1 \le m \neq n \le n_j$, $(v^m_j, v^n_j) \in E_j$ if and only if the nodes $v^m_j$ and $v^n_j$ need to be executed using two adjacent slots of the GDCN. Similar to [22], [23], [24], we assume that allocation of all the nodes is required during the execution of the respective job.
The system model is depicted in Fig. 1. For the small- and medium-scale GDCNs, the GDCN is assumed
to be in charge of finding adequate allocations for the
incoming graph jobs from proxy agents (PAs) ([29], [30], [31]),
which act as trusted parties between the GDCN and the
customers. In these cases, each graph job is allocated through
either a centralized controller or a distributed algorithm
utilizing the communication infrastructure between the DCs
(see Section 4). For large-scale GDCNs, cloud crawlers are
introduced to explore the GDCN to provide a set of suggested
strategies for the PAs. Afterward, PAs allocate their graph
jobs with respect to the utility of the suggested strategies
(see Section 5). The following definitions are introduced to
facilitate our subsequent derivations.
**Definition 1.** _A feasible mapping between a $Gjob_j$ and the GDCN is defined as a mapping $f_j: V_j \mapsto \mathcal{V}_S$, which satisfies the communication constraints of the graph job. This implies that $\forall (m, n): 1 \le m \neq n \le |V_j|$, if $(v^m_j, v^n_j) \in E_j$, then $(f_j(v^m_j), f_j(v^n_j)) \in \mathcal{E}_S$. Let $\mathcal{F}_j = \{f^1_j, \cdots, f^{|\mathcal{F}_j|}_j\}$ denote the set of all feasible mappings for the $Gjob_j$._

**Definition 2.** _For a $Gjob_j$, a mapping vector associated with a feasible mapping $f^k_j \in \mathcal{F}_j$ is defined as a vector $\mathbf{M}_j|_{f^k_j} = [m^1_j|_{f^k_j}, \cdots, m^{n_d}_j|_{f^k_j}] \in (\mathbb{Z}^+ \cup \{0\})^{n_d}$, where $m^i_j|_{f^k_j}$ denotes the number of used slots from DC $d^i$. Mathematically, $m^i_j|_{f^k_j} = \sum_{l=1}^{|V_j|} \mathbf{1}_{\{f^k_j(v^l_j) \in \mathcal{S}^i\}}$, where $\mathbf{1}_{\{.\}}$ represents the indicator function. Let $\mathcal{M}_j = \{\mathbf{M}_j|_{f^1_j}, \cdots, \mathbf{M}_j|_{f^{|\mathcal{F}_j|}_j}\}$ denote the set of all mapping vectors for the $Gjob_j$._
Finding a feasible allocation/mapping between a graph
job and a GDCN is similar to the sub-graph isomorphism
_problem in graph theory [32]. Some examples of feasible_
allocations for a graph job with three nodes considering a
GDCN with four DCs each consisting of four slots is depicted
in Fig. 2.
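For toy instances, the set $\mathcal{F}_j$ of Definition 1 can be enumerated by brute force over injective node-to-slot assignments (illustrative only; this scales factorially, which is why a dedicated low-complexity approach is developed later for large-scale GDCNs):

```python
from itertools import permutations

def feasible_mappings(job_edges, n_nodes, slot_adj):
    """Enumerate injective mappings of job nodes onto slots such that every
    job edge lands on a pair of adjacent slots (Definition 1)."""
    found = []
    for cand in permutations(list(slot_adj), n_nodes):
        if all(cand[m] in slot_adj[cand[n]] for m, n in job_edges):
            found.append(cand)
    return found

# Triangle job on 4 mutually adjacent slots: every ordered triple works.
slot_adj = {s: {t for t in range(4) if t != s} for s in range(4)}
maps = feasible_mappings([(0, 1), (1, 2), (0, 2)], 3, slot_adj)
print(len(maps))  # P(4, 3) = 24 feasible mappings
```

Each returned tuple corresponds to one feasible mapping $f_j$; collapsing the mappings into per-DC slot counts yields the mapping vectors of Definition 2.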
Our aim is to allocate big-data driven applications, e.g.,
computation intensive big-data applications [22] or data
streams [23], [24], to GDCNs. Due to the nature of these
applications, the jobs usually stay in the system so long as
they are not terminated. This work can be considered as a
real-time allocation of graph jobs to the system, where we
find the best currently possible assignment considering the
current network status. Hence, we deliberately omit the time
index from the following discussions. Inspired by [26], [33],
we model the power consumption upon utilizing $s$ slots of $d^i$, comprising $N^i$ cloud servers each with idle power consumption $P^i_{\mathrm{idle}}$, as:

$$\eta^i N^i \left( \sigma^i \left( \frac{s}{|\mathcal{S}^i|} \right)^{\alpha^i} + P^i_{\mathrm{idle}} \right), \quad \alpha^i \ge 2. \qquad (1)$$

In this model, $\eta^i$ is the so-called Power Usage Effectiveness, which is the ratio between the total power usage (including cooling, lights, UPS, etc.) and the power consumed by the IT-equipment of a DC, and $\sigma^i$ is chosen in such a way that $\sigma^i + P^i_{\mathrm{idle}}$ determines the peak power consumption of a cloud server $P^i_{\max}$ inside $d^i$. Also, $\alpha^i$ is a DC-related constant. Subsequently, we define the incurred cost of executing a graph job with type $j$ allocated according to the feasible mapping vector $\mathbf{M}_j = [m^1_j, \cdots, m^{n_d}_j]$ as follows:

$$\sum_{i=1}^{n_d} \xi^i \eta^i N^i \left( \sigma^i \left( \frac{L^i + m^i_j}{|\mathcal{S}^i|} \right)^{\alpha^i} + P^i_{\mathrm{idle}} \right) + \sum_{i=1}^{n_d} \xi^i \nu^i m^i_j, \qquad (2)$$
where $L^i$ is the original load of DC $d^i$, $\nu^i$ indicates the I/O incurred power of using the communication infrastructure of DC $d^i$ per slot, and $\xi^i$ is the ratio between the cost and power consumption, which is dependent on the DC's location and infrastructure design. The I/O cost is considered to be proportional to the number of used slots since the data generated at each DC is correlated with that number, and that data should be exchanged using the I/O infrastructure either among adjacent DCs or between DCs and the users. Note that Eq. (1) and Eq. (2) do not capture the order of the slots used in each DC and assume that the utilization of a slot from every server in a DC requires the same power consumption. However, in reality some servers might be in the idle mode and need more power to boot and execute the process.
Since each DC may contain tens of servers, considering the
status of each server increases the dimension of the problem
significantly, which makes the problem intractable even in
small-scale GDCNs. Also, obtaining the status of all the
servers in all the DCs is challenging. Addressing these issues
is out of the scope of this paper and left as a future work. In
this paper, we assume that after allocation of the graph jobs
to the GDCN and sending the information to the respective
DCs, each DC manager makes an internal decision about the
effective usage of the servers’ slots considering the status of
the servers.
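Under these definitions, the cost of Eq. (2) is straightforward to evaluate; the sketch below uses made-up parameter values for a single DC:

```python
def execution_cost(m, L, S, N, eta, sigma, alpha, P_idle, xi, nu):
    """Cost of Eq. (2): per-DC power term plus the per-slot I/O term."""
    total = 0.0
    for i in range(len(m)):
        util = (L[i] + m[i]) / S[i]
        total += xi[i] * eta[i] * N[i] * (sigma[i] * util ** alpha[i] + P_idle[i])
        total += xi[i] * nu[i] * m[i]
    return total

# One DC with 4 slots and a quadratic power curve (alpha = 2):
c = execution_cost(m=[2], L=[0], S=[4], N=[1], eta=[1.0], sigma=[10.0],
                   alpha=[2], P_idle=[1.0], xi=[1.0], nu=[0.5])
print(c)  # 1*1*(10*(2/4)**2 + 1) + 0.5*2 = 4.5
```

The superlinear exponent $\alpha^i \ge 2$ is what makes piling load onto an already busy DC disproportionately expensive, which drives the load balancing discussed next.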
**2.1** **Problem Formulation**
Our goal is to find an allocation for each arriving graph job
to minimize the total incurred cost on the network. Due to
the inherent relation between the cost and loads of the DCs,
minimizing the cost is coupled with balancing the loads of
the DCs. In a GDCN, let $N_j$ denote the number of $Gjob_j \in \mathcal{J}$ in the system demanded for execution. Let $\mathbf{M}_j$ denote the matrix of mapping vectors of these graph jobs, defined as follows:

$$\mathbf{M}_j = \left[ \mathbf{M}_{j,(1)}, \mathbf{M}_{j,(2)}, \cdots, \mathbf{M}_{j,(N_j)} \right], \quad \forall j \in \{1, \cdots, J\},$$

$$\mathbf{M}_{j,(i)} = \left[ m^1_{j,(i)}, m^2_{j,(i)}, \cdots, m^{n_d}_{j,(i)} \right]^\top, \quad \forall i \in \{1, \cdots, N_j\}.$$
[Figure: a large-scale GDCN with numbered DCs; panels mark the centralized controller, the proxy agents, free and busy slots, the number of datacenter slots, the DC indices, and a cloud crawler with its suggested strategies, e.g., {[5, 3], [6, 2], [10, 2]}.]

Fig. 1: System model for graph job allocation in GDCNs with various scales.
We formulate the optimal graph job allocation as the following optimization problem ($\mathcal{P}_1$):

$$[\mathbf{M}^*_1, \mathbf{M}^*_2, \cdots, \mathbf{M}^*_J] = \arg\min_{[\mathbf{M}_1, \mathbf{M}_2, \cdots, \mathbf{M}_J]} \sum_{i=1}^{n_d} \xi^i \eta^i N^i \left( \sigma^i \left( \frac{L^i + \sum_{j=1}^{J} \sum_{k=1}^{N_j} m^i_{j,(k)}}{|\mathcal{S}^i|} \right)^{\alpha^i} + P^i_{\mathrm{idle}} \right) + \sum_{i=1}^{n_d} \sum_{j=1}^{J} \sum_{k=1}^{N_j} \xi^i \nu^i m^i_{j,(k)} \qquad (3)$$

s.t.

$$\sum_{j=1}^{J} \sum_{k=1}^{N_j} m^i_{j,(k)} \le |\mathcal{S}^i| - L^i, \quad \forall i \in \{1, \cdots, n_d\}, \qquad (4)$$

$$\mathbf{M}_{j,(i)} \in \mathcal{M}_j, \quad \forall j \in \{1, \cdots, J\}, \ \forall i \in \{1, \cdots, N_j\}. \qquad (5)$$

Fig. 2: Examples of graph job allocation. The green (blue) color denotes busy (idle) slots. The red color indicates the utilized slots upon allocation.

Table 1: Major notations.
In $\mathcal{P}_1$, the objective function is the total incurred cost of execution, the first condition given by (4) ensures the stability of the DCs, and the second constraint given by (5) guarantees the feasibility of the assignment. There are two main difficulties in obtaining the solution: i) Identifying the feasible mappings ($\mathcal{M}_j$-s) requires solving the sub-graph isomorphism problem between the graph jobs' topology and the aggregated network graph, which is categorized as NP-complete [32]. Hence, we only assume the knowledge of $\mathcal{M}_j$-s in the small- and medium-scale GDCNs. In the large-scale GDCNs, we propose a low-complexity decentralized approach to extract isomorphic sub-graphs to a graph job and implement it in our proposed cloud crawlers. ii) $\mathcal{P}_1$ is a nonlinear integer programming problem, which is known to be NP-hard. In small- and medium-scale GDCNs, we tackle this problem considering a convex relaxed version of it. However, for large-scale GDCNs, we find a “potentially good” subset of feasible mappings as the cloud crawlers traverse the network. Afterward, the strategy selection is carried out using the computing power of the PAs in a decentralized fashion.
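To see the cost/load-balancing coupling of $\mathcal{P}_1$ on a toy case, the sketch below brute-forces the joint choice of mapping vectors for two jobs over hand-picked feasible sets, with a simplified quadratic stand-in for the power cost of Eq. (3); all numbers are illustrative, and this exhaustive search is exactly what becomes intractable at scale:

```python
from itertools import product

S, L = [4, 8], [1, 0]   # slot capacities |S^i| and initial loads L^i of 2 DCs
# Hand-picked feasible mapping vectors for two jobs (one of each type):
M = [[(2, 0), (1, 1), (0, 2)], [(1, 0), (0, 1)]]

def total_cost(alloc):
    """Simplified quadratic stand-in for Eq. (3), with Eq. (4) as a guard."""
    used = [L[i] + sum(m[i] for m in alloc) for i in range(len(S))]
    if any(u > s for u, s in zip(used, S)):
        return float("inf")  # capacity constraint of Eq. (4) violated
    return sum((u / s) ** 2 for u, s in zip(used, S))

best = min(product(*M), key=total_cost)
print(best)  # ((0, 2), (0, 1)): both jobs favor the larger, initially idle DC
```

Even in this tiny instance, the optimum shifts both jobs toward the less utilized DC, illustrating why minimizing cost is coupled with balancing the loads.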
**Remark 1.** _Considering the possibility of link outages and security preferences, the users may prefer utilizing fewer DCs during the job execution. Since these situations are more likely to happen in large-scale GDCNs, we incorporate this tendency into the utility function of the users, i.e., Eq. (33) and Eq. (40), described in Section 5._
#### 3 GRAPH JOB ALLOCATION IN SMALL-SCALE GDCNS: CENTRALIZED APPROACH
Solving $\mathcal{P}_1$ requires solving an integer programming problem in $n_d \sum_{j=1}^{J} N_j$ dimensions. For a small GDCN with three types of graph jobs ($J = 3$), 5 DCs ($n_d = 5$), and 100 graph jobs of each type in the system, the dimension of the solution becomes 1500, rendering the computations impractical. To alleviate this issue, we solve $\mathcal{P}_1$ in a sequential manner for the available graph jobs in the system. In our approach, at each stage, the best allocation is obtained for one graph job while neglecting the presence of the rest. Afterward, the graph job is allocated to the GDCN and the loads of the utilized DCs are updated. As a result, at each stage, the dimension of the solution is $n_d$ (5 in the above example). For a $Gjob_j \in \mathcal{J}$, let
|Symbol|Definition|
|---|---|
|$\mathcal{G}_D$|The GDCN graph|
|$\mathcal{D}$|Set of DCs in the GDCN|
|$d^i$|The DC with index $i$|
|$n_d$|Number of DCs in the GDCN|
|$\mathcal{S}^i$|Set of slots of DC $d^i$|
|$\mathcal{G}$|Aggregated graph of the GDCN|
|$\mathcal{V}_S$|Set of slots of the entire GDCN|
|$\mathcal{E}_D$|Set of edges between adjacent DCs of a GDCN|
|$\mathcal{E}_S$|Set of edges between adjacent slots of DCs in a GDCN|
|$\mathcal{J}$|Set of graph jobs in the system|
|$J$|Number of different types of jobs in the system|
|$Gjob_j$|Associated graph to the graph job with type $j$|
|$N_j$|Number of jobs with type $j$ in the system|
|$V_j$|Set of nodes of the graph job with type $j$|
|$E_j$|Set of edges of the graph job with type $j$|
|$L^i$|Load of DC $d^i$|
|$N^i$|Number of cloud servers in DC $d^i$|
|$\mathcal{M}_j$|Set of all the mapping vectors for $Gjob_j$|
|$\mathcal{P}$|Set of PAs in the system|
|$\mathcal{S}_{A_j}$|Set of cloud crawler's suggested strategies for $Gjob_j$|
|$p_{j,(m)}$|Probability of selection of strategy $m \in \mathcal{S}_{A_j}$|
the available graph jobs be indexed from 1 to Nj according
to their execution order, where preferred customers can be
prioritized in practice. For a graph job with type j with index
$k$, we reformulate $\mathcal{P}_1$ as ($\mathcal{P}_2$):

$$\mathbf{M}^*_{j,(k)} = \arg\min_{\mathbf{M}_{j,(k)}} \sum_{i=1}^{n_d} \xi^i \eta^i N^i \left( \sigma^i \left( \frac{L^i + m^i_{j,(k)}}{|\mathcal{S}^i|} \right)^{\alpha^i} + P^i_{\mathrm{idle}} \right) + \sum_{i=1}^{n_d} \xi^i \nu^i m^i_{j,(k)} \qquad (6)$$

s.t.

$$m^i_{j,(k)} \le |\mathcal{S}^i| - L^i, \quad \forall i \in \{1, 2, \cdots, n_d\}, \qquad (7)$$

$$\mathbf{M}_{j,(k)} \in \mathcal{M}_j, \qquad (8)$$

where $L^i$ denotes the updated load of DC $d^i$ after the previous graph job allocation. The last constraint in $\mathcal{P}_2$ forces the solution to be discrete, making the derivation of a tractable solution impossible. In the following, we relax this constraint and provide a tractable method to derive the solution in the set of feasible points. For the moment, we consider $\mathbf{M}_{j,(k)} \in (\mathbb{R}^+)^{n_d}, \forall j, k$. We define $\mathcal{P}_3$ as the following optimization problem with the same objective function as $\mathcal{P}_2$, in which the constraint given by Eq. (8) is relaxed and represented as two constraints ($\mathcal{P}_3$):

$$\mathbf{M}^*_{j,(k)} = \arg\min_{\mathbf{M}_{j,(k)}} \sum_{i=1}^{n_d} \xi^i \eta^i N^i \left( \sigma^i \left( \frac{L^i + m^i_{j,(k)}}{|\mathcal{S}^i|} \right)^{\alpha^i} + P^i_{\mathrm{idle}} \right) + \sum_{i=1}^{n_d} \xi^i \nu^i m^i_{j,(k)} \qquad (9)$$

s.t. (7),

$$\sum_{i=1}^{n_d} m^i_{j,(k)} = |V_j|, \qquad (10)$$

$$m^i_{j,(k)} \ge 0, \quad \forall i \in \{1, 2, \cdots, n_d\}, \qquad (11)$$

where Eq. (10) ensures the assignment of all the nodes of the graph job to the GDCN, and Eq. (11) guarantees the practicality of the solution. It is easy to verify that $\mathcal{P}_3$ is a convex optimization problem. We use the Lagrangian _dual decomposition_ method [34] to solve this problem. Let $\boldsymbol{\lambda} = [\lambda^1, \lambda^2, \cdots, \lambda^{n_d}]$, $\gamma$, and $\boldsymbol{\Lambda} = [\Lambda^1, \Lambda^2, \cdots, \Lambda^{n_d}]$ denote the Lagrangian multipliers associated with the first, the second, and the third constraint, respectively. The Lagrangian function associated with $\mathcal{P}_3$ is then given by:

$$L(\mathbf{M}_{j,(k)}, \boldsymbol{\lambda}, \gamma, \boldsymbol{\Lambda}) = \sum_{i=1}^{n_d} \xi^i \eta^i N^i \left( \sigma^i \left( \frac{L^i + m^i_{j,(k)}}{|\mathcal{S}^i|} \right)^{\alpha^i} + P^i_{\mathrm{idle}} \right) + \sum_{i=1}^{n_d} \xi^i \nu^i m^i_{j,(k)} + \sum_{i=1}^{n_d} \lambda^i \left( m^i_{j,(k)} - |\mathcal{S}^i| + L^i \right) + \gamma \left( \sum_{i=1}^{n_d} m^i_{j,(k)} - |V_j| \right) - \sum_{i=1}^{n_d} \Lambda^i m^i_{j,(k)}. \qquad (12)$$

The corresponding dual function of $\mathcal{P}_3$ is given by:

$$D(\boldsymbol{\lambda}, \gamma, \boldsymbol{\Lambda}) = \min_{\mathbf{M}_{j,(k)}} L(\mathbf{M}_{j,(k)}, \boldsymbol{\lambda}, \gamma, \boldsymbol{\Lambda}). \qquad (13)$$

Finally, the dual problem can be written as ($\mathcal{P}_4$):

$$\max_{\boldsymbol{\lambda}, \boldsymbol{\Lambda} \in (\mathbb{R}^+)^{n_d}, \, \gamma \in \mathbb{R}} D(\boldsymbol{\lambda}, \gamma, \boldsymbol{\Lambda}). \qquad (14)$$

$\mathcal{P}_3$ is a convex optimization problem with differentiable affine constraints; hence, it satisfies the constraint qualifications, implying a zero duality gap. As a result, the solution of $\mathcal{P}_3$ coincides with the solution of $\mathcal{P}_4$. It can be verified that the minimum of the Lagrangian function occurs at the following point:

$$m^{i*}_{j,(k)} = \left( \frac{\Lambda^i - \lambda^i - \gamma - \xi^i \nu^i}{\xi^i \eta^i N^i \sigma^i \alpha^i / (|\mathcal{S}^i|)^{\alpha^i}} \right)^{\frac{1}{\alpha^i - 1}} - L^i, \quad \forall i \in \{1, \cdots, n_d\}. \qquad (15)$$

By replacing this in the Lagrangian function, the dual function is given by $D(\boldsymbol{\lambda}, \gamma, \boldsymbol{\Lambda}) = L(\mathbf{M}^*_{j,(k)}, \boldsymbol{\lambda}, \gamma, \boldsymbol{\Lambda})$, where $\mathbf{M}^*_{j,(k)} = [m^{1*}_{j,(k)}, m^{2*}_{j,(k)}, \cdots, m^{n_d *}_{j,(k)}]^\top$. The optimal Lagrangian multipliers can be obtained by solving the dual problem given by:

$$\nabla D(\boldsymbol{\lambda}, \gamma, \boldsymbol{\Lambda})\big|_{(\boldsymbol{\lambda}^*, \gamma^*, \boldsymbol{\Lambda}^*)} = 0. \qquad (16)$$

Given the solution of Eq. (16), the optimal allocation in $(\mathbb{R}^+)^{n_d}$ is given by $\mathbf{M}^*_{j,(k)}|_{(\boldsymbol{\lambda}^*, \gamma^*, \boldsymbol{\Lambda}^*)}$. The solutions of Eq. (16) can be derived via the iterative gradient ascent algorithm [34]. Let $\widetilde{\mathbf{M}}^*_{j,(k)} = [\widetilde{m}^{1*}_{j,(k)}, \cdots, \widetilde{m}^{n_d *}_{j,(k)}]^\top$ denote the derived solution in the continuous space; we obtain the solution of $\mathcal{P}_2$ by solving the following weighted mean-square problem:[2]

$$\mathbf{M}^*_{j,(k)} = \arg\min_{\mathbf{M}_{j,(k)} \in \mathcal{M}_j, \ \text{s.t. (7)}} \sum_{i=1}^{n_d} w_i \left( m^i_{j,(k)} - \widetilde{m}^{i*}_{j,(k)} \right)^2, \qquad (17)$$

where the $w_i$-s are design parameters, which can be tuned to impose a certain tendency toward utilizing specific DCs.

So far, to derive the above solution, it is necessary to have a powerful centralized processor with global knowledge about the state of all the DCs. This is due to the inherent updating mechanism of the gradient ascent method [34], in which the iterative update of each Lagrangian multiplier requires global knowledge of the current values of the other Lagrangian multipliers and the DCs' loads. Obtaining this knowledge may not be feasible for a given GDCN with more than a few DCs. Moreover, multiple powerful backup processors may be needed to avoid the interruption of the allocation process in situations such as overheating of the centralized processor. In the following section, we design a distributed algorithm using the processing power of the DCs in parallel to resolve the above concerns.

2. Instead of solving the mean-square problem, the k-d tree data structure [35] can be used to find the closest feasible allocation in average complexity of $O(\log(|\mathcal{M}_j|))$, where $|\mathcal{M}_j|$ is the number of feasible allocations.

#### 4 GRAPH JOB ALLOCATION IN MEDIUM-SCALE GDCNS: DECENTRALIZED APPROACH WITH DCS IN CHARGE OF JOB ALLOCATION

The described dual problem in Eq. (14), given the result of Eq. (15), can be written as follows:

$$\max_{\lambda^i \in \mathbb{R}^+, \, \gamma \in \mathbb{R}, \, \Lambda^i \in \mathbb{R}^+} \sum_{i=1}^{n_d} D^i(\lambda^i, \gamma, \Lambda^i), \qquad (18)$$
where

$$D^i(\lambda^i, \gamma, \Lambda^i) = \xi^i \eta^i N^i \left( \sigma^i \left( \frac{L^i + m^{i*}_{j,(k)}}{|\mathcal{S}^i|} \right)^{\alpha^i} + P^i_{\mathrm{idle}} \right) + \xi^i \nu^i m^{i*}_{j,(k)} + \lambda^i \left( m^{i*}_{j,(k)} - |\mathcal{S}^i| + L^i \right) + \gamma \left( m^{i*}_{j,(k)} - |V_j|/n_d \right) - \Lambda^i m^{i*}_{j,(k)}, \quad \forall i \in \{1, \cdots, n_d\}. \qquad (19)$$

**Algorithm 1: CDGA: Consensus-based distributed graph job allocation**

In Eq. (18), each term can be associated with a DC. For $d^i$, there are two private (local) variables $\lambda^i$, $\Lambda^i$ and a public (global) variable $\gamma$, which is identical for all the DCs. Due to the existence of this public variable, the objective function cannot be directly written as a sum of separable functions. In the following, we propose a distributed algorithm deploying local exchange of information among adjacent DCs to obtain a unified value for the public variable across the network.
**4.1** **Consensus-based Graph Job Allocation**
We propose the consensus-based distributed graph job allocation (CDGA) algorithm, consisting of two steps to find the solution of Eq. (18): i) updating the local variables at each DC, ii) updating the global variable via forming a consensus among DCs. We consider each term of Eq. (18) as a (hypothetically) separate term and rewrite the problem as a summation of separable functions, with $\gamma$ replaced by $\gamma^i$ in $D^i(\cdot, \cdot, \cdot)$:

$$\max_{\lambda^i \in \mathbb{R}^+, \, \gamma^i \in \mathbb{R}, \, \Lambda^i \in \mathbb{R}^+} \sum_{i=1}^{n_d} D^i(\lambda^i, \gamma^i, \Lambda^i). \qquad (20)$$

At each iteration of the CDGA algorithm, each DC first derives the value of the following variables locally using the gradient ascent method:

$$\lambda^i(k+1) = \lambda^i(k) + c_\lambda \left( \nabla_{\lambda^i} D^i(\lambda^i(k), \gamma^i(k), \Lambda^i(k)) \right),$$
$$\gamma'^i(k+1) = \gamma^i(k) + c_\gamma \left( \nabla_{\gamma^i} D^i(\lambda^i(k), \gamma^i(k), \Lambda^i(k)) \right), \qquad (21)$$
$$\Lambda^i(k+1) = \Lambda^i(k) + c_\Lambda \left( \nabla_{\Lambda^i} D^i(\lambda^i(k), \gamma^i(k), \Lambda^i(k)) \right),$$

where the $c$-s are the corresponding step-sizes and $\gamma'^i$ is a local variable. Afterward, the local copies of the global variable (the $\gamma^i$-s) are derived by employing the consensus-based gradient ascent method [36]:

$$\gamma^i(k+1) = \sum_{j=1}^{n_d} \left[ \mathbf{W}^\Phi \right]_{ij} \gamma'^j(k), \qquad (22)$$
**input :** Convergence criterion $0 < \upsilon \ll 1$, maximum number of iterations $K$.
**1** At each DC $d^i \in \mathcal{D}$, choose an arbitrary initial value for $\lambda^i(1), \gamma^i(1), \Lambda^i(1)$.
**2 for k = 1 to K do**
**3** At each DC $d^i \in \mathcal{D}$, derive the values of $\lambda^i, \gamma'^i, \Lambda^i$ for the next iteration $(k+1)$ using Eq. (21).
**4** At each DC $d^i \in \mathcal{D}$, update the value of $\gamma^i$ using Eq. (22).
**5** **if** $|\gamma^i(k+1) - \gamma^i(k)| \le \upsilon$ and $|\Lambda^i(k+1) - \Lambda^i(k)| \le \upsilon$ and $|\lambda^i(k+1) - \lambda^i(k)| \le \upsilon$ and $|\gamma^i(k+1) - \gamma^j(k)| \le \upsilon, 1 \le i \neq j \le n_d$ **then**
**6** Go to line 7.
**7 Derive the convex relaxed solution described in Eq. (15).**
**8 Derive the allocation using Eq. (17).**
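The consensus step of line 4 (Eq. (22)) can be sketched in isolation as follows; the 3-DC path graph, $\epsilon = 0.25$, and $\Phi = 50$ are illustrative choices, not values from the paper. The local copies $\gamma^i$ approach the network-wide average of the $\gamma'$ values:

```python
def laplacian(adj, n):
    """Graph Laplacian L(G_D) of the DC graph, as a list of lists."""
    Lp = [[0.0] * n for _ in range(n)]
    for i in range(n):
        Lp[i][i] = float(len(adj[i]))
        for j in adj[i]:
            Lp[i][j] = -1.0
    return Lp

def consensus(gamma_prime, adj, eps=0.25, Phi=50):
    """Apply Eq. (22): gamma^i <- sum_j [W^Phi]_ij gamma'^j, W = I - eps*L."""
    n = len(gamma_prime)
    Lp = laplacian(adj, n)
    W = [[(1.0 if i == j else 0.0) - eps * Lp[i][j] for j in range(n)]
         for i in range(n)]
    x = list(gamma_prime)
    for _ in range(Phi):  # Phi local averaging rounds among adjacent DCs
        x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x

# Path graph of 3 DCs; the local copies converge toward the average (3.0):
g = consensus([0.0, 3.0, 6.0], {0: [1], 1: [0, 2], 2: [1]})
print([round(v, 4) for v in g])  # [3.0, 3.0, 3.0]
```

Each round only mixes a DC's value with those of its neighbors, which is what makes the update implementable with purely local communication.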
#### 5 GRAPH JOB ALLOCATION IN LARGE-SCALE GDCNS: DECENTRALIZED APPROACH USING CLOUD CRAWLING AND PAS’ COMPUTING RESOURCES
Large-scale GDCNs consist of an enormous number of PAs
and DCs. This fact imposes three challenges for graph job
allocation: i) The CDGA algorithm developed above becomes
infeasible. In particular, excessive computational burden
will be incurred on the DCs due to the large number of
arriving jobs. Also, CDGA in large-scale GDCNs will incur
a long delay (e.g., a GDCN with 100 DCs involves 300
Lagrangian multipliers and requires hundreds of iterations
for convergence), which may render the final solution less
effective for the current state of the network. Moreover,
continuous communication between the DCs imposes a
considerable congestion over the communication links. ii)
So far, the inherent assumption in our study is a known
set of feasible allocations for the graph jobs. This requires
solving the NP-complete problem of sub-graph isomorphism
between the graph jobs and the large-scale aggregated
network graph, which may take a long time. iii) Even for a given graph job, the size of the feasible allocation set becomes prohibitively large in a large-scale network. For instance, in a fully-connected network of 100 DCs, each with 10 slots, the number of feasible allocations for a simple triangle graph job is $\binom{1000}{3} \approx 166 \times 10^6$. These concerns motivate us to develop
_cloud crawlers, based on which we address the mentioned_
challenges through a decentralized framework. Here, we
use the term “crawler” to describe the movement between
adjacent DCs. This may bear a resemblance to the term web
_crawler. Nevertheless, the cloud crawlers introduced here are_
fundamentally different from conventional web crawlers
(e.g., [37], [38], [39]). Our cloud crawlers aim to extract
suitable sub-graphs from GDCNs for specified graph job
structures when traversing the network, while web crawlers
are mainly developed to extract information from Internet
URLs by looking for keywords and related documents.
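The feasible-set blow-up quoted above for a triangle job in a fully-connected 100-DC, 10-slot-per-DC network is a single binomial coefficient, easily checked with the Python standard library:

```python
import math

# 100 fully-connected DCs x 10 slots = 1000 mutually adjacent slots;
# a triangle job may occupy any unordered set of 3 of them.
n_slots = 100 * 10
print(math.comb(n_slots, 3))  # 166167000, i.e. ~166 x 10^6
```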
where $\mathbf{W} = \mathbf{I} - \epsilon L(\mathcal{G}_D)$, with $L(\mathcal{G}_D)$ the Laplacian matrix of $\mathcal{G}_D$ and $\epsilon \in (0, 1)$, and $\Phi \in \mathbb{N}$ denotes the number of performed consensus iterations among the adjacent DCs. In this method, the adjacent DCs perform $\Phi$ consensus iterations with local exchange of the $\gamma'$-s before updating $\gamma$. The pseudo-code of the CDGA algorithm is given in Algorithm 1. Since the solution is found in the continuous space, similar to Section 3, the last stage of the algorithm is obtaining the solution in the feasible set of allocations. This step requires a centralized processor with the knowledge of the feasible solutions. Nevertheless, as compared to the centralized approach (Section 3), the centralized processor is no longer in charge of deriving the optimal allocations for each graph job.

**5.1** **Strategy Suggestion Using Cloud Crawling**

We introduce a cloud crawler (CCR), which carries a collection of structured information while traveling between adjacent DCs. It probes the connectivity among the DCs and their status (power usage, load distribution, etc.), based on which it
**Algorithm 2: Cloud crawling**
**input : Initial server d[i]** _∈D, Gjobj_, the center node vc and its
maximum shortest distance D to the nodes of the graph,
size of the suggested strategies |SAj _|._
**1** Initialize a BST (BST), IA as a list of lists of lists, VISITED = {}, and vector Feas_A with length D + 1.
**2** VISITED = VISITED ∪ {$d^i$}
**3** Feas_A[1] = 1
**4 for r = 1 to D do**
**5** Feas_A[r + 1] = Feas_A[r] + $|\mathcal{N}^r_{v_c}|$
**6** Observe the current pdf of the load of the server $\tilde{f}_{L^i}$
**7 Initialize IA temp as a list of list of lists.**
**8 %Completing the incomplete allocations using the slots of current**
DC:
**9 for r = 1 to len(IA) do**
**10** Last_Alloc = IA[r, len(IA[r])] % Obtain the last allocation done for each incomplete allocation; this is a list of 4 elements (see line 40)
**11** AN = $|V_j|$ − Last_Alloc[4] % #assigned nodes of the job
**12** j = find(Feas_A == AN) + 1 % Next neighborhood that needs to be assigned
**13** SA = 0
**14** **while j ≤ D + 1 do**
**15** SA = SA + $|\mathcal{N}^j_{v_c}|$ % #used slots from the current DC
**16** **if SA ≤ $|\mathcal{S}^i|$ then**
**17** p = $\mathbb{E}\left[\tilde{\pi}^i\left(\tilde{L}^i + SA\right)\right]$
**18** LL_temp = IA[r] % Initialize a temporary list
**19** LL_temp.append([$d^i$, SA, p, Last_Alloc[4] − SA])
**20** **if j = D + 1 then**
**21** % Add completed allocations to the BST
**22** Tot_p = Find_Tot_Cost(LL_temp) % Algorithm 3
**23** Alloc = Create_Alloc(LL_temp) % Algorithm 4
**24** BST = BST_Add(BST, Tot_p, Alloc, $|\mathcal{S}_{A_j}|$)
**25** % Algorithm 5; key = Tot_p, value = Alloc
**26** **else**
**27** IA_temp.append(LL_temp)
**28** j = j + 1
**29 IA = IA temp**
**30 %Assigning the nodes to the current DC:**
**31 for r in Feas_A do**
**32** **if r ≤ $|\mathcal{S}^i|$ then**
**33** p = $\mathbb{E}\left[\tilde{\pi}^i\left(\tilde{L}^i + r\right)\right]$
**34** RS = $|V_j|$ − r % Number of unassigned nodes of the job
**35** **if RS = 0 then**
**36** % The allocation corresponding to assigning all the nodes to the current server is added to the BST
**37** BST_Add(BST, p, [$d^i$, r], $|\mathcal{S}_{A_j}|$) % Algorithm 5
**38** **else**
**39** % A new incomplete allocation is added to IA as a list of lists
**40** IA.append([[$d^i$, r, p, RS]])
**41 if all the adjacent DCs are in the set VISITED then**
**42** Initialize a new IA and randomly choose one adjacent DC $d^k$
**43** VISITED = {}
**44 else**
**45** Randomly choose one adjacent DC $d^k$
**46** VISITED = VISITED ∪ {$d^k$}
**47** i = k
**48** crawl to $d^i$ and go to line 6
provides a set of suggested allocations for the graph jobs. For
a faster network coverage, multiple CCRs for each type of
graph job can be assumed. Information gleaned by the CCRs
can be shared with the PAs who act as mediators between
the GDCN and customers using two mechanisms: i) the CCR
shares them with a central database, which PAs have access
to, on a regular basis; ii) the CCR shares them with DCs as
it passes through them and the DCs update the connected
PAs accordingly. The goal of a CCR is to find “potentially
good” feasible allocations to fulfill a graph job’s requirements
considering the network status. We consider a potentially
good feasible allocation as a sub-graph in the aggregated
network graph which is isomorphic to the considered graph
job leading to a low cost of execution. In the following, we
first prove a theorem, based on which we provide a corollary
aiming to describe a fast decentralized approach to solve the
sub-graph isomorphism problem in large-scale GDCNs.
**Definition 3.** _Two graphs $G$ and $G'$ with vertex sets $V$ and $V'$ are called isomorphic if there exists an isomorphism (bijective mapping) $g: V \to V'$ such that any two nodes $a, b \in V$ are adjacent in $G$ if and only if $g(a), g(b) \in V'$ are adjacent in $G'$._
**Algorithm 3: Find Tot Cost**
**input : A list LL**
**output: The total cost C.**
**1 C = 0**
**2 j = 0**
**3 while LL[j]!=null do**
**4** _C = C + LL[j][3] % Sum the incurred costs on all the DCs_
involved
**5** _j = j + 1_
**6 return C**
**Algorithm 4: Create Alloc**
**input : A list LL**
**output: Allocation strategy S**
**1 Initialize S as a list of lists**
**2 j = 0**
**3 while LL[j]!=null do**
**4** _S = S.append([LL[j][1], LL[j][2]]) %The DC’s index and its_
number of used slots
**5** _j = j + 1_
**6 return S**
**Algorithm 5: BST Add**
**input :** A binary search tree BST, a key and a value, the desired size of the suggested strategy set $|\mathcal{S}_{A_j}|$
**output: A binary search tree BST**
**1 if BST.length < $|\mathcal{S}_{A_j}|$ then**
**2** BST=BST.Insert(key,value)
**3 else if key < BST.get max().key then**
**4** BST=BST.Delete(BST.get max())
**5** BST=BST.Insert(key,value)
**6 return BST**
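Algorithm 5 keeps only the $|\mathcal{S}_{A_j}|$ lowest-key entries. The same bounded "best-k" behavior can be sketched with a max-heap in place of a BST (an alternative data structure used here for illustration, not the paper's implementation):

```python
import heapq

class TopKStrategies:
    """Keep the k lowest-cost (cost, strategy) pairs, like Algorithm 5's
    size-limited BST; a max-heap holds the worst kept cost at its root."""
    def __init__(self, k):
        self.k = k
        self.heap = []  # entries are (-cost, strategy)

    def add(self, cost, strategy):
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, (-cost, strategy))
        elif cost < -self.heap[0][0]:          # beats the worst kept entry
            heapq.heapreplace(self.heap, (-cost, strategy))

    def best(self):
        return sorted((-c, s) for c, s in self.heap)

top = TopKStrategies(2)
for cost, strat in [(5.0, "A"), (1.0, "B"), (3.0, "C"), (0.5, "D")]:
    top.add(cost, strat)
print(top.best())  # [(0.5, 'D'), (1.0, 'B')]
```

As in the BST variant, each insertion costs $O(\log k)$ and a strategy is admitted only if it improves on the worst suggestion currently carried.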
**Theorem 1.** _Consider graphs $G$ and $H$ with vertex sets $V_G$ and $V_H$, respectively, where $|V_G| \le |V_H|$. Assume that $H$ can be partitioned into multiple complete sub-graphs $h_0, \ldots, h_N$, $N \ge 1$, with vertex sets $V_{h_0}, \ldots, V_{h_N}$, where $\cup_{i=0}^{N} V_{h_i} = V_H$, and all the nodes in each pair of sub-graphs with consecutive indices are connected to each other. Consider node $v \in V_G$ and let $D \le N$ denote the length of the longest shortest path between $v$ and the nodes in $V_G$. Define $\mathcal{N}^k_v \triangleq \{\hat{v} \in V_G : SP(\hat{v}, v) = k\}$, $\mathcal{N}^0_v = \{v\}$, where $SP(\cdot,\cdot)$ denotes the length of the shortest path between the two input nodes. Let $\{i_j\}_{j=0}^{D}$ be a sequence of integer numbers that satisfy the following conditions:_

$$0 \le i_0, i_1, i_2, \cdots, i_D \le D + 1, \qquad (23)$$

$$\sum_{l=0}^{D} i_l = D + 1, \qquad (24)$$

$$i_j = 0 \Rightarrow i_{j+1} = 0, \quad \forall j \in \{0, \cdots, D - 1\}. \qquad (25)$$

_For such a sequence $\{i_j\}_{j=0}^{D}$, there is at least an isomorphic sub-graph to $G$, called $G'$, in $H$ with the corresponding isomorphism mapping $g$, for which at least one of the nodes of $G'$, $v' = g(v)$, belongs to $h_0$, if the following set of conditions is satisfied:_

$$|V_{h_0}| \ge \sum_{i=0}^{i_0-1} |\mathcal{N}^i_v|, \quad |V_{h_1}| \ge \sum_{i=1}^{i_1} \mathbf{1}_{\{i_1 \ge 1\}} |\mathcal{N}^{i+i_0-1}_v|, \quad |V_{h_2}| \ge \sum_{i=1}^{i_2} \mathbf{1}_{\{i_2 \ge 1\}} |\mathcal{N}^{i+i_0+i_1-1}_v|, \quad \ldots, \quad |V_{h_D}| \ge \sum_{i=1}^{i_D} \mathbf{1}_{\{i_D \ge 1\}} |\mathcal{N}^{i+\sum_{l=0}^{D-1} i_l - 1}_v|. \qquad (26)$$

$$m^k_j = \begin{cases} \sum_{i=0}^{i_0-1} |\mathcal{N}^i_{v_c}| & k = j_0, \\ \sum_{i=1}^{i_1} \mathbf{1}_{\{i_1 \ge 1\}} |\mathcal{N}^{i+i_0-1}_{v_c}| & k = j_1, \\ \sum_{i=1}^{i_2} \mathbf{1}_{\{i_2 \ge 1\}} |\mathcal{N}^{i+i_0+i_1-1}_{v_c}| & k = j_2, \\ \vdots & \\ \sum_{i=1}^{i_D} \mathbf{1}_{\{i_D \ge 1\}} |\mathcal{N}^{i+\sum_{l=0}^{D-1} i_l - 1}_{v_c}| & k = j_D, \\ 0 & \text{otherwise.} \end{cases} \qquad (32)$$
Using our method described in the above corollary, it can
be verified that the complexity of obtaining a sub-graph
isomorphic to a graph job for a CCR becomes O(D), where
_D is the diameter of the graph job. Henceforth, we refer to_
_vc, defined in Corollary 1, as the center node, which can be_
chosen arbitrarily from the graph job's nodes. The pseudo-code
of our algorithm implemented in a CCR is given
in Algorithm 2. We use the binary search tree (BST) data
structure [40] to organize the suggested strategies carried by
the crawler. To handle the large number of feasible allocations,
we limit the capacity of a CCR for carrying potentially good
strategies (the size of the BST) to a finite number |SAj| for
Gjobj ∈J. Some important parts of Algorithm 2 are further
illustrated in the following.
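The neighbor sets $\mathcal{N}_{v_c}^k$ that Corollary 1 relies on are simply the BFS levels around the chosen center node, computable in O(|E|) per graph job. A minimal sketch (the adjacency-list representation and node names are illustrative, mirroring the example of Fig. 3-a):

```python
def bfs_levels(adj, vc):
    """Group the nodes of a graph job by shortest-path distance from the
    center node vc, i.e. levels[k] = N_vc^k, with levels[0] = {vc}.

    adj: dict mapping node -> iterable of neighbors (undirected graph).
    """
    levels = [{vc}]
    seen = {vc}
    while True:
        # next level: unseen neighbors of the current frontier
        nxt = {w for u in levels[-1] for w in adj[u] if w not in seen}
        if not nxt:
            return levels
        seen |= nxt
        levels.append(nxt)

# adjacency consistent with the neighbor sets shown in Fig. 3-a
adj = {
    "vc": ["v1", "v2", "v4"],
    "v1": ["vc"],
    "v2": ["vc"],
    "v4": ["vc", "v3"],
    "v3": ["v4", "v5"],
    "v5": ["v3"],
}
levels = bfs_levels(adj, "vc")
# levels -> [{vc}, {v1, v2, v4}, {v3}, {v5}]; D = len(levels) - 1 = 3
```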
**A) Initialization: A CCR is initialized at a DC for a certain**
graph job, Gjobj ∈J, and a specified number of suggested
strategies (|SAj|) to be carried.[3] Each CCR carries a BST, a
list [41] of incomplete allocations (IA), and a set of visited
neighbors (Visited), which can be implemented as a list. In
Fig. 3-a, the topology of a graph job is shown along with three
DCs, where each square denotes a slot in a DC. The CCR is
initialized at d[1], traversing the path d[1] → d[2] → d[3].
**B) Determining the Graph Job Topology Constraints**
**(lines: 4-5): For a given center node of a graph job, i.e., vc in**
Corollary 1, the algorithm calculates the feasible number
of nodes allocated to DCs according to Corollary 1. In
Fig. 3-a, the center node is denoted by vc, and the different
sets of neighbors located at various shortest-path distances
from vc are shown.
**C) Allocation Initialization and Completion (lines: 9-**
**40): According to Corollary 1, the crawler attempts to**
complete the incomplete allocations in IA by accommodating
the remaining nodes of the graph job in the current DC
(lines: 9-29). During this process, if the remaining number
of unassigned nodes of the graph job becomes zero, the
corresponding allocation is added to the BST according to its
incurred cost; the rest of the allocations are added to
_IA. Also, allocations are initialized upon arriving at each_
DC using Corollary 1 (lines: 31-40). In Fig. 3-a, the initialized
allocations are depicted underneath d[1], and the updated sets
of incomplete and completed allocations during the movement
of the CCR are depicted underneath d[2] and d[3]. Some of the
completed allocations are also depicted in Fig. 3-b for better
understanding.
**D) Traversing the GDCN (lines: 41-48): The CCR examines**
the adjacent DCs of its current location. If there are
multiple non-visited neighbor DCs, the CCR chooses its
next destination randomly among them. However, if all the
3. Note that, using a simple extension of this algorithm, a CCR can
handle the extraction of suggested strategies for multiple graph jobs at
the same time.
_Proof. The key to proving this theorem is considering the_
following mapping between the nodes of G and the sub-graphs in H:

$$[v \to h_0,\; \mathcal{N}_v^1 \to h_1,\; \mathcal{N}_v^2 \to h_2,\; \cdots,\; \mathcal{N}_v^D \to h_D]. \qquad (27)$$
Under this mapping, the mapped nodes form a graph
isomorphic to G, since the connection between all adjacent
nodes in G is preserved in H: the nodes are either placed
in the same (fully-connected) $h_i$, $0 \le i \le N$, or in
(fully-connected) adjacent $h_i$'s, $0 \le i \le N$. With a
similar justification, it can be proved that concatenating
the mapped nodes into adjacent $h_i$'s in Eq. (27) preserves
the isomorphism. For instance, all of the following mappings
form graphs isomorphic to G in H:

$$[v \to h_0,\; \mathcal{N}_v^1 \to h_1,\; \cdots,\; \mathcal{N}_v^{D-2} \to h_{D-2},\; \mathcal{N}_v^{D-1} \cup \mathcal{N}_v^{D} \to h_{D-1},\; \{\} \to h_D], \qquad (28)$$

$$[v \to h_0,\; \mathcal{N}_v^1 \to h_1,\; \cdots,\; \mathcal{N}_v^{D-3} \to h_{D-3},\; \mathcal{N}_v^{D-2} \cup \mathcal{N}_v^{D-1} \cup \mathcal{N}_v^{D} \to h_{D-2},\; \{\} \to h_{D-1},\; \{\} \to h_D], \qquad (29)$$

$$[v \cup \mathcal{N}_v^1 \to h_0,\; \mathcal{N}_v^2 \to h_1,\; \cdots,\; \mathcal{N}_v^{D-1} \to h_{D-2},\; \mathcal{N}_v^{D} \to h_{D-1},\; \{\} \to h_D], \qquad (30)$$

$$[v \to h_0,\; \mathcal{N}_v^1 \cup \mathcal{N}_v^2 \cup \mathcal{N}_v^3 \to h_1,\; \mathcal{N}_v^4 \to h_2,\; \cdots,\; \mathcal{N}_v^{D} \to h_{D-2},\; \{\} \to h_{D-1},\; \{\} \to h_D]. \qquad (31)$$
It can be seen that conditions stated in Eq. (23)-(25) denote
the feasible concatenation strategies, where each ij denotes
the number of neighborhoods mapped to hj, 0 ≤ _j ≤_ _D._
Also, Eq. (26) ensures the feasibility of the corresponding
mappings.
**Corollary 1.** _For Gjobj, assume a CCR located at DC $d^{j_0}$ allocating at least one node of Gjobj, $v_c \in \mathcal{V}_j$, to one slot at $d^{j_0}$, where the length of the longest shortest path between $v_c$ and the nodes in $\mathcal{V}_j$ is D. Assume that the CCR's near-future path can be represented as $d^{j_0} \to d^{j_1} \to \cdots \to d^{j_D}$, where $j_i \neq j_k$, $\forall i \neq k$. Considering $d^{j_i}$ as $h_i$ in Theorem 1, for each realization of the sequence $\{i_j\}_{j=0}^{D}$ satisfying Eq. (23)-(26), the allocation $\mathbf{M}_j = [m^1\, m^2 \cdots m^{n_d}]$ described in Eq. (32) is feasible and is isomorphic to Gjobj._
[Figure 3: the graph-job topology with center node $v_c$ and neighbor sets $\mathcal{N}_{v_c}^0 = \{v_c\}$, $\mathcal{N}_{v_c}^1 = \{v_1, v_2, v_4\}$, $\mathcal{N}_{v_c}^2 = \{v_3\}$, $\mathcal{N}_{v_c}^3 = \{v_5\}$; the incomplete and completed allocations carried by the CCR while traversing $d^1 \to d^2 \to d^3$, each entry in the format [DC's index, #Assigned nodes, Incurred cost, #Remaining nodes].]
Fig. 3: a: The graph job topology and the neighboring nodes to the center node (left); three DCs along with the carried incomplete
and complete allocations of the CCR upon arriving at each DC (right). b: Some examples of completed allocations.
neighbor DCs have been visited, the CCR clears the visited
set and chooses its next destination at random. This process
is designed to avoid re-visiting previously visited DCs and to
prevent the crawler from becoming trapped at a DC whose
neighbor DCs have all been visited.
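The traversal rule in D) can be sketched as follows. This is an illustrative sketch, not the paper's Algorithm 2: the function name and data shapes are assumptions.

```python
import random

def next_hop(current, neighbors, visited, rng=random):
    """Pick the CCR's next DC per the traversal rule in D): choose an
    unvisited neighbor uniformly at random; if every neighbor has been
    visited, clear the visited set and pick any neighbor at random."""
    unvisited = [d for d in neighbors[current] if d not in visited]
    if not unvisited:
        visited.clear()              # reset to escape fully-visited regions
        unvisited = list(neighbors[current])
    choice = rng.choice(unvisited)
    visited.add(choice)
    return choice

rng = random.Random(0)
neighbors = {"d1": ["d2", "d3"], "d2": ["d1", "d3"], "d3": ["d1", "d2"]}
visited = {"d2", "d3"}               # all of d1's neighbors already seen
hop = next_hop("d1", neighbors, visited, rng)
# the visited set is reset, then re-seeded with the chosen destination
```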
**Remark 2.** _After the size of the BST reaches the predefined length (|SAj|), a newly completed strategy is added to the BST only if it has a lower incurred cost than the strategy with the maximum incurred cost in the BST; the latter is then deleted._
**Remark 3.** _It is assumed that each DC has a probabilistic prediction of its near-future load distribution. Hence, the crawler obtains the expected cost of allocation in line 17, e.g., $\mathbb{E}\{\tilde{\pi}^i(\tilde{L}^i + m)\} = \sum_{j=0}^{|S^i| - m} \pi(j + m) f_{\tilde{L}^i}(\tilde{L}^i = j)$ when m slots of DC $d^i$ are taken, where $f_{\tilde{L}^i}$ is the probability mass function of the predicted load of $d^i$ and $\pi(j + m)$ is the incurred cost stated in Eq. (2) with $L^i + m^i_j$ replaced by $j + m$._
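The expectation in Remark 3 is a plain weighted sum over the predicted-load pmf. A minimal sketch, where the cost function `pi_cost` and all numbers are illustrative stand-ins for Eq. (2), not values from the paper:

```python
def expected_allocation_cost(pmf, m, pi_cost):
    """Expected incurred cost when m slots of a DC are taken, per
    Remark 3:  E{pi(L + m)} = sum_j pi(j + m) * P(L = j).

    pmf:     dict mapping predicted load j -> probability P(L = j),
             defined for j = 0 .. |S^i| - m
    pi_cost: cost function standing in for Eq. (2) (hypothetical here)
    """
    return sum(pi_cost(j + m) * p for j, p in pmf.items())

# illustrative numbers: quadratic cost, 2 slots already taken by the CCR
pmf = {0: 0.5, 1: 0.3, 2: 0.2}
cost = expected_allocation_cost(pmf, m=2, pi_cost=lambda load: load ** 2)
# 0.5*4 + 0.3*9 + 0.2*16 = 7.9
```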
**Remark 4.** _In the BST considered (see Algorithm 2), each node has two attributes, "key" and "value", where the key is a real number and the value is a list. The functions get_max(), Delete(), and Insert() are assumed to be known; sample implementations can be found in [40]. Also, in Algorithm 2, the function len() returns the length of its input argument: if the input is a list, it returns the number of elements; if the input is a list of lists, it returns the number of lists inside the outer list; etc. Moreover, in Algorithm 5, the "length" attribute indicates the number of nodes of the BST._
**Remark 5.** _The BST is used for its unique characteristics. If the BST is balanced (e.g., implemented as an AVL tree), this data structure enables deletion of the strategy with the maximum cost and insertion of a new strategy, both of which are necessary in the CCR, with time complexity O(log |SAj|). Moreover, a simple inorder traversal, which can be done in O(|SAj|), gives the suggested strategies in ascending order of their incurred cost._
**Remark 6.** _We designed CCRs mainly for the allocation of graph jobs in large-scale GDCNs. However, they could also be implemented in small- and medium-scale GDCNs. In those cases, if the CCR continuously explores the network and the power consumption of DCs changes smoothly over time, the allocation cost of the best strategy in the BST of the CCR would be similar to those of the solutions obtained in Sections 3 and 4. Note that the analytical solutions proposed for small- and medium-scale GDCNs are guaranteed to find the optimal solution of P3 but, due to the limitations discussed at the beginning of this section, are not applicable in large-scale GDCNs. Cloud crawling, in contrast, can be viewed as an empirical approach, which not only provides a distributed solution but also offers additional benefits such as the extraction of sub-graphs isomorphic to a given graph job._
Fig. 1 depicts a sample CCR traversing the network, with
its corresponding information shown in the bottom left of
the figure. In this figure, a crawler is considered that attempts
to assign a graph job with 7 nodes to the network. It is
assumed that, given the center node $v_c$, we have: $|\mathcal{N}_{v_c}^0| = 1$,
$|\mathcal{N}_{v_c}^1| = 2$, $|\mathcal{N}_{v_c}^2| = 2$, $|\mathcal{N}_{v_c}^3| = 1$, and $|\mathcal{N}_{v_c}^4| = 1$. In the depicted
BST, each suggested strategy is a list of lists, each of which
consists of two elements: the index of a DC and the number of
slots utilized from that DC.
So far, PAs are provided with a pool of potentially good
allocations using CCRs. In the following, we address suitable
strategy selection approaches for PAs with respect to the
pricing policy of the DCPs.
**5.2** **Strategy Selection Under Fixed Pricing**
Due to the simplicity of implementation, fixed pricing is still
a common approach for offering cloud services to customers.
In this case, DCPs determine a constant price for utilizing each
slot of their DC, chosen with respect to the expected load of
the DC so as to guarantee a certain profit. In this subsection
and Subsection 5.3, it is assumed that PAs assign their graph
jobs to the system sequentially, where at each iteration each
PA assigns (at most) one graph job of each type (if requested
by a customer) to the system.
_5.2.1_ _Problem Formulation_
We formulate the problem from the perspective of one PA
since the utilization costs of DCs are assumed to be constant.
For the n[th] arriving Gjobj, the PA chooses an allocation
$\mathbf{M}_{j,(n)} = [m^1_{j,(n)}, \cdots, m^{n_d}_{j,(n)}]$ from the pool of the CCR's
suggested allocations SAj. In this case, we define the utility
function of a PA as:

$$U_j(n)|_{\mathbf{M}_{j,(n)}} \triangleq \rho_j - \chi_j \sum_{k=1}^{n_d} PC^{k} m^{k}_{j,(n)} - \varphi_j \sum_{k=1}^{n_d} \mathbf{1}_{\{m^{k}_{j,(n)} > 0\}} + \chi_j |\mathcal{V}_j| PC^{max} + \varphi_j |\mathcal{V}_j|, \qquad (33)$$
where $PC^{k}$ denotes the slot cost of DC $d^{k}$ and
$PC^{max}$ is a constant larger than the price of all the slots in
the system. In this expression, the different preferences of PAs
are governed by positive real constants φj and χj; ρj ∈ ℝ[+]
is the default reward of execution; the second term describes
the payment; the third term describes the privacy preference
of the PA; and the last two terms are added to ensure that
the utility function is non-negative. In the third term, a large
value of φj implies a stronger tendency toward utilizing fewer
DCs to execute the graph job. The normalized utility function
can be derived as: $\tilde{U}_j(n) = U_j(n) / \big(\rho_j + \chi_j |\mathcal{V}_j| PC^{max} + \varphi_j (|\mathcal{V}_j| - 1)\big)$.
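Eq. (33) and its normalization can be evaluated directly. A minimal sketch with made-up parameter values (none of the numbers below come from the paper):

```python
def pa_utility(alloc, prices, rho, chi, phi, n_nodes, pc_max):
    """Fixed-pricing utility of a PA for one allocation, per Eq. (33).

    alloc:  list m^k of slots used per DC (length n_d)
    prices: list PC^k of per-slot prices (length n_d)
    The last two terms (chi*|V|*PC_max + phi*|V|) keep the value
    non-negative, as stated in the text.
    """
    payment = sum(p * m for p, m in zip(prices, alloc))
    dcs_used = sum(1 for m in alloc if m > 0)   # privacy term of Eq. (33)
    u = (rho - chi * payment - phi * dcs_used
         + chi * n_nodes * pc_max + phi * n_nodes)
    # normalization from the text: divide by the maximum attainable value
    u_norm = u / (rho + chi * n_nodes * pc_max + phi * (n_nodes - 1))
    return u, u_norm

# illustrative numbers: a 5-node job placed on DCs 1 and 3
u, u_norm = pa_utility(alloc=[2, 0, 3], prices=[1.0, 1.5, 0.8],
                       rho=10.0, chi=1.0, phi=0.5, n_nodes=5, pc_max=2.0)
# payment = 4.4, two DCs used -> u = 17.1, u_norm = 17.1 / 22
```

The denominator equals the utility of the best possible case (zero payment, a single DC used), so $\tilde{U}_j(n) \le 1$.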
In this context, each PA aims to maximize his utility by
selecting the best sequence of allocations $\tilde{\mathbf{M}}_j^*$. Mathematically:

$$\tilde{\mathbf{M}}_j^* = \underset{\tilde{\mathbf{M}}_j = \{\mathbf{M}_{j,(n)}\}_{n=1}^{N_j}}{\arg\max} \; \sum_{n=1}^{N_j} U_j(n)|_{\mathbf{M}_{j,(n)}} \quad \text{s.t.} \; \mathbf{M}_{j,(n)} \in SA_j, \; \forall n \in \{1, \cdots, N_j\}. \qquad (34)$$
Due to accessibility constraints, a newly joined PA may not
have complete information about the prices of all the slots.
Also, DCPs may update the utilization costs periodically.
Hence, there is initially a lack of knowledge about the DCs'
prices on the PAs' side, making conventional optimization
techniques inapplicable. We tackle this problem by proposing
an online learning algorithm partly inspired by the concept
of regret. This concept originates in the multi-armed bandit
problem [27], where a gambler aims to identify the best
slot machine to play (the best strategy) at each round of
gambling while considering the history of the rewards of
the machines. Our algorithm is an advanced version of the
original algorithm in [27], tailored for graph job allocation
in GDCNs.
_5.2.2_ _Boosted Regret Minimization Assignment (BRMA)_
_Algorithm_
By choosing a strategy from the set of suggested strategies of
a CCR and observing the utility, one can estimate the utility
of similar strategies targeting similar DCs. To exploit this,
our algorithm uses _k-means clustering_ [42] to partition the
strategies into groups with respect to their similarity. Let
C = {C1, C2, · · ·, C|C|} denote the set of clusters obtained
using the method of [42]. We group consecutive iterations of
our algorithm into a "time-frame", according to which
T = {tf1, tf2, · · ·, tf⌈N/Γ⌉} denotes the set of time-frames,
where N is the number of iterations and Γ is the time-frame
length. In this case, iterations 1 to Γ belong to tf1, iterations
Γ + 1 to 2Γ belong to tf2, etc. Let $A_{tf_k}$ denote the set of actions
performed in the k[th] time-frame and $A_k$ denote the action
executed at iteration k. For strategy $m \in SA_j$, let
$\kappa^{k}_{m} = \sum_{n=(k-1)\Gamma+1}^{k\Gamma} \mathbf{1}_{\{A_n = m\}}$
denote the number of times the strategy is chosen during
tfk. The pseudo-code of our proposed algorithm is given in
Algorithm 6. The main differences between our proposed
algorithm and the method in [27] are as follows: i) the
concept of clustering is leveraged to group the analogous
strategies; ii) a new weight update mechanism is proposed
based on the concept of “similarity”, with which the weights
of the unutilized strategies are estimated employing the
utility of the chosen strategies; iii) the concept of time-
**Algorithm 6: BRMA: Boosted regret minimization assignment**
**input : Length of time-frames Γ, SAj, number of time-frames TNj.**
**1** Obtain clusters C = {C1, C2, · · ·, C|C|} from SAj according to [42].
**2** Assign $w_{j,(i)}(tf_1) = 1$, $p_{j,(i)}(tf_1) = 1/|SA_j|$, ∀i ∈ SAj.
**3 for n = 1 to TNj do**
**4** Choose a strategy m ∈ SAj according to $\mathbf{p}_j(tf_{\lceil n/\Gamma \rceil})$ ($A_n = m$) and observe the normalized utility $\tilde{U}_j(n)|_m$.
**5** **if n = Γk, k ∈ ℤ[+] then**
**6** Obtain the virtual reward for each m ∈ $A_{tf_k}$ as follows:

$$Q'_{j,(m)}(tf_k) = \frac{\sum_{z=(k-1)\Gamma+1}^{k\Gamma} \tilde{U}_j(z)|_{A_z} \mathbf{1}_{\{A_z = m\}}}{\kappa^{k}_{m}}. \qquad (36)$$

**7** Obtain the virtual reward for those strategies m ∉ $A_{tf_k}$ for which at least one strategy from their cluster is chosen, as follows:

$$Q'_{j,(m)}(tf_k) = \frac{1}{|C_q|} \sum_{z=(k-1)\Gamma+1}^{k\Gamma} \mathbf{1}_{\{A_z \in C_q\}} \underbrace{\frac{\exp\big(k_s \frac{\langle A_z, m \rangle}{\|A_z\|\,\|m\|}\big) - 1}{\exp(k_s) - 1}}_{\text{similarity index}} \tilde{U}_j(z)|_{A_z}, \quad \text{if } m \in C_q. \qquad (37)$$

**8** For strategy m ∈ SAj, set $Q'_{j,(m)}(tf_k) = 0$ if neither itself nor any strategy from its cluster is chosen in this time-frame.
**9** Update the weights of the strategies for the next time-frame:

$$w_{j,(m)}(tf_{k+1}) = w_{j,(m)}(tf_k) \exp\Big(\frac{K\, Q'_{j,(m)}(tf_k)}{|SA_j|}\Big), \qquad (38)$$

**10** Derive the distribution $\mathbf{p}_j(tf_{k+1})$ according to Eq. (35).
frame is incorporated in our design (see Remark 7). These
approaches significantly improve the speed of convergence
of the algorithm (see Section 6). In our algorithm, during
tfn, every time a PA needs to allocate Gjobj, he chooses a
strategy m ∈ SAj with probability:

$$p_{j,(m)}(tf_n) = (1 - E)\, \frac{w_{j,(m)}(tf_n)}{\sum_{a \in SA_j} w_{j,(a)}(tf_n)} + \frac{E}{|SA_j|}, \qquad (35)$$

where $w_{j,(m)}(tf_n)$ denotes the current weight of strategy m
and 0 < E < 1. The second term is introduced to avoid
trapping in local maxima.
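The exploration-mixed selection rule of Eq. (35) and the exponential weight update of Eq. (38) can be sketched together. All parameter values below are illustrative, not from the paper:

```python
import math

def selection_probs(weights, explore):
    """Mix normalized weights with uniform exploration, per Eq. (35):
    p_m = (1 - E) * w_m / sum(w) + E / |SA_j|."""
    total = sum(weights.values())
    n = len(weights)
    return {m: (1 - explore) * w / total + explore / n
            for m, w in weights.items()}

def update_weights(weights, virtual_rewards, K):
    """Exponential weight update, per Eq. (38):
    w_m <- w_m * exp(K * Q'_m / |SA_j|). Strategies with no virtual
    reward this time-frame keep their weight (Q' = 0)."""
    n = len(weights)
    return {m: w * math.exp(K * virtual_rewards.get(m, 0.0) / n)
            for m, w in weights.items()}

# one time-frame with three suggested strategies (made-up rewards)
w = {"s1": 1.0, "s2": 1.0, "s3": 1.0}
w = update_weights(w, {"s1": 0.9, "s2": 0.1}, K=3.0)
p = selection_probs(w, explore=0.1)
# probabilities sum to 1; the well-rewarded s1 becomes most likely
```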
**Description of the BRMA Algorithm: Initially, all the**
strategies have the same weight and the same probability of
selection. At each iteration, one strategy is chosen according
to the selection probabilities. At the end of each time-frame,
the virtual rewards of the chosen strategies are derived using
Eq. (36). Also, the utility of those strategies belonging to a
cluster from which at least one member is chosen is estimated
using Eq. (37). To obtain this estimate, we propose the following
similarity index: $\big(\exp\big(k_s \frac{\langle A_z, m \rangle}{\|A_z\|\,\|m\|}\big) - 1\big) / \big(\exp(k_s) - 1\big)$,
where $k_s \gg 1$ is a real constant.
This index is maximized when two strategies utilize exactly
the same DCs with the same numbers of slots, and it is zero when
they have no used DCs in common. The weights for the next
time-frame are obtained using Eq. (38) (K is a positive real
constant), followed by deriving the selection probabilities.
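The similarity index is a cosine similarity between allocation vectors warped through an exponential, which sharpens the contrast between near-identical and merely overlapping strategies. A minimal sketch (the value of $k_s$ is illustrative):

```python
import math

def similarity_index(a, b, ks=10.0):
    """Similarity between two allocation vectors (slots used per DC):
    (exp(ks * <a,b>/(||a|| ||b||)) - 1) / (exp(ks) - 1).
    Equals 1 for identical directions and 0 for disjoint DC usage."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    cos = dot / norm
    return (math.exp(ks * cos) - 1) / (math.exp(ks) - 1)

assert abs(similarity_index([2, 3, 0], [2, 3, 0]) - 1.0) < 1e-9  # identical
assert similarity_index([2, 0, 0], [0, 3, 1]) == 0.0             # disjoint
```

Because it is based on the cosine, the index compares the direction of the allocation vectors; for the nonnegative slot counts used here, it lies in [0, 1].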
**Remark 7.** _If the weights of the strategies were updated at each time instant, selecting multiple strategies with low utilities in the initial iterations would boost the weights of those strategies, leading to an undesirably low probability of selection for high-utility strategies that have not yet been chosen. To avoid this, we use the concept of time-frames and update the weights at the end of each time-frame._
**5.3** **Strategy Selection Under Adaptive Pricing**
In modern cloud networks, cloud users are charged in a realtime manner with respect to their incurred load on DCs and
the status of the DCs [43], [44]. In this work, we propose an
adaptive pricing framework suitable for graph job allocation
in GDCNs. Let P = {p1, · · ·, p|P|} denote the set of active
PAs. For Gjobj, based on Eq. (2), upon utilizing suggested
strategy Mj,(ak) _j, we model the total payment of PA_
_∈SA_
_pk ∈P to the DCPs as:_
_nd_
�
Υ(Mj,(ak), {Mj,(ak′ )}k[|P|][′]=1,k[′]≠ _k[) =]_ _m[i]j,(ak)[ξ][i][ν][i][+]_
_i=1_
_nd_ _ξ[i]η[i]N_ _[i]_ �σ[i]� _L[i]+[�][|P|]k|S=1[i]|[m]j,[i]_ (ak ) �α+[i] _Pidle[i]_ � (39)
�
�|P| _m[i]j,(ak)[.]_
_i=1_ _k=1_ _[m]j,[i]_ (ak)
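Eq. (39) bills each PA a per-slot term plus a share of each DC's power cost proportional to the PA's fraction of the occupied slots there. A minimal sketch; the data shapes, parameter names, and numeric values are illustrative stand-ins, and this reading of the cost split is stated as an assumption:

```python
def pa_payment(my_alloc, all_allocs, dc_params):
    """One PA's payment in the spirit of Eq. (39): a per-slot term plus
    a proportional share of each DC's power cost.

    my_alloc:   list m^i of slots this PA uses per DC
    all_allocs: list of such lists, one per active PA (this PA included)
    dc_params:  per-DC dicts with keys xi, nu, eta, N, sigma, alpha,
                P_idle, L (base load), S (total slots) -- all illustrative
    """
    total = 0.0
    for i, d in enumerate(dc_params):
        m_i = my_alloc[i]
        m_all = sum(a[i] for a in all_allocs)   # slots taken by every PA
        total += m_i * d["xi"] * d["nu"]        # per-slot term
        if m_all > 0:
            util = (d["L"] + m_all) / d["S"]
            power = d["eta"] * d["N"] * (d["sigma"] * util ** d["alpha"]
                                         + d["P_idle"])
            total += d["xi"] * power * m_i / m_all   # proportional share
    return total

# toy check: one DC, two PAs with one slot each -> cost split 50/50
params = [{"xi": 1.0, "nu": 0.0, "eta": 1.0, "N": 1.0, "sigma": 1.0,
           "alpha": 1.0, "P_idle": 0.0, "L": 0.0, "S": 2.0}]
share = pa_payment([1], [[1], [1]], params)   # -> 0.5
```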
Consequently, we model the utility of PA $p_k \in \mathcal{P}$ as:

$$c^{p_k}\big(\{\mathbf{M}_{j,(a_k)}\}_{k=1}^{|\mathcal{P}|}\big) \triangleq \Pi_{i=1}^{n_d} \mathbf{1}_{\{L^{i} + \sum_{k=1}^{|\mathcal{P}|} m^{i}_{j,(a_k)} \le |S^{i}|\}}\, \rho_j - \chi_j \Upsilon\big(\{\mathbf{M}_{j,(a_k)}\}_{k=1}^{|\mathcal{P}|}\big) - \varphi_j \sum_{i=1}^{n_d} \mathbf{1}_{\{m^{i}_{j,(a_k)} > 0\}} + \chi_j \Upsilon^{max} + \varphi_j |\mathcal{V}_j| - \Big(1 - \Pi_{i=1}^{n_d} \mathbf{1}_{\{L^{i} + \sum_{k=1}^{|\mathcal{P}|} m^{i}_{j,(a_k)} \le |S^{i}|\}}\Big)\, \Xi^{p_k}, \qquad (40)$$

where $\Pi_{i=1}^{n_d} \mathbf{1}_{\{L^{i} + \sum_{k=1}^{|\mathcal{P}|} m^{i}_{j,(a_k)} \le |S^{i}|\}}$ ensures the availability
of enough free slots, $\Xi^{p_k}$ denotes the penalty for delaying
the execution, the constants are the same as in Eq. (33), and
$\Upsilon^{max}$ denotes the maximum payment of a PA. In this
case, the utility of each PA depends not only on its own
choice of action but also on the actions chosen by the other
PAs. In this paradigm, we model the interactions between the
PAs as a non-cooperative game, more specifically a multi-player
_normal-form game. Consequently, we assume that each PA_
is rational in the sense that it aims to maximize its own
utility function. In summary, for Gjobj, the game can be
defined as: $\mathcal{G}_j = (\mathcal{P}, \{SA_j\}, \{c^{p}\}_{p \in \mathcal{P}})$, where $c^{p}$ is the
utility of PA $p \in \mathcal{P}$. To solve this game, we use the concept
of correlated equilibrium (CE), which generalizes the idea
of Nash equilibrium to enable correlated strategy choices
among the players. For the proposed game $\mathcal{G}_j$, we define $\pi_j$
as the probability distribution over the joint strategy space
$\Pi_{k=1}^{|\mathcal{P}|} SA_j = SA_j^{|\mathcal{P}|}$. The set of correlated equilibria $CE_j$ is
the convex polytope given by the following expression:[4]

$$CE_j = \Big\{ \pi_j : \sum_{\mathbf{M}_{j,(-p_k)} \in SA_j^{|\mathcal{P}|-1}} \pi_j\big(\mathbf{M}_{j,(a_k)}, \mathbf{M}_{j,(-p_k)}\big) \big[ c^{p_k}\big(\mathbf{M}_{j,(a'_k)}, \mathbf{M}_{j,(-p_k)}\big) - c^{p_k}\big(\mathbf{M}_{j,(a_k)}, \mathbf{M}_{j,(-p_k)}\big) \big] \le 0, \; \forall p_k \in \mathcal{P}, \; \forall \mathbf{M}_{j,(a_k)}, \mathbf{M}_{j,(a'_k)} \in SA_j \Big\}. \qquad (41)$$

4. $\mathbf{M}_{j,(-p_k)}$ denotes the strategy of all the PAs except $p_k$.
Inspired by the pioneering work [28], we propose a distributed
algorithm, called the regret matching-based assignment (RMBA)
algorithm, to solve the proposed game while reaching a
CE. The PAs' actions are described in Algorithm 7. In the RMBA
algorithm, each PA saves the history of the actions of the other
PAs, from which he obtains the past rewards of his actions
given that the other PAs would have taken the same actions
(Eq. (42)). Afterward, each PA derives the regret of not
executing different strategies (Eqs. (43), (44)). Finally, the
selection probabilities of the strategies are determined,
where strategies with higher rewards in the past receive
higher probabilities (Eq. (45)).[5]
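The regret-to-probability step can be sketched as follows. This is a simplified single-agent view in the spirit of Eq. (45) and Hart-Mas-Colell regret matching, not the paper's full Algorithm 7; the normalization constant `mu` is an assumption chosen to keep probabilities valid:

```python
def regret_probs(avg_regret, current, mu=None):
    """Selection probabilities for the next iteration: each alternative
    strategy is picked with probability proportional to the nonnegative
    average regret of not having switched to it; the currently played
    strategy receives the remaining probability mass.

    avg_regret: dict m -> max(average regret of switching current->m, 0)
    current:    the strategy played at this iteration
    mu:         normalization constant; if None, a value large enough to
                keep the current strategy's probability nonnegative is
                used (assumption)
    """
    others = {m: r for m, r in avg_regret.items() if m != current}
    if mu is None:
        mu = max(1.0, len(others) * max(others.values(), default=0.0))
    probs = {m: r / mu for m, r in others.items()}
    probs[current] = 1.0 - sum(probs.values())
    return probs

# illustrative regrets: switching to "c" would have helped most
probs = regret_probs({"a": 0.0, "b": 0.2, "c": 0.6}, current="a")
# probabilities sum to 1 and favor the high-regret alternative "c"
```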
**Algorithm 7: RMBA: Regret matching-based assignment**
**input : PA pk, graph job's type j, number of iterations Nj, SAj.**
**1 Select a strategy randomly for the first iteration (n = 1).**
**2 for n = 1 to Nj do**
**3** Denote the history of allocations up to iteration n as $\{\mathbf{A}_\tau\}_{\tau=1}^{n}$, where $\mathbf{A}_t$ is a vector consisting of |P| elements, whose j[th] element, $A^{j}_{t}$, corresponds to pj's used allocation at iteration t.
**4** Calculate the substituting reward for every two different allocations ($\forall m_1, m_2 \in SA_j$):

$$SR^{p_k}_{m_1, m_2}(n) = \begin{cases} c^{p_k}\big(m_2, \mathbf{A}^{-k}_{n}\big) & \text{if } A^{k}_{n} = m_1, \\ c^{p_k}(\mathbf{A}_n) & \text{otherwise.} \end{cases} \qquad (42)$$

**5** Calculate the substituting average rewards:

$$\Delta^{p_k}_{m_1, m_2}(n) = \frac{1}{n} \sum_{\tau=1}^{n} \big( SR^{p_k}_{m_1, m_2}(\tau) - c^{p_k}(\mathbf{A}_\tau) \big). \qquad (43)$$

**6** Calculate the average regret:

$$R^{p_k}_{m_1, m_2}(n) = \max\{\Delta^{p_k}_{m_1, m_2}(n), 0\}. \qquad (44)$$

**7** Form the selection probability distribution over the allocations, $\forall m \in SA_j$, for the next iteration:

$$p^{p_k}_{j,(m)}(n+1) = \frac{R^{p_k}_{A^{k}_{n}, m}(n)}{\sum_{m' \in SA_j} R^{p_k}_{A^{k}_{n}, m'}(n)} \quad \text{if } m \neq A^{k}_{n}, \qquad p^{p_k}_{j,(m)}(n+1) = 1 - \sum_{m' \in SA_j : m' \neq m} p^{p_k}_{j,(m')}(n+1) \quad \text{otherwise.} \qquad (45)$$

**8** Choose a strategy for the next iteration, $A^{p_k}_{n+1}$, according to the derived distribution.
#### 6 SIMULATION RESULTS
**6.1** **Simulation of a Small-Scale GDCN**
In this scenario, the network consists of 5 fully-connected
DCs. The number of slots per DC is assumed to follow one
of the three scenarios described in Table 2. Each of the cloud
servers inside a DC is assumed to have 3 slots.
5. Partitioning the time into "time-frames" does not have a significant
impact on the convergence of the RMBA algorithm. This is because,
at each iteration of this algorithm, the regret for all the strategies is
obtained considering the previously taken actions of all the PAs, which
reduces the chance of trapping in low-utility strategies at the initial
iterations (see Remark 7).
[Fig. 4 shows five graph job types: closed triad (Type 1), square (Type 2), bull graph (Type 3), double-star graph (Type 4), and tadpole graph (Type 5). Fig. 5(a) is a bar chart comparing Seq_Sub_Opt with Greedy_1 and Greedy_2 across Scenarios 1-3.]
(a): The incurred power of allocation using
the optimal allocation (exhaustive search)
and the sub-optimal approach and greedy
algorithms.
Fig. 4: Topology of the graph jobs.
[Fig. 5(b): curves for Greedy_1, Greedy_2, Seq_Sub_Opt, and Seq_Opt (exhaustive search); x-axis: Allocation Round (0-50).]
(b): Cumulative average incurred power
of allocation using the optimal allocation
(exhaustive search) and the sub-optimal
approach and greedy algorithms.
(c): The generated revenue upon using the
sub-optimal approach as compared to the
greedy methods.
Fig. 5: Simulation results of the small-scale GDCN.
            DC 1  DC 2  DC 3  DC 4  DC 5
Scenario 1:    9    12    15    18    21
Scenario 2:   12    15    18    21    24
Scenario 3:   15    18    21    24    27

Table 2: Number of slots of each DC for different scenarios.
For all DCs, the following parameters are chosen according
to [33]: Pidle = 150 W, Pmax = 250 W, η = 1.3, and
_α = 3 (these numbers were reported to model the IBM_
BladeCenter server). The types and topologies of the graph
jobs are depicted in Fig. 4. It is assumed that at each iteration
10 graph jobs need to be allocated. The arrival rates
of graph jobs are set to 1, 1, 1, 3, and 4 per iteration for
types 1 to 5.[6] The initial load of each DC is assumed to
be a random variable uniformly distributed between 0 and
20% of its number of slots. For each DC, inspired by [26],
_ν is chosen to be 5% of the peak power consumption. The_
cost of electricity (ξ[i], ∀i) is chosen to be the average cost of
electricity in the US, 0.12 $/kWh. In the simulations, we use the
term "incurred power" to refer to the difference between
the power consumption of the GDCN after the graph jobs are
assigned and that before the assignment. Since
we are among the first to study power-aware allocation
of graph jobs in GDCNs, there is no baseline for direct
comparison. Hence, we propose two greedy algorithms as
the baselines:
**Greedy 1: In this algorithm, for each DC, its future**
power consumption upon allocating all the nodes of the
arriving graph job to it is derived. Then, the DCs are
sorted in ascending order of their future power
consumption as d[i][1], · · ·, d[i][nd]. Finally, from the feasible set
6. It is observed that large graph jobs, containing more nodes and more
complicated communication constraints, lead to a larger performance
gap between our proposed methods and the baseline algorithms as
compared to small graph jobs. Note that the actual performance gap
depends on the topology of the graph job and may vary from one to
another.
of assignments, the assignment with the largest number
of utilized slots from d[i][1] is chosen. Ties are broken by
considering the number of utilized slots from d[i][2], and so
on.
**Greedy 2: In this algorithm, the number of free**
slots of each DC is first derived, and the DCs are sorted in
descending order of their number of free slots
as d[i][1], · · ·, d[i][nd]. From the feasible set of assignments, the
assignment with the largest number of utilized slots from
_d[i][1]_ is chosen. Upon a tie, the process continues by
comparing the number of slots utilized from d[i][2], and so on.
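The ordering-and-tie-breaking rule of Greedy 2 amounts to a lexicographic comparison of the candidate assignments along the sorted DC list. A minimal sketch (the function name and data shapes are assumptions, not the paper's implementation):

```python
def greedy2_pick(free_slots, feasible_assignments):
    """Pick an assignment per the Greedy 2 baseline: sort DCs by free
    slots (descending), then choose the assignment using the most slots
    from the first DC, breaking ties with the second DC, and so on.

    free_slots:           list of free slots per DC
    feasible_assignments: list of lists, slots used per DC
    """
    # DC indices sorted by descending free-slot count
    order = sorted(range(len(free_slots)), key=lambda i: -free_slots[i])
    # lexicographic maximization along that order implements the tie-break
    return max(feasible_assignments,
               key=lambda a: tuple(a[i] for i in order))

free = [2, 9, 5]                      # DC priority by free slots: 1, 2, 0
cands = [[3, 1, 1], [0, 3, 2], [1, 3, 1]]
# DC 1 usage: 1, 3, 3 -> tie between the last two; DC 2 usage breaks it
choice = greedy2_pick(free, cands)    # -> [0, 3, 2]
```

Greedy 1 follows the same pattern with the DC order determined by projected future power consumption instead of free slots.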
Simulation results of the sequential graph job allocation
described in Section 3 are presented in Fig. 5. For the
third scenario[7] described in Table 2, Fig. 5(a) reveals a
negligible gap between solving the integer program
described in Eq. (6)-(8) using exhaustive search and using
the proposed sub-optimal method (Eq. (15)-(17)),
and Fig. 5(b) depicts the corresponding cumulative average
incurred power of the allocations. For all three scenarios,
Fig. 5(c) depicts the cumulative profit obtained after 100
iterations upon using the proposed sub-optimal approach
as compared to the greedy methods, assuming graph jobs stay
busy in the system for 24 hours at each round of allocation. As
can be seen from Fig. 5(c), on average, our method saves
about $1100 in electricity cost.
**6.2** **Simulation of a Medium-Scale GDCN**
In this scenario, the network comprises 15 fully-connected
DCs. Similar to the previous case, we consider 3 scenarios,
each created via (fully) connecting three replicas of the
GDCN described in Table 2. The number of slots per DC, the
power parameters of each DC, ν, and the price of electricity
are assumed to be the same as in the previous case. It is assumed
that at each iteration, there are 30 graph jobs waiting to be
assigned to the network (the arrival rate for each type is
7. The rest are omitted due to similarity.
[Fig. 6 panels plot the evolution of Λ, γ, and the number of utilized slots versus Iteration (0-500) for the sampled DCs; annotated converged values include $m^{1,*}_{1,(1)} = 0.977$ and $m^{2,*}_{1,(1)} = 0.003$.]
(a): Convergence of local parameter Λ for
different DCs.
(b): Convergence of global parameter γ for
different DCs.
(c): Number of utilized slots of various DCs.
Fig. 6: Evolution of the parameters using the CDGA algorithm. The parameters of only 6 DCs are shown for more readability.
[Fig. 7(c) bar chart: CDGA vs. Greedy_1 and CDGA vs. Greedy_2 for Scenarios 1-3.]
(a): The incurred power of allocation using the optimal allocation (exhaustive search), CDGA algorithm, and greedy algorithms.
(b): Cumulative average incurred power of allocation using the optimal allocation (exhaustive search), CDGA algorithm, and greedy algorithms.
(c): The generated revenue upon using the CDGA algorithm as compared to the greedy methods.
Fig. 7: Simulation results of the medium-scale GDCN.
three times higher than that of the first scenario). The stepsizes in Eq. (21) are set as follows: cλ = 0.1, cγ = 0.18, and
_cΛ = 0.15. We choose φ = 5, and for deriving the matrix_
**W, the parameter ϵ is chosen to be 0.1. Also, we choose the**
initial value of Λ[i], λ[i], γ[i] at DC d[i] to be _[η][i][N][ i][σ][i][α][i]_
5(|S _[i]|)[αi][,][ 0][, and]_
_−η[i]N_ _[i]σ[i]α[i]_
3(|S _[i]|)[αi][, respectively. For assignment of a graph job with]_
type 1, Fig. 6 depicts the convergence of the local and global
variables of the DCs at 6 sampled DCs. Fig. 6(a) describes
the convergence of the local variable Λ at the DCs, Fig. 6(b)
shows the convergence of the global variable γ, and Fig. 6(c)
depicts the number of offered slots of each DC to the graph
job. The parameter λ almost always takes the value of zero
upon convergence, and thus is not depicted. As can be
seen from Fig. 6(c), the DCs 1, 6, and 11 would offer one slot
to the graph job while the rest would offer zero slots. Also,
from Fig. 6(b), it can be seen that the initial non-identical
choices of the global variable γ at each DC finally converge
to a unified value among the DCs.
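The drift of the per-DC copies of γ toward a common value is the standard behavior of consensus averaging with a doubly stochastic mixing matrix W [36]. A generic illustration of one such iteration (the 3-node matrix and start values below are ours, not the paper's):

```python
def consensus_step(x, W):
    """One synchronous average-consensus update: x_i <- sum_j W[i][j] * x_j.
    With a doubly stochastic W on a connected graph, every entry of x
    converges to a common value (the average of the start values)."""
    n = len(x)
    return [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]

# Tiny 3-node example with a symmetric, doubly stochastic W
W = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
x = [0.0, 3.0, 6.0]        # non-identical initial choices
for _ in range(200):
    x = consensus_step(x, W)
# all entries of x are now (numerically) the average, 3.0
```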
Fig. 7 depicts the results of comparing the CDGA algorithm to the greedy algorithms. Fig. 7(a) depicts the
incurred power of allocation for scenario 3 of network
construction, and Fig. 7(b) shows the cumulative average
incurred cost of allocation. These two figures reveal the
close-to-optimal performance of the CDGA algorithm. Also,
Fig. 7(c) demonstrates the obtained profit using the CDGA
algorithm as compared to the greedy algorithms after 100
iterations. As can be seen from this figure, on average, our
method results in a saving of $10,000 in electricity costs.
**6.3** **Simulation of a Large-Scale GDCN**
_6.3.1_ _The Network and the CCR’s Setting_
We consider a GDCN consisting of 200 DCs each of which
possesses 3k slots, where k is a random integer
between 4 and 11. It is assumed that each server in a
DC contains 3 slots. The power parameters of each DC,
_ν, and the price of electricity are assumed to be the same as_
before. In [45], a scale-free (SF) architecture called Scafida
is proposed for DC networks. The main advantages of
this model are error tolerance, scalability, and flexibility of
network architecture. Also, this graph structure is used to
model Internet connections [46]. Considering these facts, the
network topology of the GDCN is assumed to be an SF graph
constructed using preferential attachment mechanism with
parameter m = 3 [46]. For each type of graph job, we run a
CCR on the network, where the initial place of the CCR is
randomly chosen among the DCs and the size of carrying
BST is set to 1000. In the following, the strategies of the PAs
are always chosen from the suggested strategies of the CCRs.
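For readers who want to reproduce the topology, a scale-free graph grown by preferential attachment with parameter m = 3 can be built as follows (a minimal sketch of the Barabási–Albert construction [46]; networkx's `barabasi_albert_graph` is an equivalent off-the-shelf generator):

```python
import random

def barabasi_albert(n, m, seed=None):
    """Grow a scale-free graph: each new node attaches to m distinct
    existing nodes chosen with probability proportional to degree."""
    rng = random.Random(seed)
    # start from a fully connected seed of m + 1 nodes
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # repeated endpoints encode degree-proportional sampling
    targets = [v for e in edges for v in e]
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(rng.choice(targets))
        for v in chosen:
            edges.append((new, v))
            targets.extend([new, v])
    return edges

# 200 DCs with m = 3, as in the simulated GDCN
edges = barabasi_albert(200, 3, seed=1)
```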
_6.3.2_ _DCs with Fixed Prices_
In this case, the load of each DC is assumed to be a uniform
random variable between 20% and 100% of its number of
slots. The price of utilizing each slot is fixed to be the total
electricity cost of the DC divided by its number of slots. For
a PA, for each type of job, the utility is derived by fixing
all the values of ρ, χ, and φ to 1. The strategy exploration
parameter E is set to 0.01, the length of the time-frame Γ is chosen to
be 15, |C| = 15, k = 10 in Eq. (37), and K = 10 in Eq. (38).
(Figure 8 plots not recoverable from text extraction; all panels share x-axis n.)
(a): Comparison between the P[90%] of the BRMA algorithm and the random strategy selection for various graph jobs. (b): Comparison between the utility of the BRMA algorithm and the random strategy selection for various graph jobs. (c): Effect of having different similarity considerations (left) and different number of clusters (right) for graph job with type 5.
Fig. 8: Simulation results of the large-scale GDCN upon having DCs with fixed prices.
(Figure 9 plots not recoverable from text extraction.)
(a): Average utility of all the PAs for different types of graph jobs (left), and utility of individual PAs for each type of graph job in one round of the RMBA algorithm (right). (b): Total power consumption of the utilized servers for the RMBA algorithm and random strategy selection. (c): Reduction in currency in circulation upon using the RMBA algorithm as compared to the random strategy selection.
Fig. 9: Simulation results of the large-scale GDCN upon having DCs with dynamic prices.
To increase the convergence rate, we force the algorithm
to execute one strategy from each cluster during the first
time-frame. The resulting curves are obtained via averaging
over 200 simulations. In this case, our BRMA algorithm is
compared with the random strategy selection method due
to the lack of an existing algorithm for the problem. Note
that in this case, load parameters and prices of the DCs are
unknown prior to the graph job assignment; hence the above-defined greedy algorithms cannot be applied to this setting.
Besides the utility, we also introduce P [90%] as a performance
metric, which is the probability of selecting an allocation
that has at most 10% lesser utility as compared to the best
suggested allocation. The results are depicted in Fig. 8. With
one graph job assigned per iteration, Fig. 8(a) depicts the
convergence of P [90%] using the BRMA algorithm for all types
of jobs, while Fig. 8(b) reveals the corresponding utility of
assignment as compared to the random strategy selection.
As can be seen, the probability of choosing the high utility
strategies increases while the BRMA algorithm explores the
suggested strategies. Also, it can be seen that the proposed
BRMA algorithm has a higher utility even at the first 15
iterations. This is because at the first time-frame the BRMA
algorithm exercises one strategy from each cluster, based
on which strategies with good utilities are explored with
higher probabilities. The utility gain of at least 20% for each
graph job assignment upon using our BRMA algorithm can
be seen from Fig. 8(b). Also, Fig. 8(c) depicts the effect of
changing the similarity coefficient ks (left sub-plot) and the
effect of choosing various numbers of clusters (right sub-plot). For ks = −10 all the strategies have a high similarity
factor, and thus the algorithm does not perform well. As
this parameter increases, the effect of similarity on weight
update decreases, and the algorithm aims to select the best
strategy with the highest utility rather than selecting a group
of high utility strategies. Due to this fact, choosing ks = 1 as
compared to ks = 500 results in a higher initial utility since
the algorithm has more tendency toward choosing a portion
of strategies with high utilities. However, choosing ks = 500
leads to a higher final utility. Also, as can be seen, having
more clusters (up to the size of the time-frame) leads to a
finer-grained partitioning and better performance.
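The weight-update and clustering details of BRMA are specific to the paper, but its explore/exploit core follows the exponential-weights bandit template of [27]. A generic sketch of that template (the function names and the importance-weighted estimator here are illustrative, not the paper's exact rules):

```python
import math
import random

def exp3_select(weights, E):
    """Mix the weight distribution with uniform exploration E,
    then sample a strategy index (cf. the EXP3 family [27])."""
    total = sum(weights)
    probs = [(1 - E) * w / total + E / len(weights) for w in weights]
    r, acc = random.random(), 0.0
    for i, pr in enumerate(probs):
        acc += pr
        if r < acc:
            return i, probs
    return len(probs) - 1, probs

def exp3_update(weights, i, reward, probs, E):
    """Importance-weighted reward estimate; only the played arm i grows."""
    est = reward / probs[i]
    weights[i] *= math.exp(E * est / len(weights))
    return weights

random.seed(0)
weights = [1.0, 1.0, 1.0]
i, probs = exp3_select(weights, E=0.1)
weights = exp3_update(weights, i, reward=1.0, probs=probs, E=0.1)
```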
_6.3.3_ _DCs with Dynamic Prices_
In this case, the PAs’ payments are associated with the load
of their utilized DCs (Eq. (39)). The initial loads of the DCs
are chosen to be uniformly distributed between the 20%
and 100% of their number of slots. The presence of 5 PAs
is assumed in the system. For each PA, for each type of
job, the utility is derived by fixing all the values of ρ, χ,
and φ to 1 in Eq. (40). The results are presented in Fig. 9.
In Fig. 9(a), the average utility of the PAs for assigning one
graph job over 100 Monte-Carlo simulations is presented (left
plot). As can be seen, at least 20% utility gain is obtained
using our RMBA algorithm. To demonstrate the real-time
performance of the RMBA algorithm, in the right plot of
Fig. 9(a), the utility of all the PAs for each type of graph
job in one round of Monte-Carlo simulation is depicted.
As can be seen, after exploring the environment and the
other PAs’ actions, each PA identifies the more rewarding
strategies. Also, due to the inherent relationship between the
utility of the PAs and the power consumption of the DCs,
this method leads to less power consumption of DCs. This
fact is revealed in Fig. 9(b), where the corresponding power
consumption associated with the utilized DCs opted by all
the PAs is depicted for the RMBA algorithm and the random
selection strategy. Fig. 9(c) depicts the reduction in currency
_in circulation obtained through less money gathering from_
the PAs and less payment for the electricity using the RMBA
algorithm as compared to the random strategy selection.
#### 7 CONCLUSION
In this work, we study the problem of graph job allocation in
geo-distributed cloud networks (GDCNs). The slot-based
quantization of the resources of the DCs is considered.
Inspired by big-data driven applications, it is considered
that tasks are composed of multiple sub-tasks, which need
multiple slots of the DCs with a determined communication
pattern. The cost-effective graph job allocation in GDCNs is
formulated as an integer programming problem. For small-scale GDCNs, given the feasible assignments of the graph
jobs, we propose an analytic sequential sub-optimal solution
to the problem. For medium-scale GDCNs, we introduce
a distributed algorithm using the communication infrastructure of the network. Given the impracticality of those
methods in large-scale GDCNs, we propose a decentralized
graph job allocation framework based on the idea of strategy
suggestion using our introduced cloud crawlers (CCRs). To
select efficient strategies from the pool of suggested strategies,
we propose two online learning algorithms for the PAs
considering fixed and adaptive pricing of DCs. Extensive
simulations are conducted to reveal the effectiveness of all
the proposed algorithms in GDCNs with different scales.
For future work, we suggest studying graph jobs with
heterogeneous order of nodes’ execution. Also, encapsulating
the mathematical model of network link outages into the
allocation of graph jobs among multiple DCs is worth further
investigation.
#### REFERENCES
[1] I. A. T. Hashem, I. Yaqoob, N. B. Anuar, S. Mokhtar, A. Gani, and
S. U. Khan, “The rise of “big data” on cloud computing: Review and open
research issues,” Inform. Syst., vol. 47, pp. 98–115, 2015.
[2] H. Cai, B. Xu, L. Jiang, and A. V. Vasilakos, “IoT-based big data
storage systems in cloud computing: Perspectives and challenges,”
_IEEE Internet Things J., vol. 4, no. 1, pp. 75–87, Feb 2017._
[3] L.-J. Zhang, “Editorial: Big services era: Global trends of cloud
computing and big data,” IEEE Trans. Services Computing, vol. 5,
no. 4, pp. 467–468, 2012.
[4] C. Ji, Y. Li, W. Qiu, U. Awada, and K. Li, “Big data processing in
cloud computing environments,” in Proc. 12th Int. Symp. Pervasive
_Syst., Alg. Netw., Dec 2012, pp. 17–23._
[5] C. Yang, Q. Huang, Z. Li, K. Liu, and F. Hu, “Big data and cloud
computing: innovation opportunities and challenges,” Int. J. Digit.
_Earth, vol. 10, no. 1, pp. 13–53, 2017._
[6] D. Agrawal, S. Das, and A. El Abbadi, “Big data and cloud
computing: Current state and future opportunities,” in Proc. 14th
_Int. Conf. Extending Database Technol., ser. EDBT/ICDT ’11._ ACM,
2011, pp. 530–533.
[7] [Apache storm, http://storm.apache.org/, accessed: 2018-03-03.](http://storm.apache.org/)
[8] [“GraphLab,” https://turi.com/, accessed: 2018-03-03.](https://turi.com/)
[9] S. Soares, IBM InfoSphere: A platform for Big Data governance and
_process data governance._ MC Press, 2013.
[10] T. Condie, N. Conway, P. Alvaro, J. M. Hellerstein, K. Elmeleegy,
and R. Sears, “MapReduce online,” vol. 10, no. 1, 2010, p. 20.
[11] Yahoo’s open sourced S4 could be a real-time cloud platform.
[[Online]. Available: https://www.programmableweb.com/news/](https://www.programmableweb.com/news/yahoos-open-sourced-s4-could-be-real-time-cloud-platform/2010/12/31)
[yahoos-open-sourced-s4-could-be-real-time-cloud-platform/](https://www.programmableweb.com/news/yahoos-open-sourced-s4-could-be-real-time-cloud-platform/2010/12/31)
[2010/12/31](https://www.programmableweb.com/news/yahoos-open-sourced-s4-could-be-real-time-cloud-platform/2010/12/31)
[12] L. Shi, Z. Zhang, and T. Robertazzi, “Energy-aware scheduling
of embarrassingly parallel jobs and resource allocation in cloud,”
_IEEE Trans. Parallel Distrib. Syst., vol. 28, no. 6, pp. 1607–1620, June_
2017.
[13] W. Wang, B. Liang, and B. Li, “Multi-resource fair allocation in
heterogeneous cloud computing systems,” IEEE Trans. Parallel
_Distrib. Syst., vol. 26, no. 10, pp. 2822–2835, Oct 2015._
[14] S. Ren, Y. He, and F. Xu, “Provably-efficient job scheduling for
energy and fairness in geographically distributed data centers,” in
_Proc. IEEE 32nd Int. Conf. Distrib. Comput. Syst., 2012, pp. 22–31._
[15] Y. Yao, L. Huang, A. Sharma, L. Golubchik, and M. Neely, “Data
centers power reduction: A two time scale approach for delay
tolerant workloads,” in Proc. IEEE Int. Conf. Comput. Commun.
_(INFOCOM), 2012, pp. 1431–1439._
[16] H. Xu and B. Li, “Joint request mapping and response routing for
geo-distributed cloud services,” in Proc. IEEE Int. Conf. Comput.
_Commun. (INFOCOM), 2013, pp. 854–862._
[17] R. Kaewpuang, D. Niyato, P. Wang, and E. Hossain, “A framework
for cooperative resource management in mobile cloud computing,”
_IEEE J. Sel. Areas Commun., vol. 31, no. 12, pp. 2685–2700, 2013._
[18] T. Chen, A. G. Marques, and G. B. Giannakis, “DGLB: Distributed
stochastic geographical load balancing over cloud networks,” IEEE
_Trans. Parallel Distrib. Syst., vol. 28, no. 7, pp. 1866–1880, July 2017._
[19] P. Poullie, T. Bocek, and B. Stiller, “A survey of the state-of-the-art
in fair multi-resource allocations for data centers,” IEEE Trans. Netw.
_Service Manage., vol. 15, no. 1, pp. 169–183, 2018._
[20] X. Liu, X. Zhang, W. Li, and X. Zhang, “Swarm optimization algorithms applied to multi-resource fair allocation in heterogeneous
cloud computing systems,” Computing, vol. 99, no. 12, pp. 1231–
1255, 2017.
[21] Z. Zhang and X. Zhang, “A load balancing mechanism based on
ant colony and complex network theory in open cloud computing
federation,” in Proc. 2nd Int. Conf. Ind. Mechatronics Autom., vol. 2,
May 2010, pp. 240–243.
[22] A. C. Zhou, Y. Gong, B. He, and J. Zhai, “Efficient process mapping
in geo-distributed cloud data centers,” in Proc. Int. Conf. High
_Performance Comput., Netw., Storage Anal., ser. SC ’17._ New York,
NY, USA: ACM, 2017, pp. 16:1–16:12.
[23] J. Ghaderi, S. Shakkottai, and R. Srikant, “Scheduling storms and
streams in the cloud,” in Proc. 2015 ACM SIGMETRICS Int. Conf.
_Measurement Modeling Comput. Syst._ ACM, 2015, pp. 439–440.
[24] ——, “Scheduling storms and streams in the cloud,” ACM Trans.
_Model. Perform. Eval. Comput. Syst., vol. 1, no. 4, pp. 14:1–14:28,_
2016.
[25] K. Psychas and J. Ghaderi, “On non-preemptive VM scheduling
in the cloud,” in Proc. ACM Meas. Anal. Comput. Syst., vol. 1, no. 2.
ACM, Dec. 2017, pp. 35:1–35:29.
[26] M. Dayarathna, Y. Wen, and R. Fan, “Data center energy consumption modeling: A survey,” IEEE Commun. Surveys Tut., vol. 18, no. 1,
pp. 732–794, 2016.
[27] P. Auer, N. Cesa-Bianchi, P. Fischer, and R. Schapire, “The nonstochastic multiarmed bandit problem,” SIAM J. Comput., vol. 32,
no. 1, pp. 48–77, 2002.
[28] S. Hart and A. Mas-Colell, “A simple adaptive procedure leading to
correlated equilibrium,” Econometrica, vol. 68, no. 5, pp. 1127–1150,
2000.
[29] K. M. Sim, “Agent-based interactions and economic encounters
in an intelligent intercloud,” IEEE Trans. on Cloud Comput., vol. 3,
no. 3, pp. 358–371, July 2015.
[30] S. Hosseinalipour and H. Dai, “A two-stage auction mechanism
for cloud resource allocation,” IEEE Trans. Cloud Comput., pp. 1–1,
2019.
[31] S. Hosseinalipour and H. Dai, “Options-based sequential auctions
for dynamic cloud resource allocation,” in Proc. 2017 IEEE Int. Conf.
_Commun. (ICC), May 2017, pp. 1–6._
[32] R. C. Read and D. G. Corneil, “The graph isomorphism disease,” J.
_Graph Theory_, vol. 1, no. 4, pp. 339–363, 1977.
[33] Y. Yao, L. Huang, A. B. Sharma, L. Golubchik, and M. J. Neely,
“Power cost reduction in distributed data centers: A two-time-scale
approach for delay tolerant workloads,” IEEE Trans. Parallel Distrib.
_Syst., vol. 25, no. 1, pp. 200–211, Jan 2014._
[34] S. Boyd and L. Vandenberghe, Convex optimization. Cambridge
university press, 2004.
[35] J. L. Bentley, “Multidimensional binary search trees used for
associative searching,” Commun. ACM, vol. 18, no. 9, pp. 509–517,
1975.
[36] B. Johansson, T. Keviczky, M. Johansson, and K. H. Johansson,
“Subgradient methods and consensus algorithms for solving convex
optimization problems,” in Proc. 47th IEEE Conf. Decision Control
_(CDC), 2008, pp. 4185–4190._
[37] A. Heydon and M. Najork, “Mercator: A scalable, extensible web
crawler,” World Wide Web, vol. 2, no. 4, pp. 219–229, 1999.
[38] J. Cho, H. Garcia-Molina, and L. Page, “Efficient crawling through
url ordering,” Comput. Netw. ISDN Syst., vol. 30, no. 1-7, pp. 161–
172, 1998.
[39] P. Boldi, B. Codenotti, M. Santini, and S. Vigna, “Ubicrawler: A
scalable fully distributed web crawler,” Software: Practice Experience,
vol. 34, no. 8, pp. 711–726, 2004.
[40] T. H. Cormen, C. E. Leiserson, R. L. Rivest, and C. Stein, Introduction
_to Algorithms, 3rd ed._ The MIT Press, 2009.
[41] “Data Structures: List,” [https://docs.python.org/3/tutorial/](https://docs.python.org/3/tutorial/datastructures.html)
[datastructures.html, accessed: 04-04-2018.](https://docs.python.org/3/tutorial/datastructures.html)
[42] S. Lloyd, “Least squares quantization in PCM,” IEEE trans. Inf.
_Theory, vol. 28, no. 2, pp. 129–137, 1982._
[43] J. Zhao, H. Li, C. Wu, Z. Li, Z. Zhang, and F. C. Lau, “Dynamic
pricing and profit maximization for the cloud with geo-distributed
data centers,” in Proc. IEEE Conf. Comput. Commun. (INFOCOM),
2014, pp. 118–126.
[44] J. Wan, R. Zhang, X. Gui, and B. Xu, “Reactive pricing: an adaptive
pricing policy for cloud providers to maximize profit,” IEEE Trans.
_Netw. Service Manage., vol. 13, no. 4, pp. 941–953, 2016._
[45] L. Gyarmati and T. A. Trinh, “Scafida: A scale-free network inspired
data center architecture,” ACM SIGCOMM Comput. Commun. Rev.,
vol. 40, no. 5, pp. 4–12, 2010.
[46] A.-L. Barabási, R. Albert, and H. Jeong, “Scale-free characteristics
of random networks: the topology of the world-wide web,” Physica
_A: Statist. Mech. Appl., vol. 281, no. 1, pp. 69–77, 2000._
**Seyyedali Hosseinalipour (S’17) received the**
B.S. degree in Electrical Engineering from Amirkabir University of Technology (Tehran Polytechnic), Tehran, Iran in 2015. He is currently pursuing
a Ph.D. degree in the Department of Electrical
and Computer Engineering at North Carolina
State University, Raleigh, NC, USA. His research
interests include analysis of wireless networks,
resource allocation and load balancing for cloud
networks, and analysis of vehicular ad-hoc networks.
**Anuj Nayak received his B. E. degree in Electron-**
ics and Communication Engineering from PES
Institute of Technology, Bengaluru, India in 2014.
He worked with Signalchip Innovations Pvt. Ltd.,
India as an algorithm design engineer from 2014
to 2016. He joined Master of Science in Electrical
Engineering at North Carolina State University in
2016. His research interests are in the area of
complex networks, statistical signal processing
and artificial intelligence.
**Huaiyu Dai (F ’17) received the B.E. and M.S.**
degrees in electrical engineering from Tsinghua
University, Beijing, China, in 1996 and 1998,
respectively, and the Ph.D. degree in electrical
engineering from Princeton University, Princeton,
NJ in 2002.
He was with Bell Labs, Lucent Technologies,
Holmdel, NJ, in summer 2000, and with AT&T
Labs-Research, Middletown, NJ, in summer 2001.
He is currently a Professor of Electrical and
Computer Engineering with NC State University,
Raleigh, holding the title of University Faculty Scholar. His research
interests are in the general areas of communication systems and
networks, advanced signal processing for digital communications, and
communication theory and information theory. His current research
focuses on networked information processing and cross-layer design
in wireless networks, cognitive radio networks, network security, and
associated information-theoretic and computation-theoretic analysis.
He has served as an editor of IEEE Transactions on Communications,
IEEE Transactions on Signal Processing, and IEEE Transactions on
Wireless Communications. Currently he is an Area Editor in charge of
wireless communications for IEEE Transactions on Communications.
He co-chaired the Signal Processing for Communications Symposium
of IEEE Globecom 2013, the Communications Theory Symposium of
IEEE ICC 2014, and the Wireless Communications Symposium of IEEE
Globecom 2014. He was a co-recipient of best paper awards at 2010
IEEE International Conference on Mobile Ad-hoc and Sensor Systems
(MASS 2010), 2016 IEEE INFOCOM BIGSECURITY Workshop, and
2017 IEEE International Conference on Communications (ICC 2017).
XV Simpósio Brasileiro em Segurança da Informação e de Sistemas Computacionais — SBSeg 2015
# zkPAKE: A Simple Augmented PAKE Protocol
**Karina Mochetti[1]** **, Amanda C. Davi Resende[1]** **, Diego F. Aranha[1][∗]**
1 Institute of Computing (UNICAMP)
Av. Albert Einstein, 1251, 13083-852, Campinas-SP, Brazil
{mochetti,dfaranha}@ic.unicamp.br, amanda@lasca.ic.unicamp.br
**_Abstract. Human memory is notoriously unreliable in memorizing long secrets,_**
_such as large cryptographic keys. Password-based Authenticated Key Exchange_
_(PAKE) protocols securely establish a cryptographic key based only on the_
_knowledge of a much shorter password. In this work, an augmented PAKE pro-_
_tocol is designed and proposed for secure banking applications, requiring the_
_server to store only the image of the password under a one-way function. The_
_protocol is more efficient than alternatives because it requires fewer public key_
_operations and a lower communication overhead._
## 1. Introduction
Cryptographic keys for encryption and signature schemes must be generated randomly
and can have from a few hundred bits to many thousand bits. Since human memory
can hardly memorize such amount of unstructured data, keys are often stored in external
devices. However, this is not always possible and a secure communication key must be
established using a smaller and simpler password, that the user is able to remember.
A Password-Authenticated Key Exchange (PAKE) protocol is a method for establishing secure cryptographic keys based only on the knowledge of a simple password,
short enough to be easily memorized by humans [Boyd and Mathuria 2003]. In most
PAKE protocols, the server and the client share only the knowledge of the password in
some form and use it to negotiate a shared key in an authenticated way.
The first PAKE protocol [Lomas et al. 1989] was developed under the additional
assumption that the client has knowledge of the server public key, alongside the shared
password. Other protocols have been developed over the years, but the main limitation in
practice nowadays is that the more mature protocols are patented.
In this work, we reassure the importance of PAKE protocols in secure banking
applications, with emphasis to augmented PAKE protocols, and propose an augmented
PAKE protocol constructed from zero-knowledge proofs.
## 2. Background and Related Work
The main goal of a PAKE protocol is to establish a cryptographic key between a client and
a server, based only on their knowledge of a password, without relying on a Public Key
Infrastructure (PKI), which is complex and subject to man-in-the-middle attacks. The
most efficient and commonly used PAKE protocols are EKE [Bellovin and Merritt 1992]
and SPEKE [Jablon 1996], constructed from the basic Diffie-Hellman protocol. The main
difference in their construction is that the SPEKE protocol uses the password as the group
generator, while the EKE protocol uses it only as an auxiliary encryption key.
_∗Supported by Intel in the scope of the project “Physical Unclonable Functions for SoC Devices”._
334 © 2015 SBC (Sociedade Brasileira de Computação)
A secure PAKE protocol must fulfill four basic security requirements
[Hao and Ryan 2010]: it must be resistant to both offline and online dictionary attacks,
provide forward security and known-session security. Dictionary attacks consist of an exhaustive search of the password based on a list of words which are guessed as most likely
to succeed. An online attack tries several inputs against a legitimate protocol, while an
offline attack attempts to emulate the protocol using several known outputs. Therefore,
a PAKE protocol implementation cannot leak any information that allows an attacker to
learn the password through an exhaustive search.
A protocol is forward secure if it ensures that session keys remain secure even if
the password is disclosed. This property implies that if an attacker knows the password
but only passively observes the key exchange, he cannot derive the session key. Finally,
in a known-session secure protocol, a compromised session should not harm the security
of any other sessions, i.e., an attacker may have all information specific to the session, but
this must not affect the security of other established sessions.
An extra security requirement can be resistance against server compromise. To
accomplish this, the protocol must assure that an attacker cannot impersonate a user even
if the credential files are stolen. PAKE protocols with this feature are called augmented
_PAKEs [Perlman and Kaufman 1999], as opposed to balanced PAKEs [Jablon 1996]._
In an augmented PAKE the server does not know or store a plaintext password,
but an image of the client’s password under a one-way function. Augmented PAKE protocols are usually more complex and computationally expensive than balanced PAKEs.
For some applications, this feature is not useful and the additional complexity and computational costs are not worthy. Such applications use secure balanced PAKE protocols,
such as EKE and SPEKE, but without resistance against server compromise. For other
applications, such as secure banking though, resisting server compromises can be critical,
even with some performance penalty.
Secure banking typically employs cryptographic protocols to provide secure communication between two parties, such as Transport Layer Security (TLS) and Secure
Sockets Layer (SSL). Although popular, these protocols are subject to man-in-the-middle
attacks [Anderson 2001] and are sensitive to user flaws; most users click through certificate warnings and ignore browser security indicators [Engler et al. 2009].
In this scenario, the client already knows a small and simple password to be able
to perform transactions in the server maintained by the bank. If this password is stored as
plaintext in the server, any malicious employee may successfully impersonate the client
in a balanced PAKE protocol. Therefore, for this kind of application, resistance against
server compromise is important, preventing an insider from impersonating the client.
Note that not all bank employees may have control over clients’ accounts to perform transactions, especially the ones involved in maintaining the computer infrastructure.
To solve this problem we design an augmented PAKE. In this case, the user will
have to register his/her password with the bank upon opening an account. This will be performed in the enrollment phase, in which the bank will receive and store an image of the
password. Now, a malicious employee does not have knowledge of the plaintext password
and cannot impersonate the user on the authentication phase of a PAKE protocol.
## 3. Our Protocol
In this Section we describe our contribution, the zkPAKE protocol, presented in Figure 1.
zkPAKE is an augmented PAKE protocol, based on zero-knowledge proof of knowledge
(ZKPK), a feature shared with some authentication protocols.
Shared information: g, a generator of the group G; the server stores g^r, where r = H1(pwd).

1. Server: picks a nonce N and sends it to the client.
2. Client: recomputes r = H1(pwd), samples v ←$ Zq, and computes t = g^v, c = H1(g, g^r, t, N), and u = v − H1(c)·r mod q. It sends (u, H1(c)) to the server and derives the session key skc = H2(c).
3. Server: computes t′ = g^u · g^{r·H1(c)} and c′ = H1(g, g^r, t′, N), and checks that H1(c′) = H1(c). If so, it derives sks = H2(c′) and sends H1(sks) to the client.
4. Client: checks that H1(sks) = H1(skc).

**Figure 1. zkPAKE Protocol.**
An enrollment phase must be held before the main zkPAKE protocol execution.
This phase is performed in physically secure way between the client and the server, such
as an user registering his/her password in person within the bank. A shared secret is
then generated based on the password pwd and the generator g of group G. The client
computes the secret g[r], with r being a hash of the password pwd and sends it privately
to the server. Note that, instead of storing and using the password directly, the server
will use the image of the password in the authentication phase, satisfying the augmented
PAKE definition. The enrollment phase needs to be executed only once for each client.
The next phase consists in the authentication steps of the basic PAKE protocol.
The server begins the transmission by sending a nonce N. The client is able to calculate g[r]
and generate a secret key skc = H2(c) using a technique similar to a protocol for zero-knowledge proof of possession [Chaum et al. 1987]. After u and H1(c) are returned, the
server can generate and prove knowledge of a secret key sks = H2(c[′]). Note that in our
construction the authentication is done by both sides, thus the protocol inherently provides
mutual authentication.
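The message flow of Figure 1 can be exercised end to end in a few lines. The following Python sketch uses a deliberately tiny safe-prime group and ad-hoc hash encodings purely for illustration; a real deployment would use a standardized large group or an elliptic curve.

```python
import hashlib
import secrets

# Toy parameters: p = 2q + 1 is a safe prime and g = 4 generates
# the subgroup of prime order q.  For illustration only.
p, q, g = 2039, 1019, 4

def H1(*parts):
    """Hash to Z_q (models H1 in the protocol); encoding is ad hoc."""
    h = hashlib.sha256("|".join(str(x) for x in parts).encode()).digest()
    return int.from_bytes(h, "big") % q

def H2(x):
    """Session-key derivation (models H2)."""
    return hashlib.sha256(str(x).encode()).hexdigest()

# --- Enrollment (performed once, over a physically secure channel) ---
pwd = "s3cret-pin"
r = H1(pwd)
gr = pow(g, r, p)            # the server stores only g^r, never pwd

# --- Authentication ---
N = secrets.randbelow(q)     # server -> client: nonce

# Client side
v = secrets.randbelow(q)
t = pow(g, v, p)
c = H1(g, gr, t, N)
u = (v - H1(c) * r) % q
sk_client = H2(c)
# client -> server: (u, H1(c))

# Server side: t' = g^u * (g^r)^{H1(c)} = g^v, so c' = c
t_prime = (pow(g, u, p) * pow(gr, H1(c), p)) % p
c_prime = H1(g, gr, t_prime, N)
assert H1(c_prime) == H1(c)  # proof of knowledge verifies
sk_server = H2(c_prime)
# server -> client: H1(sk_server); client compares against H1(sk_client)
assert H1(sk_server) == H1(sk_client)
```

Both sides end up with the same session key without the password, or its image, ever crossing the wire during authentication.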
## 4. Results
Table 1 presents a performance analysis of our protocol, comparing it with the main PAKE
protocols proposed in the literature. The number of exponentiations on the client and server
side is computed for each protocol, considering the usual optimizations for implementing exponentiations depending on the base. For simplicity, symmetric encryption, hash
functions and other cheap operations are not taken into account. Powering an unknown
base has unit cost (1.0), while fixed-base exponentiation costs half as much (0.5).
Double exponentiation can be implemented by interleaving to save squarings, costing
one and a half units (1.5). All protocols can be instantiated over elliptic curve groups, enjoying these optimizations [Hankerson et al. 2003]. The computation and communication
savings of zkPAKE compared to the alternatives become clear.
© 2015 SBC — Sociedade Brasileira de Computação
XV Simpósio Brasileiro em Segurança da Informação e de Sistemas Computacionais — SBSeg 2015
|Protocol|Type|Exp (Client)|Exp (Server)|Exp (Total)|Messages|
|---|---|---|---|---|---|
|EKE|balanced|1.5|1.5|3|4|
|SPEKE|balanced|2|1.5|3.5|3|
|J-PAKE|balanced|4|4|8|4|
|A-EKE|augmented|1.5|1.5|3|5|
|B-SPEKE|augmented|3|3|6|3|
|SRP|augmented|2.5|2|4.5|4|
|zkPAKE|augmented|1|1|2|3|

**Table 1. Performance comparison among PAKE protocols, by number of messages and exponentiations computed by server/client. Powering an unknown base costs 1.0, a fixed base costs 0.5, and double exponentiation costs 1.5.**
## 5. Conclusion
In this work we reviewed PAKE protocols, a method to establish secure cryptographic
keys based only on the knowledge of a simple password, focusing on augmented PAKEs.
We proposed an augmented PAKE protocol that improves on the performance of the protocols found in the literature, in both computation and communication costs. A formal
security analysis is under way.
## References
Anderson, R. J. (2001). Security Engineering: A Guide to Building Dependable Distributed Systems. John Wiley & Sons, Inc., New York, NY, USA, 1st edition.

Bellovin, S. M. and Merritt, M. (1992). Encrypted Key Exchange: Password-based Protocols Secure Against Dictionary Attacks. In IEEE Computer Society Symposium on Research in Security and Privacy, Oakland, CA, USA, pages 72–84.

Boyd, C. and Mathuria, A. (2003). Protocols for Authentication and Key Establishment. Information Security and Cryptography. Springer.

Chaum, D., Evertse, J., and van de Graaf, J. (1987). An Improved Protocol for Demonstrating Possession of Discrete Logarithms and Some Generalizations. In Advances in Cryptology (EUROCRYPT), Amsterdam, The Netherlands, pages 127–141.

Engler, J., Karlof, C., Shi, E., and Song, D. (2009). Is it too late for PAKE? In Web 2.0 Security and Privacy Workshop (W2SP).

Hankerson, D., Menezes, A. J., and Vanstone, S. (2003). Guide to Elliptic Curve Cryptography. Springer-Verlag, Secaucus, NJ, USA.

Hao, F. and Ryan, P. (2010). J-PAKE: Authenticated Key Exchange without PKI. Transactions on Computational Science, 11:192–206.

Jablon, D. P. (1996). Strong Password-only Authenticated Key Exchange. Computer Communication Review, 26(5):5–26.

Lomas, T. M. A., Gong, L., Saltzer, J. H., and Needham, R. M. (1989). Reducing Risks from Poorly Chosen Keys. In 12th ACM SOSP, pages 14–18.

Perlman, R. J. and Kaufman, C. (1999). Secure Password-Based Protocol for Downloading a Private Key. In Network and Distributed System Security Symposium (NDSS).
Acta Cybernetica 25 (2021) 319–349.
# Lamred: Location-Aware and Privacy Preserving Multi-Layer Resource Discovery for IoT[∗]
### Mohammed B. M. Kamel[abcd], Peter Ligeti[ae], and Christoph Reich[bf]
**Abstract**
The resources in the Internet of Things (IoT) network are geographically
distributed among different parts of the network. Considering the huge number
of IoT resources, the task of discovering them is challenging. Registering them in a centralized server such as a cloud data center is one possible
solution, but due to the billions of IoT resources and their limited computation
power, the centralized approach leads to efficiency and security issues.
In this paper we propose a location-aware and privacy preserving multilayer model of resource discovery (Lamred) in IoT. It allows a resource to be
registered publicly or privately, and to be discovered with different locality
levels in a decentralized scheme in the IoT network. Lamred is based on
a structured peer-to-peer (P2P) scheme and follows the general system trend
of fog/edge computing. Our model proposes a Region-based Distributed Hash
Table (RDHT) to create a P2P scheme of communication among fog nodes.
The resources are registered in Lamred based on their locations, which results
in low added overhead in the registration and discovery processes. Lamred
generates a single overlay that can be built without a specific organizing
entity or location-based devices. Lamred guarantees several important security
properties and shows lower latency compared to centralized and
decentralized resource discovery models.
**Keywords: resource discovery, DHT, IoT**
∗ This research has been partially supported by the Application Domain Specific Highly Reliable IT Solutions project, implemented with support from the National Research, Development and Innovation Fund of Hungary under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme, by the ÚNKP-20-3 New National Excellence Program of the Ministry for Innovation and Technology from the National Research, Development and Innovation Fund, by the SH program, and by the Ministry of Science, Research and the Arts Baden-Württemberg, Germany.

a: Eötvös Loránd University, Budapest, Hungary
b: Hochschule Furtwangen University, Furtwangen, Germany
c: University of Kufa, Najaf, Iraq
d: E-mail: mkamel@inf.elte.hu, mkamel@hs-furtwangen.de, ORCID: 0000-0003-1619-2927
e: E-mail: turul@cs.elte.hu, ORCID: 0000-0002-3998-0515
f: E-mail: christoph.reich@hs-furtwangen.de, ORCID: 0000-0001-9831-2181

DOI: 10.14232/actacyb.289938
## 1 Introduction
The Internet of Things (IoT) network consists of billions of resources distributed
across different parts of the network. The huge number of resources and their different
levels of accessibility (e.g. private resources, local resources and public resources)
make registering and discovering them challenging. Adopting a
centralized scheme, such as relying on a cloud service, helps organize the resources
in an entity that has high computation capability and can be used to discover
the registered resources. However, in systems that rely only on a centralized entity,
a significant amount of traffic is needed for the registration and discovery
processes, which might affect the overall efficiency of the system. Compared to
the cloud computing infrastructure, which sends the traffic to a centralized cloud data
center, the fog/edge nodes in fog and edge computing infrastructures try to
distribute the data among nodes and keep it as close as possible to the origin
of the data. Hence, fog computing extends cloud computing to the edge of
the network, close to the point of origin of the data [3]. Processing the data locally
during the registration and discovery of resources helps to achieve scalability, and at the
same time mitigates the potential privacy and security risks of a single point
of attack and failure. However, there has to be a decentralized scheme
that defines and arranges the relationship between the fog/edge nodes and their
responsibilities.

A Distributed Hash Table (DHT) creates an overlay by assigning seemingly
unique identifiers to the participating nodes. The generated overlay can be used
to organize the distributed nodes in the decentralized resource registration and
discovery processes [15]. The identifiers in a DHT are generated by feeding some of
the parameters of the peer nodes (e.g. IP addresses) to a hash function, and the
output is used as the identifier of each node. Depending on its identifier, each node
resides in a specific location in the overlay with predefined responsibilities. Due
to the random-looking behaviour of hash functions, the outputs of relatively
close parameters in the input range might not be close in the hash space. While
this property is required to ensure the random and uniform distribution of nodes
and of the stored data in the overlay, adopting the original DHT technique in fog
and edge computing infrastructures might result in two adjacent nodes residing
in two far-apart locations in the overlay. As a result, while adopting a DHT for resource
discovery [15] removes the centralized entity, it might map geographically
close nodes to distant positions in the resulting space. If the nodes in the resource
discovery models are distributed without considering their physical locations, an
efficiency issue might arise. This is because the path between nodes
in the underlying network can differ from the logical path in the
overlay network that organizes the distributed nodes. Thus the lookup latency
can be high, which leads to operational inefficiency in applications
running over it [18]. While organizing nodes in the resource discovery model, the
locations of nodes have to be taken into consideration. Afterwards, a resource is
registered, based on its location, in a close node in the distributed system, which
reduces the time required to register and reach that specific node.
Therefore, while adopting a DHT as a structured Peer-to-Peer (P2P) scheme for
organizing fog and edge nodes in IoT has advantages such as scalability and
functionality without any centralized entity, a DHT might cause the
data to be stored in a far-away node. In this paper we propose Lamred, a location-aware and
privacy preserving multi-layer model of resource discovery in IoT. Lamred
aims to keep the data as close as possible to its origin by taking into
consideration the locations of both resources and IoT gateways and by utilizing a single
DHT overlay. It can be implemented without specific location-based devices, and
adds no extra local overhead compared to traditional DHT overlays. The
main contributions of this paper are:

- We propose Lamred, a new DHT based model as a P2P overlay for resource
discovery in the IoT network.

- We propose a Region-based Distributed Hash Table (RDHT) for location-aware
resource registration and discovery. Lamred keeps the resources as close as
possible to the clients, hence reducing the time required during the registration and discovery processes.

- We propose a private tag generation method in Lamred for private resource registration and discovery.

- We use cryptographic primitives to protect the private resources in the system
and to ensure the required anonymity and privacy in Lamred.

The rest of this paper is organized as follows. The next section defines some
preliminaries. Section 3 summarizes the efforts in the current research field of
resource discovery. Section 4 describes Lamred, the proposed model of resource
discovery, and introduces its components. In Section 5 we evaluate the
model, prove the required security properties and discuss the performance of Lamred. Finally, we conclude our work in Section 6.
## 2 Preliminaries
### 2.1 Cryptographic Primitives
**Definition 1 (collision-resistant one-way hash function).** A function H(.) that
maps an arbitrary-length input M into a fixed-length digest d is called a collision-resistant one-way hash function if it satisfies the following properties:

- Given M, it is easy to compute H(M).

- Given d, it is hard to find any M s.t. d = H(M).

- Given d = H(M) and M, it is hard to find M′ s.t. M′ ≠ M and H(M) = H(M′).

- It is hard to find two distinct messages M′ and M′′ s.t. M′ ≠ M′′ and H(M′) = H(M′′).

If the solution can be computed in polynomial time, it is considered
easy to compute. On the other hand, if no polynomial-time solution to the
problem is known, it is considered a hard problem [7].
**Definition 2 (Probabilistic Polynomial Time).** An algorithm A is PPT (Probabilistic Polynomial Time) if it is probabilistic and ∃c ∈ ℕ such that ∀x, A(x) halts in |x|^c steps.
### 2.2 Distributed Hash Table
A DHT is a distributed system that creates a structured P2P overlay in a network.
The participating nodes in the network can join and leave the DHT at any time.
Upon joining, a new identifier is assigned to the node and, depending
on the assigned identifier, it becomes responsible for storing a subset of the data in the
network. Using multiple replicas helps the DHT to be fault tolerant and improves
the availability of data in the network. Using identifiers instead of other types
of addressing (e.g. IPs) helps to balance and manage the data storage among
participating nodes without any centralized entity. In addition to load balancing,
this solves scalability, since the identifiers are generated by the
participating nodes themselves. There are several protocols that implement a DHT,
such as Chord [29], Kademlia [20], Pastry [28], and Tapestry [33].

A DHT uses a large address space of integer numbers. The size of the address
space depends on the fixed output size of the function that is used to generate
the identifiers. The size of the key space is the same as the address space, i.e. the
same function is used to generate identifiers for nodes and keys for the stored data.
To achieve random identifier generation and a uniform distribution
of data among all participating nodes, a collision-resistant one-way hash function
(Definition 1) is used in the DHT.

Similar to hash tables [19], the data in a DHT is stored in key/value pairs. The
value parameter includes any stored information about the data (e.g. the address of
the data) and can be retrieved from the DHT based on its associated key. The key
parameter of the key/value pair is generated by feeding specific information (e.g.
attributes of the stored data such as its name, its type, etc.) to the collision-resistant one-way hash function, which produces a uniformly distributed randomized
hash value. The generated hash value, which represents the key parameter in the
key/value pair, is used to determine the node in the network responsible for storing
this specific pair. To achieve distributed indexing, the DHT defines a specific
portion of the key space that each particular node is responsible for. A DHT has two
interfaces for storing and lookup: Put and Get. The Put interface
takes a key/value pair and stores this pair in the DHT. The Get interface takes a
single parameter, key, and looks up the DHT to retrieve the identifier of the node that
is responsible for storing the value corresponding to the given key. In the DHT, the
store (i.e. Put) and lookup (i.e. Get) operations are guaranteed
to complete within an upper bound of O(log(n)) hops, where n is the number of nodes in
the DHT. This guarantees that any participating node in the DHT can store a
key/value pair or look up a given key by routing through at most log(n) nodes.
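The Put/Get responsibility mapping described above can be sketched in a few lines. This is a single-process toy, assuming a Chord-style successor rule and hypothetical node addresses; a real DHT routes Put/Get across the network in O(log n) hops instead of resolving them locally:

```python
import hashlib
from bisect import bisect_left

def _id(data: bytes, bits: int = 32) -> int:
    """Derive an identifier/key in the DHT address space via H(.)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (2**bits)

class ToyDHT:
    """Sketch of DHT indexing: each node owns the arc of the identifier
    circle ending at its own identifier (Chord-style successor rule)."""

    def __init__(self, node_addresses):
        self.node_ids = sorted(_id(a.encode()) for a in node_addresses)
        self.storage = {nid: {} for nid in self.node_ids}

    def _responsible(self, key: int) -> int:
        i = bisect_left(self.node_ids, key)
        return self.node_ids[i % len(self.node_ids)]  # wrap around the circle

    def put(self, name: str, value: str) -> None:
        key = _id(name.encode())
        self.storage[self._responsible(key)][key] = value

    def get(self, name: str):
        key = _id(name.encode())
        return self.storage[self._responsible(key)].get(key)

dht = ToyDHT([f"10.0.0.{i}" for i in range(8)])
dht.put("temperature-sensor-42", "coap://10.0.3.7:5683")
assert dht.get("temperature-sensor-42") == "coap://10.0.3.7:5683"
```

Note how the same hash function generates both node identifiers (from addresses) and keys (from resource names), which is what places keys and nodes in one shared address space.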
### 2.3 Resource Discovery
Resources in IoT can be IoT data or IoT devices. These resources are registered
in the network and can be discovered by clients. The discovery process obtains
the access address (e.g. URI, or IP and port addresses) of IoT data, IoT devices,
or a combination of them as a result of discovering (i.e. querying) the resources
in the network. The search techniques can be functional (event-based, location-based, time-related, content-based, spatio-temporal, context-based, real-time
and user-interactive searching) or implementational (text-based, metadata-based or
ontology-based approaches) [25].
The resources can be registered in different parts of the network, distributed
among many nodes (Figure 1a), or in a centralized trusted entity (Figure 1b).
Resource discovery [9] is a mechanism that returns the access address of a resource
based on the information provided during the lookup operation. The resource
access address can be its IP and port addresses, its URI, or other metadata and
further links about the resource. The discovery process starts by issuing a query
including the attributes of the resources to be discovered. An attribute
of a resource can be any information that describes it, such as its location, its
type, etc. The query is issued by a client and sent to the discovery system. The
query received by the discovery system is then processed and divided into
sub-queries. For instance, the query can be divided based on the attributes of the
required resources, with a sub-query issued in the system for each attribute. The
discovery system then finds and communicates with the responsible nodes to get
the required information about the resources. After the list of resources is obtained,
the resources are ranked based on some scoring method and the final result is sent back to
the requesting client.
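The query-splitting and ranking steps above can be sketched as follows. The attribute names, the in-memory registry, and the `score` field are hypothetical illustrations; the paper does not fix a concrete schema or scoring method:

```python
def discover(query: dict, registry: list) -> list:
    """Split a multi-attribute query into per-attribute sub-queries,
    intersect the matches, and rank the result (here: by a 'score' field)."""
    # One sub-query per attribute: which registered resources match it?
    matches_per_attribute = [
        {id(r) for r in registry if r.get(attr) == wanted}
        for attr, wanted in query.items()
    ]
    matched = set.intersection(*matches_per_attribute) if matches_per_attribute else set()
    result = [r for r in registry if id(r) in matched]
    # Rank by a scoring method before returning to the client.
    return sorted(result, key=lambda r: r.get("score", 0), reverse=True)

registry = [
    {"type": "temperature", "location": "building-A", "score": 0.9,
     "address": "coap://10.0.1.4:5683"},
    {"type": "temperature", "location": "building-B", "score": 0.7,
     "address": "coap://10.0.2.9:5683"},
]
hits = discover({"type": "temperature", "location": "building-A"}, registry)
```

In a deployed system each sub-query would be routed to the node responsible for that attribute's key rather than filtered over a local list.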
The data generated by the discovered resources can be collected in either a request/response
or a publish/subscribe pattern. In request/response, the data from the discovered resource is returned to the clients based on their
requests. For instance, the Constrained Application Protocol (CoAP) [4] is a document
transfer protocol that follows a request/response approach on a client-server architecture. In publish/subscribe, the discovered resource publishes its
data to the clients that have already subscribed. The process can be carried out through
a publish/subscribe server that acts as the middleware between the subscribed clients
and the publishing resources. MQ Telemetry Transport (MQTT) [11] is a protocol
based on the publish/subscribe approach that facilitates one-to-many communication through a common node (i.e. a broker). Resources publish messages by
sending them to the broker, and clients subscribe to specific messages at the broker.
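The broker's role in the publish/subscribe pattern can be sketched as a minimal in-process fan-out (an illustration of the pattern only, not the MQTT protocol itself; the topic name is an arbitrary example):

```python
from collections import defaultdict

class ToyBroker:
    """MQTT-style middleware: resources publish to a topic and the broker
    fans each message out to every client subscribed to that topic."""

    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback) -> None:
        self.subscribers[topic].append(callback)

    def publish(self, topic: str, payload: str) -> None:
        for deliver in self.subscribers[topic]:
            deliver(payload)

broker = ToyBroker()
received = []
# A client subscribes once; subsequent publishes are pushed to it.
broker.subscribe("home/temperature", received.append)
broker.publish("home/temperature", "21.5")
assert received == ["21.5"]
```

The one-to-many property falls out directly: adding more callbacks under the same topic makes every publish reach all of them.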
Figure 1: Discovery and Access Mechanism. (a) Decentralised scheme; (b) centralized scheme.
## 3 Related Work
Some researchers adopt a centralised entity as part of their proposed models, managing some or all parts of the system. Cheshire and Krochmal [5]
proposed a Domain Name System (DNS) based discovery for the IoT network. It
defines a model for how users register their resources and discover resources
based on the DNS protocol. The proposed model does not modify the underlying
DNS protocol messages and codes and, as a result, is simple to implement. In this
model, a centralized authority stores the registered resources and there are no security considerations beyond those of the original DNS protocol itself. The authors in [12]
proposed a large-scale resource discovery to discover the devices and sensors
in the IoT network by building a scalable architecture called Digcovery. The framework enables users to register their resources in a shared infrastructure and to
access/discover accessible resources with a mobile phone. Their work focuses on the discoverability of devices based on context-awareness and geo-location.
Digcovery allows high scalability for the discovery based on a flexible architecture.
However, it relies on a centralized point called digcoverycore for management and
discovery.
Datta and Bonnet [8] proposed a resource discovery framework for IoT. The
proposed framework includes a centralized registry that registers and indexes the
attributes of resources. The attributes of the resources are used as the parameters
during the discovery process through the search engine, which returns the access
addresses of the discovered resources. The authors in [13] proposed a discovery
model for IoT that performs the discovery based on various constraint parameters
such as input/output (IO), precondition/effect and quality of experience (QoE). In
the proposed model, a centralized directory server is used to register and discover
the services in the IoT network. The discovery is done using the semantic service
description method OWL-S^iot, which describes both the IoT services and discovery
requests. Using a centralized scheme helps organize the resources in an entity
that has high computation capability; however, this centralized entity might turn
into a single point of failure which, if it fails, stops the overall system. This
profoundly affects the availability and reliability of the system. Additionally, the
centralized entity could turn into a bottleneck, affecting the overall
system performance.
Several studies utilized the P2P scheme in the IoT network as a method for
distributed resource discovery. The model in [15] removes the centralized entity
by managing the fog nodes in a P2P scheme. It divides the resources into public resources that are discoverable by all clients and private resources that can be
discovered by a subset of clients. In addition, it provides other features such as
multi-attribute discovery. However, the main drawback of this model is that, by
not considering the physical locations of the resources and fog nodes, it fails to keep
the registration cost low by using only the fog nodes that are in the same region
as the registered resource. With a particular emphasis on link and node locality,
a Mesh-DHT is presented in [30] and implemented in IEEE 802.11 wireless mesh networks (WMN). The authors employed the Mesh-DHT for building a scalable DHT
in WMNs. This approach enables an entirely distributed organization of information by building a stable, location-aware overlay network. Because nodes
primarily talk to physically nearby nodes, it minimizes the overhead of
DHT communication in WMNs, therefore requiring fewer transmissions. However,
the model cannot reflect the locality of mesh routers in the overlay construction
and is therefore not able to represent the locality of keys. Wirtz et al. [31]
proposed an improved version of DHT based service registration and
discovery. Their work is based on their previous model (Mesh-DHT) and addresses its main drawback. Their proposed model partitions the global DHT
overlay into different scopes with different degrees of locality that are hierarchically
organized in levels. By choosing an appropriate level, a lookup can be restricted to
consider only the locally available information. The put(key, value) and get(key)
interfaces of the DHT are extended to put(key, value, level) and get(key, level), respectively.

The authors in [21] proposed a single-gateway based hierarchical DHT solution
(SG-HDHT) for efficient resource discovery in Grids (i.e. Virtual Organizations
(VO)). The model forms a tree-structured overlay and consists of a two-level
hierarchical overlay network. It defines a global DHT and a number of second-level
DHTs, one DHT overlay for each VO. Only one peer (called a gateway or super peer)
in the DHT overlay of a VO is attached to the global-level DHT of the hierarchy. The
proposed resource discovery in this model deals with two different classes of peers:
super peers and simple nodes. The lookup is directed to the super peer of the VO
and then, through the global DHT, to the super peer of the requested resource.
The authors in [1] proposed a wireless communication and computation framework that sustains scalability under a massive increase of IoT devices. The researchers adopt the fog computing paradigm, and their model therefore extends the cloud-based solution by providing computing services close to the source
of data generation. WMN nodes are used as the fog nodes in the proposed model.
The authors employed Chord [29] to generate a DHT based P2P overlay
of fog nodes for resource discovery. This model specifically targets the
underutilized processing power of devices for computing purposes. The discovery
in this model involves the fog nodes as brokers for the discovery of
the required resources. Pahl and Liebald [23] introduced a distributed modular
directory of service properties and a query federation mechanism based on the virtual
state layer (VSL) [24] that allows mapping complex semantic queries to simple
searches. The presented modularization adds little latency, which makes it suitable
for time-critical operations. The proposed model supports multi-attribute discovery and allows adding new attributes to the system at runtime, which fits the
dynamically varying nature of the IoT.
The authors in [6] proposed an architecture consisting of two discovery levels, local
and global service discovery. It uses the P2P scheme for resource discovery, with
IoT gateways as the peers in the P2P overlay. This architecture uses two layers: the Distributed Location Service (DLS) and the Distributed Geographic Table
(DGT) [26]. The DLS is a DHT based architecture that works as a name resolution
service by providing the information required to access any resource in the network,
depending on its URL. The DGT builds a layer to distribute the information depending on the location of nodes, which can be used to discover resources based
on their geographic location information. The model successfully manages registration and discovery based on the locations of the resources. Although the DGT
keeps the location data of the IoT gateways, the DGT and DLS are loosely
coupled, so in order to discover the resources in a given location the system has to
retrieve the IoT gateway data from the DGT overlay and then look up the DLS overlay
for the required resources. In addition to keeping the data as close as possible to the
registered resources, Lamred aims to utilize a single DHT overlay and to add no extra
local overhead and only low global overhead compared to traditional DHT overlays.
Furthermore, it aims to allow the participating nodes to join Lamred without using
specific location-based devices.
## 4 Location-Aware and Privacy Preserving Multi-Layer Resource Discovery (Lamred)
Resource discovery in fog/edge computing has several requirements that have to
be addressed. Due to the distributed nature of the IoT gateways and the limited
computing power of IoT resources, the resource discovery model has to rely
only on low-computation processes and must not involve any centralized entity.
Lamred allows four levels of discovery: local discovery, which is limited to a single IoT
gateway; intra-regional discovery, which is limited to a local region (i.e. a sub-region of a
region set); regional discovery, which is done in a specific geographical area; and public
discovery (i.e. location independent), which is done among all publicly registered
resources, regardless of their locations. In addition, Lamred distinguishes between
two types of resources: public resources that can be discovered by any client in the
system (e.g. a public temperature sensor or a resource offering a public service)
and private resources that can be discovered by a predefined subset of clients (e.g.
private resources in a smart home or a local printer in an organization).
There are three main disjoint sets in Lamred: a set of clients (C), a set of objects (O)
and a set of gateways (W). The finite set C consists of the IoT clients in the network.
An object o ∈ O is any device in the IoT network with proper computational
power that handles a resource u. Subsets of C and O are connected to different IoT
gateways in W. A gateway w's responsibility may vary from handling a few nodes
(e.g. a smart home) to hundreds of nodes (e.g. environmental monitoring). The
proposed model creates a region-based DHT (RDHT) [17] overlay that provides a
structured P2P method of addressing and discovering the peers. The members
of W (i.e. the IoT gateways) represent the peers in the P2P overlay. Let H(.) be a
collision-resistant one-way hash function with a d-bit message digest, Enc_k(m) be
an encryption of the message m using symmetric key k, and Sign_w(m) be a digital
signature for message m generated by gateway w ∈ W.
### 4.1 Lamred Properties
The Lamred has been designed to address the requirements for resource registration
and discovery in the IoT network. Table 1 shows a comparison of some of the
supported properties in the different resource discovery models. In general, Lamred
has the following properties:
- Location Aware: Lamred utilizes RDHT [17] that creates an overlay of
IoT gateways divided logically into multiple region sets and local regions (i.e.
sub-regions) in DHT overlay based on their physical locations. It generates
a single overlay that can be generated without specific organizing entity or
location based devices.
- Multi-attributes: Each resource has a number of attributes, such as its location, its type and its provided service. To discover a resource or a set of resources precisely, more than one attribute may be needed in addition to exact identifiers. Lamred supports multi-attribute discovery, so clients are able to discover resources based on multiple attributes. In addition to the predefined set of attributes, participants in Lamred are able to create new attributes at run time.
- Scalability: Scalability describes the ability of a decentralized resource discovery mechanism to adapt the registration and discovery process as the system grows in terms of the number of nodes. Lamred distributes the responsibility among many nodes and continues to work efficiently as the number of nodes grows.
- Management: Lamred provides defined interfaces for authorized IoT entities to add, remove, update and discover resources in the network.
- Discoverability: Because of the use of the RDHT overlay, the Distributed Address Table (DAT) [16] can be integrated as a part of Lamred (in a specific region in RDHT) to allow discoverability of all resources in the network, including, for instance, those behind a Network Address Translator (NAT).

328 _Mohammed B. M. Kamel, Peter Ligeti, and Christoph Reich_

Table 1: Supported properties in Resource Discovery Models

|Features|Decentralized|Location aware overlay|Multi attributes|Security considerations|
|---|---|---|---|---|
|Jara et al. [12]||-|||
|Cheshire et al. [5]||-|||
|Datta and Bonnet [8]||-|||
|Jia et al. [13]||-|||
|Cirani et al. [6]|||||
|Wirtz et al. [30]|||||
|Wirtz et al. [31]|||||
|Mokadem et al. [21]|||||
|Shabir et al. [1]|||||
|Kamel et al. [15]|||||
|Pahl et al. [23]|||||
|Lamred|||||
- Responsibility Definition: Each node in the RDHT overlay of Lamred is aware of the range of its responsibility for registering a subset of resources. As a result, a client that needs to discover a resource knows the specific IoT gateway in Lamred that is responsible for storing the information required to access that resource, and issues a discovery request to that specific node.
- Discoverability Range: Lamred uses a private/public architecture and
is able to keep some of the resources private and only discoverable by the
authorized clients in the IoT network.
### 4.2 Security model
An object o ∈ O that needs to register its private resource in Lamred has a predefined set Fo ⊂ C of friend clients. The members of Fo are able to discover the privately registered resources of object o. We assume that from the viewpoint of any object o ∈ O in Lamred, the set of friends Fo are honest nodes. The rest of the clients Ro = {r ∈ C \ Fo} are assumed to be potentially malicious. In the case of the finite set of IoT gateways W in Lamred, we have to assume that there is no cut containing malicious nodes only in the communication graph composed of the clients, objects and gateways (otherwise, the malicious nodes together could make communication impossible). More precisely, we assume that for a given resource u of an object o and for every friend f ∈ Fo there exists a path (w1, w2, . . ., wk) in the communication graph such that the object o is connected to w1, the friend client f is connected to wk and all of w1, . . ., wk are semi-honest. The semi-honest entities
are assumed to follow the protocol properly, but they might store the received data
locally in an attempt to get more information from the stored data.
Besides these properties, the nature of the communication model also constrains the applicable security mechanisms. In Lamred, only the IoT gateways are assumed to be able to use public key cryptography, while the objects handling the IoT resources can encrypt and decrypt messages using symmetric keys only, because of their limited computational power. In the case of IoT gateways, we suppose that every w ∈ W can generate a digital signature Signw(p) of any transmitted packet p. The proposed construction is intended to achieve security requirements in the computational sense, i.e. we assume PPT adversaries (Definition 2) with negligible success probabilities when attempting to attack the scheme. A function has negligible success probability if success occurs with a probability smaller than any polynomial fraction once the size of the input exceeds a given bound [2]. The private resources are registered and discovered by an added private tuple (see Section 4.5). The goal of the security model for the privately registered resources of Lamred is to allow friend clients to securely and anonymously discover the private resources, which follows from the security properties below:
- Resource anonymity: Every PPT adversary can learn any connection between a given private tuple and a given private resource with negligible probability only.
- Resource privacy: Every PPT adversary can learn the address of a resource
from a given private tuple with negligible probability only.
- Unforgeability: Every PPT adversary is able to generate, remove or update
a valid private tuple of a resource on behalf of a given honest object with
negligible probability only.
### 4.3 Location Regions in Lamred
Lamred consists of a maximum of 2^g regions with a maximum of 2^d IoT gateways in each region. The regions are grouped into 2^(g/2) sets, each set with one representative region and 2^(g/2) − 1 local regions. Consequently, an identifier of a node in Lamred consists of three concatenated parts: region set id, local region id and local node id, and is (g + d) bits long. During the creation of identifiers in Lamred, two hash functions are used. The first hash function generates a g-bit output digest based on given information about the region set and the local region, while the second hash function generates a d-bit output digest based on the given node information. The g and d parameters can have the same value, and the same hash function can be used to generate the different parts of the identifiers. There are two specific generic regions in Lamred, namely the private and public regions. Each node joins the private and public regions regardless of its physical location. In addition, a new node joins a local region in Lamred based on its location. Figure 2 illustrates the regions in the RDHT overlay of Lamred.
Figure 2: Regions in RDHT
The regions are generated by feeding the location information to the hash function, which outputs a g-bit digest. The given location information can be represented by human-readable region names or a specific prefix of latitude/longitude data. Each region set has a representative region, whose nodes represent the other nodes in the region set. To create a region id, the information of the representative region of that region set is fed to the hash function and the leftmost g/2 bits become the first g/2 bits of the generated region id. The local region information is then fed to the hash function and the last g/2 bits are taken as the second g/2 bits of the generated region id. The representative region itself has its last g/2 bits all set to zero. As a result, all the regions in a region set share the same g/2 prefix bits. Because of the avalanche effect property [32] of hash function algorithms, each subset of a generated digest should be affected equally as any other subset of the digest. Therefore, generating the region id by taking g/2 bits from the g-bit digest of the representative region and g/2 bits from the g-bit digest of the local region should not affect the randomness of the generated identifier. The remaining d bits of the identifier of a node are generated
by hashing the information of the IoT gateway (e.g. its IP address). Figure 3 shows
the generation of the identifier of a node in Lamred.
Figure 3: Identifier generation in Lamred
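As a sketch, the two-stage identifier construction above can be written as follows. SHA-256 stands in for the generic hash functions, and the 32-bit choices for g and d are illustrative only, not prescribed by the paper:

```python
import hashlib

G = 32  # g: region-identifier bits (illustrative choice)
D = 32  # d: local node-identifier bits (illustrative choice)

def _digest_bits(data: str, nbits: int) -> int:
    """Hash `data` and keep the leftmost `nbits` bits of the digest."""
    h = int.from_bytes(hashlib.sha256(data.encode()).digest(), "big")
    return h >> (256 - nbits)

def region_id(representative: str, local: str) -> int:
    """g-bit region id: first g/2 bits come from the representative
    region's digest, last g/2 bits from the local region's digest."""
    rep_half = _digest_bits(representative, G) >> (G // 2)      # leftmost g/2 bits
    loc_half = _digest_bits(local, G) & ((1 << (G // 2)) - 1)   # rightmost g/2 bits
    return (rep_half << (G // 2)) | loc_half

def node_id(representative: str, local: str, gateway_info: str) -> int:
    """(g+d)-bit node identifier: region id concatenated with the d-bit
    digest of the gateway's unique information (e.g. its IP address)."""
    return (region_id(representative, local) << D) | _digest_bits(gateway_info, D)
```

Two gateways in the same local region obtain the same g-bit region prefix, and any two regions of the same region set share the first g/2 identifier bits, matching the prefix structure described above.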
A new IoT gateway w ∈ Wrg ⊂ W joins Lamred by registering in its local region, along with the other members of Wrg in the same local region. This is done by hashing the location information of the representative region and its local region to obtain the first g bits of the identifier of node w, and then hashing its unique information (e.g. the IP address of w) to obtain the remaining d bits of the identifier. In addition to the local region, the newly joined node w joins the private and public regions as well. This is done by first hashing its unique information (e.g. the IP address of w) to generate two identifiers that are used in the private and public regions. The generated identifier in the private region starts with g zeros followed by the d-bit output of the hash function used by w. The generated identifier in the public region starts with g − 1 zeros followed by a single 1 bit and the d-bit output of the hash function used by w. The joining process is done through an introducer node that is already a member of Lamred. It starts with the newly joined node w sending a lookup request through the introducer node for its own identifier (i.e. the newly generated identifiers of node w) in both the private and public regions, as well as in its own local region.
Similar to Kademlia [20], there are two general parameters α and k in Lamred that determine the lookup parallelism and the system-wide replication, respectively. Each node in Lamred has d lists of k-buckets [20] that include the access addresses of the nodes in the same region. In addition, each node in any region of a region set keeps g/2 lists of k-buckets that include the access addresses of all representative regions in all region sets in RDHT, including the public and private regions. The nodes in the representative region of any region set keep g/2 lists of k-buckets that include the access addresses of all local regions in the region set. As a result, the nodes in the representative regions have d + g lists and all other nodes in any region in RDHT have d + g/2 lists. Each of these lists has a maximum of k entries.
Considering that the DNS domain cache on a Raspberry Pi defaults to 10,000 entries[1], storing the access addresses of at most k IoT gateways in at most d + g lists does not require any additional storage consideration on the Lamred peers. Figure 4 shows an example of an RDHT overlay of Lamred with 3 region sets, in addition to the public and private regions. In this tiny example the hash function for both region and local identifiers generates a 4-bit digest; therefore, the identifier of each node consists of 8 bits: a 4-bit region identifier and a 4-bit local identifier. For instance, suppose the initiator peer with identifier 10001110 wants to get the access address of a resource that is stored at the destination peer 11101111. Since the destination peer is in region (1110), the representative region that this peer belongs to is (1100). Therefore, the initiator peer sends the request to node (11001110) in the representative region, which then forwards it to the specific region and finally to the destination peer in the required region.
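The lookup walk in this example (initiator, then representative region, then destination) can be sketched as a small hypothetical helper. The identifier layout follows the 4+4-bit example; a real deployment would resolve each hop through k-bucket lookups rather than direct arithmetic:

```python
def route(initiator: int, dest: int, g: int = 4, d: int = 4) -> list:
    """Sequence of node identifiers a request visits under Lamred's
    region-prefix routing, for the (g + d)-bit toy example."""
    hops = [initiator]
    dest_region = dest >> d                     # e.g. 1110
    half = g // 2
    # Representative region of the destination's region set: same g/2
    # prefix, last g/2 bits zeroed (e.g. 1110 -> 1100).
    rep_region = (dest_region >> half) << half
    if (initiator >> d) != dest_region:
        # Contact the node in the representative region whose local
        # part matches the initiator's local part (e.g. 11001110).
        hops.append((rep_region << d) | (initiator & ((1 << d) - 1)))
    hops.append(dest)
    return hops
```

For the example above, `route(0b10001110, 0b11101111)` yields the three hops 10001110, 11001110, 11101111; a lookup inside the destination's own region goes directly.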
### 4.4 Public Resource Registration and Discovery
A resource u in the network has a specific access address and a set of attributes that describe its properties (e.g. its type, its provided service, etc.). When an object o ∈ O wants to register its resource u in the network, it has to add the required information to Lamred through the member of W that it is directly connected to. This set of information includes the tag, generated by hashing the attribute type of the resource, the value of the added attribute, the ownership information and the access address of resource u, as illustrated in Figure 5. After adding the tuple (tag, value, ownership, data) of resource u to Lamred, the resource can be discovered by all clients in the network. There are two options for registering a resource in the network. The first option, which is the default choice for a resource, is to register it in the local region, i.e. in the same region that it belongs to. Since registering it locally ensures that the tuple will be stored on a node in the same geographical region, it requires less time and overhead for registration. Resources that do not depend on a specific location, or that provide location-independent services, can register themselves in the public region as well. In addition, the directly connected IoT gateway (i.e. the IoT gateway w that the object o is connected to) keeps a copy of the registered resource locally in its cache for a specific time, depending on the caching expiry parameters.
The overall workflow of resource registration and discovery is shown in Figure 6. An object o ∈ O registers its resource u in the network as tuples of (tag, value, ownership, data). The set of attributes that describe resource u is fed to the hash function to generate the tag parameter. The value in the added
1https://docs.pi-hole.net/ftldns/dns-cache/
Figure 4: An example of a Lamred implementation with 4 bits per region id and 4 bits per local id
tuple indicates the actual value of each of the attributes of the registered resource.
During resource registration, the object o generates a random number r and adds
Figure 5: The public tuple structure in Lamred
its hashed value as a later proof of ownership of the generated tuple. Revealing the pre-image of the hash value (i.e. r) proves ownership of the tuple, which is required when updating or removing it from Lamred. The access address (i.e. the data field in the tuple) of the resource u might consist of its address, a URI or other metadata about the resource u. The tuple is then stored in RDHT based on its tags, with a predefined number of replicas. The actual number depends on the replication factor rp. Choosing an appropriate rp value depends on the nature of the network. As a general rule, rp should be chosen so that the probability that a subset Offline ⊂ W of offline nodes in Lamred has cardinality greater than or equal to the number of replicas is negligible (ϵ). This is shown in Equation 1. In addition, the existence of replicas increases the system performance by reducing the access load on any specific node in Lamred.
P(∥Offline∥ ≥ rp) < ϵ    (1)
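As one way to instantiate Equation 1, suppose each of the n candidate replica holders is offline independently with probability p (an independence assumption the paper does not make); then rp can be chosen as the smallest value for which the binomial tail probability drops below ϵ:

```python
from math import comb

def p_offline_at_least(n: int, rp: int, p: float) -> float:
    """P(|Offline| >= rp) for n nodes, each offline independently with
    probability p (binomial tail; an illustrative model only)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(rp, n + 1))

def smallest_rp(n: int, p: float, eps: float) -> int:
    """Smallest replication factor rp with P(|Offline| >= rp) < eps."""
    rp = 1
    while p_offline_at_least(n, rp, p) >= eps:
        rp += 1
    return rp
```

Any other availability model for the gateways would change the numbers but not the rule: increase rp until the tail probability is negligible.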
Figure 6: Overall workflow of resource registration and discovery in Lamred
Let Wrg ⊂ W be the subset of IoT gateways in the specific region that the resource u has been registered in. In addition to storing the tuples locally on the directly connected IoT gateway w ∈ Wrg ⊂ W, and depending on the replication factor, the close nodes to the tag parameter in the same local region Wrg are responsible for storing the (tag, value, ownership, data) tuple. If a node with identifier idw and a tuple with tag tagv are close or equal, we denote it by idw ≈ tagv. The model does not depend on any specific distance function (dst) to compute the closeness; any particular distance function can be used. Metrics such as bitwise exclusive or (XOR) [20] can be used to compute the dst value.
Let Id be the set of all possible d-bit binary sequences (i.e. identifiers) in region rg; each peer w ∈ Wrg ⊂ W has an identifier idw ∈ Id and each resource u has a tagu ∈ Id. Let us define the following set of peers:
M(u) = {w : tagu ≈ idw, ∄w′ | dst(idw′, tagu) < dst(idw, tagu)}    (2)
The set M(u) links each resource u, depending on its added attribute tagu, to the nodes w ∈ Wrg whose identifiers idw ∈ Id are close or equal to tagu. The cardinality of M(u) depends on the replication factor rp parameter. The procedure of registering
a public resource u in the network consists of three steps:
- Tuple Definition and Generation: Based on the attributes that describe the resource u, the object o generates the tags, i.e. the hash values of the attributes. Each tag is put together with its value, the ownership parameter (the hash value of a randomly generated number r) and the access address of the resource u, and the generated tuples are sent to the directly connected gateway w. In addition, the object o determines whether u has to be stored in the same region or in the public region.
- Tuple Signing: In this step the appropriate set of tuples of the resource u is signed by w ∈ W.
- Resource Registration: The gateway w registers the resource u by storing the tuples at the corresponding nodes in Lamred.
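A minimal sketch of the registration pipeline, assuming SHA-256 for H(.), XOR [20] as the distance function dst, and a flat list of gateway identifiers in place of the real overlay (the signing step by w is omitted here):

```python
import hashlib
import os

def H(data: bytes) -> bytes:
    """SHA-256 stands in for the collision-resistant hash H(.)."""
    return hashlib.sha256(data).digest()

def make_public_tuples(attributes: dict, access_address: str):
    """Tuple Definition and Generation: one (tag, value, ownership, data)
    tuple per attribute; ownership is H(r) for a random r that the
    object keeps as its later proof of ownership."""
    r = os.urandom(16)
    tuples = [(H(attr.encode()), value, H(r), access_address)
              for attr, value in attributes.items()]
    return r, tuples

def replica_holders(node_ids: list, tag: bytes, rp: int) -> list:
    """Resource Registration: the set M(u) of the rp gateways whose
    identifiers are XOR-closest to the tag store the replicas."""
    t = int.from_bytes(tag, "big") & 0xFFFFFFFF  # map tag into a 32-bit id space
    return sorted(node_ids, key=lambda nid: nid ^ t)[:rp]
```

The returned r must be kept secret by the object: revealing it later is the only way to prove ownership when updating or removing the tuple.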
Public resources that are registered without any restrictions in Lamred can be discovered by all clients in the network based on their attributes and their registered regions. Lamred allows discovery of the registered resources based on one or more attributes. A client c ∈ C looks up a resource by sending a lookup request with the required set of attributes, their values and the required region to the node w in Lamred that it is directly connected to. After receiving a discovery request from a client c, the node w generates the appropriate tags for the discovery process based on the received attributes. The discovery process consists of three main steps:
- Query Generation: In the first step, the node w generates the set of tags
based on the received attributes from a client c. This is done by hashing each
of the requested attributes in the client’s request. In addition to that, the
region id is also added to the generated tag.
- Lookup: The second step starts with w issuing the lookup request in Lamred. The result Ri of each lookup operation is a set of data parameters indicating the resulting resources for the given attribute i and its required value.
- Result Gathering: After receiving the results and verifying them based on their attached digital signatures, the intersection R0 ∩ R1 ∩ · · · ∩ Rn is returned as the result to the requesting client c. In this step, prior to returning the result to client c, scoring methods can be applied.
The tuples of the registered resources remain in Lamred according to the caching expiry parameter. In addition, an object is able to update the data of, or remove, its registered resource from Lamred by issuing a request that includes the pre-image of the ownership field of the added tuple. The request is signed by the directly connected node w in Lamred and is sent to the corresponding node. After checking the ownership of the tuple (i.e. H(r) = ownership), the requested tuple is either replaced by a new tuple or removed from Lamred, depending on the received request.
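The result-gathering intersection and the ownership check used for update and remove requests can be sketched as follows (SHA-256 again stands in for H(.)):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def gather_results(per_attribute_results: list) -> set:
    """Result Gathering: intersect the per-attribute result sets
    R_0, ..., R_n so only resources matching every attribute remain."""
    out = set(per_attribute_results[0])
    for r in per_attribute_results[1:]:
        out &= set(r)
    return out

def verify_ownership(stored_ownership: bytes, revealed_r: bytes) -> bool:
    """Update/remove check: the requester reveals the pre-image r of the
    tuple's ownership field; the request is valid iff H(r) = ownership."""
    return H(revealed_r) == stored_ownership
```

Because only the registering object knows r, and H(.) is one-way, no other party can produce a valid update or removal request for the tuple.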
### 4.5 Private Resource Registration and Discovery
Every object o in the IoT network is able to keep a resource private and discoverable only by a predefined set Fo, by generating a private tuple as illustrated in Figure 7. An object o has a set of friends Fo = {f1, . . ., fn} ⊂ C that it can communicate with in a secure and trusted way. The members of a friend set Fo of an object o are connected through members of W, but they are not part of the RDHT overlay itself. Each private resource has a private identifier idu that is chosen uniformly at random from a given range, e.g. from the bit strings of length 512. The identifier idu is known only by the members of Fo. Additionally, each c ∈ C also has a private identifier idc that is chosen uniformly at random from a given range. The object o stores the private identifiers of each f ∈ Fo ⊂ C locally. In addition, for every object o and for each f ∈ Fo, an initial value (IVof) and a common secret key (kof) are generated and shared between them over a secure channel. The key kof is used to encrypt the indirectly transmitted data between them. These keys are stored locally at each node in the setup phase; their future distribution scheme is out of the scope of this paper.
If an object o registers its resource u privately, only the members of Fo can discover and access this specific resource of o. To do so, the object o has to generate a privateTaguf for resource u and every friend client f ∈ Fo using Equation 3. The access address of the private resource is then encrypted using the shared key kof. A random number r is also generated and its hashed value is added as a later proof of ownership of the generated private tuple. A private resource can be registered privately in the local region or in the private region. While
Figure 7: The private tuple structure in Lamred
registering a private resource in the private region does not guarantee a low-latency process, it hides the actual region that the private resource belongs to. On the other hand, registering a private resource in the local region ensures a low-latency process, but reveals its local region. The decision of whether the private resource should be registered in the local region or in the generic private region is made by the object o that handles the private resource. After the tuples are generated as (privateTaguf, ownership, encryptedAccessAddress), the corresponding node in Lamred receives the resulting tuples (a single tuple for each f ∈ Fo) from the directly connected object and puts them in the same local region or in the private region of RDHT. After registration, the members of Fo can discover the registered private resource by computing its private tag. These private tags are not permanent and are used only once. The privateTaguf(new) parameter can be calculated using the privateTaguf(old), idu and idf values. At any given time, the current private tag of a resource is computed as in Equation 3:
privateTaguf(new) = H(privateTaguf(old) ⊕ idu ⊕ idf)    (3)
where privateTaguf(old) is the previous private tag of the resource u (i.e. the output of the previous hash) and the initial value is privateTaguf(old) = H(IVof ⊕ idu ⊕ idf). The one-time private tag ensures that the IoT gateway w ∈ W with idw ≈ privateTaguf is not the same during the life cycle of the resource. Similar to publicly registered resources, the private tuple of a privately registered resource can be updated or removed from Lamred by issuing a request and revealing the pre-image of the ownership field of the added private tuple. Although the discovery process of a private resource resembles the public discovery process, there are two differences. First, in order to discover a resource u that is handled by o, a client has to be able to compute its private tag, i.e. be a valid member of Fo. Second, after receiving the discovery result, the returned access data is confidential and can be read only with knowledge of the corresponding secret key kof.
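The rolling one-time tag of Equation 3 can be sketched as follows, with SHA-256 standing in for H(.) and fixed-length byte strings standing in for the identifiers (both illustrative choices):

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def private_tag_chain(iv_of: bytes, id_u: bytes, id_f: bytes, n: int) -> list:
    """First n one-time private tags: the initial tag is
    H(IV_of XOR id_u XOR id_f); each later tag hashes the previous tag
    XORed again with id_u and id_f (Equation 3). All inputs are assumed
    to be 32-byte strings here."""
    tags = [H(xor(xor(iv_of, id_u), id_f))]
    for _ in range(n - 1):
        tags.append(H(xor(xor(tags[-1], id_u), id_f)))
    return tags
```

Since each registration and discovery uses the next tag in the chain, the responsible gateway (idw ≈ privateTaguf) keeps changing over the resource's life cycle, and only holders of idu, idf and IVof can stay in sync with the chain.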
## 5 Evaluation
### 5.1 Security Analysis
**Theorem 1. If H(.) is a one-way hash function then the system satisfies resource**
_anonymity._
_Proof. Suppose that the private tuple_

P = (P1 | P2 | P3) = (H(privateTaguf(old) ⊕ idu ⊕ idf) | H(r) | Enckof(accessAddress))

is stored in Lamred by object o to register the resource u that can be discovered only by its friend client f ∈ Fo. Note that only P1 includes information related to the private resource u (i.e. idu), hence we can deal with this part of the private tuple only; the goal of the adversary is to compute idu from the tuple. Assume that the adversary also knows privateTaguf(old) (e.g. from previous communications). First suppose that a client m ∈ C \ Fo wants to learn some information; additionally, we can suppose that m ∈ Ff, i.e. m knows idf. If m could find a pre-image of H(privateTaguf(old) ⊕ idu ⊕ idf), and if she could remove privateTaguf(old) ⊕ idf, then she could compute idu. However, since H(.) is a one-way function, m can find an x with H(x) = P1 with negligible probability only. The remaining nodes in Lamred outside Ff are in an even more hopeless situation: even if they could find a pre-image of the hash, they would then have to remove privateTaguf(old) and idf, and the latter is chosen randomly, yielding unconditional resource anonymity in this case. This completes the proof.
**Theorem 2. If Enc is a computationally secure encryption then the system satis-**
_fies resource privacy._
_Proof. Suppose that the private tuple_

P = (P1 | P2 | P3) = (H(privateTaguf(old) ⊕ idu ⊕ idf) | H(r) | Enckof(accessAddress))

is stored in Lamred by object o to register the resource u that can be discovered only by its friend client f ∈ Fo, and that a malicious node m ∈ W ∪ (C \ Fo) wants to discover and learn the access address of the registered private resource. Note that only P3 depends on the access address of the private resource, hence we can deal with this part of the private tuple only. This last part of the tuple is the accessAddress encrypted with a computationally secure encryption. Therefore, without knowledge of the symmetric key kof, the address accessAddress can be computed with negligible probability only. This completes the proof.
**Theorem 3. If H(.) is a collision-resistant one-way hash function and Enc is a**
_computationally secure encryption, then the system satisfies unforgeability._
_Proof. Suppose that the object o registers the resource u and the actual private tuple_

P = (P1 | P2 | P3) = (H(privateTaguf(old) ⊕ idu ⊕ idf) | H(r) | Enckof(accessAddress))

is stored in Lamred. Let m ∈ W ∪ (C \ Fo) be a malicious node and first suppose that m wants to remove or update this tuple. Then m has to compute the pre-image of P2, which is possible with negligible probability only since H(.) is one-way.

Next, suppose that m wants to generate a new valid private tuple. To achieve this, the malicious node m has to first compute the new private tag (i.e. P1) and then replace the part containing information related to accessAddress (i.e. P3). We show that neither part of the tuple can be computed with non-negligible probability. First suppose that m wants to compute P1′ = H(P1 ⊕ idu ⊕ idf); furthermore assume that m ∈ Ff (i.e. m knows idf) and that privateTaguf(old) is also known by m. Then the only remaining part necessary for P1′ is idu, and only P1 and privateTaguf(old) depend on this identifier. In both cases m has to compute the pre-image of the hash function H(.), which can be done with negligible probability only, since H(.) is a one-way function. Finally, suppose that m wants to compute P3′ = Enckof(accessAddress′) for a fake address accessAddress′. Such a fake address can be found with negligible probability only, since Enc is a computationally secure encryption. This completes the proof.
### 5.2 Performance Analysis
In addition to proving the security properties of Lamred, the main concern is to keep the data of the registered public and private resources as close as possible to the point of origin, to avoid the high latency of long distances. In order to study the performance of Lamred and validate its feasibility and reliability, several issues have been investigated, such as region sizes, the preparation time required for registration and discovery on constrained IoT devices, local and global discovery, and the effect of local cache size and churn on Lamred. Network latency has been taken into consideration for measuring the performance of Lamred. Table 2 shows the assumed random parameters of real-time latency[2] for each of the different network links in the system.
Table 2: Network parameters

|type|parameter|
|---|---|
|local connection latency|2 ms|
|sub-regional latency (local region)|3 - 8 ms|
|intra-regional latency (region set)|10 - 30 ms|
|long distance latency|80 - 120 ms|
2https://wondernetwork.com/pings
We should call the reader's attention to the fact that the IoT gateways are the peers in RDHT, not the IoT clients or IoT resources. The members of C and O (i.e. the clients and the objects that handle IoT resources) are not part of RDHT itself and are connected through the peers in RDHT. The Kademlia implementation[3] in the PeerSim simulator [22] has been used for the performance experiments; the implementation has been modified to fit our proposed model. As in uTorrent[4], a popular implementation of Kademlia, the system-wide replication is set to 8 and the lookup parallelism is set to 4. The results of research [14][27] that studies these two factors and other parameters of the Kademlia [20] implementation to improve lookup latency in DHT-based implementations can be applied to Lamred. The system performance has been tested using a simulated Lamred network with 400 million to 2 billion IoT gateways. The IoT gateways are distributed and grouped into 200 region sets with 200 regions per region set (i.e. 40,000 regions overall, with 10,000 to 50,000 IoT gateways per region). Table 3 shows the resource discovery latency in a local region. Figure 8 shows resource discovery in Lamred with different region sizes and discovery scopes. Sub-regional discovery is done within the same region, intra-regional discovery is done between two regions that are within the same region set, and regional discovery (i.e. long distance discovery) is done between two regions that are in two different region sets. Due to the huge number of IoT gateways in RDHT, the intra-regional and regional discovery have been simulated in different stages. Without loss of generality, we assumed that no churn occurred and no cache was used in Lamred during these tests.
Table 3: Resource discovery in a region in Lamred

|region size|discovery latency|
|---|---|
|10,000|45.27 ms|
|20,000|47.14 ms|
|30,000|48.1 ms|
|40,000|48.9 ms|
|50,000|49.7 ms|
To evaluate the efficiency of Lamred in handling issues of robustness, availability and replication, we performed a set of experiments in which we introduced churn into the network. At intervals of 100 to 2000 milliseconds, and for a period of 120 seconds, we randomly either killed an existing IoT gateway or started a new one in a region of 10,000 nodes. During the evaluation, a resource discovery rate of 100 requests per second was issued. As shown in the results presented in Figure 9, there is an 11 ms delay in discovery time compared to the network without churn when the churn rate is 0.1 second (i.e. every 100 milliseconds an IoT gateway either leaves or joins Lamred), and less than one millisecond of delay when
3http://peersim.sourceforge.net/
4https://www.utorrent.com/
Figure 8: Regional Resource Registration Delay (x-axis: number of IoT gateways, ·10^4; y-axis: delay in ms)
Figure 9: Churn effect on Lamred (x-axis: churn rate in ms; y-axis: discovery latency in ms)
the churn rate is higher than 1.6 second between each occurrence.
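The churn experiment can be sketched as follows; this is a simplified stand-in for the PeerSim setup (the function name and the 50/50 join/leave split are assumptions), drawing one join-or-leave event per interval over the 120-second window:

```python
import random

def simulate_churn(interval_ms, duration_s=120, initial_nodes=10_000, seed=0):
    """Randomly kill or start one gateway per interval and return the
    final node count together with the number of churn events."""
    rng = random.Random(seed)
    nodes = initial_nodes
    events = int(duration_s * 1000 // interval_ms)
    for _ in range(events):
        nodes += 1 if rng.random() < 0.5 else -1  # join or leave
    return nodes, events

nodes, events = simulate_churn(interval_ms=100)
print(events)  # 1200 churn events at the fastest rate
```

At the slowest rate (2000 ms intervals) only 60 events occur, which matches the intuition that slow churn barely perturbs the overlay.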
Figure 10 shows the effect of caching on Lamred. During the evaluation, in a region of
10,000 IoT gateways and with 1000 requests per second, the probability of discovering a
resource that already resides in the local cache was set between 0.05 and 0.25. The
analysis showed that the local cache in Lamred nodes, which includes the
342 _Mohammed B. M. Kamel, Peter Ligeti, and Christoph Reich_
locally registered and frequently discovered resources, improves the overall delay linearly.

Figure 10: Effect of caching on Lamred (discovery latency vs. cache hit probability)
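The per-gateway cache described above can be sketched as a minimal TTL store; the class name, API, and the 60-second expiry are illustrative choices, not values from the paper:

```python
import time

class DiscoveryCache:
    """Keep discovered access addresses for `ttl` seconds."""
    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # tag -> (address, expiry timestamp)

    def put(self, tag, address, now=None):
        now = time.monotonic() if now is None else now
        self._store[tag] = (address, now + self.ttl)

    def get(self, tag, now=None):
        now = time.monotonic() if now is None else now
        entry = self._store.get(tag)
        if entry is None:
            return None           # miss: fall through to the DHT lookup
        address, expiry = entry
        if now > expiry:
            del self._store[tag]  # expired entry is dropped
            return None
        return address            # hit: O(1), no DHT traffic

cache = DiscoveryCache(ttl=60.0)
cache.put("temp-sensor", "10.0.0.7:5683", now=0.0)
print(cache.get("temp-sensor", now=30.0))  # 10.0.0.7:5683
print(cache.get("temp-sensor", now=90.0))  # None (expired)
```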
As part of the evaluation, Lamred has been compared with the centralized service
discovery (CSD) [13] and the decentralized resource discovery (DRD) [15] models. Since
the DRD model [15] uses the IoT gateways without considering their locations, during
analysis and comparison we simulated this model by creating a region set and assuming
that the resources are registered in the regions without considering the locations of the
resources. The direct matching scheme, which has the minimum response time in the
CSD model [13], has been used. 1000 IoT objects have been distributed uniformly at
random over a region in Lamred with 5,000 - 10,000 IoT gateways. As Figure 11 shows,
although the resource discovery latency in the centralized scheme is fixed, the lookup
process of discovering a resource in a centralized scheme is higher than in the proposed
model. At the same time, it is notable that as the number of gateways in the proposed
model increases, the delay of the lookup process grows logarithmically. The reason is the
use of the DHT overlay for discovery, in which the lookup time among n peers is O(log n).
Lamred thus shows a low latency that makes it suitable for IoT networks with a large
number of IoT resources. The latency in a region of Lamred has been compared with the
latency in some recent works, and the results are listed in Table 4.
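The logarithmic growth of the lookup delay is easy to check numerically: a Kademlia-style lookup among n peers touches on the order of log2(n) nodes, so multiplying the region size by ten adds only a handful of hops. The helper below is a rough estimate, not a model of Lamred's exact routing:

```python
import math

def expected_hops(n):
    """Rough Kademlia-style lookup cost: O(log2 n) hops among n peers."""
    return math.ceil(math.log2(n))

for n in (1_000, 10_000, 100_000):
    print(n, expected_hops(n))
# 1000 -> 10, 10000 -> 14, 100000 -> 17
```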
Private resource registration and discovery follow a different approach than the other
regions, as discussed in Section 4.5. In this case the object has to generate the private
tag of the resource to be used in the private region of Lamred, while the client
application has to calculate the private tag in order to
Figure 11: Resource discovery delay in different models (Lamred, DRD [15], CSD [13])
Table 4: Latency in resource discovery models

|Model|Properties|Latency|
|---|---|---|
|Datta and Bonnet [8]|Search Engine Based Resource Discovery|450-600 ms|
|Jia et al. [13]|Centralized Resource Discovery|230 ms|
|Kamel et al. [15]|Decentralized Resource Discovery|150 ms|
|Pahl et al. [23]|4 predicates/search providers|80 ms|
|Lamred|A region with 10,000 IoT gateways|45 ms|
be able to discover the private resource and obtain its encrypted access address. The
private tag generation in equation 3 has been tested on an MCU with a single-core
32-bit 80 MHz microcontroller. SHA256 [10] is used as the hashing algorithm for tag
generation and AES-128-CBC as the encryption algorithm. For the analysis and
implementation of the SHA256 and AES algorithms on the MCU, the Crypto library[5]
for ESP8266 IoT devices has been used. During the test, each cryptographic operation
was repeated 20 times and the mean value recorded. The average times required to
perform the encryption, decryption, hashing, and private tag generation and discovery
are shown in Tables 5 and 6.
[5] https://github.com/intrbiz/arduino-crypto
Table 5: Required operation time by the microcontroller for private resource registration

|Operation|Required time|
|---|---|
|XOR operation|8 ms|
|RNG|5 ms|
|SHA256 (Private Tag and RN)|2 × 227 ms|
|AES-128-CBC encryption|288 ms|
|**Tag generation (Registration)**|**805 ms**|

Table 6: Required operation time by the microcontroller for private resource discovery

|Operation|Required time|
|---|---|
|XOR operation|8 ms|
|SHA256 (Private Tag)|227 ms|
|AES-128-CBC decryption|348 ms|
|**Tag generation (Discovery)**|**583 ms**|
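To give a feel for the operations behind Tables 5 and 6, the sketch below chains the same primitives (XOR and two SHA-256 invocations) to derive a private tag that only holders of the shared secret can recompute. The composition is hypothetical and intentionally simplified: it is not the paper's equation 3, and the nonce (RNG) and AES-128-CBC steps of the actual registration pipeline are omitted here:

```python
import hashlib

def xor_bytes(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def private_tag(attribute, shared_secret):
    """Hypothetical private-tag derivation: the attribute digest is
    blinded with the secret digest, then hashed again, so only clients
    holding the shared secret can recompute the tag stored in the DHT."""
    blinded = xor_bytes(hashlib.sha256(attribute).digest(),
                        hashlib.sha256(shared_secret).digest())
    return hashlib.sha256(blinded).hexdigest()

secret = b"pre-shared-key"
tag_at_registration = private_tag(b"temperature-sensor", secret)  # by the object
tag_at_discovery = private_tag(b"temperature-sensor", secret)     # by the client
print(tag_at_registration == tag_at_discovery)  # True
```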
### 5.3 Complexity Analysis
Suppose that Lamred consists of 2^g regions, divided into N_RegionSet region sets.
Suppose that the number of IoT gateways in the private and public regions is N_P, and
in the local regions R1, R2 and R3 of the RDHT overlay it is N_R1, N_R2 and N_R3,
respectively. Suppose that both R1 and R2 are in the same region set, which includes
N_RS1 local regions, and that R3 is in a different region set. We discuss the complexity
of the proposed model in five cases:
- Sub-regional registering or discovering a resource in the same local region (RD_local)
- Location-independent registering or discovering a resource in the private/public regions (RD_pp)
- Discovering a resource that is stored in the cache of an IoT gateway (D_cache)
- Intra-regional discovering of a resource in the same region set (RD_RS)
- Regional discovering of a resource in a different region set than the client region (D_regional)
Registering or discovering a resource in the same region R1 to which the object and
client belong is done by the corresponding peer w ∈ W_R1 and costs
RD_local = O(log(N_R1)). Registering or discovering a resource in the private or public
regions, regardless of its location, is done by first reaching a node in the target private
or public region and then finding the exact node in that region responsible for storing
the access address of the required resource. Since each node in Lamred has the access
addresses of nodes in the public and private regions, it takes O(1) to reach each of these
two regions and then perform a lookup request for the exact required node. Therefore,
registering or discovering a resource in the private or public regions depends on the
number of nodes in each of these regions and is equal to RD_pp = O(log(N_P)).
Each IoT gateway in Lamred keeps a local copy of the registered or previously
discovered resources for a specific time, depending on the caching expiry parameters. If
a client requests to discover a resource that resides in the cache (which happens for
frequently discovered resources), then the result is returned directly to the client at cost
D_cache = O(1). If the client and the discovered resource are in regions R1 and R2,
which are in the same region set containing overall N_RS1 local regions, the discovery
takes RD_RS = O(log(N_RS1 · N_R2)) and is done in two stages. Firstly, it takes
O(log(N_RS1)) to reach the target region (i.e., R2), and then it takes O(log(N_R2)) to
discover the required resource by reaching the specific responsible node in target region
R2. Discovering a resource in region R2 by a client that belongs to region R3, which is
in a different region set, takes D_regional = O(log(N_RegionSet · N_RS1 · N_R2)) and is
done in three stages. Firstly, accessing the representative region of the region set to
which the target region R2 belongs takes O(log(N_RegionSet)), based on the number of
available region sets in Lamred. Then, reaching the region R2 takes O(log(N_RS1)).
Finally, it takes O(log(N_R2)) to perform a lookup and discover the required resource
by reaching the specific responsible node in target region R2.
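The five cost cases reduce to sums of logarithms, which the following helpers make explicit (hop counts are idealized log2 estimates; function names are illustrative):

```python
import math

def hops(n):
    """Idealized DHT lookup cost among n peers, in hops."""
    return math.log2(n)

def rd_local(n_r1):               return hops(n_r1)
def rd_pp(n_p):                   return hops(n_p)   # O(1) entry + lookup
def d_cache():                    return 0            # O(1) cache hit
def rd_region_set(n_rs1, n_r2):   return hops(n_rs1) + hops(n_r2)
def d_regional(n_sets, n_rs1, n_r2):
    # Three stages: region-set representative, target region, target node.
    return hops(n_sets) + hops(n_rs1) + hops(n_r2)

# A lookup across region sets costs only a few extra hops:
print(round(rd_local(10_000), 1))            # 13.3
print(round(d_regional(64, 32, 10_000), 1))  # 24.3
```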
## 6 Conclusion
In this paper a location-aware and privacy-preserving multi-layer model of resource
discovery (Lamred) in IoT has been proposed. It adopts a peer-to-peer (P2P) scheme by
utilizing the Regional Distributed Hash Table (RDHT), a proposed variant of DHT.
Lamred ensures that there is no single point of failure in the system and that the
network can easily be scaled without any need for a reorganizing and synchronizing
authority. The RDHT overlay is generated by taking into consideration the physical
location of the IoT gateways in the system. Resources are not part of the RDHT
overlay, but they can be registered locally, globally, or privately in different regions of
RDHT through an IoT gateway. Clients, in turn, can discover resources based on one or
more attributes of the required resources. During the discovery phase, the client can
choose a specific local region or the public region for the discovery of the resources. The
private resources that are registered privately, either in a local region or in the private
region, can only be discovered by a predefined set of clients in Lamred. During the
evaluation, Lamred showed a lower latency compared to the centralized and
location-independent decentralized resource discovery models. In addition, the required
security properties of the registered resources in Lamred, namely resource anonymity,
privacy, and unforgeability, have been proved.
Some open problems remain related to the proposed model. On the one hand, while the
current model supports registering private resources in Lamred, it offers only a two-level
binary policy and cannot define the set of attributes of clients that are able to discover
privately registered resources. This problem has to be addressed in future work. On the
other hand, in Lamred a separate lookup in the DHT overlay is issued for each of the
attributes, which adds a significant overhead to the system; a future study should
improve the efficiency of the discovery process.
## References
[1] Ali, Shabir, Banerjea, Shashwati, Pandey, Mayank, and Tyagi, Neeraj. Wireless Fog-Mesh: A communication and computation infrastructure for IoT based
smart environments. In Mobile, Secure, and Programmable Networking, pages
322–338, 2019. DOI: 10.1007/978-3-030-03101-5_27.
[2] Bhatnagar, Nirdosh. _Mathematical Principles of the Internet._ CRC Press,
2019. ISBN: 9781138505483.
[3] Bonomi, Flavio, Milito, Rodolfo, Zhu, Jiang, and Addepalli, Sateesh. Fog
computing and its role in the Internet of Things. In Proceedings of the first
_edition of the MCC workshop on Mobile cloud computing, pages 13–16. ACM,_
2012. DOI: 10.1145/2342509.2342513.
[4] Bormann, Carsten, Castellani, Angelo P, and Shelby, Zach. CoAP: An application protocol for billions of tiny internet nodes. IEEE Internet Computing,
16(2):62–67, 2012. DOI: 10.1109/MIC.2012.29.
[5] Cheshire, Stuart and Krochmal, Marc. DNS-based service discovery. RFC
6763, RFC Editor, 2013.
[6] Cirani, Simone, Davoli, Luca, Ferrari, Gianluigi, Léone, Rémy, Medagliani, Paolo, Picone, Marco, and Veltri, Luca. A scalable and self-configuring architecture for service discovery in the Internet of Things. IEEE Internet of Things Journal, 1(5):508–521, 2014. DOI: 10.1109/JIOT.2014.2358296.
[7] Damgård, Ivan Bjerre. A design principle for hash functions. In Conference on the Theory and Application of Cryptology, pages 416–427. Springer, 1989. DOI: 10.1007/0-387-34805-0_39.
[8] Datta, Soumya Kanti and Bonnet, Christian. Search engine based resource discovery framework for Internet of Things. In 2015 IEEE 4th Global Conference on Consumer Electronics (GCCE), pages 83–85. IEEE, 2015. DOI: 10.1109/GCCE.2015.7398707.
[9] Datta, Soumya Kanti, Da Costa, Rui Pedro Ferreira, and Bonnet, Christian.
Resource discovery in Internet of Things: Current trends and future standardization aspects. In 2nd World Forum on Internet of Things (WF-IoT), pages
542–547. IEEE, 2015. DOI: 10.1109/WF-IoT.2015.7389112.
[10] Eastlake, Don and Hansen, Tony. US secure hash algorithms (SHA and HMAC-SHA). RFC 4634, RFC Editor, 2006.
[11] Hunkeler, Urs, Truong, Hong Linh, and Stanford-Clark, Andy. MQTT-S: A publish/subscribe protocol for Wireless Sensor Networks. In 2008 3rd International Conference on Communication Systems Software and Middleware and Workshops (COMSWARE'08), pages 791–798. IEEE, 2008. DOI: 10.1109/COMSWA.2008.4554519.
[12] Jara, Antonio J, Lopez, Pablo, Fernandez, David, Castillo, Jose F, Zamora,
Miguel A, and Skarmeta, Antonio F. Mobile digcovery: A global service discovery for the Internet of Things. In 2013 27th International Conference on
_Advanced Information Networking and Applications Workshops, pages 1325–_
1330. IEEE, 2013. DOI: 10.1109/WAINA.2013.261.
[13] Jia, Bing, Li, Wuyungerile, and Zhou, Tao. A centralized service discovery algorithm via multi-stage semantic service matching in Internet of Things. In 2017 IEEE International Conference on Computational Science and Engineering (CSE) and IEEE International Conference on Embedded and Ubiquitous Computing (EUC), volume 1, pages 422–427. IEEE, 2017. DOI: 10.1109/CSE-EUC.2017.82.
[14] Jimenez, Raul, Osmani, Flutra, and Knutsson, Björn. Sub-second lookups on a large-scale Kademlia-based overlay. In 2011 IEEE International Conference on Peer-to-Peer Computing, pages 82–91. IEEE, 2011. DOI: 10.1109/P2P.2011.6038665.
[15] Kamel, Mohammed B. M., Crispo, Bruno, and Ligeti, Peter. A decentralized
and scalable model for resource discovery in IoT network. In 2019 International
_Conference on Wireless and Mobile Computing, Networking and Communica-_
_tions (WiMob), pages 1–4. IEEE, 2019. DOI: 10.1109/WiMOB.2019.8923352._
[16] Kamel, Mohammed B. M., Ligeti, Peter, Nagy, Adam, and Reich, Christoph.
Distributed Address Table (DAT): A decentralized model for end-to-end communication in IoT. To appear in Journal of P2P Networking and Applications,
2021.
[17] Kamel, Mohammed B. M., Ligeti, Peter, and Reich, Christoph. Region-based
distributed hash table for fog computing infrastructure. In 13th Joint Confer_ence on Mathematics and Informatics, pages 82–83, 2020._
[18] Lua, Eng Keong, Crowcroft, Jon, Pias, Marcelo, Sharma, Ravi, and Lim, Steven. A survey and comparison of peer-to-peer overlay network schemes. IEEE Communications Surveys & Tutorials, 7(2):72–93, 2005. DOI: 10.1109/COMST.2005.1610546.
[19] Maurer, Ward Douglas and Lewis, Theodore Gyle. Hash table methods. ACM
_Computing Surveys (CSUR), 7(1):5–19, 1975. DOI: 10.1145/356643.356645._
[20] Maymounkov, Petar and Mazieres, David. Kademlia: A peer-to-peer information system based on the xor metric. In International Workshop on Peer-to_Peer Systems, pages 53–65. Springer, 2002. DOI: 10.1007/3-540-45748-8_5._
[21] Mokadem, Riad, Hameurlain, Abdelkader, and Tjoa, A Min. Resource discovery service while minimizing maintenance overhead in hierarchical DHT
systems. International Journal of Adaptive, Resilient and Autonomic Systems
_(IJARAS), 3(2):1–17, 2012. DOI: 10.4018/jaras.2012040101._
[22] Montresor, Alberto and Jelasity, Márk. PeerSim: A scalable P2P simulator. In Proc. of the 9th Int. Conference on Peer-to-Peer (P2P'09), pages 99–100, Seattle, WA, September 2009. DOI: 10.1109/P2P.2009.5284506.
[23] Pahl, M. and Liebald, S. A modular distributed IoT Service Discovery. In
_2019 IFIP/IEEE Symposium on Integrated Network and Service Management_
_(IM), pages 448–454, 2019._
[24] Pahl, Marc-Oliver. Distributed smart space orchestration. PhD thesis, Technische Universität München, 2014. https://mediatum.ub.tum.de/1196145.
[25] Pattar, Santosh, Buyya, Rajkumar, Venugopal, KR, Iyengar, SS, and Patnaik,
LM. Searching for the IoT resources: Fundamentals, requirements, comprehensive review, and future directions. IEEE Communications Surveys & Tutorials,
20(3):2101–2132, 2018. DOI: 10.1109/COMST.2018.2825231.
[26] Picone, Marco, Amoretti, Michele, and Zanichelli, Francesco. Geokad: A P2P
distributed localization protocol. In 2010 8th IEEE International Conference
_on Pervasive Computing and Communications Workshops (PERCOM Work-_
_shops), pages 800–803. IEEE, 2010. DOI: 10.1109/PERCOMW.2010.5470545._
[27] Roos, Stefanie, Salah, Hani, and Strufe, Thorsten. On the routing of Kademlia-type systems. In Advances in Computer Communications and Networks. River Publishers, 2017.
[28] Rowstron, Antony and Druschel, Peter. Pastry: Scalable, decentralized object location, and routing for large-scale peer-to-peer systems. In IFIP/ACM International Conference on Distributed Systems Platforms and Open Distributed Processing, pages 329–350. Springer, 2001. DOI: 10.1007/3-540-45518-3_18.
[29] Stoica, Ion, Morris, Robert, Karger, David, Kaashoek, M Frans, and Balakrishnan, Hari. Chord: A scalable peer-to-peer lookup service for internet applications. ACM SIGCOMM Computer Communication Review, 31(4):149–160,
2001. DOI: 10.1145/964723.383071.
[30] Wirtz, H., Heer, T., Hummen, R., and Wehrle, K. Mesh-DHT: A locality-based
distributed look-up structure for Wireless Mesh Networks. In 2012 IEEE In_ternational Conference on Communications (ICC), pages 653–658, June 2012._
DOI: 10.1109/ICC.2012.6364336.
[31] Wirtz, Hanno, Heer, Tobias, Serror, Martin, and Wehrle, Klaus. DHT-based localized service discovery in wireless mesh networks. In 2012 IEEE 9th International Conference on Mobile Ad-Hoc and Sensor Systems (MASS 2012), pages 19–28. IEEE, 2012. DOI: 10.1109/MASS.2012.6502498.
[32] Yang, Yijun, Zhang, Xiaomei, Yu, Jianping, Zhang, Peng, et al. Research on
the hash function structures and its application. Wireless Personal Commu_nications, 94(4):2969–2985, 2017. DOI: 10.1007/s11277-016-3760-4._
[33] Zhao, Ben Y, Huang, Ling, Stribling, Jeremy, Rhea, Sean C, Joseph, Anthony D, and Kubiatowicz, John D. Tapestry: A resilient global-scale overlay for service deployment. IEEE Journal on Selected Areas in Communications, 22(1):41–53, 2004. DOI: 10.1109/JSAC.2003.818784.
# Arguments of Proximity
## [Extended Abstract]
Yael Tauman Kalai[1] and Ron D. Rothblum[2]
1 Microsoft Research, Cambridge, USA
yael@microsoft.com
2 Weizmann Institute of Science, Rehovot, Israel
ron.rothblum@weizmann.ac.il
**Abstract.** An interactive proof of proximity (IPP) is an interactive protocol in which a prover tries to convince a sublinear-time verifier that x ∈ L. Since the verifier runs in sublinear time, following the property testing literature, the verifier is only required to reject inputs that are far from L. In a recent work, Rothblum et al. (STOC, 2013) constructed an IPP for every language computable by a low-depth circuit.
In this work, we study the computational analogue, where soundness is required to hold only against a computationally bounded cheating prover. We call such protocols interactive arguments of proximity.
Assuming the existence of a sub-exponentially secure FHE scheme, we construct a one-round argument of proximity for every language computable in time t, where the running time of the verifier is o(n) + polylog(t) and the running time of the prover is poly(t).
As our second result, assuming sufficiently hard cryptographic PRGs, we give a lower bound, showing that the parameters obtained both in the IPPs of Rothblum et al., and in our arguments of proximity, are close to optimal.
Finally, we observe that any one-round argument of proximity immediately yields a one-round delegation scheme (without proximity) where the verifier runs in linear time.
## 1 Introduction
With the prominent use of computers, tremendous amounts of data are available.
For example, hospitals have massive amounts of medical data. This data is very
precious as it can be used, for example, to learn important statistics about various diseases. This data is often too large to store locally, and thus is often stored
on cloud platforms (or external servers). As a result, if a hospital (which has
bounded storage and bounded computational power), wishes to perform some
computation on its medical data, it would need to delegate this computation to
the cloud. Since the cloud’s computation may be faulty, the party delegating the
computation (say, the hospital), may want a proof that the computation was
done correctly. It is important that this proof can be verified very efficiently,
© International Association for Cryptologic Research 2015
and that the prover’s running time is not much larger than the time it takes to
perform the computation, since otherwise, the solution will not be practical.
This problem is closely related to the problem of computation delegation,
where a weak client delegates a computation to a powerful server, and the
server needs to provide the client with a proof that the computation was
done correctly. In contrast to the current setting, in the setting of computation delegation, the input is thought of as being small and the computation is thought of as being large. The client (verifier) is required to run in
time that is proportional to the input size (but much smaller than the time
it takes to do the computation), and the powerful server (prover) runs in time
polynomially related to the time it takes to do the computation. Indeed the
problem of computation delegation is extremely important, and received a lot
of attention (e.g., [GKR08, Mic94, Gro10, GGP10, CKV10, AIK10, GLR11, Lip12,
BCCT12a, DFH12, BCCT12b, GGPR12, PRV12, KRR13a, KRR13b]).
In reality, however, the input (data) is often very large, and the client cannot
even store the data. Hence, we seek a solution in which the client runs in time
that is sub-linear in the input size. The question is:
_If the client cannot read the data, how can he verify the correctness of a computation on the data?_
The work of [CKLR11], on memory delegation, considers this setting where
the input (thought of as the client’s memory) is large, and the client cannot store
it locally. However, in memory delegation, it is assumed that the client (verifier)
stores a short “commitment” of the input, and then can verify computations
in sub-linear time. However, computing such a commitment takes time at least
linear in the input length, which is infeasible in many settings.
Recently, Rothblum, Vadhan and Wigderson [RVW13], in their work on interactive proofs of proximity (IPP, a notion first studied by Ergün, Kumar and
Rubinfeld [EKR04]), provide a solution where the verifier does not need to know
such a commitment. Without such a commitment, the verifier cannot be sure
that the computation is correct (since he cannot read the entire input), however
they guarantee that the input is “close” to being correct. More specifically, they
construct an interactive proof system for every language computable by a (log-space uniform) low-depth circuit, where the verifier is given oracle access to the
input (the data), and the verifier can check whether the input is close to being
in the language in sub-linear time in the input (and linear time in the depth of
the computation). We note that in many settings where the data is large (such
as medical data) and the goal is to compute some statistics on this data, an
approximate solution is acceptable. The work of [RVW13] is the starting point
of our work.
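The flavour of a sub-linear proximity verifier can be conveyed with the simplest possible example, unrelated to the [RVW13] protocol itself: to test ε-proximity to the all-zeros language, it suffices to query O(1/ε) random positions of the input oracle and reject on the first 1 found, since an ε-far input has at least an ε fraction of ones:

```python
import random

def close_to_zero(oracle, n, eps, seed=0):
    """Accept if x looks eps-close to 0^n, querying only O(1/eps) bits.
    If x is eps-far (>= eps*n ones), a random sample hits a 1 w.h.p."""
    rng = random.Random(seed)
    queries = int(3 / eps)          # constant * (1/eps) queries, independent of n
    for _ in range(queries):
        if oracle(rng.randrange(n)) == 1:
            return False            # witnessed a 1: reject
    return True

n = 10_000
x = [0] * n
print(close_to_zero(lambda i: x[i], n, eps=0.01))  # True
for i in range(2_000):              # make x 0.2-far from 0^n
    x[i] = 1
print(close_to_zero(lambda i: x[i], n, eps=0.01))
```

Note that the verifier never reads all of x: 300 queries suffice here regardless of n, which is exactly the sub-linearity the proximity relaxation buys.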
**1.1** **Our Results in a Nutshell**
We depart from the interactive proof of proximity setting, and consider _arguments of proximity_. In contrast to proofs of proximity, in an argument of proximity, soundness is required to hold only against computationally bounded cheating provers. Namely, the soundness guarantee is that any bounded cheating prover
can convince the verifier to accept an input that is far from the language (in
Hamming distance) only with small probability. By relaxing the power of the
prover we obtain stronger results.
We construct one-round arguments of proximity for every deterministic language (without a dependency on the depth). Namely, fix any t = t(n) and any language L ∈ DTIME(t(n)); we construct a one-round argument of proximity for L where the verifier runs in time o(n) + polylog(t), assuming the existence of a sub-exponentially secure fully homomorphic encryption (FHE) scheme.
Our one-round argument of proximity is constructed in two steps, and follows
the outline of the recent works of Kalai et al. [KRR13a, KRR13b]. These works
first show how to construct an MIP for all deterministic languages, that is sound
against no-signaling strategies. Such no-signaling soundness is stronger than the
typical notion of soundness, and is inspired by quantum physics and by the
principle that information cannot travel faster than light (see Sect. 3.2 for the
definition, and [KRR13a, KRR13b] for more background on this notion). They
then show how to convert these no-signaling MIPs into one-round arguments.
As our first step, we combine the IPPs of [RVW13], and the no-signaling
MIP construction of [KRR13b], to obtain a no-signaling multi-prover interactive
_proof of proximity_ (MIPP). This construction combines techniques and results of
[RVW13] and [KRR13b], and may be of independent interest.
Then, similarly to [KRR13a], we show how to convert any no-signaling MIPP
to a one-round argument of proximity. This transformation relies on a heuristic
developed by Aiello et al. [ABOR00], which uses a (computational) PIR scheme
(or a fully homomorphic encryption scheme) to convert any MIP into a one-round
argument. This heuristic was proven to be secure in [KRR13a] if the underlying
MIP is secure against no-signaling strategies. We extend the result of [KRR13a]
to the proximity setting.
Finally, we provide a negative result, which shows that the parameters we
obtain for MIPP and the parameters obtained in [RVW13], are somewhat tight.
Proving such a lower bound was left as an open problem in [RVW13]. This part
contains several new ideas, and is the main technical contribution of this work.
We also show that the parameters in our one-round argument of proximity
are somewhat optimal, for arguments with adaptive soundness and are proven
to be (adaptively) sound via a black-box reduction to a falsifiable assumption.
See the full version for further details.
**Linear-Time Delegation.** We observe that both proofs and arguments of proximity, aside from being natural notions, can also be used as tools to obtain new
results for delegating computation in the standard setting (i.e., where soundness
is guaranteed for every x ∉ L). More specifically, using our results on arguments
of proximity and the [RVW13] results on interactive proofs of proximity for low-depth circuits, we can construct (standard) one-round argument-systems for any
deterministic computation, and interactive proof systems for low-depth circuits,
where the verifier truly runs in linear-time. In contrast, the results of [GKR08]
and [KRR13b] only give a quasi-linear time verifier.[1]
**1.2** **Our Results in More Detail**
Our main result is a construction of a one-round argument of proximity for any
deterministic language. Here, and throughout this work, we use n to denote the
input length. Let t = t(n) and let L ∈ DTIME(t) be a language. For a proximity
parameter ε = ε(n) ∈ (0, 1), we denote by ε-IPP an interactive proof for testing
ε-proximity to L.[2] Similarly, we denote by ε-MIPP a multi-prover interactive
proof for testing ε-proximity to L.
**Theorem 1 (Informal).** Suppose that there exists a sub-exponentially secure
FHE. Fix a proximity parameter ε := n^{−(1−β)}, for some sufficiently small β > 0,
and a security parameter τ (polynomially related to n).

There exists a 1-round argument of ε-proximity for L, where the verifier runs
in time n^{1−γ} + polylog(t) + poly_FHE(τ), where γ > 0 is a constant and poly_FHE is
a polynomial that depends only on the FHE scheme, and makes n^{1−γ} + polylog(t)
oracle queries to the main input. The prover runs in time poly(t). The total
communication is of length poly_FHE(τ).
Note that for languages in DTIME(2^{n^α}) for sufficiently small α > 0 (and in
particular for languages in P), the verifier in Theorem 1 runs in sub-linear time.
As mentioned previously, this result is obtained in two steps. We first construct an MIPP that is sound against no-signaling strategies, and then show how
to convert any such MIPP into a one-round argument of proximity.
**Theorem 2 (Informal).** Fix a proximity parameter ε = ε(n) ∈ (0, 1). There
exists an ε-MIPP that is secure against no-signaling strategies, where the verifier
makes q = (1/ε)^{1+o(1)} oracle queries to the input, the communication
complexity is c = (εn)^2 · n^{o(1)} · polylog(t), and the running time of the verifier
is (εn)^2 · polylog(t) + (1/ε + εn)^{1+o(1)}.
We then show how to convert any no-signaling ε-MIPP to a one-round argument
of ε-proximity. In the following, we say that a fully homomorphic encryption
scheme (FHE) is (T, δ)-secure if every family of circuits of size T can break the
semantic security of the FHE with probability at most δ.
**Theorem 3 (Informal).** Fix a proximity parameter ε = ε(n) ∈ (0, 1). Suppose
that the language L has an ℓ-prover ε-MIPP that is sound against δ-no-signaling
strategies, with communication complexity c. Suppose that there exists a (T, δ/ℓ)-
secure FHE, where T ≥ 2^c. Then L has a 1-round argument of ε-proximity where
1 Actually, by an observation of Vu et al. [VSBW13] (see also [Tha13, Lemma 3]), the
verifier in the [GKR08] protocol can be directly implemented in linear-time. However
the latter implementation would only guarantee constant soundness error.
2 A string x ∈ {0, 1}^n is ε-close to L if there exists x′ ∈ {0, 1}^n ∩ L such that
Δ(x, x′) ≤ εn, where Δ denotes the Hamming distance between the two strings.
the running time of the prover and verifier and the communication complexity of
the argument system are proportional to those of the underlying MIPP scheme.
We note that the parameters in Theorem 2 are somewhat similar to the parameters of the interactive proof of proximity (IPP) in [RVW13]. In particular, in
both constructions it holds that c · q = Ω(n). The work of [RVW13] shows that
this lower bound of c · q = Ω(n) is inherent for IPPs with 2 messages (and that a
weaker bound holds for IPPs with a constant number of rounds), and left open
the question of whether this lower bound is inherent for general (multi-round)
IPPs.
We resolve this question by showing that for every ε-IPP, and every ε-MIPP
that is sound against no-signaling strategies, it must be the case that c · q = Ω(n).
For this result we assume the existence of exponentially hard pseudorandom
generators.
**Theorem 4 (Informal).** Assume the existence of exponentially hard pseudorandom generators. There exists a constant ε > 0 such that for every q = q(n) ≤ n,
there exists a language L ∈ P such that for every ε-IPP for L, and for every
ε-MIPP for L that is sound against no-signaling adversaries, it holds that q · c =
Ω(n), where q is the query complexity and c is the communication complexity.
In fact, assuming a slightly stronger cryptographic assumption, we can replace
L ∈ P with L ∈ NC1 (which shows that the [RVW13] upper bound for log-space
uniform NC is essentially tight). See Sect. 4 for details.
We note that the [RVW13] lower bound for 2-message IPPs is unconditional (and in particular they do not assume that the verifier is computationally
bounded). It remains an interesting open problem to obtain an unconditional
lower bound for multi-message IPPs.
The parameters we obtain for the one-round argument also satisfy q · c =
Ω(n). We show that these parameters are close to optimal for arguments with
adaptive soundness that are proven sound via a black-box reduction to falsifiable
assumptions. We refer the reader to the full version for details.
Finally, using the [RVW13] protocol or the protocol of Theorem 1 we construct delegation schemes in which the verifier runs in linear-time.
**Theorem 5 (Informal).** For every language in (logspace-uniform) NC there
exists an interactive proof system in which the verifier runs in time O(n) and
the prover runs in time poly(n).
**Theorem 6 (Informal).** Assume that there exists a sub-exponentially secure
FHE. Then, for every language in P there exists a 1-round argument-system in
which the verifier runs in time O(n) and the prover runs in time poly(n).
**1.3** **Related Work**
As mentioned above, the works of [RVW13] and [KRR13a, KRR13b] are most
related to ours. Both our work and the work of [RVW13] lie in the intersection of property testing and computation delegation. As opposed to property
testing, where an algorithm is required to decide whether an input is close to
the language on its own in sub-linear time, in our work the algorithm receives
a proof, and only needs to verify correctness of the proof in sub-linear time.
Thus, our task is significantly easier than the task in property testing. Indeed,
we get much stronger results. In particular, the works on property testing typically get sub-linear algorithms for specific languages, whereas our result holds
for all deterministic languages.[3]
Another very related problem is that of constructing a probabilistically checkable proof of proximity (PCPP) [BSGH+06] (also known as an assignment tester
[DR06]). A PCPP consists of a prover, who publishes a long proof, and a verifier,
who gets oracle access to this proof and to the instance x, and needs to decide
whether x is close to the language in sub-linear time. The significant difference
between PCPPs and proofs/arguments of proximity is that in the PCPP setting
the proof is a fixed string (and cannot be modified adaptively based on the
verifier's messages).
The fundamental works of Kilian and Micali [Kil92, Mic94] show how to
convert any probabilistically checkable proof (PCP) into a 2-round (4-message)
argument. As pointed out by [RVW13], their transformation can also be used
to convert any PCPP into a 2-round argument of proximity. Thus, obtaining a
2-round argument of proximity follows immediately by applying the transformation of [Kil92, Mic94] to any PCPP construction. Moreover, the parameters of
the resulting 2-round argument are optimal (up to logarithmic factors); i.e., the
query complexity, the communication complexity and the runtime of the verifier
are poly(log(t), τ), where t is the time it takes to decide whether x is in the
language, and where τ is the security parameter.
The focus of this work is on constructing one-round arguments of proximity.
Unfortunately, our parameters do not match those of the two-round arguments
of proximity outlined above. However, we show that using our techniques (i.e., of
constructing one-round arguments of proximity from no-signaling MIPPs), our
parameters are almost optimal.
Other works that are related to ours are the work of Gur and Rothblum
[GR13] on non-interactive proofs of proximity, and of Fischer et al. [FGL14] on
partial testing. The former studies an NP version of property testing (which
can be thought of as a 1-message variant of IPP), whereas the latter studies a
model of property testing in which the tester needs to only accept a sub-property
(we note that the two notions, which were developed independently, are tightly
related, see [GR13, FGL14] for details).
**Organization.** In this extended abstract we give an overview of our techniques
and only prove some of our results. In Sect. 2 we give a high level view of our
techniques. In Sect. 3 we formally define arguments of proximity and the other
central definitions that are used throughout this work. In Sect. 4 we show our
3 Indeed, as shown by Goldreich, Goldwasser and Ron [GGR98], there are properties
in very low complexity classes that require Ω(n) queries and running-time in order
to test (without the help of a prover).
lower bound for no-signaling MIPPs. See the full version for the missing proofs
and formal theorem statements.
## 2 Our Techniques
**2.1** **Our Positive Results**
To construct arguments of proximity for languages in DTIME(t), we adapt the
technique of [KRR13a] to the “proximity” setting. That is, we first construct an
MIPP that has soundness against no-signaling strategies and then employ the
technique of Aiello et al. [ABOR00] to obtain an argument of proximity. We
elaborate on these two steps below. In what follows, we focus for simplicity on
languages in P, though everything extends to languages in DTIME(t).
**No-Signaling MIPPs for P.** Our first step (which is technically more involved)
is a construction of MIPPs that are sound against no-signaling strategies for
any language L ∈ P. This construction is inspired by (and reminiscent of) the
IPP construction of [RVW13]. The starting point for the [RVW13] IPP is the
"Muggles" protocol of Goldwasser et al. [GKR08], whereas our starting point is
the no-signaling MIP of [KRR13b].

The main technical difficulty in using both the [GKR08] and [KRR13b] protocols with a sublinear-time verifier is that in both protocols the verifier needs
to compute an error-corrected encoding of the input x. More specifically, the
verifier needs to compute the low degree extension of x, denoted LDE_x. Since
error-correcting codes are very sensitive to changes in the input, a sub-linear
algorithm has no hope of computing LDE_x.
The key point is that in both the [GKR08] and the [KRR13b] protocols, it
suffices for the verifier to check the value of LDE_x at relatively few randomly
selected points (this property was also used by [CKLR11] in their work on memory delegation). Hence, it will be useful for us to view both the [GKR08] and
[KRR13b] protocols as protocols for producing a sequence of points J in the
low degree extension of x and a sequence of corresponding values v with the
following properties:

– If x ∈ L and the prover(s) honestly follow the protocol then LDE_x(J) = v.
– If x ∉ L then no matter what the cheating prover does (resp., no-signaling
cheating provers do), with high probability the verifier outputs J, v such that
LDE_x(J) ≠ v.

Hence, the verifiers in both protocols first run this subroutine to produce J and
v and then accept if and only if LDE_x(J) = v. Remarkably, in both cases, in the
protocol that produces J and v, the verifier does not need to access x.
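To make the object being checked concrete, the following sketch evaluates a low degree extension in its simplest special case, the multilinear extension. The field modulus and the restriction to multilinear polynomials are our simplifications, not the parameters of [GKR08] or [KRR13b], which use higher individual degrees over suitably sized fields.

```python
# Illustrative sketch only: the multilinear extension of x, a special case of
# the low degree extension LDE_x. The prime modulus below is our own
# illustrative choice.

P = 2**61 - 1  # illustrative prime field modulus

def multilinear_extension(x, z):
    """Evaluate at z in F^k the unique multilinear polynomial agreeing with
    x (of length 2^k) on the Boolean cube {0,1}^k."""
    k = len(z)
    assert len(x) == 2 ** k
    acc = 0
    for i, xi in enumerate(x):
        # Lagrange basis term for the Boolean point given by the bits of i.
        term = xi % P
        for j in range(k):
            bit = (i >> j) & 1
            term = term * (z[j] if bit else (1 - z[j])) % P
        acc = (acc + term) % P
    return acc
```

Note that a full evaluation as above reads all of x; the point made in the text is precisely that a sub-linear verifier must avoid this, and instead checks LDE_x only at the few points J.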
The next step in [RVW13] is a parallel repetition of the foregoing protocol
in order to reduce the soundness error. Once the soundness error is sufficiently
small, [RVW13] argue that for every x that is ε-far from L, no matter what the
cheating prover does (in the parallel repetition of the base protocol), the verifier
will output J, v such that not only LDE_x(J) ≠ v but, furthermore, x is far from
any x′ such that LDE_{x′}(J) = v. This step simply follows by taking a union
bound over all x′ that are close to x.
We borrow this step almost as-is from [RVW13], except for the following
technical difficulty: it is not known whether parallel repetition decreases the
soundness error of no-signaling MIP protocols.[4] However, we observe that the
[KRR13b] protocol already allows for sufficient flexibility in choosing its soundness error, so that the parallel repetition step can be avoided.
The last step of [RVW13] is designing an IPP protocol for a language that
they call PVAL_{J,v} (for "polynomial evaluation"). This language, parameterized
by J and v, consists of all strings x such that LDE_x(J) = v. Using this IPP
for PVAL, the IPP verifier for a language L first runs the (parallel repetition of
the) [GKR08] protocol, to produce J, v as above. Then, the IPP verifier runs the
PVAL_{J,v} protocol and accepts if and only if the PVAL-verifier accepts. If x ∈ L
then we know that LDE_x(J) = v and therefore the PVAL-verifier will accept,
whereas if x is far from L then x is far from PVAL_{J,v} and therefore the PVAL-verifier will reject. Hence the (parallel repetition of the) [GKR08] protocol is
sequentially composed with the IPP for PVAL.
For the no-signaling case, we also use the [RVW13] IPP protocol for PVAL.
A technical difficulty that arises is that, in contrast to the IPP setting, in which
sequential composition (of two interactive proofs) is trivial, here we need to compose a 1-round no-signaling MIP with an IPP protocol to produce a no-signaling
MIPP. We indeed prove that such a composition holds, thereby constructing a
no-signaling MIPP as we desire.
**From No-Signaling MIPP to Arguments of Proximity.** The transformation
from a no-signaling MIPP to an argument of proximity is based on the assumption that there exists a fully homomorphic encryption scheme (or, alternatively, a
computational private information retrieval scheme) and is practically identical
to that in [KRR13a]. More specifically, the argument's verifier uses the MIPP
verifier to generate a sequence of queries q_1, ..., q_ℓ to the ℓ provers. It encrypts
each query using a fresh encryption key as follows: q̂_i ← Enc_{k_i}(q_i). The argument's verifier sends all the encrypted queries to the prover. Given q̂_1, ..., q̂_ℓ, the
prover uses the homomorphic evaluation algorithm to compute the MIPP answers
"underneath" the encryption. It sends these answers back to the verifier, which
can decrypt the encrypted answers and decide. As in [KRR13a], we show that
if the MIPP is sound against no-signaling strategies then, assuming the semantic security of the FHE, the resulting protocol is sound against computationally
bounded adversaries.
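The information flow of this transformation can be sketched as follows. The MockFHE below is a functional placeholder with no security whatsoever (it stores plaintexts in the clear); it only models the Enc/Eval/Dec interface, and all names here are ours rather than the paper's.

```python
# Sketch of the [ABOR00]-style transformation: each MIPP query is encrypted
# under a fresh key, the prover computes its answer "underneath" the
# encryption via homomorphic evaluation, and the verifier decrypts.
# MockFHE is an insecure functional stand-in for a real FHE scheme.

import secrets
from dataclasses import dataclass

@dataclass
class Ciphertext:
    key_id: int
    payload: object  # mock: the plaintext, stored in the clear

class MockFHE:
    def keygen(self):
        return secrets.randbits(32)

    def enc(self, key, m):
        return Ciphertext(key, m)

    def eval(self, f, ct):
        # "Homomorphic" evaluation: apply f to the (mock) plaintext.
        return Ciphertext(ct.key_id, f(ct.payload))

    def dec(self, key, ct):
        assert ct.key_id == key
        return ct.payload

def one_round_argument(mipp_queries, prover_answer_fns, fhe=None):
    fhe = fhe or MockFHE()
    # Verifier: encrypt the i-th query under a fresh key k_i.
    keys = [fhe.keygen() for _ in mipp_queries]
    encrypted = [fhe.enc(k, q) for k, q in zip(keys, mipp_queries)]
    # Prover: answer each encrypted query homomorphically; with a real FHE
    # it would learn nothing about the underlying queries.
    replies = [fhe.eval(f, ct) for f, ct in zip(prover_answer_fns, encrypted)]
    # Verifier: decrypt the answers and feed them to the MIPP decision.
    return [fhe.dec(k, ct) for k, ct in zip(keys, replies)]
```

Since each query travels under an independent key, a strategy of the single prover corresponds (via the security of the FHE) to a no-signaling strategy of the ℓ provers, which is exactly why no-signaling soundness of the MIPP is the right notion here.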
**Linear-Time Delegation.** We show that using the foregoing one-round argument
of proximity for every language L ∈ P and good error-correcting codes, one can
easily construct a one-round delegation protocol where the verifier runs in linear
time (in contrast, the verifier in [KRR13b] runs in quasi-linear time). A similar
observation, in the context of PCPs, was previously pointed out by [EKR04].
4 Holenstein [Hol09] showed a parallel repetition theorem for no-signaling 2-prover
MIPs. It is not known whether this result can be extended to 3 or more provers.
Let L ∈ P and consider L′ = {ECC(x) : x ∈ L}, where ECC is an error-correcting code with constant rate, constant relative distance, linear-time encoding
and polynomial-time decoding[5]. Then L′ ∈ P, and so it has an argument of
proximity with a sublinear-time verifier. We construct a delegation scheme for
L by having both the verifier and the prover compute x′ = ECC(x) and run
the argument of proximity protocol with respect to x′. Since the argument-of-proximity verifier runs in sublinear time, and ECC(x) can be computed in linear
time, the resulting delegation verifier runs in linear time. Soundness follows from
the fact that a cheating prover that convinces the argument-system verifier to
accept x ∉ L can be used to convince the argument-of-proximity verifier to
accept ECC(x), which is indeed far from L′.

A similar result can be obtained for interactive proofs for low-depth computation, based on the results of [RVW13], by using an error-correcting code that
can be decoded in logarithmic depth (such a code was constructed by Spielman [Spi96]).
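The distance amplification underlying this reduction can be checked on a toy example. Both the language ("even bit-sum") and the 3-fold repetition code below are our own illustrative stand-ins; the construction above would use a code such as Spielman's.

```python
# Toy check of the reduction L -> L' = {ECC(x) : x in L}: membership in L
# becomes a proximity question about L'. The repetition code and the toy
# language are illustrative choices, not the paper's instantiation.

from itertools import product

REP = 3  # 3-fold repetition: constant rate, linear-time encoding

def ecc_encode(x):
    return tuple(b for b in x for _ in range(REP))

def in_L(x):  # toy language: strings with an even number of 1s
    return sum(x) % 2 == 0

def hamming(a, b):
    return sum(u != v for u, v in zip(a, b))

n = 6
L_prime = [ecc_encode(x) for x in product((0, 1), repeat=n) if in_L(x)]

def dist_to_L_prime(codeword):
    return min(hamming(codeword, c) for c in L_prime)

# x in L      -> ECC(x) lies in L' (distance 0).
# x not in L  -> ECC(x) is at distance >= REP from L' (a constant fraction
#                of the codeword length), so a proximity verifier for L'
#                already decides membership in L.
```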
**2.2** **Our Negative Results**
We prove that, assuming the existence of exponentially hard pseudorandom generators, there exists a constant ε > 0 for which there does not exist a no-signaling
ε-MIPP for all of P with query complexity q and communication complexity c
such that q · c = o(n) (where n is the input length). We also show a similar result
for ε-IPP.
We start by focusing on our lower bound for MIPP. The high-level idea is
the following: Suppose (towards contradiction) that every language in P has a
no-signaling MIPP with query complexity q and communication complexity c,
where q · c = o(n). The fact that q = o(n) implies that (for every language in P)
there is some set of coordinates S ⊆ [n] of size O(n/q) that with high (constant)
probability the verifier does not query.
As a first step, suppose for the sake of simplicity that there is a fixed (universal) set of coordinates S ⊆ [n] such that with high probability the verifier never
queries the coordinates in S, for every language in P (for example, if the verifier's queries are non-adaptive and are generated before it communicates with
the prover, then such a set S must exist). We derive a contradiction by showing that one can use the no-signaling MIPP to construct a no-signaling MIP for
languages in NP \ P with communication c = o(n). The latter was shown to be
impossible, assuming that NP ⊈ DTIME(2^{o(n)}) [DLN+04] (see also [Ito10]).
The basic idea is the following: Take any language L ∈ NP \ P that is assumed
to be hard to compute in time 2^{o(n)}, and convert it into the language L′ ∈ P,
defined as follows: x′ ∈ L′ if and only if x′_S is a valid witness of x′_{[n]\S} in the
underlying NP language L. The no-signaling MIP for L will simply be the no-signaling ε-MIPP for L′, where the MIP verifier simulates the ε-MIPP verifier
with oracle access to x′, where x′_{[n]\S} = x and x′_S = 0^{|S|}. Note that the MIP
verifier, which takes as input x (supposedly in L), cannot (efficiently) generate a
5 Such codes are known to exist, see, e.g., [Spi96].
corresponding witness w and set x′_S = w. But the point is that it does not need
to, since S was chosen so that with high probability the MIPP verifier for L′ will
not query x′ on coordinates in S.
There are several problems with this approach. First, the witness can be very
long compared to x, and the set S may be very small compared to n. In this
case we will not be able to fit the entire witness in the coordinate set S. Second,
after running the MIPP, the verifier is convinced that x′ is close to an instance
in L′. However, this does not imply that x is in L (and can only imply that x is
close to L).
One can fix these two problems with a single solution: Instead of setting
x′_{[n]\S} = x we set x′_{[n]\S} = ECC(x), where ECC is an error-correcting code
with efficient encoding that is resilient to a 2ε-fraction of errors. Now, we can take
ECC(x) so that |ECC(x)| is very large compared to |w|, so that we can fit all
of the witness in the coordinate set S. Moreover, if |ECC(x)| > |w|, then if x′
is ε-close to L′ then x′_{[n]\S} is 2ε-close to L. This, together with the fact that
ECC(x) is resilient to a 2ε-fraction of errors, implies that the encoded element is
indeed in L.
The foregoing idea would indeed work if there were a fixed (universal) set S
that the MIPP verifier does not query (with high probability). However, this is
not necessarily the case, and this set S may be different for different languages
in P. In particular, we cannot claim that for the language L′ the set S is exactly
where the witness lies. Namely, it may be that the verifier in the underlying
MIPP always queries some coordinates in S.
We solve this problem by using repetitions. Namely, every element x′ ∈ L′
will consist of many instances (encoded using an error-correcting code) along
with many witnesses; i.e., x′ = (ECC(x_1, ..., x_m), w_1, ..., w_m), where each w_j is
a witness for the NP statement x_j ∈ L. Now, suppose that the verifier makes q
queries to x′ (where q = o(n)). Then if we take m = 4q we know that 3/4
of the (x_j, w_j)'s are not queried.
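The counting behind the choice m = 4q can be sanity-checked directly; the block accounting below is ours and purely illustrative.

```python
# Queries landing in the witness part of x' = (ECC(x_1,...,x_m), w_1,...,w_m)
# touch at most one witness block w_j each, so q queries touch at most q of
# the m blocks; with m = 4q, at least a 3/4 fraction of the (x_j, w_j) pairs
# is never queried.

def untouched_fraction(query_block_ids, m):
    return 1 - len(set(query_block_ids)) / m

q = 10
m = 4 * q
# Worst case for the verifier's coverage: all q queries hit distinct blocks.
assert untouched_fraction(range(q), m) == 0.75
```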
As above, we derive a contradiction by showing that one can use the no-signaling MIPP to construct a no-signaling MIP for languages in NP \ P with o(n)
communication (which is known to be impossible for languages that cannot be
computed in time 2^{o(n)} [DLN+04, Ito10]). However, now the no-signaling MIP
construction will be different: Given an instance x (supposedly in L), the MIP
verifier will choose a random i* ∈_R [m], along with m random instance and
witness pairs (x_1, w_1), ..., (x_m, w_m), where x_{i*} = x and w_{i*} can be arbitrary
(assumed not to be queried).
We need to argue that with probability at least 3/4 the verifier will not query
the coordinates of w_{i*}, and thus with probability at least 3/4 the MIP verifier will
successfully simulate the MIPP verifier. If the queries of the MIPP verifier were
chosen before interacting with the prover then this would follow immediately
from the fact that i* ∈ [m] is chosen at random. However, the MIPP verifier may
choose its oracle queries after interacting with the MIPP provers, and therefore
we need to argue that the MIPP provers also do not know i*. Note that the
MIPP provers see all of x_1, ..., x_m. Hence, in order to claim that the provers
cannot guess i*, it needs to be the case that x is distributed identically to the
other x_1, ..., x_m.
Hence, we seek a language L ∈ NP \ P for which there exists a distribution D
(distributed over L) such that:

1. It is computationally hard to distinguish between x ∈_R D and x ∉ L (i.e., L
is hard on the average); and
2. x ∈_R D can be sampled together with a corresponding NP witness.
We note that the first requirement is needed to obtain a contradiction (and
replaces the weaker assumption that L ∈ NP \ P), whereas the second assumption
is required so that we can sample x_1, ..., x_m (together with the corresponding
witnesses) so that the MIPP protocol cannot distinguish between x and any of the
x_j's (thereby hiding i*). It can be easily verified that both requirements are met
by considering D which is the output of a cryptographic pseudorandom generator
(PRG). Hence the language L that we use is precisely the output of such a PRG.
Indeed, we can only argue that our no-signaling MIP has average-case completeness (with respect to the distribution D), since if x ∈ L is distributed
differently from (x_1, ..., x_m) then the verifier of the MIPP may always query the
coordinates where the witness of x is embedded, in which case the MIP verifier
will fail to simulate. However, for random x ∈_R L the provers (and verifier) in
the MIPP cannot guess i* with any non-negligible advantage, and therefore the
verifier will not query the coordinates of w_{i*} with probability at least 3/4, in
which case the MIP verifier will succeed in simulating the underlying ε-MIPP
verifier. We refer the reader to Sect. 4 for further details.
**A Lower Bound for IPP.** To obtain a multiplicative lower bound for IPP, we
follow the same paradigm outlined above for MIPPs with no-signaling soundness.
More specifically, we consider a language L ∈ NP and the corresponding language

L′ = {(ECC(x_1, ..., x_m), w_1, ..., w_m) : w_j is an NP-witness for x_j}

as above. We show that an IPP protocol for L′ implies a (standard) interactive
proof for L with similar communication complexity. Here we obtain a contradiction by arguing that (assuming exponential hardness) there are languages in
NP \ P for which every interactive proof requires Ω(n) communication. The latter is based on the proof that IP ⊆ PSPACE (i.e., the "easy" direction of the
IP = PSPACE theorem).
Given the [RVW13] positive result of IPPs for low-depth computations, we
would like to show that our lower bound holds not just for languages in P but even
for languages, say, in NC1 (thereby showing that the [RVW13] result is tight).
To do so we observe that if (1) the error-correcting code that we use has an
encoding procedure that can be computed by an NC1 circuit and (2) the cryptographic PRG can be computed in NC1, then indeed L′ ∈ NC1.
**A Lower Bound for One-Round Arguments of Proximity.** For one-round
arguments of proximity, we show a similar lower bound of q · c = Ω(n), assuming
the argument has adaptive soundness, and the proof of (adaptive) soundness is
via a black-box reduction to some falsifiable cryptographic assumption.
Loosely speaking, a cryptographic assumption is falsifiable (a notion due to
Naor [Nao03]) if there is an efficient way to “falsify it”, i.e., to demonstrate that
it is false. We note that most standard cryptographic assumptions (e.g., one-way
functions, public-key encryption, LWE etc.) are falsifiable. A black-box reduction
of one cryptographic primitive to another, is a reduction that, using black-box
access to any (possibly inefficient) adversary for the first primitive, breaks the
security of the second primitive.
Similarly to the MIPP and IPP lower bounds, we consider the languages L
and L′, as above, where L ∈ NP is exponentially hard on average and L′ ∈ P. We
prove that if there exists an adaptively sound one-round argument of proximity
for L′ with q · c = o(n), then there exists an adaptively sound one-round argument
for L with o(n) communication (in the CRS model).
We then rely on a result of Gentry and Wichs [GW11], which shows that
there does not exist a one-round argument for exponentially hard (on average)
NP languages with adaptive soundness and a black-box reduction to a falsifiable
assumption.

We conclude that P does not have an adaptively sound one-round argument of
proximity with q · c = o(n) and a black-box reduction to a falsifiable assumption.
We refer the reader to the full version for details.
## 3 Definitions
In this section we define arguments of proximity and MIPs of proximity (with
soundness against no-signaling strategies). See the full version for additional
standard definitions.
**Notation.** For x, y ∈ {0, 1}^n, we denote the Hamming distance of x and y by
Δ(x, y) := |{i ∈ [n] : x_i ≠ y_i}|. We say that x is ε-close to y if Δ(x, y) ≤ εn. We
say that x is ε-close to a set S ⊆ {0, 1}^n if there exists y ∈ S such that x is
ε-close to y.

If A is an oracle machine, we denote by A^x(z) the output of A when given
oracle access to x and explicit access to z.

For a vector a = (a_1, ..., a_ℓ) and a subset S ⊆ [ℓ], we denote by a_S the
sequence of elements of a that are indexed by indices in S, that is, a_S = (a_i)_{i∈S}.

For a distribution A, we denote by a ∈_R A a random variable distributed
according to A (independently of all other random variables). We will measure
the distance between two distributions by their statistical distance, defined as
half the ℓ1-distance between the distributions. We will say that two distributions
are δ-close if their statistical distance is at most δ.
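The distance notions above translate directly into executable checks (a direct transcription, with names of our choosing):

```python
# Hamming distance and eps-closeness, as defined in the Notation paragraph:
# x is eps-close to y if they differ in at most eps*n positions, and
# eps-close to a set S if it is eps-close to some member of S.

def hamming(x, y):
    assert len(x) == len(y)
    return sum(a != b for a, b in zip(x, y))

def is_eps_close(x, y, eps):
    return hamming(x, y) <= eps * len(x)

def is_eps_close_to_set(x, S, eps):
    return any(is_eps_close(x, y, eps) for y in S)
```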
**3.1** **Arguments of Proximity**
An interactive argument of proximity for a language L consists of a polynomial-time verifier that wishes to verify that x is close (in Hamming distance) to some
x′ such that x′ ∈ L, and a prover that helps the verifier to decide. The verifier is
given as input n ∈ N, a proximity parameter ε = ε(n) > 0, and oracle access to
x ∈ {0, 1}^n (and its oracle queries are counted). The prover gets as input ε and
x. The two parties interact, and at the end of the interaction the verifier either
accepts or rejects. We require that if x ∈ L then the verifier accepts with high
probability, but if x is ε-far from L then no computationally bounded prover can
convince the verifier to accept with non-negligible (in n) probability.
We focus on 1-round argument of proximity systems. Such an argument-system consists of a single message sent from the verifier V to the prover P,
followed by a single message sent from the prover to the verifier.

Let ε = ε(n) ∈ (0, 1) be a proximity parameter. Let T : N → N and s : N →
[0, 1] be parameters. We say that (V, P) is a one-round argument of ε-proximity
for L, with soundness (T, s), if the following two properties are satisfied:

1. Completeness: For every x ∈ L, the verifier V^x(|x|, ε) accepts with overwhelming probability, after interacting with P(ε, x).
2. Soundness: For every family of circuits {P*_n}_{n∈N} of size poly(T(n)) and for
all sufficiently large x ∉ L, the verifier V^x(|x|, ε) rejects with probability
≥ 1 − s(|x|), after interacting with P*_{|x|}(ε, x).
**3.2** **Multi-prover Interactive Proofs (MIP)**
Let L be a language and let x be an input of length n. In a one-round ℓ-prover
interactive proof, ℓ computationally unbounded provers, P_1, ..., P_ℓ, try to convince a (probabilistic) poly(n)-time verifier, V, that x ∈ L. The input x is known
to all parties.

The proof consists of only one round. Given x and its random string, the
verifier generates ℓ queries, q_1, ..., q_ℓ, one for each prover, and sends them to
the ℓ provers. Each prover responds with an answer that depends only on its own
individual query. That is, the provers respond with answers a_1, ..., a_ℓ, where for
every i we have a_i = P_i(q_i). Finally, the verifier decides whether to accept or
reject based on the answers that it receives (as well as the input x and its
random string).
We say that (V, P1, . . ., Pℓ) is a one-round multi-prover interactive proof system (MIP) for L, with completeness c ∈ [0, 1] and soundness s ∈ [0, 1] (think of s < c) if the following two properties are satisfied:

1. Completeness: For every x ∈ L, the verifier V accepts with probability c, over the random coins of V, P1, . . ., Pℓ, after interacting with P1, . . ., Pℓ, where c is a parameter referred to as the completeness of the proof system.
2. Soundness: For every x ∉ L, and any (computationally unbounded, possibly cheating) provers P1*, . . ., Pℓ*, the verifier V rejects with probability ≥ 1 − s, over the random coins of V, after interacting with P1*, . . ., Pℓ*, where s is a parameter referred to as the error or soundness of the proof system.
Important parameters of an MIP are the number of provers, the length of queries, the length of answers, and the error. We say that the proof-system has _perfect completeness_ if completeness holds with probability 1 (i.e., c = 1).
**No-Signaling MIP.** We will consider a variant of the MIP model, where the
cheating provers are more powerful. In the MIP model, each prover answers
its own query locally, without knowing the queries that were sent to the other
provers. The no-signaling model allows each answer to depend on all the queries,
as long as for any subset S ⊂ [ℓ], and any queries qS for the provers in S, the
distribution of the answers aS, conditioned on the queries qS, is independent of
all the other queries.
Intuitively, this means that the answers aS do not give the provers in S
information about the queries of the provers outside S, except for information
that they already have by seeing the queries qS.
Formally, denote by D the alphabet of the queries and denote by Σ the alphabet of the answers. For every q = (q1, . . ., qℓ) ∈ Dℓ, let Aq be a distribution over Σℓ. We think of Aq as the distribution of the answers for queries q.

We say that the family of distributions {Aq}q∈Dℓ is no-signaling if for every subset S ⊂ [ℓ] and every two sequences of queries q, q′ ∈ Dℓ, such that qS = q′S, the following two random variables are identically distributed:

– aS, where a ∈R Aq
– a′S, where a′ ∈R Aq′

If the two distributions are δ-close, rather than identical, we say that the family of distributions {Aq}q∈Dℓ is δ-no-signaling.
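A concrete example may help: the Popescu–Rohrlich "PR box" — a distribution over answers a1, a2 with a1 ⊕ a2 = q1 ∧ q2 and uniform single-prover marginals — is the standard example of a no-signaling family that no pair of local provers can realize. The Python sketch below (our own illustration, not from the text) checks the no-signaling condition directly by comparing marginals on every subset S:

```python
from itertools import product

def pr_box(q1, q2):
    # Answers (a1, a2) with a1 XOR a2 = q1 AND q2; each consistent pair
    # gets probability 1/2, so every single-prover marginal is uniform.
    return {(a1, a2): 0.5 for a1, a2 in product((0, 1), repeat=2)
            if a1 ^ a2 == (q1 & q2)}

def marginal(dist, coords):
    out = {}
    for ans, p in dist.items():
        key = tuple(ans[i] for i in coords)
        out[key] = out.get(key, 0.0) + p
    return out

def is_no_signaling(family, ell, alphabet):
    # family: query tuple -> {answer tuple: probability}
    for mask in product((0, 1), repeat=ell):          # subset S of [ell]
        coords = [i for i in range(ell) if mask[i]]
        seen = {}
        for q in product(alphabet, repeat=ell):
            qS = tuple(q[i] for i in coords)
            m = marginal(family[q], coords)
            if seen.setdefault(qS, m) != m:           # must agree for all q
                return False
    return True

family = {(q1, q2): pr_box(q1, q2) for q1, q2 in product((0, 1), repeat=2)}
```

Here `is_no_signaling(family, 2, (0, 1))` returns True, while a strategy in which a1 simply copies q2 fails the check — it signals.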
An MIP (V, P1, . . ., Pℓ) for a language L is said to have soundness s against
no-signaling strategies (or provers) if the following (more general) soundness
property is satisfied:
2. Soundness: For every x ∉ L, and any no-signaling family of distributions {Aq}q∈Dℓ, the verifier V rejects with probability ≥ 1 − s, where on queries q = (q1, . . ., qℓ) the answers are given by (a1, . . ., aℓ) ∈R Aq, and s is the soundness parameter.
If the property is satisfied for any δ-no-signaling family of distributions
_{Aq}q∈Dℓ, we say that the MIP has soundness s against δ-no-signaling strategies_
(or provers).
**MIP of Proximity (MIPP).** Let L be a language, let x be an input of length n (which we refer to as the main input) and let ε = ε(n) ∈ (0, 1) be a proximity parameter. In a one-round ℓ-prover interactive proof of proximity, ℓ computationally unbounded provers, P1, . . ., Pℓ, try to convince a (probabilistic) polynomial-time verifier, V, that the input x is ε-close (in relative Hamming distance) to some x′ ∈ L. The provers have free access to n, ε and x. The verifier has free access to n and ε and oracle access to x (and the number of oracle queries is counted).
We say that (V, P1, . . ., Pℓ) is a one-round multi-prover interactive proof system of ε-proximity (ε-MIPP) for L, with completeness c ∈ [0, 1] and soundness s ∈ [0, 1], if the following properties are satisfied:
1. Running Time: The verifier runs in polynomial time, i.e., time polynomial
in the communication complexity and the number of oracle queries.
2. Completeness: For every x ∈ L, the verifier V accepts with probability c, after interacting with P1, . . ., Pℓ.
3. Soundness: For every x that is ε-far from L, and any (computationally unbounded, possibly cheating) provers P1*, . . ., Pℓ*, the verifier V rejects with probability ≥ 1 − s, after interacting with P1*, . . ., Pℓ*.
We denote such a proof system by ε-MIPP (and omit the soundness and completeness parameters from the notation). We say that the proof-system has _perfect completeness_ if completeness holds with probability 1 (i.e., c = 1). The parameters we are mainly interested in are the query complexity and the communication complexity.
**No-Signaling MIPP.** An ε-MIPP (V, P1, . . ., Pℓ) for a language L is said to have soundness s against no-signaling strategies (or provers) if the following (more general) soundness property is satisfied:
2. Soundness: For every x that is ε-far from L, and any no-signaling family of distributions {Aq}q∈Dℓ, the verifier V rejects with probability ≥ 1 − s, where on queries q = (q1, . . ., qℓ) the answers are given by (a1, . . ., aℓ) ∈R Aq, and s is the error parameter.
If the property is satisfied for any δ-no-signaling family of distributions {Aq}q∈Dℓ,
we say that the MIP has soundness s against δ-no-signaling strategies (or provers).
## 4 Lower Bound for No-Signaling MIPP
In this section we prove a lower bound, showing that there does not exist a no-signaling MIPP for all of P with query complexity q and communication complexity c such that q · c = o(n) (where n is the input length). More specifically, for every q we construct a language L in P and prove that if exponentially hard pseudo-random generators exist then for any no-signaling ε-MIPP for L with query complexity q and communication complexity c, it must be the case that q · c = Ω(n). In the full version we show how to extend the result to IPPs and to arguments of proximity.
In what follows we denote by τ the security parameter.
**Definition 1.** _A pseudo-random generator G : {0, 1}^n → {0, 1}^ℓ(n) (with stretch ℓ(n) > n) is said to be exponentially hard if for every circuit family {Aτ}τ of size 2^o(τ),_

|Pr_{s∈R{0,1}^τ}[Aτ(1^τ, G(s)) = 1] − Pr_{y∈R{0,1}^ℓ(τ)}[Aτ(1^τ, y) = 1]| = negl(τ).
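To make the quantity inside the absolute value concrete, the following toy Python computation (our own; the "generator" G(s) = s‖s and the test A are deliberately chosen to be insecure and simple) evaluates a distinguisher's advantage exactly by enumeration for a tiny seed length:

```python
from itertools import product

# Toy illustration of the distinguishing-advantage expression in
# Definition 1. G(s) = s||s is a deliberately weak "generator" and A
# checks whether the two halves of its input agree; both are stand-ins
# so the advantage can be computed exactly for small tau.

def G(s):
    return s + s

def A(y):
    half = len(y) // 2
    return 1 if y[:half] == y[half:] else 0

def advantage(tau):
    seeds = [''.join(bits) for bits in product('01', repeat=tau)]
    strings = [''.join(bits) for bits in product('01', repeat=2 * tau)]
    p_prg = sum(A(G(s)) for s in seeds) / len(seeds)      # = 1 here
    p_unif = sum(A(y) for y in strings) / len(strings)    # = 2**-tau here
    return abs(p_prg - p_unif)
```

For τ = 3 the advantage is 1 − 2⁻³ = 0.875, i.e., this G is as far from hard as possible; an exponentially hard G must make this quantity negligible in τ even against circuits of size 2^o(τ).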
**Theorem 7.** _Assume the existence of exponentially hard pseudo-random generators. There exists a constant ε > 0 such that for every q = q(n) ≤ n, there exists a language L ∈ P such that for every MIPP for testing ε-proximity to L with completeness 2/3, soundness 1/3, query complexity q and communication complexity c, it holds that q · c = Ω(n)._
**Remark 1. The above theorem holds with respect to any constant completeness**
parameter c > 0 and constant soundness parameter s such that s < c, and we
chose c = 2/3 and s = 1/3 only for the sake of concreteness.
**Remark 2.** The assumption in Theorem 7 can be reduced to sub-exponentially hard pseudo-random generators (i.e., it is infeasible for circuits of size 2^(τ^δ) to distinguish the output of the generator from uniform, for some δ > 0), rather than exponential hardness, at the cost of a weaker implication (i.e., q · c = Ω(n^δ)).
**Proof of Theorem 7. We start by defining the notion of average-case no-**
signaling MIP (in the crs model), which is used in the proof of Theorem 7. We
note that this average-case completeness seems too weak for applications and we
define this weak notion only for the sake of the proof of Theorem 7.
**Definition 2.** _An average-case no-signaling MIP in the common random string (crs) model, for a language L, with completeness c and soundness s, consists of (V, P1, . . ., Pℓ, crs), where as before V is the verifier, P1, . . ., Pℓ are the provers, and crs is a common random string of length poly(n), chosen uniformly at random and given to all parties. In particular, V's queries and decision may depend on the crs, and the answers generated by both honest and cheating provers may depend on the crs. The following completeness and soundness conditions are required:_
_– Average-Case Completeness. For all sufficiently large n ∈_ N,
Pr �(V, P1, . . ., Pℓ)(x, crs) = 1� _≥_ _c,_
_where the probability is over uniformly distributed x ∈R L ∩{0, 1}[n], over_
_uniformly generated crs ∈R {0, 1}[poly][(][n][)], and over the random coin tosses of_
_the verifier V ._
_– Soundness Against No-Signaling Provers. For every x ∉ L, and every family of distributions {Aq,crs}q∈Dℓ,crs∈{0,1}^poly(n) such that for every crs ∈ {0, 1}^poly(n) the family of distributions {Aq,crs}q∈Dℓ is no-signaling, the verifier V rejects with probability ≥ 1 − s, where the answers corresponding to (q, crs) are given by (a1, . . ., aℓ) ∈R Aq,crs._
The following proposition, which we use in the proof of Theorem 7, follows
from [DLN+04] (see also [Ito10]).
**Proposition 1.** _Suppose that a language L has an average-case no-signaling MIP in the crs model, communication complexity c = c(n) (where n is the instance length), and with constant completeness and soundness (where the soundness parameter is smaller than the completeness parameter). Then, there exists a randomized algorithm D that runs in time poly(n, 2^c) such that:_

_– For every n ∈ N,_

Pr_{x∈R L∩{0,1}^n}[D(x) = 1] ≥ 2/3,

_where the probability is also over the coin tosses of D._

_– For every x ∉ L it holds that_

Pr[D(x) = 1] ≤ 1/3,

_where the probability is over the coin tosses of D._
We note that [DLN+04, Ito10] did not consider the crs model nor average-case
completeness, but the claim extends readily to this setting as well.
We are now ready to prove Theorem 7.
_Proof of Theorem 7._ Assume that there exists a pseudo-random generator (PRG), denoted by G : {0, 1}^τ → {0, 1}^2τ, that is exponentially secure. Namely, every adversary of size 2^o(τ) cannot distinguish between uniformly distributed r ∈R {0, 1}^2τ and G(s) for uniformly distributed s ∈R {0, 1}^τ, with non-negligible advantage. For sake of simplicity, we assume that G is injective[6].
Let ε > 0 be a constant for which there exists a (good) error-correcting-code,
denoted by ECC, with constant rate and efficient encoding that is resilient to
(2ε)-fraction of adversarially chosen errors.
Fix any query complexity q = o(n).[7] We show that there exists a language L ∈ P such that for every no-signaling ε-MIPP for L with query complexity q and communication complexity c (and completeness 2/3 and soundness 1/3) it must be the case that q · c = Ω(n).
Consider the following language:

L = {(ECC(r1, . . ., rm), s1, . . ., sm) : ∀i ∈ [m], G(si) = ri},

where m = 4q and τ = |si| = Θ(n/q), where n = |(ECC(r1, . . ., rm), s1, . . ., sm)|. The fact that |si| = Θ(n/q) follows from the fact that ECC has constant rate (i.e., |ECC(z)| = O(|z|)).
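Membership in L is decidable in polynomial time: decode the codeword, recompute G on each seed, and compare. The sketch below uses toy stand-ins — G(s) = s‖s for the generator and a 3-fold repetition code for ECC (constant rate, trivially decodable) — chosen purely for illustration; the actual construction needs a cryptographic G and a good code such as [Spi96]:

```python
# Polynomial-time decider for the language L defined above, with toy
# stand-ins (assumptions, for illustration only): G(s) = s||s plays the
# PRG and a 3-fold repetition code plays the constant-rate ECC.

REP = 3

def G(s):
    return s + s

def ecc_encode(z):
    return z * REP

def ecc_decode(w):
    # Majority vote per position across the REP copies.
    n = len(w) // REP
    copies = [w[i * n:(i + 1) * n] for i in range(REP)]
    return ''.join(max('01', key=lambda b: sum(c[j] == b for c in copies))
                   for j in range(n))

def in_L(codeword, seeds):
    rs = ecc_decode(codeword)
    if ecc_encode(rs) != codeword:          # must be an exact codeword
        return False
    m, block = len(seeds), len(rs) // len(seeds)
    return all(G(seeds[i]) == rs[i * block:(i + 1) * block]
               for i in range(m))
```

For instance, with seeds ('01', '10') the string ecc_encode(G('01') + G('10')) is in the toy L, while changing any seed breaks membership.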
The fact that ECC is efficiently decodable and G is efficiently computable implies that L ∈ P. Suppose for contradiction that there exists a no-signaling ε-MIPP for L, denoted by (V, P1, . . ., Pℓ), with communication complexity c such that c = o(n/q).
Consider the following NP language
_LG = {r : ∃s s.t. G(s) = r}._
Proposition 1, together with the fact that G is exponentially secure, implies
that LG does not have an average-case MIP in the crs model with soundness
against no-signaling strategies, with communication complexity o(τ ) for instances
of length τ .
We obtain a contradiction by constructing an average-case MIP in the crs model with soundness against no-signaling strategies, with communication complexity o(τ). To this end, consider the following MIP in the crs model for LG, denoted by (V′, P′1, . . ., P′ℓ, crs).
6 We note that this assumption can be easily removed by replacing the use of the uniform distribution over the language L′ (defined below) with the distribution G(s) for s ∈R {0, 1}^τ.
7 Note that for q = Ω(n) the theorem is trivially true.
– The crs consists of m uniformly distributed seeds s1, . . ., sm ∈R {0, 1}^τ, and a random coordinate i ∈R [m].
– The verifier V′, on input r ∈ {0, 1}^2τ, does the following:
  1. Let ri = r, and for every j ∈ [m] \ {i}, let rj = G(sj).
  2. Emulate V with oracle access to (ECC(r1, . . ., rm), s1, . . ., sm).
  (Note that with overwhelming probability r ≠ G(si), and thus ri ≠ G(si). However V will not notice this unless it queries coordinates that belong to si.)
– The provers P′1, . . ., P′ℓ emulate P1, . . ., Pℓ on input (ECC(r1, . . ., rm), s1, . . ., sm), while setting ri = r and setting si = s where r = G(s) (assuming that such s exists).[8] If such s does not exist then the provers P′1, . . ., P′ℓ send a reject message, and abort.
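Item 1 of the verifier's procedure — filling the m blocks from the crs seeds and overwriting block i with the challenge r — can be sketched in a few lines, using a toy length-doubling stand-in for G (an assumption for illustration only):

```python
def G(s):
    # Toy length-doubling stand-in for the PRG (illustration only).
    return s + s

def build_blocks(r, seeds, i):
    # r_j = G(s_j) for every j != i, and r_i = r (the challenge).
    rs = [G(s) for s in seeds]
    rs[i] = r
    return ''.join(rs)
```

V′ would then encode the returned string with ECC, append the seeds s1, . . ., sm, and emulate V with oracle access to the result.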
Note that the communication complexity of (V′, P′1, . . ., P′ℓ, crs) is equal to the communication complexity of (V, P1, . . ., Pℓ, crs), denoted by c. By our assumption, c = o(n/q) = o(τ), as desired.
**Average-Case Completeness.** We need to prove that Pr[(V′, P′1, . . ., P′ℓ)(r, crs) = 1] ≥ 1/2, where the probability is over uniformly distributed r ∈R (LG)τ, over uniformly generated crs = (s1, . . ., sm, i) where each sj ∈R {0, 1}^τ, i ∈R [m], and over the random coin tosses of the verifier V.
Let GOOD denote the event that V′ does not query any of the coordinates that belong to si, where i ∈ [m] is the random coordinate chosen by V′. Notice that for every r ∈ LG,

Pr[(V′, P′1, . . ., P′ℓ)(r, crs) = 1 | GOOD] = Pr[(V, P1, . . ., Pℓ)(ECC(r1, . . ., rm), s1, . . ., sm) = 1 | si is not queried] ≥ 2/3,

where the probabilities are over a uniformly distributed crs and the random coin tosses of V′ and V, and where in the second equation ri = r and si = s, where r = G(s). Recall that the fact that r ∈ LG implies that such s exists.
The fact that

Pr[(V′, P′1, . . ., P′ℓ)(r, crs) = 1] ≥ Pr[(V′, P′1, . . ., P′ℓ)(r, crs) = 1 | GOOD] · Pr[GOOD]

implies that it suffices to prove that Pr[GOOD] ≥ 3/4, where the probability is over uniformly distributed r ∈R LG, uniformly distributed crs, and over the random coin tosses of V′.
Note that r1, . . ., rm are all distributed identically to r, and thus V, P1, . . ., Pℓ,
which all receive as input (ECC(r1, . . ., rm), s1, . . ., sm), where ri = r, do not have
any advantage in guessing i (here we crucially use the fact that the MIPP provers are
not given access to the crs). Therefore, since V makes at most q queries,
8 This step can be done by a brute force search (since the honest provers are also
computationally unbounded). Nevertheless, we note that typically in proof-systems
for languages in NP the prover is given the NP witness and so this step can also be
done efficiently.
and since m = 4q, it follows from the union bound that V queries any location of si with probability at most q/m = 1/4. Hence, Pr[GOOD] ≥ 3/4 and (average-case) completeness follows.
**Soundness Against No-Signaling Strategies.** We prove that for every r ∉ LG, every crs = (s1, . . ., sm, i), and every no-signaling cheating strategy P^NS = (P1*, . . ., Pℓ*), it holds that Pr[(V′, P^NS)(r, crs) = 1] ≤ 1/3, where the probability is over the random coin tosses of V′ and P^NS.

To this end, fix any r ∉ LG and any crs = (s1, . . ., sm, i) where each sj ∈ {0, 1}^τ and i ∈ [m]. Suppose for the sake of contradiction that there exists a no-signaling cheating strategy P^NS = (P1*, . . ., Pℓ*) such that Pr[(V′, P^NS)(r, crs) = 1] > 1/3, where the probability is over the random coin tosses of V′ and P^NS.
Recall that V′ runs V on input (ECC(r1, . . ., rm), s1, . . ., sm), where ri = r and where rj = G(sj) for every j ∈ [m] \ {i}. We prove that there exists a no-signaling cheating strategy, denoted by P̂^NS, such that

Pr[(V, P̂^NS)(ECC(r1, . . ., rm), s1, . . ., sm) = 1] > 1/3,   (1)

where the probability is over the random coin tosses of V and P̂^NS.

The cheating strategy P̂^NS simply emulates P^NS. Namely, P̂^NS, upon receiving queries (q1, . . ., qℓ), will emulate P^NS(r, crs) upon receiving (q1, . . ., qℓ), where r = ri and crs = (s1, . . ., sm, i). Note that P̂^NS simulates P^NS perfectly, and therefore indeed Equation (1) holds. Also note that the fact that P^NS is a no-signaling strategy immediately implies that P̂^NS is also a no-signaling strategy.
To get a contradiction, it thus remains to show that (ECC(r1, . . ., rm), s1, . . ., sm) is ε-far from L. Indeed, the fact that ECC is an error correcting code resilient to 2ε-fraction of adversarial errors, together with the fact that r ∉ LG, implies that (ECC(r1, . . ., rm), s1, . . ., sm) is ε-far from L, as desired. ⊓⊔
**Acknowledgments.** We thank Guy Rothblum for pointing out to us the question about arguments of proximity for P, the question that initiated this work. The second author was supported by the Israel Science Foundation (grant No. 671/13).
## References
[ABOR00] Aiello, W., Bhatt, S., Ostrovsky, R., Rajagopalan, S.R.: Fast verification
of any remote procedure call: short witness-indistinguishable one-round
proofs for NP. In: Welzl, E., Montanari, U., Rolim, J.D.P. (eds.) ICALP
2000. LNCS, vol. 1853, pp. 463–474. Springer, Heidelberg (2000)
[AIK10] Applebaum, B., Ishai, Y., Kushilevitz, E.: From secrecy to soundness:
efficient verification via secure computation. In: Abramsky, S., Gavoille,
C., Kirchner, C., Meyer auf der Heide, F., Spirakis, P.G. (eds.) ICALP
2010. LNCS, vol. 6198, pp. 152–163. Springer, Heidelberg (2010)
[BCCT12a] Bitansky, N., Canetti, R., Chiesa, A., Tromer, E.: From extractable collision resistance to succinct non-interactive arguments of knowledge, and
back again. In: ITCS, pp. 326–349 (2012)
[BCCT12b] Bitansky, N., Canetti, R., Chiesa, A., Tromer, E.: Recursive composition
and bootstrapping for snarks and proof-carrying data. IACR Cryptology
ePrint Archive 2012:95 (2012)
[BSGH+06] Ben-Sasson, E., Goldreich, O., Harsha, P., Sudan, M., Vadhan, S.P.:
Robust PCPs of proximity, shorter PCPs, and applications to coding.
SIAM J. Comput. 36(4), 889–974 (2006)
[CKLR11] Chung, K.-M., Kalai, Y.T., Liu, F.-H., Raz, R.: Memory delegation. In:
Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 151–168. Springer,
Heidelberg (2011)
[CKV10] Chung, K.-M., Kalai, Y., Vadhan, S.: Improved delegation of computation
using fully homomorphic encryption. In: Rabin, T. (ed.) CRYPTO 2010.
LNCS, vol. 6223, pp. 483–501. Springer, Heidelberg (2010)
[DFH12] Damgård, I., Faust, S., Hazay, C.: Secure two-party computation with
low communication. In: Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194,
pp. 54–74. Springer, Heidelberg (2012)
[DLN+04] Dwork, C., Langberg, M., Naor, M., Nissim, K., Reingold, O.: Succinct
proofs for NP and spooky interactions. Unpublished manuscript (2004).
http://www.cs.bgu.ac.il/kobbi/papers/spooky_sub_crypto.pdf
[DR06] Dinur, I., Reingold, O.: Assignment testers: Towards a combinatorial proof
of the PCP theorem. SIAM J. Comput. 36(4), 975–1024 (2006)
[EKR04] Ergün, F., Kumar, R., Rubinfeld, R.: Fast approximate probabilistically checkable proofs. Inf. Comput. 189(2), 135–159 (2004)
[FGL14] Fischer, E., Goldhirsh, Y., Lachish, O.: Partial tests, universal tests and
decomposability. In: ITCS, pp. 483–500 (2014)
[GGP10] Gennaro, R., Gentry, C., Parno, B.: Non-interactive verifiable computing: outsourcing computation to untrusted workers. In: Rabin, T. (ed.)
CRYPTO 2010. LNCS, vol. 6223, pp. 465–482. Springer, Heidelberg
(2010)
[GGPR12] Gennaro, R., Gentry, C., Parno, B., Raykova, M.: Quadratic span programs and succinct NIZKs without PCPs. IACR Cryptology ePrint
Archive 2012:215 (2012)
[GGR98] Goldreich, O., Goldwasser, S., Ron, D.: Property testing and its connection to learning and approximation. J. ACM (JACM) 45(4), 653–750
(1998)
[GKR08] Goldwasser, S., Kalai, Y.T., Rothblum, G.N.: Delegating computation:
interactive proofs for muggles. In: STOC, pp. 113–122 (2008)
[GLR11] Goldwasser, S., Lin, H., Rubinstein, A.: Delegation of computation without rejection problem from designated verifier cs-proofs. IACR Cryptology
ePrint Archive 2011:456 (2011)
[GR13] Gur, T., Rothblum, R.D.: Non-interactive proofs of proximity. Electronic
Colloquium on Computational Complexity (ECCC), 20:78 (2013)
[Gro10] Groth, J.: Short pairing-based non-interactive zero-knowledge arguments.
In: Abe, M. (ed.) ASIACRYPT 2010. LNCS, vol. 6477, pp. 321–340.
Springer, Heidelberg (2010)
[GW11] Gentry, C., Wichs, D.: Separating succinct non-interactive arguments
from all falsifiable assumptions. In: STOC, pp. 99–108 (2011)
[Hol09] Holenstein, T.: Parallel repetition: simplification and the no-signaling
case. Theory Comput. 5(1), 141–172 (2009)
[Ito10] Ito, T.: Polynomial-space approximation of no-signaling provers. In:
Abramsky, S., Gavoille, C., Kirchner, C., Meyer auf der Heide, F.,
Spirakis, P.G. (eds.) ICALP 2010. LNCS, vol. 6198, pp. 140–151. Springer,
Heidelberg (2010)
[Kil92] Kilian, J.: A note on efficient zero-knowledge proofs and arguments
(extended abstract). In: STOC, pp. 723–732 (1992)
[KRR13a] Kalai, Y.T., Raz, R., Rothblum, R.D.: Delegation for bounded space. In:
STOC, pp. 565–574 (2013)
[KRR13b] Kalai, Y.T., Raz, R., Rothblum, R.D.: How to delegate computations: The
power of no-signaling proofs. Electronic Colloquium on Computational
Complexity (ECCC), 20:183 (2013)
[Lip12] Lipmaa, H.: Progression-free sets and sublinear pairing-based non-interactive zero-knowledge arguments. In: Cramer, R. (ed.) TCC 2012.
LNCS, vol. 7194, pp. 169–189. Springer, Heidelberg (2012)
[Mic94] Micali, S.: CS proofs (extended abstracts). In: FOCS, pp. 436–453 (1994)
[Nao03] Naor, M.: On cryptographic assumptions and challenges. In: Boneh, D.
(ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 96–109. Springer, Heidelberg
(2003)
[PRV12] Parno, B., Raykova, M., Vaikuntanathan, V.: How to delegate and verify in public: verifiable computation from attribute-based encryption. In:
Cramer, R. (ed.) TCC 2012. LNCS, vol. 7194, pp. 422–439. Springer,
Heidelberg (2012)
[RVW13] Rothblum, G.N., Vadhan, S.P., Wigderson, A.: Interactive proofs of proximity: delegating computation in sublinear time. In: STOC, pp. 793–802
(2013)
[Spi96] Spielman, D.A.: Linear-time encodable and decodable error-correcting
codes. IEEE Trans. Inf. Theory 42(6), 1723–1731 (1996)
[Tha13] Thaler, J.: Time-optimal interactive proofs for circuit evaluation. In:
Canetti, R., Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043,
pp. 71–89. Springer, Heidelberg (2013)
[VSBW13] Vu, V., Setty, S.T.V., Blumberg, A.J., Walfish, M.: A hybrid architecture
for interactive verifiable computation. In: IEEE Symposium on Security
and Privacy, pp. 223–237 (2013)
# applied sciences
_Review_
## Blockchain and IoT Convergence—A Systematic Survey on Technologies, Protocols and Security
**Alessandra Pieroni** **[1]** **, Noemi Scarpato** **[2]** **and Lorenzo Felli** **[1,3,]***
1 Department of Innovation and Information Engineering, Guglielmo Marconi University, Via Plinio 44,
00193 Rome, Italy; a.pieroni@unimarconi.it
2 Department of Human sciences and Promotion of the quality of life, San Raffaele Roma Open University,
Via Val Cannuta 247, 00166 Rome, Italy; noemi.scarpato@uniroma5.it
3 I.T. Department, ISPRA—The Italian Institute for Environmental Protection and Research,
Via Vitaliano Brancati 48, 00144 Rome, Italy
***** Correspondence: lorenzo.felli@isprambiente.it
Received: 13 August 2020; Accepted: 14 September 2020; Published: 26 September 2020
**Abstract: The Internet of Things (IoT) as a concept is fascinating and exciting, with an exponential**
growth just beginning. The IoT global market is expected to grow from 170 billion USD in 2017 to
560 billion USD by 2022. Though many experts have pegged IoT as the next industrial revolution,
two of the major challenging aspects of IoT since the early days are having a secure privacy-safe
ecosystem encompassing all building blocks of IoT architecture and solving the scalability problem as
the number of devices increases. In recent years, Distributed Ledgers have often been referred
to as the solution for both privacy and security problems. One form of distributed ledger is
the Blockchain system. The aim of this paper consists of reviewing the most recent Blockchain
architectures, comparing the most interesting and popular consensus algorithms, and evaluating the
convergence between Blockchain and IoT by illustrating some of the main interesting projects in this
research field. Furthermore, the paper provides a vision of a disruptive research topic that the authors
are investigating: the use of AI algorithms to be applied to IoT devices belonging to a Blockchain
architecture. This obviously requires that the devices be provided with adequate computational capacity and that they can efficiently optimize their energy consumption.
**Keywords: IoT; blockchain; distributed ledgers; privacy; scalability**
**1. Introduction**
Exploring and understanding the different components of an Internet of Things (IoT) architecture,
detecting vulnerability areas in each component, and exploring the appropriate technologies to detect
any weaknesses are essential to address the IoT security [1] and privacy issues. In October 2016 a DNS
provider called Dyn Inc. suffered a DDoS cyberattack [2] that originated from tens of millions of IP
addresses. One of the sources of the attack was devices like printers, DVRs and other appliances that
are connected to the Internet, known as the “Internet of Things”. A Malware called Mirai infected
these devices and launched the distributed denial-of-service (DDoS) attacks. The number of attacks
involving IoT devices during 2018 increased, with 32.7 million IoT incidents reported last year [3].
The main drawback in this scenario was their reliance on a centralized cloud infrastructure and the
lack of safety protocols [4]. A decentralized approach, based on tamper-proof digital ledger exchange
of data, would overcome many of the problems associated with the centralized cloud approach.
A Blockchain allows users to sign, secure, and verify every transaction. It is highly challenging to edit
or remove blocks of data that are saved on the ledger [5]. A large number of Blockchain architectures
have been released but all follow the same basic rules:
• Use of encryption to sign transactions between the parties.
_Appl. Sci. 2020, 10, 6749_ 2 of 23
• Transactions are stored on a distributed ledger over a peer-to-peer network.
• Reaching consensus using a decentralized approach.
The ledger is made up of sequentially linked blocks of transactions, cryptographically signed,
which form a Blockchain. The aim of this paper consists of analyzing the main concepts of IoT
and Blockchain technologies and evaluating in detail the synergies and the connections between
the two architectures, highlighting the most appropriate research articles that enable the study, comparison and classification of the most interesting IoT–Blockchain projects already deployed or in the development stage.
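As a minimal illustration of the "cryptographically signed, sequentially linked blocks" just described, the Python sketch below hash-links blocks with SHA-256. HMAC stands in for the public-key signatures (e.g., ECDSA) used by real platforms, and all field names are our own illustrative choices, not any specific Blockchain's wire format:

```python
import hashlib
import hmac
import json

# Minimal hash-linked ledger sketch (illustrative assumptions only).

def sign(key, tx):
    payload = json.dumps(tx, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def make_block(prev_hash, txs):
    body = json.dumps({"prev": prev_hash, "txs": txs}, sort_keys=True)
    return {"prev": prev_hash, "txs": txs,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def verify_chain(chain):
    for i, blk in enumerate(chain):
        body = json.dumps({"prev": blk["prev"], "txs": blk["txs"]},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != blk["hash"]:
            return False                  # block contents were edited
        if i > 0 and blk["prev"] != chain[i - 1]["hash"]:
            return False                  # the chain of links is broken
    return True
```

Because each block's hash covers the previous block's hash, editing any stored transaction invalidates every later block, which is the property that makes it highly challenging to edit or remove blocks saved on the ledger.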
**2. Key Contribution**
Creating a synergy between Blockchain architectures and the IoT world would allow for much
safer devices that are now in common use, paving the way for new possibilities in a variety of
application areas. Think of the healthcare world [6], where many IoT devices already collect sensitive
information and the discovery of vulnerabilities is a daily problem, with all the privacy issues that can
result. In recent years, many studies have been done on the integration between IoT and Blockchain,
for example thinking of Blockchain as a component serving the IoT device [7]. A number of papers
have focused on how to solve the safety aspects of IoT devices [8–11]. To improve the reading
and understanding of the projects submitted, this article addresses the main aspects of Blockchain
technology, such as encryption and consensus algorithms, addresses the main security issues of the
various types of IoT devices and identifies where the Blockchain can be used as a solution. Section 8
presents the main use cases related to Blockchain & IoT. Furthermore, the authors intend to introduce
at the end of this paper their future research interest, which consists of using AI techniques to enhance the capability of IoT devices in Blockchain architectures in order to cope with complex big data management systems and predictive models in several research fields, e.g., e-health and mobility [12–15].
**3. Workflow**
To carry out this review, we have analyzed and presented the Blockchain–IoT projects that
represent the most innovative solutions that can currently be tested on the world scene, choosing them
mainly according to the following criteria:
• Number of citations of whitepapers and related articles on Google Scholar, Microsoft Academic and Semantic Scholar.
• Novelty of the proposed solution.
• Market capitalization, if any.
The strategy used to search for articles within the above mentioned search engines began by
using the keywords present in this article in conjunction with specific terms such as “Consensus”,
“Convergence”, “RFID”, “RSA”, “Weakness” mainly using the Boolean operator “AND”. If source
code exists, this has been installed and tested. Only English language articles have been considered.
**4. Internet of Things**
The term Internet of Things (IoT) refers to the concept of extending Internet connectivity
beyond conventional platforms to daily elements to take advantage of the immediate connection,
communication, storage and processing of the information gathered from the surrounding environment.
Embedded with Internet connectivity and other forms of hardware sensors, these devices are present in
today’s smart homes and will be the cornerstones for future smart cities. Coined by Kevin Ashton in 1999 while he was working on RFID technology at MIT (Massachusetts Institute of Technology), the IoT concept underlies a new, augmented way for both human beings and machines to interact with daily tasks and activities.
_Appl. Sci. 2020, 10, 6749_ 3 of 23
_4.1. IoT Weaknesses_
4.1.1. Cloud Infrastructure
To date, the technology behind IoT systems, simple by its very nature, has led to complex protocols with conflicting configurations. Almost all of today’s Internet of Things ecosystems are based on centralized systems. The centralized clouds and network equipment involved in these architectures become increasingly expensive as the number of devices grows. In this centralized model, IoT devices are authenticated, identified and communicate their data in real-time or semi-real-time mode with the cloud. In a highly connected smart-city scenario, where private homes, offices, streets with their traffic lights, transportation and pedestrians produce a mass of data every second, the cloud infrastructure needs to scale, leading to a price increase. The smarter the environments become, the higher the cost of this type of infrastructure will be. IoT devices are subject to various types of vulnerabilities and, often, if a device connected to the cloud infrastructure is breached, the whole infrastructure is at risk.
Below is a list of the main security issues affecting the cloud infrastructure related to IoT devices:
1. Wrapping attack: this attack duplicates the user credentials during the login process; the SOAP messages exchanged during the connection setup between the web browser and the server are modified by the attackers [16].
2. Eavesdropping: under this term fall the techniques used to intercept communications that occur within a channel established between two authorized users [16].
3. Flooding attack/DoS attack: the goal of a DoS attack is to consume all the available resources of a server to make the system unresponsive to legitimate traffic [16].
4. Data stealing: this type of attack compromises the data and security of cloud systems by stealing system access credentials.
5. Man-in-the-Middle attack (MITM): in this case the attacker gains access to the communication channel between two legitimate users, being able to both intercept and modify the information without making anyone aware of it [16].
6. Reflection attack: this type of attack is perpetrated against challenge–response systems that use the same communication protocol in both directions. The idea behind this type of attack is to trick the victim into providing the answer (response) to its own challenge [16].
7. Replay attack: the replay attack is a form of cyberattack that targets computer networks in order to capture an authentication credential communicated from one host to another and then resend it while impersonating the original issuer. Usually the action is carried out by an attacker who interposes himself between the two communicating sides or operates from a spoofed IP [16].
8. Brute force/Dictionary attack: a series of attempts is made to guess the credentials of a given system, based on information generated through specific dictionaries or by specific rules.
4.1.2. Sensors and Devices: Perception Layer
The perception layer collects information/data from the physical environment, such as temperature, speed, time and humidity. It is essentially a collection of sensors and actuators which form a Wireless Sensor Network (WSN). At this layer many of the cloud attacks are present, but adapted to the specific protocols used at this level (RFID, WSN, NFC, Bluetooth, etc.). Other common vulnerabilities concern identity and password theft. Attackers adopt several mechanisms to find passwords, using known dictionaries and vulnerabilities in password creation programs.
Common password hacks:

- Brute force/Dictionary attack: this attack is launched by guessing passwords containing all possible combinations of letters, numbers and alphanumeric characters, or by using a precomputed dictionary of passwords [16].
- Social hacking: this attack exploits human weaknesses. Instead of using a generic dictionary, private information about the target person is collected and a tailored dictionary is created.

RFID common vulnerabilities [17,18]:
- DoS attack: a denial-of-service attack is accomplished by flooding the targeted machine with superfluous requests from a fake RFID device.
- Eavesdropping: an antenna is used to record communications between legitimate RFID tags and readers. Eavesdropping attacks can be carried out in both directions: tag to reader and reader to tag.
- Skimming: in this case, the attacker observes the information exchanged between a legitimate tag and a legitimate reader. Through the extracted data, the attacker attempts to make a cloned tag which imitates the original RFID tag. This is very dangerous in the case of RFID chips in credit cards, passports and other personal RFID hardware.
- Replay attack: similar to a MITM attack; in replay attacks, the communicating signal between the tag and the reader is intercepted, recorded and replayed.
NFC common vulnerabilities [19,20]:

- Phishing attack: phishing is a type of scam carried out on the Internet by a malicious attacker trying to deceive the victim into providing personal information, financial data or access codes by pretending to be a reliable entity in a digital communication.
- User tracking: if tags use the same unique ID for the anti-collision technique, an attacker can easily track them, compromising the secrecy of the entire NFC system.
- Relay attack: this concerns the intrusion into a communication between a device and an NFC tag, with the ability to read data coming from the source device and send it back to the destination device.
- Eavesdropping: the attack is perpetrated via an antenna used to record communications between NFC devices. Although NFC communication takes place between very close devices, this type of attack is feasible. The purpose of the attack can be two-fold: theft of information, or corrupting the exchanged information to make it useless.
- Spoofing: some mobile devices are configured to automatically run the commands received from NFC tags. In a spoofing attack, a third party pretends to be another entity to induce a user to touch his device against a tag programmed specifically to execute malicious code.
**5. Distributed Ledgers**
Blockchain technology was introduced in 2008 by a person or group under the name of Satoshi Nakamoto in the document ‘Bitcoin: A Peer-to-Peer Electronic Cash System’ [21]; the code of its reference implementation was published a year later, in 2009. The Blockchain is essentially a distributed and transactional database shared by the various nodes of the network. The validity and integrity of the data is maintained by chaining the transactions contained in the blocks using hash functions that prevent them from being modified without consent. Bitcoin uses the public key infrastructure (PKI) mechanism [22]. In PKI, each user has a pair formed by a public and a private key. The public key is used as the address of the user’s wallet, while the private key is used to sign transactions. A block is accepted by the network on average every 10 min through a consensus mechanism. The new chain with the new block on top then spreads quickly to all the nodes of the network.
Inside each node there is a key-value database in which the blocks containing the transactions
that have reached consensus will be written. Each node validates the new blocks. Although the search
for the hash that satisfies the consensus called Proof of Work takes on average 10 min regardless
of the network’s computational capacity, checking the correctness of these hashes is extremely fast.
This method creates a linear chain of blocks on which all nodes agree (Figure 1). This chain of blocks is
the public ledger technique of Bitcoin, called Blockchain.
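The hash-chaining of blocks described above can be illustrated with a short Python sketch (a toy model for illustration only, not Bitcoin's actual block format):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 over the block's canonical JSON serialization."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> dict:
    """Append a block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    block = {"prev_hash": prev, "transactions": transactions}
    chain.append(block)
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every link; tampering with history breaks a prev_hash pointer."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
append_block(chain, ["alice pays bob 1"])
append_block(chain, ["bob pays carol 1"])
assert verify_chain(chain)

chain[0]["transactions"] = ["alice pays mallory 1"]  # rewrite history...
assert not verify_chain(chain)                       # ...and the chain breaks
```

Because each block embeds the hash of its predecessor, changing any past transaction invalidates every subsequent link, which is what makes unconsented modification detectable.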
**Figure 1. Blockchain structure. (Zheng et al. 2016 [23]).**
_5.1. Blockchain vs. Classical Vulnerability_
A decentralized Blockchain approach to the Internet of Things makes many of the classic attacks infeasible. Adopting a secure, tamper-evident peer-to-peer communication model to process
billions of transactions between IoT devices can also significantly reduce the costs associated with
installing and maintaining various network and cloud systems and distribute computing and storage
needs across the billions of devices that form the Internet of Things networks. This will also prevent the
“single point of failure” vulnerability, where the failure of a single node in a network can lead the entire
network to a halting collapse. In Blockchain, message exchanges between devices can be treated in the
same ways as financial transactions in a Bitcoin network. Devices rely on cryptographically signed
transactions and digital smart contracts thus guaranteeing a level of security that was previously
unobtainable. The fact that Blockchain cryptographically verifies the transactions eliminates the
possibility of man-in-the-middle attack, replay and all other classical “device-to-cloud” attacks. Some of
these attacks, however, have been borrowed and adapted against the new Blockchain architecture:
- Eavesdropping: using different nodes listening in the p2p network it is possible to deanonymize many Blockchains, revealing the IP addresses of specific wallet address owners [24]. To address this issue, Blockchain architectures have been created that allow for a higher level of anonymity [25].
- Replay attack: in case of a fork in a Blockchain, an attacker can take a transaction signed on the first Blockchain and replicate it as-is on the second, since the private keys on both chains are identical. Replay protection is fairly trivial to implement and has become a de-facto prerequisite for any forked chain. For example, Bitcoin Cash created replay protection for its chain by implementing a unique marker that allows Bitcoin Cash nodes to distinguish transactions spent on the legacy Bitcoin chain as independent from the Bitcoin Cash chain.
- Sybil attack: to create many connections in a Blockchain it is necessary to start many nodes at the same time. An attack of this kind can be used to capture users’ IPs or study the topology of the network, making it very similar in nature to eavesdropping. The mining process is protected by consensus-building algorithms that are specifically designed to avoid this type of attack.
- MITM (Man-In-The-Middle) attack: there is no way to listen to a transaction and steal data, because of the encryption systems used to sign communications within Blockchain networks. It is possible, however, if we widen the definition of a MITM attack to embrace not only the network but also the software that is part of it, to identify some vulnerabilities, such as editing the destination wallet in the transactions sent by a hardware ledger wallet, or the theft of funds from wallets generated by weak private keys, where an attacker who possesses the private key waits “in the middle” for a transaction in order to steal the funds in the very next block.
- Brute force/Dictionary attack: there is no way to brute-force a correctly generated private key. The problems arise from the unsafe generation of the private key. Unfortunately, some wallets generate private keys directly from user-defined passwords without using any random numbers. This exposes users to this type of attack.
- Phishing attack: no architecture is free from this type of attack, because it does not depend on the intrinsic security of the software or hardware used, but exclusively on the human ability to understand and avoid being cheated.
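The fork replay protection mentioned above amounts to binding every signature to a chain identifier, so a transaction signed for one chain fails verification on its fork (Ethereum later standardized this approach as EIP-155). A minimal sketch, with an HMAC standing in for a real ECDSA transaction signature; the function names are illustrative:

```python
import hashlib
import hmac

def sign_tx(private_key: bytes, chain_id: int, tx: str) -> str:
    """Bind the signature to one chain by including its ID in the
    signed message (HMAC stands in for a real transaction signature)."""
    message = f"{chain_id}:{tx}".encode()
    return hmac.new(private_key, message, hashlib.sha256).hexdigest()

def verify_tx(private_key: bytes, chain_id: int, tx: str, sig: str) -> bool:
    return hmac.compare_digest(sign_tx(private_key, chain_id, tx), sig)

key = b"same key exists on both forks"
sig_on_chain_1 = sign_tx(key, 1, "pay 5 to bob")

# Valid on the chain it was signed for...
assert verify_tx(key, 1, "pay 5 to bob", sig_on_chain_1)
# ...but replaying it on a fork with a different chain ID fails.
assert not verify_tx(key, 2, "pay 5 to bob", sig_on_chain_1)
```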
Table 1 shows a comparison overview of vulnerabilities by type of architecture.
**Table 1. Vulnerability comparison.**

| Attack | Cloud | RFID | NFC | Blockchain |
|---|---|---|---|---|
| Wrapping | X | | | |
| Eavesdropping | X | X | X | |
| Flooding | X | | | |
| Stealing account | X | | | |
| MITM | X | | X | X |
| Browser | X | | | |
| Reflection | X | | | |
| Session hijacking | X | | | |
| Replay | X | X | | X |
| Brute force | X | | | |
| DoS/DDoS | X | X | X | |
| Skimming | | X | | |
| Phishing | X | | X | X |
| User tracking | | | X | |
| Spoofing | X | | X | |
_5.2. Consensus_
Any node in the network can add information to its chain and share it in the network. It is,
therefore, necessary that the other nodes in the network have a foolproof method to agree on the
correctness of the information before any changes are added to the chain. Trust is a crucial aspect of this
process, as no one can be sure of the reliability of the node that is adding information. It is critical that
all new information must be reviewed and confirmed before it is accepted. In other words, to achieve
this goal, a distributed review system called “consensus” is needed that definitively solves the problem
of trust between nodes. In computing, a consensus algorithm is a process used to reach agreement on
a single data value between distributed processes or systems. Consensus algorithms are designed to
achieve reliability in a network that involves multiple potentially unreliable nodes. For a centralized network it is crucial that each participant reads the same information. For a decentralized system, on the other hand, it means ensuring that a sufficiently large number of nodes in the network agree on what the transaction history is and on how to validate a transaction. This is what establishes the peculiar
version of the truth in the Blockchain environment. A multitude of consensus-building algorithms
have been developed over the years, and not all of them are suitable for use in an IoT architecture.
Table 2 shows a comparison between the most used consensus algorithms in terms of type, scalability, finality and speed.
**Table 2. General comparison between consensus mechanisms.**

| | PoW | PoS | DPoS | pBFT |
|---|---|---|---|---|
| Type | Permissionless | Both | Both | Permissioned |
| Scalability | Medium | High | High | Low |
| Finality | Probabilistic | Probabilistic | Probabilistic | Deterministic |
| Transactions/s | Low | High | High | High |
5.2.1. PoW

The first algorithm ever used to reach a global consensus was Proof of Work (PoW) [26]. PoW has long been the most widely used method to achieve consensus within Blockchain architectures [27]. By requiring a search for hashes whose difficulty depends on the computational power of the entire network, PoW makes it difficult to build a valid block and connect it to a Blockchain. Modifying a block requires the revalidation of the block itself, plus the revalidation of all subsequent blocks. The older the block to modify, the greater the number of validations needed. If you consider that only a few miners or groups of miners can generate new blocks every 10 min, even the change
of the newly validated block is extremely expensive. This difficulty protects the chain of blocks
from tampering; the longer the chain of blocks, the more difficult it is to reverse any previously
recorded transaction. To manipulate the block chain, an attacker must have more than 50% of the entire
PoW-based network’s computing power. Although PoW provides an elegant solution for the global
consensus of distributed ledgers, it has several inherent drawbacks first and foremost the electricity
consumption required. This is of primary importance especially when thinking in terms of IoT where
energy efficiency, computing power and installed memory have limits to be taken into account.
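The asymmetry at the heart of PoW — expensive to produce, cheap to check — can be shown in a few lines. This is a toy sketch with a leading-zero-bits target; real Bitcoin mining double-hashes an 80-byte block header:

```python
import hashlib

def mine(block_header: str, difficulty_bits: int) -> int:
    """Find a nonce so that SHA-256(header + nonce) falls below the
    target, i.e., starts with `difficulty_bits` zero bits. Expected
    cost is ~2**difficulty_bits hash attempts."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(block_header: str, nonce: int, difficulty_bits: int) -> bool:
    """Checking a claimed solution costs a single hash call."""
    digest = hashlib.sha256(f"{block_header}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < 2 ** (256 - difficulty_bits)

nonce = mine("block-42", 16)          # ~65,000 attempts on average
assert verify("block-42", nonce, 16)  # verification is one hash
```

Raising `difficulty_bits` doubles the expected mining work per bit while leaving verification cost unchanged, which is why honest nodes can cheaply reject invalid blocks.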
5.2.2. PoS
To avoid the problems associated with achieving consensus through excessive power consumption,
an efficient alternative to PoW called Proof of Stake (PoS) has been introduced [28]. The system consists
of a group of validator nodes that alternate by casting a vote on the next block, and the weight of
each validator node’s vote is directly proportional to the amount of its tied deposit. Safety and energy
efficiency are among the most significant advantages of PoS. Improper behavior by a node can lead to the loss of the tied deposit. The Blockchain can thus operate more efficiently, without intensive energy consumption by the network, obtaining economic stability as a direct consequence: the greater the amount tied by a node, the greater the probability that its behavior in the network will be correct.
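The stake-weighted selection of the next validator described above can be sketched as follows (a toy model: real PoS chains derive the selection seed from on-chain randomness such as RANDAO or a VRF, not a plain integer):

```python
import random

def select_validator(stakes: dict, seed: int) -> str:
    """Pick the next block proposer with probability proportional
    to each validator's bonded stake."""
    rng = random.Random(seed)       # stand-in for on-chain randomness
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 60, "bob": 30, "carol": 10}
picks = [select_validator(stakes, seed) for seed in range(10_000)]
# alice holds 60% of the stake, so she should win roughly 60% of rounds
assert 0.55 < picks.count("alice") / len(picks) < 0.65
```

Given the seed, every honest node computes the same proposer, so no hash race (and no large energy expenditure) is needed to agree on who produces the next block.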
5.2.3. DPoS
Unlike the PoS consensus system, the DPoS can be thought of as a representative democratic
system [23]. This feature is implemented thanks to the opportunity for network participants to give
their vote to one or more delegates to represent their stake in the network. DPoS offers several benefits
for IoT applications:
- Pooling: many small nodes can share their stakes, thus together obtaining a higher chance to participate in the proposals and in the vote for the next block, and share the rewards afterwards.
- Suitable for resource-constrained nodes: nodes with limited resources can choose their delegates and avoid running a node 24 h a day.
- High availability: single nodes can choose new delegates for each new block. This flexibility provides high availability for the network to reach consensus.
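The delegate election behind DPoS, including the pooling of small stakes noted above, reduces to a stake-weighted tally followed by seating the top-N candidates. A toy model of the election step only (names and data layout are invented for the example):

```python
def elect_delegates(votes: dict, n_delegates: int) -> list:
    """Tally stake-weighted votes and seat the top-N delegates.
    `votes` maps voter -> (stake, chosen delegate)."""
    tally: dict = {}
    for stake, delegate in votes.values():
        tally[delegate] = tally.get(delegate, 0) + stake
    # Sort by total backing stake, with the name as a deterministic tie-break
    ranked = sorted(tally, key=lambda d: (-tally[d], d))
    return ranked[:n_delegates]

# Two small holders pool 25 VET behind node-a and outvote node-c
votes = {
    "v1": (10, "node-a"), "v2": (15, "node-a"),
    "v3": (20, "node-b"), "v4": (5, "node-c"),
}
assert elect_delegates(votes, 2) == ["node-a", "node-b"]
```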
5.2.4. pBFT
The Practical Byzantine Fault Tolerance (pBFT) is an algorithm born before the advent of
Blockchain and was introduced by Castro [29,30] in 1999 as an efficient and attack-resistant algorithm
to reach agreements in a distributed asynchronous network. The main architecture of a pBFT system
consists of providing a practical Byzantine state machine replica that tolerates Byzantine failures
by assuming that there are independent node failures and manipulated messages propagated by
specific independent nodes. The main advantages of a pBFT system are the impressive execution time in reaching consensus and verifying valid transactions. The negative aspect is related to the extreme network overconnection and the high number of exchanged messages, which can increase exponentially as the number of network nodes increases. As demonstrated by Castro in his paper [29], pBFT offers safety, availability and confidence in propagating accurate messages if at least two thirds of the network nodes are honest. The network cost of pBFT is minimal compared to an unreplicated network system. The model works well only with small networks, because of the cumbersome amount of communication required between the nodes.
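The two-thirds honesty threshold can be made concrete: with n replicas, pBFT tolerates f = ⌊(n − 1)/3⌋ Byzantine nodes and a request commits once 2f + 1 matching replies arrive. A sketch of the counting rule only, not of the full three-phase protocol:

```python
def max_faulty(n_nodes: int) -> int:
    """pBFT tolerates f Byzantine replicas out of n >= 3f + 1."""
    return (n_nodes - 1) // 3

def commit_reached(n_nodes: int, matching_votes: int) -> bool:
    """A request commits once 2f + 1 matching replies arrive,
    i.e., more than two thirds of the replicas agree."""
    return matching_votes >= 2 * max_faulty(n_nodes) + 1

assert max_faulty(4) == 1       # the minimal pBFT cluster
assert commit_reached(4, 3)     # 2f + 1 = 3 matching replies suffice
assert not commit_reached(4, 2)
```

Since every replica must exchange messages with every other replica in each phase, the message count grows on the order of n², which is the overconnection problem noted above.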
5.2.5. Others
1. Proof of Burn [31]: the probability for a specific node to succeed in mining a new block is directly proportional to the number of coins it has burned. The burning process consists of sending tokens to a specific address that cannot spend them. These tokens are thus ’burned’ in the sense that they are no longer in circulation and therefore inaccessible. This is a similar idea to PoW, but without wasting real-world energy. Proof of Burn attracts many of the same criticisms as PoS: the consensus is determined by the richest nodes in the network.
2. Proof of Capacity [32,33]: similar to Proof of Work, but uses storage instead of computation. Cryptographically signed data is written to the local storage according to the following rules:
   - A very slow hash function is computed and stored. In the process, the hard drives are filled with groups of the precomputed hashes, and each group contains 4096 pairs.
   - While mining, a deadline time is calculated from a specific pair of each group. This deadline represents the time to wait before mining another block after the last one.
   - The right to mine the next block is granted to the node that has the shortest deadline.
3. The Tangle [34,35]: the most famous alternative to standard Blockchain structures, ’The Tangle’ uses a DAG (Directed Acyclic Graph) to store transactions. A better explanation of this method, used to achieve a different kind of consensus, is given in Section 7.1.
4. Hashgraph consensus [36]: similar to IOTA’s Tangle, a DAG is created, but following different rules. The new data structure, named Hashgraph, uses a gossip protocol to spread information throughout the network and a virtual voting mechanism to achieve consensus, involving random sync and validation between nodes. Any node can view the world from the perspective of any other node since the last sync. In this way each node can determine if a given set of information or transactions is valid by checking whether at least two thirds of the network’s nodes have witnessed a specific transaction.
Table 3 shows the different consensus versions used in major Blockchain projects.
**Table 3. Popular Blockchain consensus mechanisms.**

| Project | PoW | ~PoW | PoS | ~PoS | DPoS | ~DPoS | pBFT | ~pBFT |
|---|---|---|---|---|---|---|---|---|
| Bitcoin [21] | SHA256 | | | | | | | |
| Ethereum [37] | Ethash | | | | | | | |
| Litecoin [38] | Scrypt | | | | | | | |
| Monero [39] | CryptoNight | | | | | | | |
| HDAC [40] | | EPoW | | | | | | |
| Lynx [41] | | HPoW | | | | | | |
| Komodo [42] | | dPoW | | | | | | |
| Purple [43] | | SSPoW | | | | | | |
| Dash [44] | | | X | | | | | |
| Stellar [45] | | | X | | | | | |
| Cosmos [46] | | | X | | | | | |
| Vericoin [47] | | | | PoST | | | | |
| Shield [48] | | | | PoS Boo | | | | |
| XSN [49] | | | | TPoS | | | | |
| Reddcoin [50] | | | | PoSV | | | | |
| Cardano [51] | | | | Ouroboros CA | | | | |
| Tron [52] | | | | | X | | | |
| Bitshares [53] | | | | | X | | | |
| Steem [54] | | | | | X | | | |
| Ark [55] | | | | | X | | | |
| Lisk [56] | | | | | X | | | |
| EOS [57] | | | | | | BFT DPoS | | |
| Zilliqa [58] | | | | | | | X | |
| Sawtooth [59] | | | | | | | X | |
| Neo [60] | | | | | | | | DBFT |
_5.3. Cryptography and Hashing_
Cryptography is the branch of cryptology that deals with “hidden writings”, i.e., methods to make a message “blurred” so that it is not comprehensible/intelligible to people not authorized to read it. Before the modern age, cryptography was synonymous with encryption, and the use of these techniques dates back as far as the ancient Egyptians, with roots spanning all throughout history. The Caesar cipher and the World War II Enigma machine are two of the most iconic examples of historical encryption techniques. Blockchain technology makes use of cryptography in multiple different ways, from generating wallet keys to securing transaction handling.
5.3.1. Cryptographic Hash Algorithm
A hash function is any function that can be used to map data of arbitrary size onto data of a fixed size [61]. Such functions are also useful in cryptography, where they allow one to easily verify whether some input data maps onto a given hash value, while making it deliberately difficult to reconstruct unknown input data from the stored hash value alone. The hash algorithm is the most commonly used cryptographic algorithm in the Blockchain architecture [62]. In Blockchain, hash algorithms are mainly used for wallet address creation, data integrity, data encryption, consensus computing and to link blocks together. A hash function can compress messages of arbitrary length into binary strings of fixed length in a limited and reasonable time, outputting a hash value. Hash functions have the characteristics of unidirectionality, hiding, collision resistance and puzzle friendliness (it is hard to find the right hash for a block, but easy to verify it). The most used hash functions in Blockchain architectures include MD5 [63,64], SHA1 [65], SHA256 [63], and SM3 [66].
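The properties just listed — fixed-length output and puzzle friendliness (hard to invert, easy to verify) — are easy to observe with Python's hashlib:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest of SHA-256: always 64 hex characters, whatever the input size."""
    return hashlib.sha256(data).hexdigest()

# Fixed-length output regardless of input length
assert len(sha256_hex(b"a")) == len(sha256_hex(b"a" * 10_000)) == 64

# Avalanche effect: a tiny input change scrambles most of the digest
h1, h2 = sha256_hex(b"block-1"), sha256_hex(b"block-2")
assert sum(c1 != c2 for c1, c2 in zip(h1, h2)) > 30

# Puzzle friendliness: verifying is one hash call, inverting needs brute force
def brute_force_preimage(target_hex: str, search_space):
    """Search candidate inputs until one hashes to the target."""
    for candidate in search_space:
        if sha256_hex(candidate) == target_hex:
            return candidate

target = sha256_hex(b"7")
assert brute_force_preimage(target, (str(i).encode() for i in range(10))) == b"7"
```

The tiny search space here makes the brute force succeed instantly; over the full 256-bit output space the same search is computationally infeasible, which is exactly the asymmetry PoW mining and block linking rely on.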
5.3.2. Asymmetric Encryption Algorithm
The real novelty of the last century is the invention of a cryptographic technique, called asymmetric cryptography, that uses different keys to encrypt and decrypt a message, facilitating the task of key distribution. In asymmetric cryptography the secret key is divided into two parts, a public and a private key. The public part can be shared, while the private key must be kept secret. The public key can be made public for the sender to encrypt the information to be sent, and the private key can be used by the receiver to decrypt the received encrypted content. Blockchain technology uses cryptography as a means of ensuring transactions are done safely, while securing all information and stores of value. Therefore, anyone using Blockchain can have complete confidence that once something is recorded on a Blockchain, it is done so legitimately and in a manner that preserves security. The most commonly used and secure asymmetric encryption algorithms are Rivest–Shamir–Adleman (RSA [67]) and Elliptic-curve cryptography (ECC [68]).
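The public/private split can be illustrated with textbook RSA over toy primes. This is a pedagogical sketch only: real RSA uses primes of 1024+ bits and randomized padding such as OAEP, and unpadded textbook RSA is insecure in practice:

```python
def egcd(a: int, b: int):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a: int, m: int) -> int:
    g, x, _ = egcd(a, m)
    assert g == 1, "a must be coprime to m"
    return x % m

def make_keys(p: int, q: int, e: int = 17):
    """Textbook RSA key generation from two primes (toy sizes)."""
    n, phi = p * q, (p - 1) * (q - 1)
    return (e, n), (modinv(e, phi), n)   # (public, private)

def encrypt(m: int, pub) -> int:
    e, n = pub
    return pow(m, e, n)

def decrypt(c: int, priv) -> int:
    d, n = priv
    return pow(c, d, n)

pub, priv = make_keys(61, 53)      # toy primes, n = 3233
c = encrypt(42, pub)               # anyone can encrypt with the public key
assert decrypt(c, priv) == 42      # only the private key recovers the message
assert c != 42                     # the ciphertext hides the plaintext
```

Blockchain wallets use the same asymmetry in the opposite direction: the private key signs a transaction and anyone can verify the signature with the public key.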
**6. Blockchain: Strengths and Weaknesses**
Like any other software architecture, the Blockchain has both positive and negative aspects.
Below is a list of the main ones:
Pros:
- Decentralization: as a decentralized and distributed technology, all transactions are decentralized and verified by the network itself, removing any single point of failure in a network of devices.
- Security: the use of public/private key pairs to sign transactions, of specific hash functions to link blocks together, and of the peculiar consensus algorithms gives Blockchain systems a high resistance to tampering.
- Cost reduction: according to a Santander FinTech study [69], distributed ledger technology could reduce financial services infrastructure cost between US$15 billion and $20 billion per annum by 2022, providing the possibility to decommission legacy systems and infrastructures and significantly reduce IT costs.
- Privacy & Transparency: the chain and its content are public and readable by anyone. Anyone can read and verify the honesty of every transaction, but linking a wallet (address) to a specific identity is not allowed by the protocol. There are some hacking strategies, however [70], to link a Blockchain address to a specific IP. To overcome this type of weakness, in addition to the use of Tor or VPN networks, specific privacy-oriented Blockchain projects have been developed [25,39].

Drawbacks [71]:

- Legal issues: in a decentralized environment where nodes exist around the planet, there is no way to establish a common jurisdiction. Different countries have very different approaches to titles, ownership, contracts, trademarks and liabilities. Some governments have made cryptocurrencies illegal in their territories [72].
- Volatility: all cryptocurrencies are highly to extremely volatile. Cryptocurrency markets are not regulated, some exchanges have suspicious volumes [73], and most cryptocurrencies have very low volume compared to Bitcoin, exposing them to high speculative activity.
- Storage: usually, to run a full node, the entire Blockchain must be synchronized locally (e.g., Bitcoin needs 200 GB). This high demand for storage space makes adoption in IoT systems extremely complicated.
- Transaction speed: Bitcoin is restricted by mining block time and block capacity to handling up to 7 transactions per second, while VISA has the peak capacity to handle 24,000 transactions per second [74]. Currently, no decentralized Blockchain project can reach such an order of magnitude.
- Lack of maturity and standards: distributed ledger technology is an emerging technology. Many Blockchain projects are not production ready and are partially untested, and they will require early adopters to accept significantly increased risk levels over the next five to seven years.
**7. Blockchain–IoT Projects**
Although still early in the IoT adoption cycle, there are nevertheless many signs that the market is maturing, and more and more IoT devices are becoming a consistent part of our everyday lives [75–78]. The Blockchain concept has been evolving for a decade (Figure 2), from the earliest solutions like Bitcoin to the present multipurpose variations in different fields. However, Blockchain technology is continually improving its features and striving to find an increasingly efficient implementation [79,80]. This paper summarizes the present and outlines future development trends of Blockchain technology applied to IoT devices.
**Figure 2. Blockchain evolution.**
_7.1. IOTA_
IOTA [34,35] is the project with the highest market cap [81] among all the IoT–Blockchain projects analyzed in this paper. The validation process is not based on classical consensus: there is no classical mining scheme and no fee is required for transactions. The data structure used to handle transactions is a directed-acyclic-graph-based ledger called ’The Tangle’ [35], in which, when a node submits a transaction, the system uses the Markov Chain Monte Carlo (MCMC) algorithm to select two unconfirmed transactions and check that they do not produce conflicting results (Figures 3 and 4).
**Figure 3. Classical Blockchain structure vs. Tangle.**
IOTA uses Proof of Work as an anti-Sybil [82] measure. In this way the consensus and validation
are done by the whole network of active participants by giving everyone an equal say in the network
regarding the transaction-making process. This is a radical shift in mindset because, differently from canonical Blockchain schemes, the Tangle is probabilistic. Full consensus on a transaction is only reached once (almost) all network participants have repeatedly certified that the transaction is more valid than another transaction. There is no global consistency in the Tangle architecture; there is eventual consistency. This is related to the CAP theorem [83]. If a transaction is referenced directly or indirectly by every new transaction then it can be considered “confirmed” with high likelihood.
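The tip-selection idea can be sketched as a weighted random walk from the genesis toward unapproved transactions. This is a simplified illustration of the MCMC approach, not IOTA's exact algorithm, and the DAG encoding is invented for the example:

```python
import random

def cumulative_weight(dag: dict, tx: str) -> int:
    """1 + the number of transactions that directly or indirectly approve tx.
    `dag` maps each transaction to the set of transactions it approves."""
    seen = {a for a, approved in dag.items() if tx in approved}
    frontier = list(seen)
    while frontier:
        node = frontier.pop()
        for a, approved in dag.items():
            if node in approved and a not in seen:
                seen.add(a)
                frontier.append(a)
    return 1 + len(seen)

def random_walk_tip(dag: dict, start: str, rng: random.Random) -> str:
    """Walk from `start` toward the tips, preferring approvers with
    larger cumulative weight, and stop at an unapproved transaction."""
    node = start
    while True:
        approvers = [a for a, approved in dag.items() if node in approved]
        if not approvers:
            return node  # a tip: nothing approves it yet
        weights = [cumulative_weight(dag, a) for a in approvers]
        node = rng.choices(approvers, weights=weights, k=1)[0]

# Each new transaction approves two earlier ones
dag = {
    "genesis": set(),
    "t1": {"genesis"}, "t2": {"genesis"},
    "t3": {"t1", "t2"}, "t4": {"t1", "t2"},
}
tip = random_walk_tip(dag, "genesis", random.Random(0))
assert tip in {"t3", "t4"}  # the walk always ends on an unapproved tip
```

Because heavily approved transactions attract the walk, new transactions tend to reinforce the already-accepted part of the Tangle, which is how confirmation confidence grows over time.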
Advantages over traditional Blockchain technology:
- Highly scalable: instead of storing transactions in blocks with a limited size, each transaction lives on its own and must approve two other transactions. With this method, the number of transactions that can be handled in a certain amount of time increases with the number of transactions.
- No fees: each node validates its own and two other transactions; no block mining is required.
- Availability: very high availability thanks to the Tangle architecture, with no transaction needing to be inserted and mined into blocks with the danger of never being verified.
- Partition-tolerant: a highly partition-tolerant system, except for the presence of the Coordinator.
- Quantum-computing resistant: a quantum-resistant algorithm called the Winternitz One-Time Signature Scheme is used to provide much better inherent security against the future quantum computer threat.
Main drawbacks:

- The Coordinator: meant to assure security in the early stages of the network as it grows; it will eventually be removed when the network becomes sufficiently large. At present it is a limit to the decentralization and partition tolerance of the system.
- Milestones: these will be eliminated in the future but are currently important to avoid attacks on the structure. Without them, the size of the ledger would grow too big to be handled by most nodes, primarily by IoT devices.
- Low consistency: there is no global state on which everybody agrees, except after milestones, which will themselves be eliminated in the future.
**[Figure 4. The Tangle live from http://tangle.glumb.de/.](http://tangle.glumb.de/)**
_7.2. VeChain_
VeChain [84] is the second-largest IoT/Blockchain project by market cap to date. Largely based on Geth, the Go implementation of the Ethereum protocol, but with changes to support an alternative consensus algorithm called "Proof-of-Authority", this project is a Blockchain-based platform that records the truth of what happens at every stage of the supply chain. The project was founded in 2015 by Sunny Lu, the former CIO of Louis Vuitton China, who combined his expertise in luxury goods with Blockchain technology to create an IoT application for supply-chain management. The VeChain client, "Thor Core", was designed to store supply-chain data and execute applications based on smart contracts. Every user who runs a node must complete a KYC process which, besides tokens, also tracks their reputation.
The VeChain economy consists of two different kinds of coins:
• VeChain Token (VET): Store of value and smart-payment currency.
• Thor Power (VTHO): Plays the same role as gas in Ethereum, but uses a coin separate from the base one. This coin is consumed every time a change to the Blockchain is necessary.
Two different kinds of nodes exist in the VeChain Blockchain:
1. Authority node: There will only ever be 101 Authority Nodes, and they validate all Blockchain transactions. Specific requirements must be met to be elected as an Authority node: a KYC process, dedicated hardware, and a minimum quantity of 250,000,000 VET locked until the mainnet launch. There has been much discussion in the VeChain community about whether the identities of the 101 Authority Nodes should be publicly known. Individuals running an Authority node can become targets once their identity becomes publicly known, and enterprises currently owning an Authority node may also prefer to stay anonymous, either because they are not ready to publicly announce that they are using Blockchain technology or to stay ahead of competitors. A total of 30% of all gas (VTHO) consumed by Blockchain transactions is rewarded to the 101 Authority Masternode owners. To date no official list of Authority nodes exists; only very few actors have confirmed their status: DNV GL [85], CAHrenheit [86].
2. Economic node: The VeChain Economy Masternode differs from the Authority Masternode in that it seeks to provide stability to the VeChain ecosystem, acting as a sort of tool to pay dividends.
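A minimal sketch of the two-token fee model described above: VET transfers value, while VTHO is consumed as gas whenever the Blockchain state changes, with 30% of it rewarded to Authority nodes. The account structure and all numbers other than the 30% share are illustrative assumptions.

```python
class Account:
    """Toy VeChain-style account holding both tokens."""
    def __init__(self, vet=0, vtho=0):
        self.vet, self.vtho = vet, vtho

def transfer(sender, receiver, amount_vet, gas_vtho):
    """Move VET between accounts; the fee is paid in VTHO, never in VET."""
    if sender.vet < amount_vet or sender.vtho < gas_vtho:
        raise ValueError("insufficient VET or VTHO")
    sender.vet -= amount_vet
    receiver.vet += amount_vet
    sender.vtho -= gas_vtho
    return gas_vtho * 0.30  # share of consumed gas rewarded to Authority nodes

alice, bob = Account(vet=1000, vtho=50), Account()
reward = transfer(alice, bob, 100, 10)
```

Separating the value token from the gas token means enterprises can budget transaction costs in VTHO without being exposed to VET price swings.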
_7.3. WaltonChain_
WaltonChain [87] has a specific focus on RFID solutions (the name of the project comes from Charles Walton, best known as the first patent holder for RFID devices). As with VeChain, they developed their own in-house RFID solution. The firm claims that its RFID chips have improved sensitivity thanks to the optimized noise-suppression technology used. They offer improved security for each IoT device, as they use asymmetric random password-pair generation, and these pairs are unique, authentic, and tamper-resistant. The mainchain, called WaltonChain, manages various subchains, tracks WaltonCoin transactions and cross-subchain transactions, and executes smart contracts. The mainchain uses their unique Proof of Stake and Trust (PoST) consensus algorithm, an upgrade of PoS. When selecting the next block producer, PoST takes into account the quantity of staked WTC coins as well as the reputation of the nodes. Architecturally speaking, the WaltonChain ecosystem uses an overall structure comprising a parent chain and subchains (or child chains), where the parent chain is WaltonChain and the token used for circulation and payment is called Waltoncoin.
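The stake-and-reputation weighting of PoST can be illustrated as a weighted lottery. The multiplicative stake × reputation score below is a hypothetical simplification; WaltonChain's actual selection formula is not specified here.

```python
import random

def select_producer(nodes, rng=random.random):
    """nodes: {name: (staked_wtc, reputation)}. Pick the next block producer
    with probability proportional to stake * reputation (toy PoST weighting)."""
    weights = [(name, stake * rep) for name, (stake, rep) in sorted(nodes.items())]
    total = sum(w for _, w in weights)
    r = rng() * total          # rng() in [0, 1)
    upto = 0.0
    for name, w in weights:
        upto += w
        if r < upto:           # zero-weight nodes can never be selected
            return name
    return weights[-1][0]

nodes = {"A": (1000, 0.9), "B": (500, 1.0), "C": (2000, 0.2)}
producer = select_producer(nodes)
```

Folding reputation into the weight means a large stake alone is not enough: a misbehaving node with a degraded reputation sees its chance of producing blocks shrink proportionally.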
_7.4. IoTeX_
Differently from WaltonChain, IoTeX [88] keeps control over its subchains by maintaining a general consensus. On the other hand, if one subchain is compromised, the root chain, and therefore the other subchains, will suffer a privacy breach. To ensure the integrity of the whole system, the consensus algorithm used is a specialized DPoS, in which 21–50 delegates are voted in and elected to mine a certain number of generated blocks. IoTeX uses multiple Blockchains to interact with different segments of IoT devices or nodes based on the type of data or function. Transactions, unlike in Bitcoin, are confidential, using an interesting modified RingCT2.0 signature technique. The solution is to employ a secure multi-party computation (SMPC) protocol among a set of bootstrapping nodes of the Blockchain to generate secret domain parameters. The lightweight address system used does not require receiving addresses to scan the entire network to become aware of incoming transactions. IoT devices need to be light in terms of hardware resources, are not able to record the full transaction history locally, and, in the case of DPoS, the overhead for an IoT device that comes back online is not easily affordable. To address the problem, IoTeX implemented periodic Checkpoint creation, a solution already announced for Ethereum's upcoming Casper implementation. Each Checkpoint can be verified based on the previous Checkpoint, so that a light client can quickly follow the entire Blockchain without downloading a large number of public keys and signatures and verifying them all.
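The Checkpoint idea can be sketched as a hash chain over state summaries: each Checkpoint commits to the previous one, so a light client verifies only this chain instead of every public key and signature. Field names and the hashing scheme below are illustrative, not IoTeX's actual wire format.

```python
import hashlib

def _digest(prev_hash, state):
    return hashlib.sha256(f"{prev_hash}|{state}".encode()).hexdigest()

def make_checkpoint(prev_hash, state_summary):
    return {"prev": prev_hash, "state": state_summary,
            "hash": _digest(prev_hash, state_summary)}

def verify_chain(checkpoints, genesis_hash):
    """A light client checks each Checkpoint against the previous one only."""
    prev = genesis_hash
    for cp in checkpoints:
        if cp["prev"] != prev or cp["hash"] != _digest(prev, cp["state"]):
            return False
        prev = cp["hash"]
    return True

genesis = "00" * 32
chain = []
for state in ("epoch-1", "epoch-2", "epoch-3"):
    prev = chain[-1]["hash"] if chain else genesis
    chain.append(make_checkpoint(prev, state))
```

Since each hash covers everything before it, tampering with any intermediate Checkpoint invalidates all later ones, which is exactly what makes the shortcut safe for constrained IoT clients.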
_7.5. Ambrosus_
Ambrosus (AMB) is a project that is looking to develop Blockchain tracking software for the
food and pharmaceutical industry. They plan on combining high-tech sensors, smart contracts,
and Blockchain protocols to create a secure supply chain where suppliers and consumers alike can track
products to ensure authenticity, origin, proper handling and compliance in all areas. The Ambrosus
protocol is based on the Ethereum Blockchain. The architecture is built in four different layers:
1. AMB-TRACE: Sensors and devices that generate data
2. AMB-EDGE-GATEWAY: Collect, preanalyze and push data
3. AMB-NET: Collect data in centralized databases and in the Ethereum Blockchain
4. AMB-DASH: Dashboard tools to visualize data
_7.6. HDAC_
The Hyundai Digital Asset Company (HDAC) was created in 2017 through the cooperation of Hyundai BS&C, DEXKO, Doublechain, and Hyundai-Pay. The consensus mechanism is a modified Proof of Work protocol called ePoW, where 'e' stands for 'Equitable chance' and 'Energy saving'. HDAC ePoW is an algorithm developed to reduce the mining monopoly by applying the block-window concept. If a node succeeds in mining, no new block can be mined by the same node during the block-window application period. Even if a greedy node neglects this mechanism and succeeds in mining a new block, it will not be recognized as a valid block by the HDAC Blockchain network. A private permissioned Blockchain network is a Blockchain with access privileges that, unlike a public Blockchain, may not be accessed freely by every node. The HDAC public Blockchain is permissionless and acts as a coordinator for several private permissioned IoT-oriented Blockchains. To realize this interconnection, the HDAC Blockchain uses Bridge Node intermediaries that perform key configurations and access control through registration and pre-authentication operations. The block time has been set to 3 min, and the maximum block size is up to 8 MB. Another interesting feature is the use of quantum random numbers [89] in the private Blockchains to create wallets, private keys, and public-key addresses that are much more effective and safe than those based on the pseudo-random number generators in use today.
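The block-window rule of ePoW can be sketched as follows. The window length is a hypothetical parameter, and real ePoW of course still requires a valid Proof of Work on top of this eligibility check.

```python
def can_mine(miner, chain, window=4):
    """ePoW toy rule: a node that mined within the last `window` blocks
    must skip its turn; the window length here is illustrative."""
    return miner not in chain[-window:]

def accept_block(chain, miner, window=4):
    """Peers reject blocks violating the window, even from a greedy miner."""
    if not can_mine(miner, chain, window):
        return False
    chain.append(miner)
    return True

chain = []
accept_block(chain, "node-A")
greedy = accept_block(chain, "node-A")  # rejected: still inside the window
```

Because validity is checked by every receiving node, ignoring the window buys a greedy miner nothing: its extra blocks are simply not recognized by the network.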
_7.7. IoTChain_
Currently ranked around 300th place in the market-cap ranking of the cryptocurrency market [90], IoTChain [91] differs from other projects in the way it implements consensus. IoTChain uses Practical Byzantine Fault Tolerance (pBFT) to achieve main-chain consensus, and uses IOTA's DAG structure for subchains. Until the mainnet swap occurs, the ITC tokens are based on Ethereum's ERC-20 standard [92]. These tokens give a user the right to use a specific device. Simplified Payment Verification (SPV), derived from the Bitcoin protocol, is used to verify the presence of a transaction inside a block without downloading the entire Blockchain. They also introduce the concept of ChainCode verification to protect and remunerate personal data. Any company that intends to do big-data analysis receives aggregated data without accessing specific user data. After the execution of the analysis, users are remunerated for being part of it.
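SPV, mentioned above, rests on Merkle proofs: a client holding only block headers can check that a transaction is included in a block using a logarithmic number of sibling hashes. A minimal sketch follows, using plain SHA-256 where Bitcoin actually uses double SHA-256.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Build the Merkle root over raw transaction bytes."""
    level = [_h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_inclusion(tx, proof, root):
    """proof: [(sibling_hash, sibling_is_left), ...] from leaf to root."""
    node = _h(tx)
    for sibling, is_left in proof:
        node = _h(sibling + node) if is_left else _h(node + sibling)
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(txs)
# Proof that tx-a is in the block: its sibling, then the right subtree's hash.
proof = [(_h(b"tx-b"), False), (_h(_h(b"tx-c") + _h(b"tx-d")), False)]
```

The proof is two hashes for four transactions, and only log2(n) hashes in general, which is what makes SPV viable for storage-constrained IoT devices.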
_7.8. Others_
There is an increasing number of Blockchain–IoT projects that have been implemented and are still in the start-up phase. The main ones are listed next:
• Vite [93] is one of the few existing projects that use a DAG structure with a smart-contract mechanism. The project extends the capabilities of the Solidity language (Solidity++) by introducing an asynchronous architecture. The architecture relies on a DAG ledger structure called block-lattice. Like IOTA, Vite generates snapshots, but using a new hierarchical HDPoS consensus algorithm; each account chain in the ledger generates local consensus results, and the snapshot chain at the highest level selects the final global consensus from the local consensus results.
• Nucleus Vision [94]: Using a sensor, people who decide to enter a shop are remunerated via a mobile app using the Nucleus Vision architecture. The sensor can sense mobile IDs, temperature, motion, pressure, acceleration and sound. A deep-learning infrastructure is used to optimize supply and demand.
• Ruff [95]: Ruff uses DPoS for consensus, with specialized nodes controlling the network. They provide a development board kit and a JavaScript library to build connected IoT systems.
• Modum [96]: Modum is a supply-chain system that integrates Blockchain technology, smart contracts and sensory devices into a single, passive solution. Modum's core business targets pharmaceutical companies, which must employ expensive temperature-stabilized trucks and containers via third-party logistics providers to transport medicine. Modum offers a solution to substantially reduce these costs by integrating a temperature sensor into medicinal shipments to monitor their temperature. All data is recorded in the Ethereum Blockchain, ensuring full transparency, accountability and data integrity.
• CPChain [97]: Cyber Physical Chain (CPChain) uses a modified Byzantine Fault Tolerance algorithm (LBFT) to reach consensus. It uses an architecture called PDash, in which the data is separated from the transactions by using distributed data storage (IPFS) for the data and a Blockchain for the transactions.
• Yee [98]: To avoid data overload at the node level, the Yee project introduces a new concept for the distribution and retrieval of validated data across the network, using a distance function and a corresponding routing-table rule to retrieve relevant data from the correct nodes. To validate transactions, the project introduces a third-party node called YeeWallet. There is therefore no direct transaction to the Blockchain, but a hybrid permissioned/permissionless Blockchain.
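Yee's distance-based data placement is reminiscent of DHT routing. The XOR metric below is a Kademlia-style stand-in used purely for illustration; the project's actual distance function and routing-table rule are its own.

```python
def responsible_nodes(key: int, node_ids, k=2):
    """Return the k nodes 'closest' to a data key under an XOR distance,
    i.e. the nodes that should store or serve that validated data."""
    return sorted(node_ids, key=lambda n: n ^ key)[:k]

nodes = [0b0001, 0b0100, 0b0111, 0b1100]
holders = responsible_nodes(0b0101, nodes)  # the two closest nodes to the key
```

Any node can compute the same answer locally from the key alone, so data can be retrieved without flooding the network or storing everything everywhere.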
**8. Blockchain and IoT Main Use Cases**
Every year IoT devices become more and more capable in terms of RAM and CPU, opening the door to a wide range of new use cases. The synergy with distributed ledgers is currently being tested in many application areas. Below are the four main areas (Figure 5) in which the Blockchain–IoT synergy is enjoying the greatest success and the largest number of projects.
**Figure 5. Blockchain and IoT main use cases.**
_8.1. Smart City/Home Security_
The new concept of the smart city encompasses a multitude of purely technological innovations and new use cases that allow the coverage of previously unthinkable areas. Smart traffic lights, autonomous-vehicle supervision, environment monitoring and tourist services are just some of the possible areas of interest where Blockchain and IoT can drive change. The birth of distributed renewable energy resources has reshaped the role of energy consumers from pure consumers to prosumers who can also generate and sell energy. New peer-to-peer networks have been born that allow this kind of energy trading. However, ensuring security and trust within the network between commercial entities in the distributed energy sector is a complex challenge. The advent of Blockchain technology offers the opportunity to ensure secure energy trading on P2P networks. Some recent studies use Blockchain technologies to address these challenges, from consortium-based Blockchains [99] to privacy-preserving transactions [100].
_8.2. Healthcare_
Healthcare becomes one of the main socio-economic problems due to the aging population
while it also poses new challenges in traditional health services due to limited hospital resources.
The recent SARS-CoV-2 pandemic has shown how entire national health services can go into crisis.
Recent advances in the field of wearable health devices in health data bring opportunities in the
promotion of remote health services at home or in clinics. Privacy protection and security assurance
are crucial and still open challenges. Securing the IoT devices operating on healthcare networks using
a Blockchain can potentially overcome these challenges. Griggs et al. [101] presented an architecture
in which data generated by medical sensors are managed and shared through the use of smart
contract. Throughout the whole procedure, privacy can be kept thanks to the underneath Blockchain.
An innovative solution has been introduced by Rahman et al. [102], where a Blockchain-based mobile edge computing framework is used for in-home therapy management.
_8.3. Industry 4.0 Product Tracking_
The manufacturing industry has in recent years been experiencing a shift from automated production to so-called "smart production" [103]. Blockchain and IoT can help manage the incredible amount of data from the various supply chains: product design, raw-material supply, manufacturing, recycling, distribution, retail and after-sales service. All this data raises a problem of interoperability that can be solved through a secure peer-to-peer standard based on a Blockchain network [104]. The rise of 5G networks provides a tremendous boost to the IoT devices that will run on them. In the industrial sector, many challenges and interesting scenarios are opening up for this type of device running on Blockchain networks [105]. Updating the firmware of IoT devices is a crucial problem in industry, because these devices need to be updated regularly to remedy security breaches. A classic firmware-upgrade scheme involves the use of cloud servers. If the cloud server is compromised, device updates are blocked and, in the worst case, malicious firmware could be uploaded to company components. Pillai et al. [106] propose a Blockchain-based solution for managing firmware updates in IoT. It preserves the integrity of firmware by linking the latest version's information to the previous versions' information with the help of a smart-contract mechanism.
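The version-linking idea can be sketched as a hash chain of release records: each record commits to the firmware image and to the previous record, so a device can verify the whole update history. The record layout is hypothetical, and in the actual scheme the ledger would live in a smart contract rather than a Python list.

```python
import hashlib

GENESIS = "0" * 64

def publish(ledger, version: str, firmware: bytes):
    """Append a release record linking to the previous one."""
    prev = ledger[-1]["hash"] if ledger else GENESIS
    fw_hash = hashlib.sha256(firmware).hexdigest()
    rec_hash = hashlib.sha256((version + fw_hash + prev).encode()).hexdigest()
    ledger.append({"version": version, "fw": fw_hash,
                   "prev": prev, "hash": rec_hash})

def verify_history(ledger):
    """A device replays the chain; any tampered or reordered release breaks it."""
    prev = GENESIS
    for r in ledger:
        expect = hashlib.sha256(
            (r["version"] + r["fw"] + r["prev"]).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != expect:
            return False
        prev = r["hash"]
    return True

ledger = []
publish(ledger, "1.0.0", b"firmware-image-v1")
publish(ledger, "1.0.1", b"firmware-image-v2")
```

Unlike a cloud server, the chain gives the device an independent way to check that an offered image matches a published, untampered release before flashing it.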
_8.4. Supply-Chain Tracking_
In industry, a supply chain is a set of activities, information, components and resources involved
in delivering a product or service to a consumer. A final product often consists of multiple components
forged and delivered by different manufacturers across countries. Deploying anti-fraud technology in every part of the supply chain can be extremely expensive. Many studies have shown how the use of the Blockchain/IoT combination can efficiently solve the problem. Kim et al. [107] analyze a traceability ontology and translate some of its representations into smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum Blockchain platform. Kshetri [108] examines how Blockchain and IoT are likely to affect key supply-chain management objectives such as cost, quality, speed, dependability, risk reduction, sustainability and flexibility. Large IT companies such as IBM, PwC, Almaviva and many others have their own frameworks, based on Blockchain and IoT, currently in production in many supply chains.
**9. One Step Forward: Artificial Intelligence in a Blockchain–IoT Architecture: A Disruptive**
**Research Vision**
Recently, the application of artificial-intelligence techniques in IoT systems has brought numerous advantages to the implementation of Blockchains. This obviously requires that the devices are provided with adequate computational capacity and that they are able to efficiently optimize their energy consumption [109–112]. As an example, the use of IoT sensors with computational capacity will allow the activation of anti-fraud mechanisms that can prevent the incorrect triggering of cryptocurrency exchanges in the Blockchain due to tampering with IoT sensors. This will lead to increased security in the distributed ledgers at the basis of Blockchain technology. Furthermore, the processing of big data is an increasingly topical issue [113], and the companies that deal with it have the legal and moral responsibility to safeguard the data entrusted to them. Blockchains and AI can have a substantial impact on the way such data are managed. All data on the Blockchain are validated and cannot be tampered with. This means that Blockchains are the perfect storage facility for sensitive or personal data which, if treated with care using AI, can help unlock valuable personalized experiences for users. A good example is healthcare, where data is used to detect, diagnose and prevent diseases [114–116]. In the near future it will be crucial to understand how AI, IoT and Blockchain can be used together, and how AI can help Blockchain and vice versa.
Some of the applications would be big-data management for AI, predictive models [117], and investment-management platforms.
**10. Conclusions**
This paper has provided a systematic review discussing the application prospects of Blockchain technology in the IoT industry, the foundations of both systems, and the strategic importance of the Blockchain–IoT convergence. 500 articles and whitepapers have been screened, as documented in the PRISMA 2009 flowchart (Supplementary File S2). Some were discarded as non-innovative, too generic or not sufficiently IoT-oriented. At the end of the study, 118 sources were used for the drafting of the article, including 23 openly available Internet resources; 36 are whitepapers with a high risk of bias. In order to mitigate the bias issue, we proceeded to analyze the projects that had source code available, and we also tested the related Blockchains. In summary, Blockchain technology has huge potential in IoT systems such as supply chains, medical transportation and smart cities [77]. But like any system still in an embryonic state, there are many challenges to be faced and risks to be considered. Specifically, this paper first introduced the principal security risks connected to IoT systems in Section 4, then the core theory of Blockchain technology, going deep into consensus algorithms and cryptography concepts in Section 5, ending with an excursus on the main projects in the Blockchain–IoT area. The PRISMA 2009 checklist (Supplementary File S1) has been compiled. The following conclusions were drawn:
• Europe hosts the most important projects, but Asia is the most powerful in promoting the link between Blockchain and IoT.
• The number of projects in the Blockchain–IoT domain is growing fast and many are already in the production stage.
• Big international players like Microsoft, Volkswagen, Fujitsu and countries like China have established important partnerships with existing projects.
• In financial terms, cryptocurrencies traded on exchanges suffer extreme volatility, and those related to the Blockchain and IoT world also suffer from a considerable scarcity of trading volume.
Here are the remaining open issues and research directions:
• Storage: One of the main advantages of the Blockchain is its decentralization, but the ledger must be stored on the nodes themselves, and IoT devices have low computational resources and very low storage capacity.
• Processing power: Encryption and consensus algorithms can be very CPU-intensive, and IoT systems comprise different types of devices with very different computing capabilities [118]; not all of them will be able to run the same encryption algorithms at the required speed.
• Legal and compliance: Blockchain is the very first architecture able to connect the entire world without central control. Connecting countries with different laws, without legal supervision, is a serious issue for both manufacturers and service providers.
• Scalability: In the Blockchain world there is a famous trilemma which says that if you want security and decentralization, it will be necessary to sacrifice scalability. Overcoming the Blockchain trilemma will lead to a new level of adoption for distributed ledgers.
Finally, the authors have briefly presented the future research trend that involves the introduction of AI mechanisms to enhance the capability of IoT devices in Blockchain systems.
**[Supplementary Materials: The following are available online at http://www.mdpi.com/2076-3417/10/19/6749/](http://www.mdpi.com/2076-3417/10/19/6749/s1)**
[s1, Supplementary File S1: PRISMA 2009 checklist, Supplementary File S2: PRISMA 2009 flow diagram.](http://www.mdpi.com/2076-3417/10/19/6749/s1)
**Author Contributions: Conceptualization, L.F. and A.P.; Methodology, A.P. and N.S.; Validation, A.P.; Investigation,**
L.F.; Resources, L.F.; Data curation, L.F. and A.P.; Writing—Original draft, L.F. and A.P.; Writing—Review and
editing, L.F., A.P. and N.S.; Supervision, A.P. and N.S. All authors have read and agreed to the published version
of the manuscript.
**Funding: The APC was funded by Università degli Studi Guglielmo Marconi.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
The following abbreviations are used in this manuscript:
PoW Proof of Work
PoS Proof of Stake
DPoS Delegated Proof of Stake
pBFT Practical Byzantine Fault Tolerance
IoT Internet of Things
DNS Domain Name System
DDoS Distributed denial-of-service
RFID Radio-frequency identification
SOAP2 Simple object access protocol v.2
MITM Man-in-the-middle attack
SaaS Software as a service
NFC Near-field communication
PoS Point of Sale
PKI Public key infrastructure
DAG Directed Acyclic Graph
KYC Know your customer
IPFS InterPlanetary File System
**References**
1. Giuliano, R.; Mazzenga, F.; Neri, A.; Vegni, A.M. Security access protocols in IoT capillary networks.
_[IEEE Internet Things J. 2016, 4, 645–657. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2016.2624824)_
2. [3rd Cyberattack ‘Has Been Resolved’ After Hours of Major Outages. 2016. Available online: https://](https://www.nbcnewyork.com/news/local/major-websites-taken-down-by-internet-attack/2040013/)
[www.nbcnewyork.com/news/local/major-websites-taken-down-by-internet-attack/2040013/ (accessed on](https://www.nbcnewyork.com/news/local/major-websites-taken-down-by-internet-attack/2040013/)
10 August 2020).
3. [SonicWall. 2018 SonicWall Annual Threat Report. 2018. Available online: https://d3ik27cqx8s5ub.cloudfront.](https://d3ik27cqx8s5ub.cloudfront.net/sonicwall.com/media/pdfs/resources/2018-snwl-cyber-threat-report.pdf)
[net/sonicwall.com/media/pdfs/resources/2018-snwl-cyber-threat-report.pdf (accessed on 10 August 2020).](https://d3ik27cqx8s5ub.cloudfront.net/sonicwall.com/media/pdfs/resources/2018-snwl-cyber-threat-report.pdf)
4. Tawfik, M.; Almadani, A.; Alharbi, A.A. A Review: The Risks and Weakness Security on the IoT. IOSR J.
_Comput. Eng. (IOSR-JCE) 2017, 12–17, e-ISSN: 2278-0661, p-ISSN: 2278-8727._
5. Bastiaan, M. Preventing the 51%-Attack: A Stochastic Analysis of Two Phase Proof of Work in Bitcoin.
In Proceedings of the 22nd Twente Student Conference on IT, Enschede, The Netherland, 23 January 2015.
6. Kulkarni, A.; Sathe, S. Healthcare applications of the Internet of Things: A Review. Int. J. Comput. Sci.
_Inf. Technol. 2014, 5, 6229–6232._
7. Samaniego, M.; Deters, R. Blockchain as a Service for IoT. In Proceedings of the 2016 IEEE international
conference on internet of things (iThings) and IEEE green computing and communications (GreenCom) and
IEEE cyber, physical and social computing (CPSCom) and IEEE smart data (SmartData), Berlin, Germany,
4–8 April 2016; pp. 433–436.
8. Khan, M.A.; Salah, K. IoT security: Review, blockchain solutions, and open challenges. Future Gener.
_[Comput. Syst. 2018, 82, 395–411. [CrossRef]](http://dx.doi.org/10.1016/j.future.2017.11.022)_
9. [Minoli, D.; Occhiogrosso, B. Blockchain mechanisms for IoT security. Int. Things 2018, 1, 1–13. [CrossRef]](http://dx.doi.org/10.1016/j.iot.2018.05.002)
10. Novo, O. Blockchain meets IoT: An architecture for scalable access management in IoT. IEEE Int. Things J.
**[2018, 5, 1184–1195. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2018.2812239)**
11. Ding, S.; Cao, J.; Li, C.; Fan, K.; Li, H. A novel attribute-based access control scheme using blockchain for IoT.
_[IEEE Access 2019, 7, 38431–38441. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2905846)_
12. Pieroni, A.; Noemi, S.; Brilli, M. Industry 4.0 revolution in autonomous and connected vehicle a
non-conventional approach to manage big data. J. Theor. Appl. Inf. Technol. 2018, 96, 10–18.
13. Scarpato, N.; Pieroni, A.; Di Nunzio, L.; Fallucchi, F. E-health-IoT universe: A review. Management 2017, 21, 46.
[[CrossRef]](http://dx.doi.org/10.18517/ijaseit.7.6.4467)
14. Pieroni, A.; Scarpato, N.; Brilli, M. Performance Study in Autonomous and Connected Vehicles a Industry
4.0 Issue. J. Theor. & Appl. Inf. Technol. 2018, 96.
15. Arcidiacono, G.; Pieroni, A. The revolution lean six sigma 4.0. Int. J. Adv. Sci. Eng. Inf. Technol. 2018,
_[8, 141–149. [CrossRef]](http://dx.doi.org/10.18517/ijaseit.8.1.4593)_
16. Sumitra, B.; Pethuru, C.R.; Misbahuddin, M. A Survey of Cloud Authentication Attacks and Solution
Approaches. Int. J. Innov. Res. Comput. Commun. Eng. 2014, 2, 6245–6253.
17. El Mouaatamid, O.; Lahmer, M.; Belkasmi, M. Internet of Things Security: Layered classification of attacks
and possible Countermeasures. Electron. J. Inf. Technol. 2016, 9, 66–80.
18. Mitrokotsa, A.; Rieback, M.R.; Tanenbaum, A.S. Classifying RFID attacks and defenses. Inf. Syst. Front.
**[2010, 12, 491–505. [CrossRef]](http://dx.doi.org/10.1007/s10796-009-9210-z)**
19. [Van Oort, L. Tap Track, NFC Relay Attacks. 2016. Available online: https://www.taptrack.com/article/](https://www.taptrack.com/article/blog/nfc-relay-attacks/)
[blog/nfc-relay-attacks/ (accessed on 10 August 2020).](https://www.taptrack.com/article/blog/nfc-relay-attacks/)
20. [Dua, A. How Secure Is NFC Technology? 2019. Available online: https://rfid4u.com/how-secure-is-nfc-](https://rfid4u.com/how-secure-is-nfc-technology/)
[technology/ (accessed on 10 August 2020).](https://rfid4u.com/how-secure-is-nfc-technology/)
21. [Nakamoto, S. Bitcoin: A Peer-to-Peer Electronic Cash System. Available online: https://bitcoin.org/bitcoin.](https://bitcoin.org/bitcoin.pdf)
[pdf (accessed on 10 August 2020).](https://bitcoin.org/bitcoin.pdf)
22. Housley, R. Public Key Infrastructure (PKI). In Wiley Online Library; 15 April 2004. Available online:
[https://onlinelibrary.wiley.com/doi/abs/10.1002/047148296X.tie149 (accessed on 10 August 2020).](https://onlinelibrary.wiley.com/doi/abs/10.1002/047148296X.tie149)
23. Zheng, Z.; Xie, S.; Dai, H.N.; Chen, X.; Wang, H. Blockchain challenges and opportunities: A survey. Int. J.
_[Web Grid Serv. 2018, 14, 352. [CrossRef]](http://dx.doi.org/10.1504/IJWGS.2018.095647)_
24. Biryukov, A.; Khovratovich, D.; Pustogarov, I. Deanonymisation of clients in Bitcoin P2P network.
In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale,
AZ, USA, 3–7 November 2014.
25. Sasson, E.B.; Chiesa, A.; Garman, C.; Green, M.; Miers, I.; Tromer, E.; Virza, M. Zerocash: Decentralized
anonymous payments from bitcoin. In Proceedings of the 2014 IEEE Symposium on Security and Privacy,
San Jose, CA, USA, 18–21 May 2014; pp. 459–474.
26. Jakobsson, M.; Juels, A. Proofs of Work and Bread Pudding Protocols. In Proceedings of the Secure
Information Networks: Communications and Multimedia Security, Leuven, Belgium, 20–21 September 1999;
Springer: Berlin, Germany; pp. 258–272.
27. [Cryptoslate. Proof-of-Work Coins. 2019. Available online: https://cryptoslate.com/cryptos/proof-of-work/](https://cryptoslate.com/cryptos/proof-of-work/)
(accessed on 10 August 2020).
28. [GoChain Team. Peercoin Explained: The Proof of Stake Pioneer. Available online: https://bitfalls.com/](https://bitfalls.com/2018/03/11/peercoin-explained-proof-stake-pioneer/)
[2018/03/11/peercoin-explained-proof-stake-pioneer/ (accessed on 10 August 2020).](https://bitfalls.com/2018/03/11/peercoin-explained-proof-stake-pioneer/)
29. Castro, M.; Liskov, B. Practical Byzantine Fault Tolerance. In Proceedings of the Third Symposium on
Operating Systems Design and Implementation SE - OSDI ’99, New Orleans, LA, USA, 22–25 February 1999;
USENIX Association: Berkeley, CA, USA; pp. 173–186.
30. Castro, M.; Liskov, B. Practical byzantine fault tolerance and proactive recovery. Acm Trans. Comput. Syst.
**[2002, 20, 398–461. [CrossRef]](http://dx.doi.org/10.1145/571637.571640)**
31. Karantias, K.; Kiayias, A.; Zindros, D. Proof-of-burn. In International Conference on Financial Cryptography and
_Data Security; Springer: Cham, Switzerland, 2020; pp. 523–540._
32. [Gauld, S.; von Ancoina, F.; Stadler, R. The Burst Dymaxion. Available online: https://www.burst-coin.org/](https://www.burst-coin.org/wpcontent/uploads/2017/07/The-Burst-Dymaxion1.00.pdf)
[wpcontent/uploads/2017/07/The-Burst-Dymaxion1.00.pdf (accessed on 10 August 2020).](https://www.burst-coin.org/wpcontent/uploads/2017/07/The-Burst-Dymaxion1.00.pdf)
33. Dziembowski, S.; Faust, S.; Kolmogorov, V.; Pietrzak, K. Proofs of space. In Annual Cryptology Conference;
Springer: Berlin/Heidelberg, Germany, 2015; pp. 585–605.
34. [Popov, S. The Tangle, IOTA Whitepaper. White Pap. 2017. Available online: http://www.descryptions.com/](http://www.descryptions.com/Iota.pdf)
[Iota.pdf (accessed on 10 August 2020).](http://www.descryptions.com/Iota.pdf)
35. Popov, S.; Saa, O.; Finardi, P. Equilibria in the Tangle. Comput. Ind. Eng. 2019, 136, 160–172. [[CrossRef]](http://dx.doi.org/10.1016/j.cie.2019.07.025)
36. Baird, L. The Swirlds Hashgraph Consensus Algorithm: Fair, fast, Byzantine Fault Tolerance; Technical Report
[SWIRLDS-TR-2016 1; Swirlds, Inc.: Richardson, TX, USA, 2016. Available online: https://www.swirlds.](https://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf)
[com/downloads/SWIRLDS-TR-2016-01.pdf (accessed on 10 August 2020).](https://www.swirlds.com/downloads/SWIRLDS-TR-2016-01.pdf)
37. [Buterin, V.; Dryja, T.E. Ethash. Available online: https://eth.wiki/en/concepts/ethash/ethash (accessed on](https://eth.wiki/en/concepts/ethash/ethash)
10 August 2020).
38. [Lee, C. Litecoin Project. Github, 2019. Available online: https://github.com/litecoin-project/litecoin](https://github.com/litecoin-project/litecoin)
(accessed on 10 August 2020).
39. [Van Saberhagen, N. Cryptonote v 2.0. 2013. Available online: https://cryptonote.org/whitepaper.pdf](https://cryptonote.org/whitepaper.pdf)
(accessed on 10 August 2020).
40. The Hdac Team. Hdac: Transaction Innovation—IoT Contract & M2M Transaction Platform based on
[Blockchain. 2018. Available online: https://github.com/Hdactech/doc/wiki/Whitepaper (accessed on](https://github.com/Hdactech/doc/wiki/Whitepaper)
10 August 2020).
41. The Lynx Team. Technical White Paper 1.1. White Pap. 2019. Available online: http://cdn.getlynx.io/2019-03-17_Lynx_Whitepaper_v1.1.pdf (accessed on 10 August 2020).
42. Komodo Team. Komodo: An Advanced Blockchain Technology, Focused on Freedom. Available online: https://komodoplatform.com/wp-content/uploads/2018/05/2018-05-09-Komodo-White-Paper-Full.pdf (accessed on 10 August 2020).
43. Oncescu, O. Purple Protocol—A Scalable Platform for Decentralized Applications and Tokenized Assets. Available online: https://purpleprotocol.org/whitepaper/ (accessed on 10 August 2020).
44. Diaz, D.; Duffield, E. The Dash Whitepaper. Available online: https://github.com/dashpay/dash/wiki/Whitepaper (accessed on 10 August 2020).
45. Mazieres, D. The Stellar Consensus Protocol: A Federated Model for Internet-level Consensus. _Stellar Dev. Found._ 2015, 32. Available online: http://www.scs.stanford.edu/~dm/20160606-scp-talk.pdf (accessed on 10 August 2020).
46. Kwon, J.; Buchman, E. Cosmos: A Network of Distributed Ledgers. Available online: https://cosmos.network/resources/whitepaper (accessed on 10 August 2020).
47. Pike, D.; Nosker, P.; Boehm, D.; Grisham, D.; Woods, S.; Marston, J. Proof-of-Stake-Time Whitepaper.
Available online: [https://www.vericoin.info/downloads/VeriCoinPoSTWhitePaper10May2015.pdf](https://www.vericoin.info/downloads/VeriCoinPoSTWhitePaper10May2015.pdf)
(accessed on 10 August 2020).
48. Shield. PoS Boo, MNs and QP in Detail. Available online: https://medium.com/@shieldxsh/pos-boo-mns-and-qp-in-detail-6b9e61e3acee (accessed on 10 August 2020).
49. StakeNet Team. Trustless Proof of Stake (TPoS), Wallet Staking and Pooled Staking. Available online:
[https://stakenet.io/TPoS_Factsheet.pdf (accessed on 10 August 2020).](https://stakenet.io/TPoS_Factsheet.pdf)
50. Ren, L. Proof of Stake Velocity: Building the Social Currency of the Digital Age. Self-Published White Paper.
[Available online: https://www.cryptoground.com/storage/files/1528454215-cannacoin.pdf (accessed on](https://www.cryptoground.com/storage/files/1528454215-cannacoin.pdf)
10 August 2020).
51. Kiayias, A.; Russell, A.; David, B.; Oliynykov, R. Ouroboros: A provably secure proof-of-stake blockchain
protocol. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and
_Lecture Notes in Bioinformatics), 37th Annual International Cryptology Conference, Santa Barbara, CA, USA,_
_20–24 August 2017; Springer: Berlin, Germany, 2017; pp. 357–388._
52. The Tron Foundation. Advanced Decentralized Blockchain Platform. Available online: https://tron.network/static/doc/white_paper_v_2_0.pdf (accessed on 10 August 2020).
53. Schuh, F.; Larimer, D. Bitshares 2.0: General Overview. Available online: http://docs.bitshares.org/downloads/bitshares-general.pdf (accessed on 10 August 2020).
54. Larimer, D.; Scott, N.; Zavgorodnev, V.; Johnson, B.; Calfee, J.; Vandeberg, M. Steem An Incentivized,
[Blockchain-based Social Media Platform. 2016. Available online: https://assets.ctfassets.net/sdlntm3tthp6/](https://assets.ctfassets.net/sdlntm3tthp6/resource-asset-r407/41cc9041e3d76a683feba1bc33ce5fdf/26069ed0-86be-43a3-9fb2-1523d3b52512.pdf)
[resource-asset-r407/41cc9041e3d76a683feba1bc33ce5fdf/26069ed0-86be-43a3-9fb2-1523d3b52512.pdf](https://assets.ctfassets.net/sdlntm3tthp6/resource-asset-r407/41cc9041e3d76a683feba1bc33ce5fdf/26069ed0-86be-43a3-9fb2-1523d3b52512.pdf)
(accessed on 10 August 2020).
55. ARK.io. Ark Ecosystem Whitepaper. [Available online: https://ark.io/Whitepaper.pdf (accessed on](https://ark.io/Whitepaper.pdf)
10 August 2020).
56. Beddows, O.; Kordek, M. The Lisk Protocol. Available online: https://lisk.io/documentation/lisk-protocol/index.html (accessed on 10 August 2020).
57. Larimer, D. EOS.IO Technical White Paper. Available online: https://github.com/EOSIO/Documentation (accessed on 10 August 2020).
58. The Zilliqa Team. The Zilliqa Technical Whitepaper. 2017 (retrieved 16 September 2019).
59. Steeley, L. Introduction to Sawtooth PBFT. Available online: https://www.hyperledger.org/blog/2019/02/13/introduction-to-sawtooth-pbft (accessed on 10 August 2020).
60. Team, N. NEO White Paper: A Distributed Network for the Smart Economy. Available online: https://docs.neo.org/docs/en-us/basic/whitepaper.html (accessed on 10 August 2020).
61. Rogaway, P.; Shrimpton, T. Cryptographic hash-function basics: Definitions, implications, and separations
for preimage resistance, second-preimage resistance, and collision resistance. In International Workshop on
_Fast Software Encryption; Springer: Berlin/Heidelberg, Germany, 2004; pp. 371–388._
62. Wang, M.; Duan, M.; Zhu, J. Research on the Security Criteria of Hash Functions in the Blockchain.
In Proceedings of the 2nd ACM Workshop on Blockchains, Cryptocurrencies, and Contracts, ACM Asia
Conference on Computer and Communications Security, Incheon, Korea, 4–8 June 2018; Association for
Computing Machinery: New York, NY, USA, 2018.
63. Rachmawati, D.; Tarigan, J.T.; Ginting, A.B.C. A comparative study of Message Digest 5 (MD5) and SHA256
algorithm. In Proceedings of the Journal of Physics: Conference Series, 2nd International Conference on
Computing and Applied Informatics 2017, Medan, Indonesia, 28–30 November 2017; IOP Publishing: Bristol,
UK, 2018.
64. Gupta, P.; Kumar, S. A comparative analysis of SHA and MD5 algorithm. Architecture 2014, 1, 5.
65. Eastlake, D.; Jones, P. US Secure Hash Algorithm 1 (SHA1); RFC Editor: USA. Available online: https://dl.acm.org/doi/book/10.17487/RFC3174 (accessed on 10 August 2020).
66. Kircanski, A.; Shen, Y.; Wang, G.; Youssef, A.M. Boomerang and slide-rotational analysis of the SM3
hash function. In International Conference on Selected Areas in Cryptography; Springer: Berlin/Heidelberg,
Germany, 2012; pp. 304–320.
67. Boneh, D. Twenty years of attacks on the RSA cryptosystem. Not. AMS 1999, 46, 203–213.
68. [Koblitz, N. Elliptic curve cryptosystems. Math. Comput. 1987, 48, 203–209. [CrossRef]](http://dx.doi.org/10.1090/S0025-5718-1987-0866109-5)
69. Belinky, M.; Rennick, E.; Veitch, A. The Fintech 2.0 Paper: Rebooting Financial Services. _Santander InnoVentures/Oliver Wyman/Anthemis_, 2015. Available online: https://www.oliverwyman.com/content/dam/oliver-wyman/global/en/2015/jun/The_Fintech_2_0_Paper_Final_PV.pdf (accessed on 10 August 2020).
70. Kaminsky, D. Black Ops of TCP/IP 2011. Black Hat USA 2011. Available online: https://www.slideshare.net/dakami/black-ops-of-tcpip-2011-black-hat-usa-2011 (accessed on 10 August 2020).
71. consultancy.uk. Blockchain Technology: How it Works, Main Advantages and Challenges. Available online: https://www.consultancy.uk/news/13484/blockchain-technology-how-it-works-main-advantages-and-challenges (accessed on 10 August 2020).
72. [coin.dance. Bitcoin Legality by Country. Available online: https://coin.dance/poli/legality (accessed on](https://coin.dance/poli/legality)
10 August 2020).
73. TheTIE. Exchange Real Trading Volume Investigation. Available online: https://docs.google.com/spreadsheets/d/13_L5V9elxQ3xps62BeYVyr_Wu-9vfyAyN5tGqLNoV9Y/edit#gid=1415549973 (accessed on 10 August 2020).
74. Visa.com. Visa Acceptance for Retailers. Available online: https://usa.visa.com/run-your-business/small-business-tools/retail.html (accessed on 10 August 2020).
75. Evans, D. The internet of things: How the next evolution of the internet is changing everything.
_Cisco White Pap. 2011, 1, 1–11._
76. Bahga, A.; Madisetti, V.K. Blockchain platform for industrial internet of things. J. Softw. Eng. Appl.
**[2016, 9, 533–546. [CrossRef]](http://dx.doi.org/10.4236/jsea.2016.910036)**
77. Pieroni, A.; Scarpato, N.; Di Nunzio, L.; Fallucchi, F.; Raso, M. Smarter city: Smart energy grid based on
[blockchain technology. Int. J. Adv. Sci. Eng. Inf. Technol. 2018, 8, 298–306. [CrossRef]](http://dx.doi.org/10.18517/ijaseit.8.1.4954)
78. Orecchini, F.; Santiangeli, A.; Zuccari, F.; Pieroni, A.; Suppa, T. Blockchain technology in smart city: A new
opportunity for smart environment and smart mobility. In International Conference on Intelligent Computing &
_Optimization; Springer: Cham, Switzerland, 2018; pp. 346–354._
79. Atlam, H.F.; Alenezi, A.; Alassafi, M.O.; Wills, G. Blockchain with internet of things:
[Benefits, challenges, and future directions. Int. J. Intell. Syst. Appl. 2018, 10, 40–48. [CrossRef]](http://dx.doi.org/10.5815/ijisa.2018.06.05)
80. Reyna, A.; Martín, C.; Chen, J.; Soler, E.; Díaz, M. On blockchain and its integration with IoT. Challenges and
opportunities. Future Gener. Comput. Syst. 2018, 88, 173–190.
81. [CoinMarketCap. IOTA Capitalization. Available online: https://coinmarketcap.com/currencies/iota/](https://coinmarketcap.com/currencies/iota/)
(accessed on 10 August 2020).
82. Douceur, J.R. The sybil attack. In International Workshop on Peer-to-Peer Systems; Springer: Berlin/Heidelberg,
Germany, 2002; pp. 251–260.
83. Brewer, E. CAP twelve years later: How the "rules" have changed. Computer 2012, 45, 23–29.
84. [Team, V. Vechain Whitepaper. Available online: https://whitepaper.io/document/578/vechain-whitepaper](https://whitepaper.io/document/578/vechain-whitepaper)
(accessed on 10 August 2020).
85. DNV GL Buys Stake in VeChain and Announces Authority Masternode Status. Medium. Available online: https://medium.com/@bsc44/dnv-gl-buys-stake-in-vechain-and-announces-authority-masternode-status-c42992a16a2e (accessed on 10 August 2020).
86. Cahrenheit. Introducing Cahrenheit. Medium. 2018. Available online: https://medium.com/@Cahrenheit/introducing-cahrenheit-cb017bf5dd6b (accessed on 10 August 2020).
87. Team, W. Waltonchain White Paper (V 1.0.4). Available online: https://www.digitalcoindata.com/whitepapers/walton-whitepaper.pdf (accessed on 10 August 2020).
88. Team, I. IoTeX: A Decentralized Network for Internet of Things Powered by a Privacy-Centric
[Blockchain. 2018. Available online: https://whitepaper.io/document/131/iotex-whitepaper (accessed on](https://whitepaper.io/document/131/iotex-whitepaper)
10 August 2020).
89. Herrero-Collantes, M.; Garcia-Escartin, J.C. Quantum random number generators. _Rev. Mod. Phys._ **2017**, 89, 015004.
90. CoinMarketCap. IoTChain Capitalization. Available online: https://coinmarketcap.com/currencies/iot-chain/ (accessed on 10 August 2020).
91. [Team, I. IoTChain Whitepaper. Available online: https://iotchain.io/whitepaper/ITCWHITEPAPER.pdf](https://iotchain.io/whitepaper/ITCWHITEPAPER.pdf)
(accessed on 10 August 2020).
92. Buterin, V. A next-generation smart contract and decentralized application platform. White Pap. 2014, 3.
[Available online: https://people.cs.georgetown.edu/~clay/classes/fall2017/835/papers/Etherium.pdf](https://people.cs.georgetown.edu/~clay/classes/fall2017/835/papers/Etherium.pdf)
(accessed on 10 August 2020).
93. Liu, C.; Wang, D.; Wu, M. Vite: A High Performance Asynchronous Decentralized Application Platform.
[Available online: https://static.coinpaprika.com/storage/cdn/whitepapers/6393951.pdf (accessed on](https://static.coinpaprika.com/storage/cdn/whitepapers/6393951.pdf)
10 August 2020).
94. The Nucleus Team. Connecting the Unconnected. White Pap. 2018. Available online: https://cryptorating.eu/whitepapers/Nucleus-Vision/light-paper.pdf (accessed on 10 August 2020).
95. Ruff IoT Blockchain Whitepaper. Available online: https://cryptorating.eu/whitepapers/Ruff-Chain/WhitePaper.html (accessed on 10 August 2020).
96. Data Integrity for Supply Chain Operations, Powered by Blockchain Technology. Available online: https://modum.io/sites/default/files/documents/2018-05/modum-whitepaper-v.-1.0.pdf?utm_source=icogrind (accessed on 10 August 2020).
97. Decentralized Infrastructure for Next Generation Internet of Things. Available online: https://cpchain.io/download/CPChain_Whitepaper_English.pdf/ (accessed on 10 August 2020).
98. Yee: A Blockchain-Powered & Cloud based Social Ecosystem. Available online: https://doc.yeeco.io/YeeCo-V0.2-EN.pdf (accessed on 10 August 2020).
99. Li, Z.; Kang, J.; Yu, R.; Ye, D.; Deng, Q.; Zhang, Y. Consortium Blockchain for Secure Energy Trading in
Industrial Internet of Things. IEEE Trans. Ind. Inform. 2018, 14, 3690–3700.
100. Aitzhan, N.Z.; Svetinovic, D. Security and privacy in decentralized energy trading through multi-signatures,
blockchain and anonymous messaging streams. IEEE Trans. Dependable Secur. 2018, 15, 840–852.
101. Griggs, K.N.; Ossipova, O.; Kohlios, C.P.; Baccarini, A.N.; Howson, E.A.; Hayajneh, T. Healthcare blockchain
system using smart contracts for secure automated remote patient monitoring. J. Med. Syst. 2018, 42, 130.
102. Rahman, M.A.; Hossain, M.S.; Loukas, G.; Hassanain, E.; Rahman, S.S.; Alhamid, M.F.; Guizani, M.
Blockchain-based mobile edge computing framework for secure therapy applications. _IEEE Access_
**2018, 6, 72469–72478.**
103. Kusiak, A. Smart manufacturing. Int. J. Prod. 2018, 56, 508–517.
104. Liu, X.L.; Wang, W.M.; Guo, H.; Barenji, A.V.; Li, Z.; Huang, G.Q. Industrial blockchain based framework for
[product lifecycle management in industry 4.0. Robot.-Comput.-Integr. Manuf. 2020, 63, 101897. [CrossRef]](http://dx.doi.org/10.1016/j.rcim.2019.101897)
105. Mistry, I.; Tanwar, S.; Tyagi, S.; Kumar, N. Blockchain for 5G-enabled IoT for industrial automation:
[A systematic review, solutions, and challenges. Mech. Syst. Signal Process. 2020, 135, 106382. [CrossRef]](http://dx.doi.org/10.1016/j.ymssp.2019.106382)
106. Pillai, A.; Sindhu, M.; Lakshmy, K.V. Securing firmware in Internet of Things using blockchain.
In Proceedings of the 2019 5th International Conference on Advanced Computing & Communication
Systems (ICACCS), Tamil Nadu, India, 15–16 March 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 329–334.
107. Kim, H.M.; Laskowski, M. Toward an ontology-driven blockchain design for supply-chain provenance.
_[Intell. Syst. Account. Financ. Manag. 2018, 25, 18–27. [CrossRef]](http://dx.doi.org/10.1002/isaf.1424)_
108. Kshetri, N. 1 Blockchain’s roles in meeting key supply chain management objectives. Int. J. Inf. Manag.
**[2018, 39, 80–89. [CrossRef]](http://dx.doi.org/10.1016/j.ijinfomgt.2017.12.005)**
109. Iazeolla, G.; Pieroni, A. Energy saving in data processing and communication systems. Sci. World J. 2014,
_[2014, 452863. [CrossRef] [PubMed]](http://dx.doi.org/10.1155/2014/452863)_
110. Pieroni, A.; Iazeolla, G. Engineering QoS and energy saving in the delivery of ICT services. In Sustaining
_Power Resources through Energy Optimization and Engineering; IGI Global: Hershey, PA, USA, 2016._
[111. Iazeolla, G.; Pieroni, A. Power management of server farms. Appl. Mech. Mater. 2014, 492, 453–459. [CrossRef]](http://dx.doi.org/10.4028/www.scientific.net/AMM.492.453)
112. Bracciale, L.; Loreti, P.; Detti, A.; Paolillo, R.; Blefari Melazzi, N. Lightweight named object: An ICN-based abstraction for IoT device programming and management. [IEEE Internet Things J. 2019, 6, 5029–5039. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2019.2894969)
113. Arcidiacono, G.; De Luca, E.W.; Fallucchi, F.; Pieroni, A. The use of Lean Six Sigma methodology in
digital curation. In Proceedings of the First Workshop on Digital Humanities and Digital Curation
Co-Located with the 10th Conference on Metadata and Semantics Research (MTSR 2016), Goettingen,
Germany, 22 November 2016.
114. Ferroni, P.; Zanzotto, F.M.; Riondino, S.; Scarpato, N.; Guadagni, F.; Roselli, M. Breast cancer prognosis using
[a machine learning approach. Cancers 2019, 11, 328. [CrossRef]](http://dx.doi.org/10.3390/cancers11030328)
115. Ferroni, P.; Zanzotto, F.M.; Scarpato, N.; Riondino, S.; Nanni, U.; Roselli, M.; Guadagni, F. Risk Assessment
for Venous Thromboembolism in Chemotherapy-Treated Ambulatory Cancer Patients. Med Decis. Mak.
**[2016, 37, 234–242. [CrossRef]](http://dx.doi.org/10.1177/0272989X16662654)**
116. Guadagni, F.; Scarpato, N.; Patrizia, F.; D’Ottavi, G.; Boavida, F.; Roselli, M.; Garrisi, G.; Lisi, A.
Personal and sensitive data in the e-health-IoT universe. In International Internet of Things Summit;
Springer: Cham, Switzerland, 2015; pp. 504–514.
117. Cardarilli, G.C.; Nunzio, L.D.; Fazzolari, R.; Giardino, D.; Matta, M.; Patetta, M.; Re, M.; Spanò, S.
[Approximated computing for low power neural networks. Telkomnika 2019, 17, 1236–1241. [CrossRef]](http://dx.doi.org/10.12928/telkomnika.v17i3.12409)
118. Cardarilli, G.C.; Di Nunzio, L.; Fazzolari, R.; Re, M.; Silvestri, F.; Spanò, S. Energy consumption saving in
[embedded microprocessors using hardware accelerators. Telkomnika 2018, 16, 1019–1026. [CrossRef]](http://dx.doi.org/10.12928/telkomnika.v16i3.9387)
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
},
{
"paperId": null,
"title": "SonicWall Annual Threat Report"
},
{
"paperId": null,
"title": "CoinMarketCap . CoinMarketCap . IoTChain Capitalization Team , I . IoTChain Whitepaper"
},
{
"paperId": null,
"title": "IoTChain Capitalization"
},
{
"paperId": null,
"title": "IO Technical White Paper"
}
] | 24,326
*Brain Informatics* (Springer, ISSN 2198-4026)
### RESEARCH
### Open Access
# Near‑channel classifier: symbiotic communication and classification in high‑dimensional space
#### Michael Hersche[1,2*], Stefan Lippuner[1], Matthias Korb[1,3], Luca Benini[1,4] and Abbas Rahimi[2]
**Abstract**
Brain-inspired high-dimensional (HD) computing represents and manipulates data using very long, random vectors
with dimensionality in the thousands. This representation provides great robustness for various classification tasks
where classifiers operate at low signal-to-noise ratio (SNR) conditions. Similarly, hyperdimensional modulation (HDM)
leverages the robustness of complex-valued HD representations to reliably transmit information over a wireless
channel, achieving a similar SNR gain compared to state-of-the-art codes. Here, we first propose methods to improve
HDM in two ways: (1) reducing the complexity of encoding and decoding operations by generating, manipulating,
and transmitting bipolar or integer vectors instead of complex vectors; (2) increasing the SNR gain by 0.2 dB using
a new soft-feedback decoder; it can also increase the additive superposition capacity of HD vectors up to 1.7× in
noise-free cases. Secondly, we propose to combine encoding/decoding aspects of communication with classification into a single framework by relying on multifaceted HD representations. This leads to a near-channel classification
(NCC) approach that avoids transformations between different representations and the overhead of multiple layers
of encoding/decoding, hence reducing latency and complexity of a wireless smart distributed system while providing robustness against noise and interference from other nodes. We provide a use-case for wearable hand gesture
recognition with 5 classes from 64 EMG sensors, where the encoded vectors are transmitted to a remote node for
either performing NCC, or reconstruction of the encoded data. In NCC mode, the original classification accuracy of
94% is maintained, even in the channel at SNR of 0 dB, by transmitting 10,000-bit vectors. We remove the redundancy
by reducing the vector dimensionality to 2048 bits, which still exhibits graceful degradation: less than 6% accuracy
loss occurs in the channel at −5 dB and with interference from 6 nodes that simultaneously transmit their
encoded vectors. In the reconstruction mode, it improves the mean-squared error by up to 20 dB, compared to standard decoding, when transmitting 2048-dimensional vectors.
**Keywords: High-dimensional computing, Communication, Classification, Electromyography**
**1 Introduction**
With the rapid growth in the number of deployed sensing nodes in the physical world [1–3] and their interconnection with sensor networks, Swarms, or the Internet
of Things [4], the world around us has become noticeably smarter [5]. Machine learning (ML), either being
*Correspondence: hersche@iis.ee.ethz.ch
1 Integrated Systems Laboratory, ETH Zurich, Zurich, Switzerland
Full list of author information is available at the end of the article
deployed in the cloud or at the edge near the sensor [6–
9], plays a crucial role in extracting relevant information
from the sensors and data spread in space. The standard
approach is to create a layered system that separates the
communication, including source and channel coding,
from the ML. Such a layered approach imposes unnecessary transitions between the layers, which add to latency
and complexity. Hence, there is a need for a representational system that effectively merges communication and
*(© The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License; see http://creativecommons.org/licenses/by/4.0/.)*
ML layers into a single framework for wireless distributed smart sensing systems, as shown in Fig. 1.
One viable option is to exploit novel representations in
high-dimensional (HD) computing [10–13], where data
are represented by very long, random vectors (dimension
D = 1000 – 10,000). Inspired by the size of the brain’s
circuits, these vectors are *holographic* and (pseudo)random with independent and identically distributed (i.i.d.)
components [10]. As the vectors are composed through
a set of well-defined mathematical operations, they can
be queried, decomposed [14], and reasoned about [15,
16]. For learning and classification tasks, HD computing
was initially applied to text analytics, where each discrete
symbol can be readily mapped to a random vector to be
combined across text [17–20]. More recently, HD computing has been extended to operate with a set of analog
inputs [21–25], mainly in several biosignal processing
applications, or with event-driven inputs from neuromorphic dynamic vision sensors [26].
HD vectors are very tolerant to noise, variations, or
faulty computations due to their redundant i.i.d. representation, in which information symbols are spread
holographically across many components [10, 20, 27].
This makes HD computing a prime candidate for implementation on emerging nanoscale hardware operating
at low signal-to-noise (SNR) conditions [28–30]. In a
similar vein, methods have been proposed to make use
of the robustness of HD vectors in various communication layers [31–37]. Particularly, recent hyperdimensional
modulation (HDM) [33] can be interpreted as a spreading modulation scheme whose spreading gain linearly
improves with the vector dimension, allowing higher
error tolerance with increased dimensionality. Multiple
spread vectors are superposed before transmission; at the
receiver, an iterative feedback decoder denoises the query
vector by subtracting the estimated vectors. In low SNR
channels where each value cannot be reliably demodulated, HDM can still achieve successful demodulation of
symbols without requiring an explicit error correction.
In an initial effort, it was shown that HDM exhibits a
comparable bit error rate (BER) to that of low-density
parity check (LDPC) and Polar codes at a lower number of operations in decoding [33]. Moreover, HDM was
shown to be more collision tolerant than conventional
orthogonal modulations (e.g., OFDM) in highly congested low power wide area networks [34]. However,
the HDM proposed in [33] represents symbols using
complex-valued components in a vector, hence we call it
Complex-HDM, which requires more bits per symbol to
be transmitted and involves energy-hungry fast Fourier
transform (FFT) operations in encoding and decoding.
Here, we first address these shortcomings of Complex-HDM by simplifying its encoding/decoding operations,
and improving its SNR gain. Next, we demonstrate how
our approach can effectively blur the boundaries between
communication and ML by relying on a unified HD representation system. This paper makes the following three
main contributions (highlighted in Fig. 1 as well).
First, in Sect. 3, we propose Integer-HDM that superposes bipolar vectors. These vectors can be rematerialized in an encoder with a combination of simple lookup
and permutation operations that are hardware-friendly
[38]. Further, the burden of decoding complexity is lowered by using associative memory (AM) searches, purely
with integer arithmetic instead of performing FFT. Such
best match searches use cheap clean-up operations,
which scale better than FFT searches on long codes, and
can be efficiently implemented with analog in-memory
computing [30]. Our Integer-HDM achieves the same
SNR gain as the Complex-HDM [33] under additive
Gaussian white noise (AWGN) without relying on the
expensive FFT operations in encoding and decoding.
Secondly, to improve the SNR gain, we propose a
soft-feedback decoding mechanism which additionally
takes the estimation’s confidence into account (Sect. 4).
Although the soft-feedback involves floating-point operations, it improves the SNR gain of the Integer-HDM by
0.2 dB at a BER of 10⁻⁴. To simplify the soft-feedback
decoder, it is quantized to 4.1 fixed-point without any
degradation in the SNR gain under AWGN. Further, we
have observed that our soft-feedback decoder can be
combined with an optimized minimum mean-squared
error (MMSE) readout to increase the number of superposed vectors, which can be successfully decomposed in
a noise-free case. This effectively improves the capacity
of HD superposition by 1.7× for noise-free information
retrieval; we improve the number of encoded information
bits in a 500-dimensional HD vector [14] from 0.7 to
1.2 bits/dimension.
Thirdly, we propose to combine channel coding,
source coding, and ML classification into a single unified layer exploiting multifaceted HD representations.
This approach avoids transformations between representations and the addition of multiple layers of encoding/decoding. The approach is inspired by the structural
similarities between the Integer-HDM encoding and the
spatial feature encoding in HD classifiers used for multichannel biosignal classification tasks [22, 25]. In practice,
we reuse the spatial encoding for both data transmission
and classification; hence, we avoid the transition between
different representations. The encoded vector can be reliably transmitted to the receiver, where it is either decoded
to analyze the underlying data, or directly classified, enabling near-channel classification (NCC). In Sect. 5, we
present a use case for wearable hand gesture recognition
(5-class) based on electromyography (EMG) signals from
64 sensors [22] where encoded vectors are transmitted to
perform either NCC, or reconstruct the underlying features at the receiver. In NCC mode, the 10,000-bit representation shows great robustness by maintaining the
noise-free accuracy of 94% at SNR as low as 0 dB. Reducing the vector dimension to 2048-bit—where there is no
redundancy—also exhibits graceful degradation in the
presence of AWGN and interference from other sensor
nodes, allowing up to −5 dB SNR and up to 6 simultaneously sending sensor nodes at less than 6% accuracy loss,
compared to the noise-free case. Moreover, the soft-feedback decoder guarantees successful reconstruction of the
features even in noisy environments and improves the
mean-squared reconstruction error by up to 20 dB compared to standard decoding at dimension D = 2048.
In the remainder of the paper, Sect. 2 provides background into HD computing, the creation and decomposition of HD superpositions, and HDM. Section 6
concludes the paper.
**2 Background**
**2.1 High‑dimensional computing**
The brain’s circuits are massive in terms of numbers of neurons and synapses, suggesting that large circuits are fundamental to the brain’s functioning. HD computing [10]—aka
holographic reduced representations [12], semantic pointer
architecture [39], or vector symbolic architectures [13,
40]—explores this idea by looking at computing with vectors as ultrawide words. These vectors are _D-dimensional_
(the number of dimensions is in the thousands) and
(pseudo)random with independent and identically distributed (i.i.d.) components. They thus conform to a holographic or holistic representation: the encoded information
is distributed equally over all the D components such that
no component is more responsible for storing any piece of
information than another. Such representation maximizes
robustness for the most efficient use of redundancy [10].
In this work, we focus on multiply–add–permute (MAP) architectures [13], which define the multiplication ( ∗ ) as the element-wise multiplication between two vectors, the addition ( + ) as the element-wise addition among multiple vectors, and the permutation ( $\Pi$ ) as the random shuffling of the vector elements. Multiplication and permutation yield dissimilar vectors compared to their input vector, whereas addition preserves similarity and is often used to represent sets. The permutation can be realized with hardware-friendly cyclic shifts ( $\rho$ ). We compare two D-dimensional vectors $\mathbf{x}$ and $\mathbf{y}$ with the cosine similarity:

$$c = \frac{\langle \mathbf{x}, \mathbf{y} \rangle}{\|\mathbf{x}\|_2 \cdot \|\mathbf{y}\|_2}, \tag{1}$$

where $\langle \cdot, \cdot \rangle$ is the $\ell_2$-inner product and $\|\cdot\|_2$ the $\ell_2$-norm. The cosine similarity reflects the angle between vectors, neglecting their length/norm.
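These properties are easy to check numerically. A minimal sketch (NumPy; the dimension and seed are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000
a, b = rng.choice([-1, 1], size=D), rng.choice([-1, 1], size=D)

def cos_sim(x, y):
    """Cosine similarity, Eq. (1)."""
    return np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))

# Random bipolar vectors are quasi-orthogonal: similarity concentrates near 0
assert abs(cos_sim(a, b)) < 0.05

# Multiplication and permutation yield dissimilar vectors ...
assert abs(cos_sim(a, a * b)) < 0.05
assert abs(cos_sim(a, np.roll(a, 1))) < 0.05

# ... whereas addition preserves similarity to its inputs
assert cos_sim(a, a + b) > 0.5
```

With D = 10,000, the chance similarities concentrate within a few hundredths of zero, which is exactly the robustness margin the HD representations rely on.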
Creating HD representations starts with building a dictionary (aka item memory) $\mathbf{IM} = \{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_N\}$, where $\mathbf{e}_i \in \{-1, 1\}^D$ are atomic vectors with the elements in each vector being a Rademacher random variable (i.e., equal chance of values being "−1" or "+1").
The high dimensionality guarantees all elements in
the dictionary to be orthogonal with high probability,
aka quasi-orthogonality. Information can be encoded
by HD superposition: a string of information symbols $(q_1, q_2, \ldots, q_V)$, $q_i \in \{1, 2, \ldots, N\}\ \forall i$, is mapped to the corresponding element in the dictionary, permuted, and superposed via addition:

$$\mathbf{x}(q_1, q_2, \ldots, q_V) = \sum_{v=1}^{V} \Pi_v\!\left(\mathbf{e}_{q_v}\right), \tag{2}$$

$$= \sum_{v=1}^{V} \Pi_v\!\left(\mathbf{E} \cdot \mathbf{c}(q_v)\right), \tag{3}$$

where $\mathbf{E} := (\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_N) \in \{-1, 1\}^{D \times N}$ is the matrix representation of the IM containing the atomic vectors as columns, and $\mathbf{c}(q_v) \in \{0, 1\}^N$ is an all-zero vector except for element $q_v$, which is one. Note that all permutations $\Pi_v$ are distinct.
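To make the superposition mechanics concrete, here is a minimal NumPy sketch (dictionary size, dimension, and seed are illustrative) that builds a superposition as in Eqs. (2)–(3) and recovers the symbol indices by inner products against the item memory:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, V = 1024, 16, 3          # dimension, dictionary size, superposed symbols

E = rng.choice([-1, 1], size=(D, N))            # item memory, atoms as columns
perms = [rng.permutation(D) for _ in range(V)]  # distinct permutations Pi_v
invs = [np.argsort(p) for p in perms]           # their inverses

def encode(q):
    """x = sum_v Pi_v(e_{q_v}), cf. Eqs. (2)-(3)."""
    return sum(E[:, q[v]][perms[v]] for v in range(V))

def retrieve(x):
    """AM readout (1/D) E^T Pi_v^{-1}(x) followed by an argmax per block."""
    return [int(np.argmax(E.T @ x[invs[v]] / D)) for v in range(V)]

q = [2, 7, 11]
assert retrieve(encode(q)) == q   # exact recovery despite the interference
```

With V = 3 vectors in D = 1024 dimensions, the cross-talk between blocks stays far below the matched inner product, so the argmax recovers every index.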
The individual vectors in the superposition can be retrieved by the associative memory (AM) search:

$$\hat{\mathbf{c}}_v = \frac{1}{D}\, \mathbf{E}^T \cdot \Pi_v^{-1}(\mathbf{x}), \tag{4}$$

where $\hat{\mathbf{c}}_v \in \mathbb{R}^N$. The estimated index $\hat{q}_v$ is the one with the highest value in $\hat{\mathbf{c}}_v$:
$$\hat{q}_v = \underset{q=1,\ldots,N}{\operatorname{argmax}}\ \hat{\mathbf{c}}_v[q]. \tag{5}$$

Increasing the number of superposed vectors yields a higher information density; therefore, HD superposition can be used for compression. For example, it has been successfully applied for compressing model weights in deep neural networks [41]. However, the number of correct retrievals from highly compressed representations is limited by the number of superposed vectors V; an increasing V yields a lower signal-to-interference ratio (SIR) for retrieval.

The superposition $\mathbf{x}$ has integer-valued elements instead of bipolar elements; it can be bipolarized by setting negative elements to "−1" and positive to "+1". If the number of superposed vectors is even, ties at zero are broken at random, or by simply adding another deterministic (random) vector to the superposition before bipolarizing (see [38]). Even though bipolarizing the superposition is common practice in HD computing, it heavily affects both the number of retrievable vectors and the noise resiliency in HD superposition.

**2.2 Hyperdimensional modulation**

Hyperdimensional modulation (HDM) [33] superposes complex-valued vectors using the rows of the discrete Fourier transform (DFT) matrix as entries in the IM. The mapping is realized by transforming the sparse vector $\mathbf{c}_v$ with a DFT, whereas the readout matrix corresponds to the inverse DFT, which can be efficiently implemented with FFT and inverse FFT. Additional information is encoded by having multiple non-zero values in $\mathbf{c}_v$, and modulating the non-zero values with phase-shift keying. Decoding is performed in multiple iterations, subtracting the last iteration's estimation from the superposition for the next estimation. An additional cyclic redundancy check (CRC) validates the estimation's correctness; if the CRC fails, the decoder searches through a list of most probable alternative solutions correcting single, double, or triple errors. This yields an SNR gain of 1.75 dB at a BER of 10⁻⁵. Overall, the presented decoding resulted in similar SNR gains compared to LDPC and Polar codes [33].

**3 Integer‑HDM**

This section is the first main contribution of the paper: we introduce Integer-HDM, a new modulation scheme that transmits the superposition of bipolar vectors, depicted in Fig. 2. We present a novel encoding scheme that effectively increases the IM size (i.e., the dictionary) while keeping the memory footprint small, which allows us to achieve a high code throughput even on resource-limited devices. An iterative unit-feedback decoder decomposes the transmitted vector to get the estimated bit-string. Our decoder is inspired by Complex-HDM [33], but instead of requiring FFT operations it relies only on efficient AM searches. We experimentally evaluate the SNR gain in an AWGN channel and show that our novel encoding achieves the same SNR gain as Complex-HDM.

**3.1 Memory‑efficient encoding**

We start with the description of a memory-efficient encoding of a binary input string $\mathbf{u}$ of length k to a D-dimensional integer vector, defined as

$$\Phi : \{0, 1\}^k \longrightarrow \mathbb{Z}^D. \tag{6}$$

We define the throughput r of the code in bits per channel usage

$$r = \frac{k}{D}. \tag{7}$$

The ultimate goal is to find an encoding function $\Phi$ with a high code throughput while ensuring that the encoded vector is robust against errors occurring during transmission.

The left side of Fig. 2 illustrates the proposed encoding scheme. First, the input string $\mathbf{u}$ is divided into V
equally sized sub-strings ( u1, u2, ..., uV ). Each sub-string
is encoded separately with its corresponding encoding
module. In the following we will explain the encoding
of **u1, and then the generalization to all other encoding**
modules.
First, the bit-to-index (b2i) block maps the bit-string u1
of length _k/V to the IM index q1, rotation index r1, and_
sign index s1 . For generating the indexes, we split the
bit-string into three slices that are mapped to their corresponding integer values. The resulting indexes are then
further used for encoding information in the HD space.
The IM builds the central part of the encoding and
serves as a random but fixed dictionary. It stores N bipolar vectors of dimension D, where the entries are drawn
randomly with an equal number of “ +1 ” and “ −1 ”. The
IM index q1 is used to read out the corresponding vector
in the IM. The number of information bits kq which can
be encoded with an IM of size N is
$$k_q = \log_2(N). \tag{8}$$
The IM grows exponentially with the number of bits we
want to encode. As a consequence, the code throughput
of tightly resource-limited devices would be restricted.
To relax the memory requirements, we extend the encoding by a rotation encoding $\rho^{r_1}$, which applies a cyclic rotation by $r_1$ positions to the vector. A cyclic rotation is an
alternative, hardware-friendly random permutation. The
shifted result is quasi-orthogonal to its input vector. The
number of available shifts is limited to the number of
dimensions D, resulting in a maximum of
$$k_r = \log_2(D) \tag{9}$$
additionally encoded bits. The rotation encoding *virtually* increases the IM size by a factor of D, without requiring
any additional memory. In the next step, the vector is
multiplied with the sign modulator $s_1 \in \{-1, 1\}$. This further gives

$$k_s = 1 \tag{10}$$

bit.
We illustrate the encoding with an example assuming dimension D = 64 and an IM size of N = 8. The bit-string $\mathbf{u}_1$ contains $k_q + k_r + k_s = 3 + 6 + 1 = 10$ bits, e.g., $\mathbf{u}_1 = (0100100010)$. The bit-to-index block splits the bit-string into three slices (010|010001|0) and maps them to the corresponding integer indexes $q_1 = 2$, $r_1 = 17$, and $s_1 = (-1)$. Finally, the encoded vector is

$$\mathbf{x}_1' = s_1 \cdot \rho^{r_1}\!\left(\mathbf{e}_{q_1}\right) = (-1) \cdot \rho^{17}(\mathbf{e}_2). \tag{11}$$
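The example can be checked in a few lines (NumPy sketch; the random IM and the 0 → −1 sign mapping follow the example above and are otherwise illustrative assumptions):

```python
import numpy as np

D, N = 64, 8
kq, kr, ks = int(np.log2(N)), int(np.log2(D)), 1   # 3 + 6 + 1 = 10 bits per block

def bit_to_index(u):
    """Split one 10-bit sub-string into (IM index, rotation, sign)."""
    q = int(u[:kq], 2)
    r = int(u[kq:kq + kr], 2)
    s = -1 if u[-1] == '0' else 1    # sign mapping assumed from the example
    return q, r, s

assert bit_to_index("0100100010") == (2, 17, -1)   # matches Eq. (11)

# Encoded block: sign times a cyclic rotation of the dictionary entry
rng = np.random.default_rng(0)
E = rng.choice([-1, 1], size=(D, N))
q, r, s = bit_to_index("0100100010")
x1 = s * np.roll(E[:, q], r)

# Code throughput of the V = 7, N = D = 512 configuration used in Sect. 3.3
V, D2, N2 = 7, 512, 512
r_code = V * (np.log2(N2) + np.log2(D2) + 1) / D2
assert abs(r_code - 0.2598) < 1e-3
```

The last two lines reproduce the r = 0.2598 throughput quoted later for the experimental configuration.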
The described encoding steps are identical among different encoding blocks; the same IM is shared among all blocks. In the last step, the encoded vectors are permuted with a unique, random permutation $\Pi_v$ per encoding block and superposed, resulting in the final vector $\mathbf{x}$. The final throughput of the code is

$$r = \frac{V\,(k_q + k_r + k_s)}{D} \tag{12}$$

$$= \frac{V\left(\log_2(N) + \log_2(D) + 1\right)}{D}. \tag{13}$$

**3.2 FFT‑free decoding based on associative memory**

We present an iterative unit-feedback decoder, depicted in Fig. 2, which decomposes the transmitted vector $\mathbf{y}$ to estimate the bit-string $\hat{\mathbf{u}}$. It consists of an estimation and a feedback stage. In the estimation stage, the indexes $\hat{q}_v$, $\hat{r}_v$, and $\hat{s}_v$ are guessed for every block v individually. The estimated indexes are encoded to the corresponding vector $\hat{\mathbf{x}}_v$ using the same encoding as described in the previous part. To perform the estimation in the next iteration, the encoded vectors $\hat{\mathbf{x}}_v$ are subtracted from the input vector $\mathbf{y}$, removing the interference from other vectors in the superposition.

The estimation in block v starts with computing the inner products between the inversely permuted input vector and all elements in the associative memory (AM):

$$\hat{\mathbf{c}}_v[q, r] = \frac{1}{D} \left\langle \rho^{-r}\!\left(\Pi_v^{-1}\!\left(\hat{\mathbf{y}}_v\right)\right), \mathbf{e}_q \right\rangle, \tag{14}$$

where $\Pi_v^{-1}(\cdot)$ is the inverse permutation of block v and $\rho^{-r}$ the cyclic shift by $(-r)$ elements. The estimated item and rotation indexes are those that maximize the absolute value of the inner product:

$$\hat{q}_v, \hat{r}_v = \underset{\substack{q=1,\ldots,N \\ r=1,\ldots,D}}{\operatorname{argmax}} \left| \hat{\mathbf{c}}_v[q, r] \right|, \tag{15}$$

and the estimated sign is the sign of the maximizing inner product:

$$\hat{s}_v = \operatorname{sign}\!\left(\hat{\mathbf{c}}_v[\hat{q}_v, \hat{r}_v]\right). \tag{16}$$

After encoding the estimated indexes to the vectors $\hat{\mathbf{x}}_v$, the input vector is cleaned up for the estimation in the next iteration $i + 1$:

$$\hat{\mathbf{y}}_v^{(i+1)} = \mathbf{y} - \sum_{j \neq v} \hat{\mathbf{x}}_j^{(i)} \tag{17}$$

$$= \mathbf{y} - \left( \sum_{j=1}^{V} \hat{\mathbf{x}}_j^{(i)} \right) + \hat{\mathbf{x}}_v^{(i)}. \tag{18}$$

In the first iteration, all feedback vectors are initialized to zero, i.e., $\hat{\mathbf{x}}_v^{(0)} = \mathbf{0}$. The decoding is repeated until all
estimated indexes converge, or until a maximum number
of iterations is reached without convergence. Finally, the
estimated indexes are mapped to the bit-string $\hat{\mathbf{u}}$.
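Putting the estimation and feedback stages together, the following compact NumPy sketch mirrors Eqs. (14)–(18); the block configuration, seed, and iteration cap are illustrative, and the rotation search is a brute-force stand-in for an optimized AM:

```python
import numpy as np

rng = np.random.default_rng(1)
D, N, V = 256, 16, 4
E = rng.choice([-1, 1], size=(D, N))            # shared item memory
perms = [rng.permutation(D) for _ in range(V)]
invs = [np.argsort(p) for p in perms]

def enc_block(q, r, s):
    return s * np.roll(E[:, q], r)              # s * rho^r(e_q), cf. Eq. (11)

def encode(blocks):
    return sum(enc_block(*b)[perms[v]] for v, b in enumerate(blocks))

def estimate(yv):
    """Eqs. (14)-(16): best (q, r, s) over all items and rotations."""
    rolled = np.stack([np.roll(yv, -r) for r in range(D)])  # rho^{-r}(yv), all r
    scores = rolled @ E / D                                 # scores[r, q]
    r, q = np.unravel_index(np.argmax(np.abs(scores)), scores.shape)
    return int(q), int(r), int(np.sign(scores[r, q]))

def decode(y, max_iter=5):
    est = [None] * V
    xhat = [np.zeros(D)] * V                    # feedback starts at zero
    for _ in range(max_iter):
        total = sum(xhat)
        new = [estimate((y - total + xhat[v])[invs[v]])     # Eq. (18) cleanup
               for v in range(V)]
        if new == est:                          # all index estimates converged
            break
        est = new
        xhat = [enc_block(*b)[perms[v]] for v, b in enumerate(est)]
    return est

blocks = [(3, 10, 1), (7, 100, -1), (0, 5, 1), (12, 200, -1)]
y = encode(blocks) + rng.normal(0.0, 0.5, size=D)   # mild AWGN
assert decode(y) == blocks
```

The Jacobi-style sweep subtracts the previous iteration's estimates of all other blocks, exactly the cleanup of Eq. (18), and terminates once the index estimates stop changing.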
The computations in the proposed unit-feedback
decoder are dominated by the AM search depicted in
Eq. (14). These AM searches allow for a high degree of
parallelism and only require additions and subtractions,
thanks to the bipolar representation of the dictionary.
Moreover, the search can be efficiently deployed to a
computational memory [42], such as phase-change memory, where the inner product is computed in constant
time at O(1) in the analog domain leveraging Kirchhoff’s
law. When applied to a language classification problem,
performing the AM search in the phase-change memory
has been shown to be over 100× more energy efficient than in
an optimized digital implementation [30].
**3.3 Experimental results**
This section evaluates the BER vs. SNR performance for
Integer-HDM and other state-of-the-art (SoA) codes. We
assume an AWGN channel with the received signal in the
baseband y being modeled as:
$$\mathbf{y} = \mathbf{x} + \mathbf{n}, \tag{19}$$

where $\mathbf{x}$ is the sent vector containing V accumulated vectors, and $\mathbf{n}$ is AWGN with $\mathbf{n} \sim \mathcal{N}\!\left(0, \frac{V}{\mathrm{SNR}}\, \mathbf{I}_D\right)$, with SNR the signal-to-noise ratio. We define the energy per information bit over noise floor as $E_b/N_0 := \mathrm{SNR}/(2r)$.
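A small sketch of this channel model (NumPy; reading the noise covariance of Eq. (19) as (V/SNR)·I_D is the assumption made here, and r = 0.2598 is the configuration used later in the comparison):

```python
import numpy as np

rng = np.random.default_rng(0)
D, V = 512, 7
x = sum(rng.choice([-1, 1], size=D) for _ in range(V))  # V superposed bipolar vectors

snr_db = 0.0
snr = 10.0 ** (snr_db / 10.0)
n = rng.normal(0.0, np.sqrt(V / snr), size=D)           # n ~ N(0, (V/SNR) I_D)
y = x + n                                               # Eq. (19)

r = 0.2598                      # throughput of the V = 7, N = 512 configuration
eb_n0 = snr / (2 * r)           # energy per information bit over noise floor
```

At 0 dB SNR this gives an E_b/N_0 of roughly 1.92, i.e., about 2.8 dB.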
Figure 3a shows the BER vs. SNR behavior of Integer-HDM when varying the number of superposed vectors V
and the IM size N while fixing the dimension to D = 512 .
Transmitting a single vector ( V = 1 ) shows the highest
noise resiliency but results in the lowest code throughput
( r = 0.031–0.041 for N = 64–2048 ). Integer-HDM
allows us to flexibly increase the number of superposed
vectors resulting in a linear increase in code throughput;
e.g., superposing nine vectors achieves the highest coding rate of r = 0.37 . Transmitting more vectors at the
same time reduces the self-induced SIR; hence, a higher
SNR is required to achieve the same BER.
The number of decoding iterations of the same code
configurations is shown in Fig. 3b. Iterative decoding is
not helpful when transmitting only one vector ( V = 1 )
as no denoising of other superposed vectors is needed;
thus, decoding is terminated after the first iteration.
Conversely, the number of decoding iterations depends
heavily on the number of superposed vectors, the IM
size, and the SNR, when superposing more than one
vector. However, the number of iterations converges
towards two when increasing the SNR. More importantly, a low number of iterations is observed in low
BER regimes (where the code is eventually operating);
e.g., Integer-HDM in configuration V = 7 and N = 512
requires ≈ 0 dB at BER = 10⁻⁴ and takes only 2.44
decoding iterations at the same SNR.
Next, we compare Integer-HDM to Complex-HDM
[33] and a Polar code. Like in Complex-HDM [33], we
evaluate the codes in short block lengths ( D = 512 ) at
a throughput of r = 1/4 . Complex-HDM sends vectors
with complex-valued elements of block length D = 256
at a throughput of rc = 1/2 bits per _complex channel_
use, which is equivalent to our setting with r = 1/4 bits
per real channel use and a block length of D = 512.
The integer codes are configured to V = 7 and
N = 512, yielding a throughput of r = 0.2598 . A rate
1/4 Polar code at equal block length 512 serves as a second baseline. We use it according to the downlink configuration specified by 3GPP for 5G New Radio (NR)
[43]: the information bits are appended by 24 CRC bits
and encoded by the Polar encoding with rate-matching. The encoded bits are transmitted with BPSK. For
decoding the soft symbols, we use CRC-aided successive cancellation list decoding with list length L = 4
[44]. As the L = 4 list decoder utilizes a part of the
information in the CRC bits, we count two of these
towards the parity bits. We consider the remaining 22
bits as effective information bits for the comparison,
as block errors are not detected in the HDM case. As
a result, the effective information bits comprise 106 information bits plus 22 CRC bits for the Polar code.
Figure 4 shows the waterfall diagram of all considered
codes. Our proposed Integer-HDM with unit-feedback
decoder performs on par with Complex-HDM [33]
without needing CRC-aided decoding or FFT operations. Moreover, it requires fewer decoding iterations
than Complex-HDM (2.44 vs. 2.9 @0 dB SNR). The
rate 1/4 Polar code outperforms the HD-based codes: it requires 1.2 dB less SNR at a BER of 10⁻⁶. However, this comes at the cost of a higher number of decoding operations: Polar codes have been shown to require 1.2× more decoding operations than Complex-HDM (336 vs. 280 operations per information bit) [33]. The high decoding
complexity has an impact on the overall power consumption of the system that includes encoding, transmission,
and decoding [45]. Complex-HDM has already been
shown to require fewer decoding operations than Polar
codes. We further reduce the decoding complexity by lowering the number of decoding iterations and replacing the FFT-based decoding with cheap AM searches, which can be efficiently implemented in the analog domain [30].
**4 Soft‑feedback decoding**
This section proposes enhancements to the decoder,
introducing a new soft-feedback strategy and quantization schemes for more efficient decoding. Figure 5 depicts
the soft-feedback decoding mechanism that scales the
currently estimated vector according to the confidence of
the previous estimation. Estimations with low confidence
are attenuated in the feedback, which results in a damped
behavior. We show that the new soft-feedback decoding
increases the number of correct vectors retrieved in both
the AWGN and noise-free case.
**4.1 Soft‑feedback decoding**
The feedback stage reconstructs the estimated vector
to remove the noise from the superposition in order to
increase the SIR. However, it is not clear in advance how
much the past estimations should influence the future
ones. The unit-feedback strategy, used both in Complex-HDM and our standard Integer-HDM, weighs all estimations equally with factor one, which can have limitations.
For example, if the number of wrong estimations outweighs the correct ones, the feedback decreases the SIR
instead of increasing it. Moreover, we observed oscillatory behavior in the unit-feedback decoder, illustrated in
Fig. 6.
**Fig. 4** Bit error rate (BER) of considered codes with k = 128
information bits and D = 512 real-valued transmission symbols
operation in the AM search is the inner product between the query vector ỹ and all vectors in the dictionary e_q ∈ {−1, 1}^D. We quantize the query vector before the AM search by mapping it to the nearest neighbor from the set of values in the original, noise-free case:

Q(y[i], V′) = argmin_{l ∈ {−V′, −V′+2, ..., V′−2, V′}} ‖y[i] − l‖². (21)
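The nearest-neighbor mapping of (21) can be sketched in a few lines of NumPy; the function name and the example inputs are ours, not the paper's:

```python
import numpy as np

def quantize_readout(y, v_prime):
    """Map each element of the query vector to the nearest value in
    {-V', -V'+2, ..., V'-2, V'}, the levels of the noise-free
    superposition, as in Eq. (21)."""
    levels = np.arange(-v_prime, v_prime + 1, 2)
    idx = np.argmin(np.abs(y[:, None] - levels[None, :]), axis=1)
    return levels[idx]

y = np.array([-4.2, -0.3, 0.8, 6.9])
print(quantize_readout(y, 7))   # snapped to odd levels in {-7, ..., 7}
print(quantize_readout(y, 1))   # V' = 1 reduces to bipolarization
```

With V′ = 1 the output is bipolar, so the subsequent inner product becomes a Hamming similarity computation.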
**Fig. 6** Confidence cˆ during iterative decoding of V = 3 vectors using
either unit-feedback or soft-feedback. The correct estimations are
marked in green and the incorrect in red
To this end, we propose a soft-feedback scaling function, which attenuates estimations with low confidence:

x̃_v = min(ĉ_v, 1) · x̂_v, (20)

where ĉ_v := |ĉ_v[q̂_v, r̂_v]| is the highest absolute inner product, interpreted as the confidence of the previous estimation. As the inner product can exceed one, we limit the feedback scaling to be less than or equal to one. The example in Fig. 6 illustrates the soft-feedback scaling's effectiveness: the oscillations are no longer present, and we converge to the correct solution.
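The scaling step of (20) is a one-liner; a minimal sketch, with the function name and sample values chosen by us:

```python
import numpy as np

def soft_feedback(x_hat, c_hat):
    """Attenuate the estimated vector by the confidence of its estimate,
    with the scale clipped to at most one (a sketch of Eq. (20))."""
    return min(abs(c_hat), 1.0) * x_hat

x_hat = np.array([1.0, -1.0, 1.0, 1.0])
print(soft_feedback(x_hat, 0.4))   # low confidence: feedback damped
print(soft_feedback(x_hat, 1.7))   # confidence above one: clipped to unit feedback
```

Unit feedback is recovered as the special case where the scale is always one.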
**4.2 Quantized Soft‑feedback decoding**
For most FEC codes, the decoding complexity is significantly higher than the encoding complexity. This also holds for our proposed Integer-HDM; therefore, any reduction of the computational requirements for decoding is desirable. We start by quantizing the decoder to fixed-point, i.e., we quantize every value in the decoder to a fixed-point representation with m magnitude (integer) bits and q fractional bits, denoted as "fixed-point m.q". The quantization mainly affects the input vector y as well as the damped feedback vector x̃. The range of expected values of the input vector depends on the number of added vectors V. For example, with V = 3, we expect values in {−3, −1, 1, 3}, which can be represented by m = 3 integer bits. If we reduced the number of integer bits, high values would get clipped, which is not desirable in the decoding process. The feedback scaling takes values in [0, 1]; a quantization to q = 1 fractional bit and arbitrary m yields scaling factors in {0, 0.5, 1}.
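An "m.q" fixed-point quantizer can be sketched as follows; we assume one of the m integer bits acts as the sign (a two's-complement-style range), which is our reading of the text rather than a stated detail:

```python
import numpy as np

def to_fixed_point(x, m, q):
    """Quantize to "fixed-point m.q": round to a multiple of 2**-q and
    clip to the representable range. One of the m integer bits is
    assumed to act as the sign bit."""
    step = 2.0 ** -q
    lo, hi = -2.0 ** (m - 1), 2.0 ** (m - 1) - step
    return np.clip(np.round(x / step) * step, lo, hi)

print(to_fixed_point(np.array([0.37, 0.8, 3.9, -5.0]), 4, 1))
# Feedback scales in [0, 1] with q = 1 collapse to {0, 0.5, 1}:
print(to_fixed_point(np.array([0.2, 0.6, 0.9]), 2, 1))
```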
In addition to the quantization of the general decoder
to fixed-point arithmetic, we further reduce the complexity by quantizing the AM search. The dominating
Figure 7 shows the histograms of the elements in an encoded vector with dimension D = 512 and V = 7. The elements in x take values in {−7, −5, ..., 5, 7}, where values with large amplitude are less probable than small values close to 0. We then add AWGN (0 dB SNR) to the encoded vector, yielding y. In the readout quantization, we map the values to the nearest neighbor of the values in the original, noise-free case. Moreover, we limit the values to V′ due to the low probability of values with large amplitudes. In the extreme case, we set V′ = 1, which reduces the inner product to a Hamming similarity computation. If V′ > 1, the inner product can be computed with integer or binary arithmetic, mapping the values to a thermometer code.
**4.3 MMSE‑optimized readout**
We consider an alternative AM readout matrix to E, determined by minimizing the mean-squared error between the estimate ĉ_v and the ground-truth vector c_v [14]:

ĉ_v = F_v^T · x, (22)

where we assume no sign and rotation encoding for simplicity. The minimum mean square error (MMSE) estimator can be found by solving a linear regression problem, given a training set of R samples with ground-truth symbol vectors c_v and their encoded HD superposition x. The MMSE readout matrix F can be found with stochastic gradient descent (SGD), minimizing the MSE between ground-truth symbol vectors c_v and estimated symbol vectors ĉ_v on the training set. Note that we neither have to inversely permute the superposition x nor require knowledge of the underlying dictionary; the readout matrix is learned solely from empirical data. However, a separate readout matrix F_v is needed for every superposed vector, which increases the memory footprint, especially for large V.
The MMSE readout has been shown to increase the
number of superposed vectors that can be successfully
retrieved with high probability pc [14], compared to
the standard AM search. Consequently, this results in a
higher operational capacity of the superposition which is
defined as the number of bits/dimension:
Capacity(p_c) = V/D · ( p_c log2(p_c N) + (1 − p_c) log2( (N/(N − 1)) · (1 − p_c) ) ). (23)
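Equation (23) is straightforward to evaluate numerically; a small sketch (function name and example operating point are ours):

```python
from math import log2

def capacity(p_c, V, D, N):
    """Operational capacity in bits/dimension, Eq. (23): V vectors from
    an item memory of size N superposed in D dimensions, each retrieved
    correctly with probability p_c."""
    return (V / D) * (p_c * log2(p_c * N)
                      + (1 - p_c) * log2(N / (N - 1) * (1 - p_c)))

# e.g., near-perfect retrieval of V = 250 vectors with N = 5, D = 500:
print(capacity(0.999, 250, 500, 5))   # ≈ 1.15 bits/dimension
```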
**4.4 Experimental results**
We compare our novel soft-feedback decoder in AWGN
simulation using both full-precision floating-point and
quantized decoder. Moreover, we evaluate the accuracy
of the correct retrieval of HD superpositions in the noise-free case using different decoding strategies.
**Fig. 8** AM readout quantization of the 1/4 rate HDM soft-feedback decoder with D = 512, N = 512, and V = 7
**_4.4.1 Soft‑feedback decoding_**
First, we compare the soft-feedback with the unit-feedback decoder used in Integer-HDM and Complex-HDM, shown in Fig. 4. The Integer-HDM code is in the same configuration as in the previous experiment (i.e., D = 512, N = 512, and V = 7). The soft-feedback decoder increases the SNR gain by 0.2 dB compared to the unit-feedback decoder. As a result, Integer-HDM with soft-feedback reduces the SNR gap to the 1/4 Polar code (0.7 dB gap at BER = 10⁻⁴ and 0.8 dB at BER = 10⁻⁵).
**_4.4.2 Quantized Soft‑feedback decoding_**
We analyze the performance of the soft-feedback decoder
when quantizing specific parts of the decoder, described
in Sect. 4.2. We start with the quantization of the AM
readout, i.e., the values in the query vectors y˜ fed to the
AM readout. The results in Fig. 8 illustrate that when quantizing the vector elements to bipolar values (i.e., {−1, 1} at V′ = 1), the code performance degrades significantly compared to the full-precision AM readout. A similar degradation was observed when quantizing the encoded vector x to bipolar values before sending it over the channel. When allowing more levels (V′ = 7), however, the code performance is re-established.
When quantizing the entire decoder to fixed-point arithmetic (see Fig. 9), one fractional bit and four integer bits are sufficient to achieve the same performance as
the decoder in floating-point. In addition to the desired
reduction in decoding complexity, this result also gives valuable insight into the soft-feedback decoder: a feedback scale taking values in c ∈ {0, 0.5, 1} is sufficient. This yields three options for feedback: take the estimation fully into account (c = 1), ignore it (c = 0), or partly use it (c = 0.5).
**_4.4.3 Recall from noise‑free superpositions_**
Finally, we experimentally evaluate the decoding performance of the presented feedback decoder and different readout matrices (standard AM and MMSE) in the
noise-free case. We measure the probability of correct
retrieval pc and derive the operational capacity as in (23).
For comparison, we use the same configurations as in
[14]: we fix the dimension D = 500 and vary the IM size
N ∈{5, 15, 100} and the number of superposed vectors
V ∈{1, 2, ..., 300} . No sign and rotation encoding are used
in these experiments.
**Fig. 9** Quantization of the 1/4 rate HDM soft-feedback decoder with D = 512, N = 512, and V = 7
Figure 10 shows the accuracy and the resulting capacity for the decoder without feedback, with unit-feedback, and with soft-feedback. Moreover, we conducted experiments
with the MMSE estimator with and without feedback.
The MMSE decoder performed similarly with unit and
soft-feedback; therefore, we only show unit-feedback
results.
Considering the estimator’s accuracy without feedback in small IM sizes ( N = 5 ), the MMSE readout can
decode a much larger number of superposed vectors with
100% accuracy, compared to the standard AM readout
( V = 134 vs. V = 12 ). However, the advantage of MMSE
over AM readout vanishes when increasing the IM sizes
( N = 100).
The feedback decoder significantly increases the number of correctly retrieved vectors in small IM size when
using both the MMSE and AM readout ( V = 250 and
V = 100 for AM soft-feedback and MMSE unit-feedback, respectively). Moreover, the soft-feedback further
increases the accuracy compared to unit-feedback, especially in larger IM sizes ( N = 100 ). Generally, the feedback decoder moves the corner point of 100% correct
recoveries to larger V; however, the accuracy descent is much steeper compared to non-iterative estimation. The later but steeper descent of the feedback decoder shows that the denoising is only effective up to a certain SIR (i.e., up to a certain number of added vectors V). If the SIR gets too low, most of the estimations are wrong, and the feedback adds even more interference.
Considering the capacity, MMSE unit-feedback significantly improves the capacity at small dictionary sizes (N = 5) compared to the current SoA MMSE readout (1.2 vs. 0.7 bits/dimension). This capacity cannot be achieved at larger dictionary sizes. In contrast, the AM readout with unit or soft-feedback keeps the maximum capacity constant (≈ 0.6 bits/dimension), with the soft-feedback achieving slightly higher capacity than the unit-feedback.
**5 Case study: hybrid near‑channel classification**
**and data transmission in EMG‑based gesture**
**recognition**
This section extends the application of pure data transmission with a classification task in EMG-based gesture
recognition [22], illustrated in Fig. 11. Our hybrid system
provides two modes: (1) a classification mode, where the
received bipolar vector is used to estimate the gesture
using an AM search; (2) a data transmission mode, where
the quantized features are reconstructed at the receiver
for further analysis. In related work, alternative hybrid
approaches compress EMG data using rakeness-based
compressed sensing [46] or with a stacked autoencoder
[47], before sending the data to the receiver. The received
data can be reconstructed or classified using an artificial neural network (ANN). However, these representations are sensitive to noise when used in connection with
ANNs [48], while the HD representation in our approach
is naturally robust against noise, as we will experimentally show in this section.
**5.1 flexEMG dataset**
We use the dataset from a study in [22], which contains
recordings of three healthy, male subjects. Each subject
participated in three sessions recorded on three different
days. We only use sessions one and three, which contain a
separate training set and test set. The subjects performed
four different gestures (fist, raise, lower, open) plus the
rest class in ten runs, yielding a total of 10 · 5 = 50 trials
per training and test set. The data were acquired with 64
electrodes, uniformly distributed on a flexible 16 × 4 grid
of size 29.3 cm ×8.2 cm. Finally, the data were sampled at
1 kS/s and sent to a base-station over BLE.
**5.2 Hybrid encoding**
**_5.2.1 Classification_**
We propose a spatiotemporal encoding, which differs
from [22] by exclusively using bipolar MAP operations
instead of multiplicative mappings. First, the data of
every EMG channel are pre-processed in the same way: passed through a digital notch filter with a 60 Hz stopband and a Q-factor of 50, an 8th-order Butterworth bandpass filter (1–200 Hz), an absolute-value computation, and a moving-average filter with 100 taps, and then downsampled by 100×, yielding ten samples per second. Moreover, the samples are normalized with the 95% quantile of the training data per channel, which results in features f_ch^t in [0, 1] with high probability (i.e., p = 0.95 on the training set).
For mapping features to HD vectors, we quantize them
to L = 128 levels and map them to a corresponding value
vector stored in a continuous IM (CiM) [23]. The CiM is
shared among all channels and is constructed as follows.
First, a bipolar seed vector is drawn randomly, which corresponds to level l = 1 . For level l = 2, we invert D/(2L)
values at random positions. For the remaining levels, we
continue inverting an increasing number of bits until we
have inverted _D/2 elements for level l = L, which yields_
orthogonal vectors for level l = 1 and l = L . This mapping is fully bipolar and more hardware-friendly than the
multiplicative mapping used in [22], which relies on multiplicative floating-point operations.
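The CiM construction described above can be sketched as follows; the exact flip schedule between levels 2 and L is our interpolation of the description (flipping a growing prefix of a fixed random position order):

```python
import numpy as np

def build_cim(D, L, rng):
    """Continuous item memory (CiM) sketch: a random bipolar seed for
    level 1; each following level flips a growing prefix of a fixed
    random position order, reaching D/2 flips at level L so that the
    first and last level vectors are orthogonal."""
    seed = rng.choice([-1, 1], size=D)
    order = rng.permutation(D)
    cim = np.tile(seed, (L, 1))
    for l in range(L):
        n_flips = round(l * (D / 2) / (L - 1))
        cim[l, order[:n_flips]] *= -1
    return cim

rng = np.random.default_rng(0)
cim = build_cim(512, 128, rng)
print(cim[0] @ cim[-1])   # 0: levels 1 and L are orthogonal
print(cim[0] @ cim[1])    # 508: neighboring levels remain similar
```

The gradual flips make the mapping distance-preserving: nearby quantization levels map to nearby HD vectors.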
The embedded value vector is circularly permuted,
depending on the channel index, and superposed resulting in the compressed representation x[t] . The encoding is
completed by bipolarizing **x[t] and building a 5-gram out**
of five consecutive vectors with random permutations
and binding (∗). Overall, the encoding achieves a throughput of

r = (64 channels · 7 bits · 5-gram) / D, (24)

which can, depending on the dimension of the HD vector, result in compression (e.g., r = 4.375 @ D = 512).
The encoded vector is modulated (e.g., with BPSK)
and sent to the receiver over a wireless channel. At the
receiver, the demodulated signal **y is finally classified**
with an AM search. The AM stores a prototype vector
per class. Each prototype is learned by accumulating all
encoded vectors of the training samples for each class
and finally bipolarizing the vectors. For classification, the
query vector y is compared to all prototype vectors using
the AM readout. The class of the best-matching prototype is the estimated label [23].
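The spatial encoding and AM classification pipeline can be sketched end-to-end on synthetic data. All names, sizes, and the data are ours; the level vectors are simplified to i.i.d. random vectors instead of the flip-based CiM, and no temporal 5-gram is formed:

```python
import numpy as np

rng = np.random.default_rng(1)
D, L, n_ch, n_cls = 1024, 128, 64, 5      # toy sizes; data is synthetic

# Shared level vectors (simplified; the paper builds a CiM by bit flips)
cim = rng.choice([-1, 1], size=(L, D))

def encode_spatial(features):
    """Quantize per-channel features in [0, 1) to L levels, look up the
    level vector, cyclically shift by channel index, superpose, and
    bipolarize -- a sketch of the spatial encoder."""
    q = np.clip((features * L).astype(int), 0, L - 1)
    x = sum(np.roll(cim[q[ch]], ch) for ch in range(n_ch))
    return np.sign(x) + (x == 0)          # break ties in favor of +1

# AM prototypes: accumulate encoded training vectors per class, bipolarize
train = {c: [rng.random(n_ch) for _ in range(20)] for c in range(n_cls)}
protos = np.stack([np.sign(sum(encode_spatial(f) for f in train[c]))
                   for c in range(n_cls)])

query = encode_spatial(train[3][0])       # classify a (seen) sample
print("predicted class:", int(np.argmax(protos @ query)))
```

Because the synthetic features are random, this only demonstrates the mechanics of prototype matching, not gesture-recognition accuracy.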
**_5.2.2 Data transmission_**
The availability of the underlying data, which led to a
certain decision or classification, can be helpful in many
applications, e.g., allowing interpretability of the model
or analysis of the data by a medical specialist. To address
this demand, we propose an additional data transmission
mode, where the spatially encoded vector x[t] is sent to the
receiver and decoded with an iterative HDM decoder.
This comes with minimal additional requirements at
the sensing node, compared to the standard approaches
where features are encoded with separate source and
channel coding.
In contrast to the quasi-orthogonal IM used for encoding in Sect. 3, the CiM is non-orthogonal, i.e., not every quantization level q_i has an orthogonal vector. This makes exact decoding of the features difficult; however, the distance-preserving CiM mapping reduces the effective error in the reconstruction. For example, an estimation of e_{q+1} instead of e_q translates to an error of only 1/L.
**Table 1** Classification accuracy (%) on 5-class EMG-based gesture recognition task using 64-channel flexEMG data [22]

| Subject | Session | SVMᵃ [49] (Float) | HDᵃ [22] (Float) | HD (ours) (Bipolar) |
|---|---|---|---|---|
| 1 | 1 | 98.13 | 99.60 | 97.20 |
| 1 | 3 | 100.00 | 99.20 | 98.20 |
| 2 | 1 | 99.53 | 98.33 | 98.53 |
| 2 | 3 | 96.47 | 97.53 | 96.07 |
| 3 | 1 | 99.60 | 90.40 | 92.27 |
| 3 | 3 | 83.07 | 90.87 | 82.53 |
| Average | | 96.13 | 95.99 | 94.13 |

We compare a linear SVM, an HD classifier with multiplicative embedding, and our HD classifier with bipolar CiM embedding. Both HD classifiers operate at dimension D = 10,000

ᵃ Reproduced
**5.3 Experimental results**
**_5.3.1 Classification_**
We assess the classification performance in the noise-free, single-node AWGN, and multi-node interference cases. The classification accuracy is defined as the ratio
between the number of correct estimations and the total
number of estimations, given that the classifier makes a
new estimation every 100 ms. All models were implemented and tested in MATLAB 2019b.
Table 1 shows the classification accuracy in the noise-free case. A support vector machine (SVM) with linear kernel and cost parameter C = 500 on pre-processed, flattened features in float-32 precision with dimension 320 (64 channels × 5-gram) [49], as well as an HD classifier with multiplicative mapping [22], serve as baselines. Both HD classifiers operate at a dimension of D = 10,000. The SVM marginally outperforms the HD classifiers by 0.14% and 2%; however, in contrast to the HD classifiers, the SVM does not support online updates of the model, which is crucial for practical deployment of EMG applications [49]. The bipolar feature embedding using the CiM instead of the float-based multiplicative mapping in the HD classification yields only a small accuracy degradation (95.99% vs. 94.13%).
Next, we evaluate the classification accuracy when the
query vector was exposed to noise:
**y = x + n,** (25)
**Fig. 12** Classification accuracy (%) in 5-class gesture recognition on
64-channel EMG data. The transmitted HD vector is interfered with
AWGN
where x ∈ {−1, 1}^D is the encoded vector and n ∼ N(0, (1/SNR) · I_D) is AWGN. Figure 12 shows the average classification accuracy for different vector dimensions, depending on the SNR. In the high-SNR regime (SNR = 10 dB), a reduction in the dimension results in a slight accuracy degradation (e.g., 93.91% @ D = 8192 vs. 86.32% @ D = 512). When decreasing the SNR, we see a graceful accuracy degradation with superior performance at higher dimensions: at D = 4096, the absolute accuracy loss compared to the noise-free case is less than 4% down to −10 dB SNR (91.16% vs. 94.13%).
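The noise model of (25) at a given SNR can be sketched as follows (function name and sizes are ours; the bipolar signal has unit power, so the noise variance is 1/SNR):

```python
import numpy as np

def add_awgn(x, snr_db, rng):
    """Add white Gaussian noise to a bipolar vector x (unit signal power)
    at the given SNR in dB, i.e., n ~ N(0, I_D / SNR) as in Eq. (25)."""
    sigma = 10.0 ** (-snr_db / 20.0)
    return x + sigma * rng.normal(size=x.shape)

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=2048)
y = add_awgn(x, 10.0, rng)
print(np.mean((y - x) ** 2))   # noise power ≈ 0.1 = 1/SNR at 10 dB
```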
As an additional experiment, we bipolarize the query
vector y before the AM search, shown in dashed lines.
This allows a more efficient AM search only requiring
Hamming distance computation; however, it results in
**Fig. 13** Classification accuracy (%) in 5-class gesture recognition on
64-channel EMG at D = 2048 . The transmitted HD vector is interfered
with other nodes
**Fig. 14** Mean-squared error (MSE) for reconstructing features from
spatially encoded vector x decoded with soft-feedback or with AM
readout without feedback decoder
lower classification accuracies in the low SNR regime
(SNR < 0 dB).
Furthermore, we demonstrate the robustness of our
distributed representations in the presence of interference from unrelated nodes as well as AWGN, shown in
Fig. 13. The nodes operate at D = 2048, where the effective throughput is r = 1.094; hence, the encoding does not add any redundancy. The HD representation exhibits robustness against the interference: when interfering with up to 6 nodes at high SNR (10 dB), the classification accuracy drops by only 4.07% (93.50% vs. 89.43%). Moreover, a graceful accuracy degradation is observed at a low SNR of −5 dB with 6 interfering nodes, where an accuracy of 87.75% is maintained.
**_5.3.2 Reconstruction of features_**
Finally, we reconstruct the encoded features with the
soft-feedback decoder in the presence of AWGN. We
measure the mean-squared error (MSE) between reconstructed and original features during active gesture intervals of all subjects in sessions 1 and 3. The time between
trials is not considered for reconstruction. Also, the
encoded vector is exposed to AWGN.
Figure 14 shows the MSE depending on the SNR using
either the soft-feedback decoder or the AM search without feedback. Akin to the previous classification results, higher-dimensional representations show higher noise resiliency, yielding a lower MSE. Moreover, the soft-feedback further improves the retrieval of the features with up to 10 dB MSE reduction compared to the AM readout
without feedback. As a result, the soft-feedback decoder
allows the vector dimension to be reduced while still
ensuring lower MSE: at 10 dB SNR, soft-feedback at
dimension D = 2048–8192 achieves lower MSE than
AM readout in all considered dimensions D ≤ 8192 . At
dimension D = 2048, the soft-feedback decoder achieves
a maximal reconstruction gain of 20 dB MSE at 10 dB
SNR compared to AM readout without feedback.
For illustration, Fig. 15 depicts the original features of subject 1 in the training set of session 1, the features reconstructed with the soft-feedback decoder, and the features reconstructed with the AM readout without feedback. The features reconstructed by the AM readout without feedback show many faulty estimations that do not follow the ground truth, particularly visible as peaks during the rest state. In contrast, the soft-feedback decoder's estimation follows the ground truth more accurately.
**6 Conclusion**
This paper investigates the use of robust and distributed
HD representations in wireless communication and classification. We propose a novel encoding, called Integer-HDM, that generates integer-valued vectors based on bipolar seed vectors, cyclic-shift encoding, sign modulation, and superposition. A new soft-feedback decoder successfully decomposes the vectors, improving the decoding
performance in both noise-free and AWGN scenarios.
Achieving a similar SNR gain as Complex-HDM [33], the
proposed Integer-HDM does not require FFT operations
and can be quantized to low-resolution fixed-point arithmetic. In a classification use-case, an EMG-based hand
gesture recognition demonstrates the robustness of HD
representations against AWGN and other interfering sensing nodes; thus, the same spatial encoding can be used for classification as well as for reconstruction of the underlying features. Further investigations can be made into the decoding of bipolarized superpositions and N-gram encoded vectors, e.g., using resonator networks [50, 51].
**Acknowledgements**
Not applicable.
**Authors’ contributions**
AR first defined the research question and proposed the direction. MH realized
the proposed method and performed the experiments. MK proposed the soft-feedback decoder. SL gave main inputs in setting up the baselines in Sect. 3.3. MH and AR contributed to writing the paper with inputs from SL, MK, and LB.
All authors read and approved the final manuscript.
**Funding**
This project was supported in part by ETH Research Grant 09 18-2, and by the
IBM PhD Fellowship Program.
**Availability of data and materials**
The flexEMG dataset analyzed during the current study is available under
https://github.com/a-moin/flexemg [22].
**Declarations**
**Competing interests**
The authors declare that they have no competing interests.
**Author details**
1 Integrated Systems Laboratory, ETH Zurich, Zurich, Switzerland. 2 IBM
Research-Zurich, Zurich, Switzerland. [3] Institute of Microelectronics and Integrated Circuits, Bundeswehr University, Munich, Germany. [4] Department
of Electrical, Electronic, and Information Engineering, University of Bologna,
Bologna, Italy.
Received: 31 January 2021 Accepted: 29 July 2021
**References**
1. Bogue R (2014) Towards the trillion sensors market. Sensor Rev
34(2):137–142
2. Liu S, Cai W, Liu S, Zhang F, Fulham M, Feng D, Pujol S, Kikinis R (2015)
Multimodal neuroimaging computing: the workflows, methods, and
platforms. Brain Informat 2(3):181–195
3. Rawnaque FS, Rahman KM, Anwar SF, Vaidyanathan R, Chau T, Sarker F,
Mamun KAA (2020) Technological advancements and opportunities in
Neuromarketing: a systematic review. Brain Informat 7(1):10
4. Chettri L, Bera R (2020) A comprehensive survey on internet of things
(IoT) toward 5G wireless systems. IEEE Internet Things J 7(1):16–32
5. Rabaey JM (2020) Human-centric computing. IEEE Trans Very Large Scale
Integr (VLSI) Syst 28(1):3–11
6. Samie F, Bauer L, Henkel J (2019) From cloud down to things: an overview of machine learning in internet of things. IEEE Internet Things J
6(3):4921–4934
7. Yang K, Shi Y, Yu W, Ding Z (2020) Energy-efficient processing and robust
wireless cooperative transmission for edge inference. IEEE Internet Things
J 7(10):9456–9470
8. Deng S, Zhao H, Yin J, Dustdar S, Zomaya AY (2019) Edge intelligence:
the confluence of edge computing and artificial intelligence. arXiv
7(8):7457–7469
9. Fafoutis X, Marchegiani L, Elsts A, Pope J, Piechocki R, Craddock I (2018)
Extending the battery lifetime of wearable sensors with embedded
machine learning. In: 2018 IEEE 4th World Forum on Internet of Things
(WF-IoT), pp. 269–274
10. Kanerva P (2009) Hyperdimensional computing: an introduction to computing in distributed representation with high-dimensional random vectors.
Cogn Comput 1(2):139–159
11. Kanerva P (2019) Computing with high-dimensional vectors. IEEE Design
Test 36(3):7–14
12. Plate TA (1995) Holographic reduced representations. IEEE Trans Neural
Netw 6(3):623–641
13. Gayler RW (1998) Multiplicative binding, representation operators and analogy (Workshop Poster). http://cogprints.org/502/
14. Frady EP, Kleyko D, Sommer FT (2018) A theory of sequence indexing
and working memory in recurrent neural networks. Neural Comput
30(6):1449–1513
15. Kanerva P (2000) Large patterns make great symbols: an example of learning from example. In: Wermter S, Sun R (eds) Hybrid Neural Syst. Springer,
Berlin, Heidelberg, pp 194–203
16. Kanerva P (2010) What we mean when we say “What’s the dollar of
Mexico?”: Prototypes and mapping in concept space. AAAI Fall Symposium Technical Report FS-10-08:2–6
17. Kanerva P, Kristoferson J, Holst A (2000) Random indexing of text samples
for latent semantic analysis. In: Proceedings of the Annual Meeting of the
Cognitive Science Society 22(22)
18. Joshi A, Halseth JT, Kanerva P (2016) Language geometry using random
indexing. In: International Symposium on Quantum Interaction, pp.
265–274
19. Recchia G, Sahlgren M, Kanerva P, Jones MN (2015) Encoding sequential
information in semantic space models: comparing holographic reduced
representation and random permutation. Comput Intell Neurosci
2015:986574–986574
20. Rahimi A, Kanerva P, Rabaey JM (2016) A robust and energy-efficient classifier using brain-inspired hyperdimensional computing. In: Proceedings of
the 2016 International Symposium on Low Power Electronics and Design
- ISLPED ’16, pp. 64–69. ACM Press, New York, New York, USA
21. Räsänen O (2015) Generating hyperdimensional distributed representations
from continuous-valued multivariate sensory input. In: Proceedings of the
37th Annual Meeting of the Cognitive Science Society, pp. 1943–1948
22. Moin A, Zhou A, Rahimi A, Benatti S, Menon A, Tamakloe S, Ting J,
Yamamoto N, Khan Y, Burghardt F, Benini L, Arias AC, Rabaey JM (2018) An
EMG gesture recognition system with flexible high-density sensors and
brain-inspired high-dimensional classifier. Proc IEEE Int Symp Circuits Syst
2018–May:1–5
23. Rahimi A, Kanerva P, Benini L, Rabaey JM (2019) Efficient biosignal processing using hyperdimensional computing: network templates for combined
learning and classification of ExG signals. Proc IEEE 107(1):123–143
24. Chang EJ, Rahimi A, Benini L, Wu AYA (2019) Hyperdimensional computing-based multimodality emotion recognition with physiological signals. In:
2019 IEEE International Conference on Artificial Intelligence Circuits and
Systems (AICAS), pp. 137–141
25. Burrello A, Cavigelli L, Schindler K, Benini L, Rahimi A (2019) Laelaps: an
energy-efficient seizure detection algorithm from long-term human ieeg
recordings without false alarms. In: 2019 Design, Automation and Test in
Europe Conference and Exhibition (DATE), pp. 752–757. IEEE
26. Mitrokhin A, Sutor P, Fermüller C, Aloimonos Y (2019) Learning sensorimotor control with neuromorphic sensors: toward hyperdimensional active
perception. Sci Robotics 4(30):6736
27. Hersche M, Sangalli S, Benini L, Rahimi A (2020) Evolvable hyperdimensional
computing: unsupervised regeneration of associative memory to recover
faulty components. In: 2020 2nd IEEE International Conference on Artificial
Intelligence Circuits and Systems (AICAS), pp. 281–285
28. Li H, Wu TF, Rahimi A, Li K-S, Rusch M, Lin C-H, Hsu J-L, Sabry MM, Eryilmaz
SB, Sohn J, Chiu W-C, Chen M-C, Wu T-T, Shieh J-M, Yeh W-K, Rabaey JM,
Mitra S, Wong H-SP (2016) Hyperdimensional computing with 3D VRRAM
in-memory kernels: Device-architecture co-design for energy-efficient,
error-resilient language recognition. In: 2016 IEEE International Electron
Devices Meeting (IEDM), pp. 1–16
29. Wu TF, Li H, Huang P-C, Rahimi A, Rabaey JM, Wong H-SP, Shulaker MM,
Mitra S (2018) Brain-inspired computing exploiting carbon nanotube FETs
and resistive RAM: Hyperdimensional computing case study. In: 2018 IEEE
International Solid-State Circuits Conference-(ISSCC), pp. 492–494
30. Karunaratne G, Le Gallo M, Cherubini G, Benini L, Rahimi A, Sebastian A
(2020) In-memory hyperdimensional computing. Nat Electron 3(June):1–11
31. Jakimovski P, Becker F, Sigg S, Schmidtke HR, Beigl M (2011) Collective communication for dense sensing environments. In: 2011 Seventh International
Conference on Intelligent Environments, pp. 157–164
32. Kleyko D, Lyamin N, Osipov E, Riliskis L (2012) Dependable mac layer architecture based on holographic data representation using hyper-dimensional
binary spatter codes. In: Bellalta B, Vinel A, Jonsson M, Barcelo J, Maslennikov
R, Chatzimisios P, Malone D (eds) Multiple access communications. Springer,
Berlin, Heidelberg, pp 134–145
33. Kim H-S (2018) HDM: Hyper-dimensional modulation for robust low-power
communications. In: 2018 IEEE International Conference on Communications (ICC), pp. 1–6
34. Hsu CW, Kim HS (2019) Collision-tolerant narrowband communication
using non-orthogonal modulation and multiple access. In: 2019 IEEE Global
Communications Conference (GLOBECOM), pp. 1–6
35. Verma D, Bent G, Taylor I (2017) Towards a distributed federated brain architecture using cognitive IoT devices. In: The Ninth International Conference
on Advanced Cognitive Technologies and Applications (COGNITIVE)
36. Tomsett R, Bent G, Simpkin C, Taylor I, Harbourne D, Preece A, Ganti R (2019)
Demonstration of dynamic distributed orchestration of node-RED IoT
workflows using a vector symbolic architecture. In: 2019 IEEE International
Conference on Smart Computing (SMARTCOMP), pp. 464–467
37. Hsu C-W, Kim H-S (2020) Non-orthogonal modulation for short packets in
massive machine type communications. In: GLOBECOM 2020–2020 IEEE
Global Communications Conference, pp. 1–6
38. Schmuck M, Benini L, Rahimi A (2019) Hardware optimizations of dense
binary hyperdimensional computing: rematerialization of hypervectors,
binarized bundling, and combinational associative memory. ACM J Emerg
Technol Comput Syst 15(4):1–25
39. Eliasmith C (2013) How to Build a Brain. Oxford University Press, Oxford
40. Gayler RW (2004) Vector symbolic architectures answer Jackendoff’s chal[lenges for cognitive neuroscience. arXiv preprint arXiv:cs/0412059](http://arxiv.org/abs/cs/0412059)
41. Cheung B, Terekhov A, Chen Y, Agrawal P, Olshausen B (2019) Superposition
of many models into one. Adv Neural Inform Process Syst 32:10868–10877
42. Sebastian A, Le Gallo M, Khaddam-Aljameh R, Eleftheriou E (2020) Memory
devices and applications for in-memory computing. Nat Nanotechnol
15(7):529–544
43. Bioglio V, Condo C, Land I (2020) Design of polar codes in 5G New Radio.
IEEE Communications Surveys and Tutorials (c) 1–1
44. Balatsoukas-Stimming A, Parizi MB, Burg A (2015) LLR-based successive cancellation list decoding of polar codes. IEEE Trans Signal Process
63(19):5165–5179
45. Ganesan K, Grover P, Rabaey J (2011) The power cost of over-designing
codes. In: 2011 IEEE Workshop on Signal Processing Systems (SiPS), pp.
128–133
46. Marchioni A, Mangia M, Pareschil F, Rovatti R, Setti G (2018) Rakeness-based
compressed sensing of surface electromyography for improved hand
movement recognition in the compressed domain. In: 2018 IEEE Biomedical
Circuits and Systems Conference (BioCAS), pp. 2018–2021
47. Cao Y, Zhang H, Choi YB, Wang H, Xiao S (2020) Hybrid deep learning model
assisted data compression and classification for efficient data delivery in
mobile health applications. IEEE Access 8:94757–94766
48. Xiang L, Zeng X, Wu S, Liu Y, Yuan B (2021) Computation of cnn’s sensitivity
to input perturbation. Neural Process Lett 53(1):535–560
49. Moin A, Zhou A, Rahimi A, Menon A, Benatti S, Alexandrov G, Tamakloe S,
Ting J, Yamamoto N, Khan Y et al (2021) A wearable biosensing system with
in-sensor adaptive machine learning for hand gesture recognition. Nat
Electron 4(1):54–63
50. Frady EP, Kent SJ, Olshausen BA, Sommer FT (2020) Resonator networks, 1:
an efficient solution for factoring high-dimensional, distributed representations of data structures. Neural Comput 32(12):2311–2331
51. Kent SJ, Frady EP, Sommer FT, Olshausen BA (2020) Resonator networks, 2:
factorization performance and capacity compared to optimization-based
methods. Neural Comput 32(12):2332–2388
**Publisher’s Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Aptisi Transactions on Technopreneurship (ATT)** **P-ISSN: 2655-8807**
**Vol. 4 No. 3 November 2022** **E-ISSN: 2656-8888**
# Security Level Significance in DApps Blockchain-Based Document Authentication
**Qurotul Aini[1], Danny Manongga[2], Untung Rahardja[3], Irwan Sembiring[4], Vonda Elmanda[5], Adam Faturahman[6], Nuke Puji Lestari Santoso[7]**
Department of Science and Technology[1,3,5,6,7], Department of Information Technology[2,4]
University of Raharja, Indonesia[1,3,5,6,7]
Universitas Kristen Satya Wacana, Indonesia[2,4]
Jenderal Sudirman No.40, Cikokol, Tangerang, Banten, Indonesia[1,3,5,6,7]
Diponegoro No.52-60, Salatiga, Sidorejo, Salatiga, Indonesia[2,4]
e-mail: aini@raharja.info[1], danny.manongga@staff.uksw.edu[2], untung@raharja.info[3], irwan@uksw.edu[4], vonda.elmanda@raharja.info[5], adam.faturahman@raharja.info[6], nuke@raharja.info[7]

Aini, Q., Manongga, D., Rahardja, U., Sembiring, I., Elmanda, V., Faturahman, A., & Santoso, N. P. L. (2022). Security Level Significance in DApps Blockchain-Based Document Authentication. Aptisi Transactions on Technopreneurship (ATT), 4(3), 292–305.
**DOI:** https://att.aptisi.or.id/index.php/att/article/view/277
**_Abstract_**

_The Industrial Revolution 4.0 aims to improve and transform the world's industries by integrating production lines, and its emergence has been marked by extraordinary advances in information technology. One such advance is the use of blockchain technology to strengthen document security systems. Blockchain-based authentication has attracted great attention in both science and the capital markets. The persistent problems of the many available digital currencies, and the various scams among initial coin offerings, have also fueled discussion of this emerging innovation in the field of education. This paper follows the development of the blockchain framework to reveal the importance of decentralized applications (dApps) and blockchain for future value in education. The study uses a descriptive method: a research approach that describes problems occurring at the time of the research, aiming to portray events as they happened while the research was conducted. We survey the novelty of cutting-edge dApps and discuss the directions of blockchain progress needed to meet the desirable attributes of future dApps. Readers will come away with an overview of dApp research and the continuous improvement of blockchain._

**Keywords:** _Decentralized Application, Blockchain, Authentication, Software Systems, Smart Contract._
**Author Notification:** 10 October 2022. **Final Revised:** 20 October 2022. **Published:** 28 October 2022.

Copyright (c) Qurotul Aini[1], Danny Manongga[2], Untung Rahardja[3], Irwan Sembiring[4], Vonda Elmanda[5], Adam Faturahman[6], Nuke Puji Lestari Santoso[7]. This work is licensed under a Creative Commons Attribution 4.0 (CC BY 4.0) license.

**1. Introduction**

By definition, a blockchain is an ever-growing chain of blocks, each containing the cryptographic hash of the previous block, a timestamp, and the information it conveys. Because of these cryptographic hashes, the information stored in a blockchain is intrinsically immutable: if the information in one block is altered, every subsequent block must be recomputed with a new hash value. This immutability is the principle underlying blockchain-based authentication applications.
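The tamper-evidence described above follows directly from hash linking. The sketch below, a simplified illustration in Python (not from the paper; all names are our own), builds a toy chain of document records and shows that altering any block invalidates every later link:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_chain(records):
    """Link records into a chain where each block stores the previous block's hash."""
    chain, prev = [], "0" * 64  # genesis predecessor
    for data in records:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def verify(chain) -> bool:
    """A chain is valid only if every stored prev_hash matches the recomputed hash."""
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["doc A issued", "doc B issued", "doc A revoked"])
assert verify(chain)
chain[1]["data"] = "doc B forged"   # tamper with one block...
assert not verify(chain)            # ...and every later link breaks
```

A real blockchain adds consensus, signatures, and Merkle trees on top of this linkage, but the immutability argument is exactly the one demonstrated here.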
Peer-to-peer (P2P) digital currency has become the killer application of blockchain technology. After the dramatic rise in Bitcoin's market capitalization, a large number of cryptographic tokens, or coins, were launched on the public market. However, in the absence of legal regulation and auditing, numerous scams, dismissed as "air coins", also brought blockchain technology a bad reputation, and questions about the value of cryptocurrencies have been raised. Warren Buffett, the famous billionaire investor, insists that this form of digital money will come to a "bad ending", and claims that Bitcoin is "probably rat poison squared". Rather than examining digital currencies, this paper examines the cutting edge of blockchain technology and presents decentralized applications (dApps), an intelligent type of software system enabled by blockchain-based authentication. In the remainder of this article, we survey classic blockchain systems in Section 2 and trace the evolution of blockchain systems in Section 3. We review state-of-the-art dApps in Section 4 and envision the desirable attributes of future dApps in Section 5. We also examine considerations when selecting blockchain implementations in Section 6. Ongoing research toward next-generation blockchain systems that address some of these qualities is introduced in Section 7, and Section 8 concludes the article.
**2. Background: Classic Blockchain Systems**

In this section, we trace the developments in decentralized ledgers that led to classic blockchain systems adopting public consensus models.

**2.1 The Consensus Problem**

Centralized systems are criticized for being vulnerable to the single point of failure (SPOF) problem. Decentralized systems implemented in a distributed manner, on the other hand, suffer from the problem of data synchronization, which is generalized as the Byzantine generals problem [1]. In other words, the participants in a decentralized ledger system must reach consensus, an agreement on every message to be communicated to one another. The Byzantine generals can achieve fault tolerance if the "loyal generals", honest participants in our context, hold a majority when settling on their decisions. However, intruders can perform a Sybil attack to take control of a significant portion of a public P2P system by presenting numerous identities, which can lead to serious problems [2], such as "double spending" in blockchain-powered decentralized ledgers.
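The majority rule and its failure under a Sybil attack can be sketched in a few lines. The following toy simulation in Python (our own illustration, not part of the paper) shows that honest-majority voting reaches agreement, and that forged identities can flip the outcome:

```python
from collections import Counter

def decide(votes):
    """Each participant adopts the value reported by a strict majority, if any."""
    value, count = Counter(votes).most_common(1)[0]
    return value if count > len(votes) / 2 else None

honest = ["commit"] * 6
faulty = ["abort"] * 3
assert decide(honest + faulty) == "commit"   # honest majority prevails

# A Sybil attacker forges enough extra identities to outvote honest peers.
sybil = ["abort"] * 7
assert decide(honest + sybil) == "abort"
```

This is why public blockchains cannot rely on one-identity-one-vote alone: identity creation must be made expensive, which is exactly what proof of work (discussed below) provides.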
**2.2 The Double-Spending Problem**

Thanks to the blockchain's hash linkage, each coin in the ledger can be traced back to the record in which it was created. Thus, forging a non-existent coin is impossible in a decentralized public ledger [3]. However, unlike a physical asset, a digital asset can easily be reproduced by copying data. In this context, it is essential to prevent a dishonest party from spending a coin more than once. If an untrustworthy user of the public ledger can carry out a Sybil attack, the money the user double-spends will be legitimized by the majority of parties, which undermines trust in the currency's circulation and its function as a store of value [4].
**2.3 Proof-of-Work Consensus**

Satoshi Nakamoto applied proof of work (PoW) to solve the double-spending problem in Bitcoin's original whitepaper. Here, PoW involves a mathematical search for a numeric value (a nonce) such that, when the block is hashed with it, the resulting digest begins with a specified number of zeroes. With PoW, peers in the P2P network compete with each other to solve these puzzles, a process also known as mining. The winner of each round earns the privilege of creating a block and broadcasting it to its peers [5], [6]. The PoW search is inherently a brute-force search, while its answer can easily be verified by a single hashing operation of O(1) complexity. PoW imposes computational costs that deliberately raise the difficulty of forging identities in a Sybil attack to a very high level, because of the enormous hardware investment required of any network participant. In return, peers that successfully create blocks receive a coin reward for their work. In fact, even if a particular peer possesses tremendous computing power, the profit from using that power to earn coin rewards is higher than the profit from attacking the decentralized system. This PoW consensus mechanism deters intruders and thereby safeguards the decentralized ledger [7].
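The asymmetry between brute-force mining and O(1) verification can be demonstrated directly. A toy PoW sketch in Python (our own illustration; the nonce encoding is an arbitrary choice, not Bitcoin's actual block format):

```python
import hashlib

def mine(data: str, difficulty: int):
    """Brute-force a nonce so the SHA-256 digest starts with `difficulty` zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1

def check(data: str, nonce: int, difficulty: int) -> bool:
    """Verification is a single hash, however long the mining search took."""
    digest = hashlib.sha256(f"{data}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce, digest = mine("block 42", difficulty=4)
assert check("block 42", nonce, 4)
assert digest.startswith("0000")
```

Each additional zero digit multiplies the expected search cost by 16 while leaving verification cost unchanged, which is what makes identity forgery economically unattractive.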
**2.4 A Broader Definition of the Blockchain System**

As indicated above, the conventional meaning of "blockchain" goes beyond the technology that links blocks of data into an immutable chain. It applies to a fully decentralized, distributed system that requires all participating peers to follow specific blockchain rules in order to achieve data synchronization. In this article, we adopt this broader definition of a blockchain system: a combination of a blockchain, a P2P network, and a consensus model.

Figure 1. Key components of blockchain frameworks.

Figure 1 shows the structure of such a broadly defined blockchain system. All participants in the P2P network store the blockchain data themselves, while synchronizing each block with the blocks held by other peers according to the consensus model [8]. Concretely, this consensus is represented by the longest chain agreed upon by the majority of peer nodes.
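The longest-chain rule mentioned above is simple to state in code. A minimal sketch in Python (our own illustration; block names are hypothetical) of how a peer picks the canonical chain among the candidate views it has received:

```python
def canonical(chains):
    """Among candidate chains (assumed already hash-validated), adopt the longest."""
    return max(chains, key=len)

peer_views = [
    ["genesis", "b1", "b2"],
    ["genesis", "b1", "b2", "b3"],   # this peer has seen one more block
    ["genesis", "b1", "x1"],         # a shorter fork loses out
]
assert canonical(peer_views) == ["genesis", "b1", "b2", "b3"]
```

Because extending the longest chain requires redoing the accumulated proof of work, an attacker cannot cheaply make a forged fork the longest one.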
**3. The Evolution of the Blockchain System**

This section discusses the functionality and applications of the successive generations of blockchain systems.
**3.1 Distributed Ledger**

Bitcoin represents the classic blockchain system. As the first distributed ledger, it has amassed over 10,000 nodes and established the largest market capitalization among cryptocurrencies. Bitcoin's most important contribution was solving the double-spending problem, making digital assets unique and valuable. However, Bitcoin itself is merely a single-purpose public distributed ledger, and it has been criticized by many economists as another Ponzi scheme [9]. With the growth of its P2P network, Bitcoin's main problem is now the computational cost borne by the nodes (miners) engaged in its PoW effort; these efforts add no value of their own, they only make the system more robust. By convention, such applications of distributed ledgers are called Blockchain 1.0.
**3.2 Decentralized Applications**

However, authenticated blockchain-based applications are currently still limited to recording transactional data and state changes, and a smart-contract client still needs to run a program locally to complete the application. One of the main reasons is the performance bottleneck of current blockchain technology, which cannot sustain the demands of many applications. This leaves open operational security and application maintenance issues; for example, in a blockchain gaming environment, fraud logic may be kept away from public scrutiny. With that in mind, the definitive blockchain application should be a dApp fully hosted on a P2P blockchain framework. Ideally, a deployed dApp requires no maintenance or administration by its original developer. Thus, an ideal authenticated blockchain application or service should be able to operate without human intervention and form a decentralized autonomous organization (DAO). DAOs are organizations governed by rules coded as smart contracts, and demand for them on blockchains has surged [10]. Because of its autonomous and programmatic nature, the costs and benefits of a DAO are shared among all members, with every activity recorded in a block. In fact, Bitcoin, the most classic authenticated blockchain framework, is itself an example of a DAO [11]. By definition, a dApp is characterized by four properties:

- Open source: Because trust in a blockchain rests on transparency, dApps should open-source their code to allow third-party audits.
- Support for internal cryptocurrencies: Internal currencies are the means by which a particular dApp's ecosystem runs. Tokens allow dApps to quantify all credits and transactions between participants in the system, including content providers and consumers.
- Distributed consensus: Agreement among distributed nodes is the basis of transparency.
- No central point of failure: A fully decentralized system should have no central point of failure, as all parts of the application are hosted and run on the blockchain.
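The "internal cryptocurrency" property boils down to bookkeeping of token balances between participants. A minimal sketch in Python (our own illustration; class and account names are hypothetical, and a real dApp would keep this state in smart-contract storage rather than in memory):

```python
class TokenLedger:
    """Minimal internal-currency ledger a dApp could use to quantify
    credits between content providers and consumers."""

    def __init__(self):
        self.balances = {}

    def mint(self, account: str, amount: int):
        """Create new tokens, e.g. as a content or mining reward."""
        self.balances[account] = self.balances.get(account, 0) + amount

    def transfer(self, sender: str, receiver: str, amount: int) -> bool:
        """Move tokens between participants; reject overdrafts."""
        if self.balances.get(sender, 0) < amount:
            return False
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        return True

ledger = TokenLedger()
ledger.mint("creator", 100)
assert ledger.transfer("creator", "viewer", 30)
assert ledger.balances == {"creator": 70, "viewer": 30}
assert not ledger.transfer("viewer", "creator", 999)   # overdraft rejected
```

On a real chain, the overdraft check and balance updates are enforced by the smart contract and replicated by consensus, so no single operator can mint or move tokens outside the rules.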
**4. Latest dApps**

Blockchain technology has been adopted by many industries. The State of the DApps website reports that Ethereum hosts dApps in various fields, including media, energy, health, identity, insurance, and exchanges. In practice, however, many cutting-edge dApps are only partially decentralized. Blockstack and OpenBazaar, for example, use the blockchain only for authentication of user identities and nothing else. This section provides an overview of the most popular dApps.
**4.1. Games**
Because it is many gamers' ultimate goal, the video game industry fits perfectly with
the nature of the cryptocurrency ecosystem.To put it another way, the virtual character's
possessions in the game world cannot be replaced and can be exchanged or transferred to
the new game.Consequently, the new trend is blockchain-based games.The majority of
blockchain games are still in their infancy, focusing on virtual asset trading and collectibles
due to transaction fee limitations and delays. Despite the fact that this kind of game is not
at all enjoyable, it made a significant impact on the gaming industry.Probably the current
most well-known blockchain game.That transaction once brought the Ethereum network
down and put pressure on blockchain technology due to its popularity.Players use smart
contracts on the Ethereum blockchain to buy, sell, and breed cats in Crypto Kitties. can do.
This game uniquely distinguishes each CryptoKitty in the game, unlike previous blockchain
collecting games that only allowed the buying and selling of certain items [12], [13]. All cats
differ from other cats in physical traits, traits, and genes. Couples breed cats, and the traits
they inherit from their parents are one-of-a-kind combinations of two.Cats with unusual
characteristics are encouraged to be bred by players.Numerous other blockchain games,
such as Etheremon, CryptoCelebrities, CryptoCountries, and Etherbots, use game
mechanics that are comparable to those of real-world assets.The digital casino is yet
another representative blockchain game.Cryptocurrencies make it simple to create and
broadcast these games Etheroll10, for instance, lets players profit from certain numbers
bets.Vdice, Bitcasino, and Vegas Casino are games that are similar.Fomo3D is also
included in this group.It would appear that the system's transparency and non-fungible
tokens are advantageous to blockchain-based games.
The good news for gamers is that blockchain has emerged as a game-changing technology. These new concepts have transformed the relationship between gamers and game companies: in this ecosystem, players become part of the game and create their own in-game content, and a player's in-game behavior can directly shape the game's development. The virtual world of the game becomes a true utopia. However, gaming on blockchain is still in its early stages. First, the entertainment value of blockchain games lags far behind traditional video games [14]. As explained above, no matter how game designers vary the trading mechanics, most blockchain games remain at the level of collectible exchanges, and a game that only collects tokens with no interaction options will not attract many players. Second, many players play these games only for money, not for fun; users buy tokens with visual representations, such as photos of celebrities, stamps, or countries, purely for commercial trading. Third, the games have an unpredictable lifespan. Traditional games dynamically adjust the in-game economy, combat parameters, and rules as the game progresses to achieve better balance; a fully decentralized blockchain game, by contrast, could see its player population collapse rapidly once the operator loses control of the ecosystem.
All in all, blockchain games have only been available for a short time, but they have already attracted a lot of attention. Numerous major game companies and producers have recognized the potential of blockchain games and have begun developing their own. We hope to see high-quality blockchain games soon.
**4.2. User-Generated Content (UGC) Network**
The term "user-generated content" (UGC) refers to any type of content, such as videos, blog posts, and discussions, that users create for others to consume. Users and their content are the core value of a UGC application. Reddit, 9GAG, Flickr, and Wikipedia are some of the most widely used UGC apps. The privacy and security of existing UGC apps are seriously compromised. First, it is easy for large popular websites to steal original content from small content creators. Second, large social media platforms collect and sell personal information about their users to advertisers for targeted advertising. Because it has no central authority, blockchain can address these problems. We now turn to the most popular blockchain-based UGC platforms [15].
_Security Level Significance in DApps …_ - 296
**Aptisi Transactions on Technopreneurship (ATT)** **P-ISSN: 2655-8807**
**Vol. 4 No. 3 November 2022** **E-ISSN: 2656-8888**
a. STEEM
Steem is a blockchain-based cryptocurrency rewards platform for publishers. Steem has its own cryptocurrency, STEEM, which can be purchased and exchanged for a variety of other cryptocurrencies. Steem introduced the concept of mining with human intelligence: the platform charges no third-party transaction fees and lets users monetize original creations such as articles, music, and other works.
b. GEMS
According to its white paper, Gems is a decentralized crowdsourcing protocol for human tasks on the Ethereum blockchain. Gems is a marketplace where requesters post microtasks, hire workers, and pay them to complete the tasks, much like Amazon Mechanical Turk (MTurk). However, MTurk charges significant transaction fees because it acts as an intermediary, and because worker results vary in accuracy, requesters often must pay multiple times for the same task to reach consensus. Gems is designed to solve these issues: the Gems protocol features a staking mechanism to guarantee task completion, a Gems Trust Score to evaluate worker integrity, and a payment system that reduces transaction fees.
**4.3. Internet of Things**
The term "Internet of Things" (IoT) refers to connecting billions of physical devices with sensors and/or actuators to the Internet in order to monitor our environment and share data. To bridge the digital and physical worlds, data can be collected and consolidated for communication without human intervention. Blockchain-based IoT solutions with built-in authentication are well suited to simplifying business processes, improving customer experiences, and generating significant value [12]. Since IoT applications are by definition distributed, previous research has found that blockchain has significant potential for IoT solutions. In addition, applications that involve transactions and interactions can use blockchain, with authentication, as their foundation.
a. Smart Hardware
Automation is a critical idea in IoT applications. Smart hardware connected to the network should be able to perform predefined actions without human intervention. This requirement fits perfectly with the concept of smart contracts running on blockchains [14]. With transparent and immutable smart contracts, the various parties in an IoT platform can establish trustful relationships without complicated negotiations and regulations. For example, a guest checking into a future hotel should not have to register at the front desk, but instead pays for the room through a smart contract, which then instructs the door and all smart appliances in the specific room to serve the client. Conversely, a client who has run out of funds cannot access the room or the facilities in it [15].
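The hotel scenario above can be sketched as a small contract-style state machine. This is a minimal illustrative sketch only: all names (`HotelRoomContract`, `check_in`, and so on) are hypothetical, and a real deployment would be an on-chain smart contract rather than a Python class.

```python
class HotelRoomContract:
    """Illustrative sketch of the hotel check-in logic described above.

    Hypothetical names throughout; a real system would record these state
    changes on a blockchain via a smart contract.
    """

    def __init__(self, room_id, price_per_night):
        self.room_id = room_id
        self.price_per_night = price_per_night
        self.guest = None          # current paying guest, if any
        self.door_unlocked = False

    def check_in(self, guest, payment, nights):
        """Accept payment and, if sufficient, instruct the smart lock."""
        if self.guest is not None:
            return False                          # room already occupied
        if payment < self.price_per_night * nights:
            return False                          # insufficient funds: no access
        self.guest = guest
        self.door_unlocked = True                 # signal door and appliances
        return True

    def check_out(self, guest):
        if guest == self.guest:
            self.guest = None
            self.door_unlocked = False


room = HotelRoomContract("101", price_per_night=50)
assert room.check_in("alice", payment=100, nights=2)   # paid in full: door opens
assert not room.check_in("bob", payment=100, nights=2) # room is occupied
```

The key property mirrored here is that access follows automatically from payment, with no front-desk mediation.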
b. Supply Chain
IoT is having a huge impact on supply chains, and in the blockchain era the integration of smart contracts with supply chains will advance these systems further. Supply chain management involves numerous stakeholders and considerable complexity: multiple tiers of suppliers, manufacturers, service providers, distributors, and retailers make record-keeping and communication inefficient. IoT and smart contracts can improve overall logistics by coordinating sensory
information, documentation, and transparency in line with regulations. For example, a delay in the shipment of some raw material can be detected by the IoT network, and the contingency plan specified in a transparent smart contract can be automatically executed to submit make-up orders, so that the impact on the manufacturing process is minimized. In this case, numerous messages and phone calls are replaced by a mutually agreed smart contract, which can save a tremendous amount of time and resources [16].
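The contingency rule just described can be sketched as a single automatic check. This is a hypothetical illustration, not a real supply-chain or blockchain API; all names and the day-numbering scheme are invented for the example.

```python
# Hypothetical sketch: an IoT feed reports whether a shipment has arrived,
# and a pre-agreed rule automatically places a make-up order on a delay.

def contingency_order(expected_day, arrival_day, today, backup_suppliers):
    """Return the backup supplier to order from, or None if no action is needed.

    arrival_day is None while the shipment has not yet arrived.
    """
    delayed = arrival_day is None and today > expected_day
    if delayed and backup_suppliers:
        # The contingency plan encoded in the smart contract: order from
        # the first backup supplier, visibly to all parties.
        return backup_suppliers[0]
    return None


# Shipment expected on day 10, still missing on day 12 -> make-up order.
print(contingency_order(10, None, 12, ["supplier_b"]))  # supplier_b
# Shipment arrived on day 9 -> no action needed.
print(contingency_order(10, 9, 12, ["supplier_b"]))     # None
```

The point is that the rule executes the same way for every party, replacing the ad-hoc messages and phone calls mentioned above.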
**4.4. Sharing Economy Credits**
The sharing economy requires a credit system to reward the contributions of members and to maintain fairness among them. However, ordinary credits issued by a centralized business organization may not serve as a real incentive, since the value of the credits is controlled by the organization, while members may wish to withdraw these credits and use them elsewhere. Useful features of dApps and ongoing developments in blockchain technology, such as payment channels, new consensus models, authentication, and private blockchains, make a decentralized alternative feasible. This section discusses the possibility of using blockchain for such an ecosystem [17].
a. File Sharing Credits
The possibility of file sharing has been explored since the explosive adoption of the BitTorrent P2P network. More recently, the InterPlanetary File System (IPFS), a decentralized P2P distributed file system, has emerged with the goal of connecting computers to a shared file system and distributing large datasets. IPFS can access files on any node by their content addresses, each of which is stored as a byte string. To better incentivize IPFS usage with credits, Filecoin is a token protocol whose blockchain runs on a novel consensus model, called Proof-of-Spacetime, in which blocks are created by the miners that store data. The Filecoin protocol provides a data storage and retrieval service through a network of independent storage providers that do not rely on a single coordinator, such that: 1) clients pay to store and retrieve data, 2) storage miners earn tokens by offering storage, and 3) retrieval miners earn tokens by serving data. Filecoin tokens can be exchanged for US dollars, Bitcoin, Ether, and more. In short, Filecoin creates a decentralized storage network (DSN) and a cryptocurrency marketplace on top of it [18].
b. Data Sharing Credits
The idea of sharing credits also appears in data sharing and data transmission scenarios. RightMesh claims to be the world's first purpose-built, software-based mobile mesh network, providing connectivity to all. Connectivity operates in P2P mode over Wi-Fi, Bluetooth, and Wi-Fi Direct: the moment supply and demand discover each other, they form a mesh partition in which people can engage and share, and the network grows from there. Redundancy can strengthen the mesh [19], [20]: in a densely populated area, more reachable people and nodes can join the mesh network, which increases its robustness. To incentivize participation, a mesh node provider receives RMESH tokens through a decentralized payment process built on Ethereum.
**5. DApps Desired Characteristics**
As the application scenarios discussed above demonstrate, future dApps will require a blockchain platform that satisfies several desirable characteristics:
**5.1. Better Performance**
a. Low Latency
Long transaction delays have been a fundamental problem since the advent of Bitcoin. Since the typical time for Bitcoin nodes to mine a block is 10 minutes, the typical transaction confirmation time is close to 60 minutes (a regular client waits for 6 blocks). Although the block interval has been reduced to about 15 seconds in Ethereum, the latency is still too high for common interactive application operations. In practice, long delays confuse users and make dApps less competitive with existing non-blockchain alternatives [21]. For example, a typical user of a blockchain-based social-networking site expects the system to register a like or a post within 2-3 seconds.
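As a quick sanity check on the figures above, confirmation latency is simply the block interval times the number of confirmations the client waits for. The 6-block Bitcoin rule comes from the text; the 12-confirmation Ethereum depth below is a common rule of thumb assumed for illustration, not a figure from this paper.

```python
# Confirmation latency = block interval x confirmations waited.

def confirmation_latency_s(block_interval_s, confirmations):
    return block_interval_s * confirmations

bitcoin_s = confirmation_latency_s(10 * 60, 6)   # 6 blocks at ~10 min each
ethereum_s = confirmation_latency_s(15, 12)      # 12 blocks at ~15 s each (assumed depth)

print(bitcoin_s / 60)   # 60.0 -> the ~1 hour quoted above
print(ethereum_s)       # 180  -> far lower, yet still slow for interactive apps
```

Even Ethereum's three-minute figure is two orders of magnitude above the 2-3 second responsiveness users expect.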
b. High Throughput
Today's web systems, such as social networks, multiplayer online games, and online shopping malls, require a blockchain platform that can support a large number of concurrently active users. The ability to handle heavy concurrent traffic is therefore fundamental to a dApp platform. However, current blockchain platforms still suffer from throughput bottlenecks. For example, CryptoKitties, which became popular soon after launch, at one point accounted for almost 30% of all transactions on Ethereum, leading to a record backlog of around 30,000 pending transactions.
**5.2. Flexible Maintainability**
a. Enabling System Upgrades
Since blockchain technologies are still in their infancy, it is inevitable that a blockchain system will require revisions from one version to the next. However, given the nature of P2P consensus, a hard fork is the main way for existing blockchain systems to upgrade, which can split the participating network nodes. A further problem with a hard fork is that many similar tokens sharing a common origin come to coexist, which confuses users. Examples include the split of Bitcoin into Bitcoin and Bitcoin Cash, and the split of Ethereum into Ethereum (ETH) and Ethereum Classic (ETC) in July 2016. To address this, a system-upgrade mechanism is needed for modern blockchain systems, one that provides version control for the dApps deployed on them.
b. Simple Bug Recovery
Numerous previous works have investigated the security concerns associated with smart contracts. It is fundamentally difficult to guarantee that a substantial smart contract is error-free, even though the majority of bugs and errors can be avoided through careful implementation and rigorous testing, and the complexity of some dApps makes the situation even worse. The immutable nature of blockchain data, moreover, prevents deployed dApps from being modified, rendering patch deployment impossible. Therefore, a blockchain platform should enable dApp developers to adopt error-recovery methods, particularly for fundamental issues that could disrupt the entire dApp ecosystem [22].
**6. Consideration When Choosing**
**6.1. Implementation Blockchain**
Various blockchain implementations, differing only slightly outside key niche areas, are constantly emerging to fill gaps in the existing landscape. When selecting a blockchain technology, one should look for stable operation combined with adaptability when necessary. This can be estimated by looking at how often the project has hard forks and how many derivative projects exist (forks on GitHub). It is also advantageous if the candidate project has
an active developer community (internal and external), which can be estimated by, for example, the number of contributors, commits, and branches [23]. Depending on the dApp, one may look for blockchain technologies that support smart contracts, some form of flexible payment such as payment channels, authentication, a currency model that fits the dApp's needs, and the right programming language for the task. To illustrate some of these considerations, the Bitcoin and Ethereum projects are examined below; other projects can be evaluated through similar comparisons.
a. Bitcoin
The Bitcoin repository on GitHub lists 571 contributors and over 18,000 commits. There are many client runtimes and APIs in different languages and at different maturity levels; for example, there is a Java library, bitcoinj (and possibly many others), with 95 contributors and over 3,000 commits. According to Bitnodes, about 10,000 full nodes run Bitcoin (nodes that fully verify the entire blockchain for transactions, rather than thin clients that rely on a full node to do so on their behalf). As of spring 2017, there were over 10,000 Bitcoin-related projects on GitHub. Bitcoin has been active since January 2009. As reported by GitHub, the Bitcoin source has been forked many times, although the number of actively maintained forks is much lower. The handling of actual chain forks, as well as the market turmoil and scrutiny that follow them, makes it difficult to choose recently forked projects. According to blockchain.com data, the peak 7-day average is around 425,000 transactions every 24 hours, or 4.92 transactions per second (TPS), although some tests suggest Bitcoin can reach at most 7 TPS (with a block size of 1 MB). The highest average transaction fee, according to BitInfoCharts, is around $55. With such low throughput and high transaction fees, it clearly makes no sense to transact at a very fine granularity in dApps; this severely limits the kinds of usage imaginable without a flexible payment arrangement.
b. Ethereum
The Ethereum project has several major GitHub repositories. As of August 2018, the go-ethereum repository has 318 contributors, the cpp-ethereum repository has 136, and ethereumj has 69; Solidity, one of Ethereum's most prominent smart-contract languages, has 263 contributors. In total, these repositories hold about 60,000 commits. Almost certainly some contributors work across different parts of the project, but it is fair to say that Ethereum is roughly equivalent to Bitcoin in terms of the number of engineers working on it. According to Ethernodes, 16,000 full nodes run Ethereum. It is difficult to determine the number of forks (as it is for contributors) because Ethereum is organized into several projects: for instance, go-ethereum has 6,800 forks, cpp-ethereum 2,000, and ethereumj 890. Ethereum has been in use since July 30, 2015. Etherscan shows that the highest number of transactions in a 24-hour period is 1,349,890, or 15.62 transactions per second, more than twice Bitcoin's 7 TPS ceiling. Rouhani and Deters have demonstrated that the Parity client outperforms the Geth client in Ethereum transaction processing speed. According to BitInfoCharts, the highest average transaction fee is approximately $4.15. As with Bitcoin, there are concerns about executing transactions at a fine level of granularity without overburdening the network's transaction throughput and paying settlement fees disproportionate to the value of the data being sent.
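The throughput figures quoted for both chains can be verified directly: transactions per second is simply the daily transaction count divided by the 86,400 seconds in a day.

```python
# Verifying the quoted peak-throughput figures from daily transaction counts.

SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

def tps(tx_per_day):
    return tx_per_day / SECONDS_PER_DAY

print(round(tps(425_000), 2))    # 4.92  -> Bitcoin's peak 7-day average
print(round(tps(1_349_890), 2))  # 15.62 -> Ethereum's peak day
```

Both results match the figures in the text, and both sit far below the thousands of TPS that conventional payment networks sustain.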
c. Other Blockchain
There are usually many forks of these two projects to choose from, as well as many other new blockchain releases. A significant number of these projects have not yet undergone the scrutiny that major chains like Ethereum and Bitcoin have received. There are few reviews by independent parties that examine, for example, the theoretical upper bound of transaction throughput, end-to-end security assessments, financial
viability and business plans, and many other concerns. Many projects with immature communities may not survive, leaving users with unclear migration paths. dApp developers should consider both the technical feasibility and the long-term robustness of a project before committing to a particular technology. At present, this kind of survey must be conducted by dApp developers themselves, but there is great potential for the community to systematically assess the available options, publishing best practices, pitfalls to avoid, and guidance on achieving scalability and sustainability.
**6.2. Novel Consensus Models**
The imaginative use of the PoW consensus model started a new era of blockchain, but it has also been criticized for its wastefulness: all participating nodes in a PoW network perform useless digital work to generate blocks, which consumes a huge amount of power [24]. For example, the annualized energy consumption of Bitcoin mining alone is already 11.8% higher than that of Switzerland and about 30% higher than that of Australia, a country of more than 7 million square kilometers. Moreover, this energy use was still growing rapidly, by more than 500% from May 2017 to May 2018, and one recent review predicted that Bitcoin transactions might consume more energy than Denmark by 2020. PoW is also an inherent cause of high transaction fees and long latency. As a result, developing an efficient consensus model for future blockchain systems has been a hot topic in both academia and industry. We look at some novel consensus models in this section [25].
**6.3. DApps ABC**
DApps ABC, or Decentralized Applications ABC, is an application that is not owned by anyone, cannot be turned off, whose system cannot go down, and whose security has been tested.
Picture 2. DApps Verification.
Publishers are trustworthy because they can add their credentials to blockchain records for added security. Hence, the blockchain verification system is designed to increase user security: it verifies users' identities and allows them to connect to digital-currency technology resources.
a. Proof of Stake (PoS)
As discussed in Section II-D, PoW uses hardware investment to prevent identity forgery in Sybil attacks. The PoS consensus model, by contrast, attempts to find an alternative solution to this problem. Unlike in PoW, network participants need not solve mathematical puzzles to write a block. Instead, the creator of a block is chosen at random in proportion to the participant's stake (i.e., the more stake a participant holds, the more likely it is to become the block creator). Under this scheme, the number of tokens a node holds becomes the barrier against identity forgery [26].
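The stake-weighted selection just described can be sketched with the standard library. The names and the toy stake distribution below are invented for illustration; real PoS protocols use verifiable on-chain randomness rather than a local generator.

```python
# Sketch of PoS leader selection: a block creator is drawn at random
# with probability proportional to the stake it holds.
import random

def pick_block_creator(stakes, rng):
    """stakes: dict mapping participant -> tokens held."""
    participants = list(stakes)
    weights = [stakes[p] for p in participants]
    return rng.choices(participants, weights=weights, k=1)[0]

stakes = {"alice": 60, "bob": 30, "carol": 10}
rng = random.Random(42)  # fixed seed so the sketch is reproducible
picks = [pick_block_creator(stakes, rng) for _ in range(10_000)]

# alice holds 60% of all stake, so she wins roughly 60% of the draws,
# which is exactly the "more stake, more likely" rule described above.
print(picks.count("alice") / len(picks))
```

This also makes the 51% attack intuition concrete: to dominate block production an attacker would have to buy a majority of the total stake.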
Table 1. Classification of PoW, PoS & DPoS

| | PoW | PoS | DPoS |
|---|---|---|---|
| Metaphor | City-State Democratic System | Capitalism System | Parliamentary System |
| Mechanism | One CPU One Vote | One Token One Vote | Vote for Delegates |
| Block Rewards | To Miners Solving PoW | To Token Holders as Interest | To Elected Supernodes Producing Blocks |
In other words, system intruders would need to hold a majority of the coins in circulation to perform a 51% attack. In practice this is very difficult: by the law of supply and demand, the price of tokens in a system will continually increase once the intruders start buying, which punishes them financially. More interestingly, once the intruders become the major stakeholders of a digital currency, they lose their motivation to attack, since their attack would disrupt the operation of the currency and in turn inflict financial damage on the intruders themselves. From another viewpoint, PoS is similar to PoW in that block creation has barriers; the main difference is that PoS urges network participants to invest their money in tokens instead of mining machines. So does PoS address the enormous overhead introduced by mathematical puzzle-solving in PoW while still preventing Sybil attacks? The answer is yes. However, that does not mean PoS is the ideal consensus model. One basic issue in PoS is rational forks by the stakeholders [27]. As discussed, PoS uses stake to replace the PoW computation; however, when a block creator in a PoS blockchain creates a fork, there is no cost for stakeholders to follow every sub-chain simultaneously. In effect, one fork doubles the stakeholders' tokens and two forks triple them, so stakeholders have nothing to lose by following all chains and collecting coins on every sub-chain. Too many forks on one blockchain introduce confusion, thereby reducing the value of the network. Because of these considerations, only a few cryptocurrencies on the market rely on PoS, such as Peercoin and ShadowCash.
b. Delegated Proof of Stake (DPoS)
The DPoS consensus model, as explained in ''DPOS Consensus Algorithm - The Missing White Paper'' for STEEM, tackles the identity-forgery problem from another angle: network participants delegate their block-production rights to a small group of supernodes. The barrier that DPoS erects against identity forgery in Sybil attacks is the difficulty of becoming a supernode. In a typical DPoS scheme, stakeholders vote for their preferred block-producer candidates, and those successfully elected receive rewards for producing correct and timely blocks [28]. With DPoS, the computational overhead of PoW is eliminated, since block producers do not have to compete with each other in mathematical computation. Additionally, stakeholders cannot perform rational forks, since the votes allotted to them are limited in quantity, i.e., proportional to the number of tokens they hold. In turn, the elected block producers are supervised by the majority of the stakeholders while performing their duties in exchange for the incentives generated by producing new blocks. Any malicious behavior by block producers will be reported, and ineffective block producers will be voted out as a consequence. The
number of block producers depends on the implementation: for instance, EOS has 21 supernodes while Asch has 101 delegates [29]. Block producers may also act as a governance gateway [30]: any proposed change to system parameters, such as transaction fees, block size, witness pay, or block intervals, must be approved by a majority of block producers. Since there is only a limited number of block producers in DPoS, and the voting process can readily screen out low-quality candidates, it is easier for the system to optimize its own performance. Accordingly, DPoS features relatively low latency, high efficiency, and good scalability. Nonetheless, there are doubts about the mechanism of delegated block production: critics argue that DPoS is not a truly decentralized platform, since guaranteeing the integrity of the block producers is impossible, and the small group of block producers might collude to maximize their own advantage [31]. Likewise, since block producers receive rewards, a group of candidates who were not elected might create forks of the main chain, resulting in multiple chains as well. In summary, DPoS proposes to use the power of stakeholder approval voting to resolve consensus issues in a fair and democratic way [32].
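The stake-weighted election at the heart of DPoS can be sketched as a simple tally that keeps the top N candidates (N = 21 for EOS, 101 for Asch, per the text). Candidate names and vote weights below are invented for illustration.

```python
# Sketch of a DPoS election: stakeholders cast ballots weighted by their
# token holdings, and the top-N candidates become the block-producing
# supernodes.
from collections import Counter

def elect_supernodes(votes, n):
    """votes: list of (candidate, token_weight) ballots; returns top-n candidates."""
    tally = Counter()
    for candidate, weight in votes:
        tally[candidate] += weight
    # Sort by total weight descending; ties broken by name for determinism.
    ranked = sorted(tally.items(), key=lambda kv: (-kv[1], kv[0]))
    return [candidate for candidate, _ in ranked[:n]]

ballots = [("d1", 500), ("d2", 300), ("d3", 300), ("d4", 100), ("d2", 250)]
print(elect_supernodes(ballots, n=3))  # ['d2', 'd1', 'd3']
```

Because each token grants exactly one unit of voting weight, a stakeholder cannot inflate its influence by splitting identities, which is the Sybil barrier described above.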
c. Comparison Between Consensus Models
We can use three political models as metaphors for PoW, PoS, and DPoS [33]. PoW, the first P2P consensus model for blockchains, resembles democratic voting in old European city-states: its ''One CPU One Vote'' idea is essentially the same as the ''One Person One Vote'' principle [34]. However, once the size of the system grows beyond a certain level, this form of democracy becomes inefficient. PoS, in turn, resembles interest earned on savings: newly produced tokens are distributed to stakeholders in proportion to their current holdings. More tokens bring more benefits in the system, which is a feature of capitalist systems: the means of production earn a recurring, automatic return from their operation [35]. In contrast, DPoS borrows from the parliamentary political model adopted by many nations: delegates are elected by the public to settle legal and social issues efficiently.
**7. Conclusion**
Blockchain systems provide the basis for decentralized applications by advancing cryptography, P2P networking, and consensus models. In this article, we have discussed the general definition of the authentication blockchain framework and reviewed its historical context, and we have examined the application scenarios on which future dApps will focus. The finding of this paper is that useful features of dApps and ongoing developments in blockchain technology, such as payment channels, new consensus models, authentication, and private blockchains, are leading to cutting-edge Internet service providers. With blockchain authentication, transactions do not need to depend on a single server, and users can also avoid various frauds that can result from hacks. Blockchain is not only used in the world of cryptocurrency but can also be applied in many industries, especially the financial sector, the public sector, technology and media, healthcare, retail, and various other industrial sectors.
**Acknowledgments**
We would like to thank the Ministry of Education, Culture, Research, and Technology (Higher Education PKM), which funded this research in accordance with SK 0357/E5/AK.04/2022. Thanks also to University of Raharja, particularly the Alphabet Incubator, which helped complete this research.
**References**
[1] A. Tandon, P. Kaur, M. Mäntymäki, and A. Dhir, “Blockchain applications in
management: A bibliometric analysis and literature review,” _Technol. Forecast. Soc._
_Change, vol. 166, p. 120649, 2021._
[2] F. Elghaish et al., “Blockchain and the ‘Internet of Things’ for the construction industry: research trends and opportunities,” Autom. Constr., vol. 132, p. 103942, 2021.
[3] W. Y. Ng _et al., “Blockchain applications in health care for COVID-19 and beyond: a_
systematic review,” Lancet Digit. Heal., vol. 3, no. 12, pp. e819–e829, 2021.
[4] M. Massaro, “Digital transformation in the healthcare sector through blockchain
technology. Insights from academic research and business developments,”
_Technovation, p. 102386, 2021._
[5] C. Esposito, M. Ficco, and B. B. Gupta, “Blockchain-based authentication and
authorization for smart city applications,” Inf. Process. Manag., vol. 58, no. 2, p. 102468,
2021.
[6] E. Guustaaf, U. Rahardja, Q. Aini, H. W. Maharani, and N. A. Santoso, “Blockchain-based education project,” Aptisi Trans. Manag., vol. 5, no. 1, pp. 46–61, 2021.
[7] Y. Xie et al., “Applications of blockchain in the medical field: Narrative review,” J. Med.
_Internet Res., vol. 23, no. 10, p. e28613, 2021._
[8] M. K. Lim, Y. Li, C. Wang, and M.-L. Tseng, “A literature review of blockchain technology
applications in supply chains: A comprehensive analysis of themes, methodologies and
industries,” Comput. Ind. Eng., vol. 154, p. 107133, 2021.
[9] T.-M. Choi and T. Siqin, “Blockchain in logistics and production from Blockchain 1.0 to
Blockchain 5.0: An intra-inter-organizational framework,” _Transp. Res. Part E Logist._
_Transp. Rev., vol. 160, p. 102653, 2022._
[10] B. E. Giri, D. Manongga, and A. Iriani, “Using social networking analysis (SNA) to analyze
collaboration between students (case Study: Students of open University in Kupang),”
_Int. J. Comput. Appl, vol. 85, pp. 44–49, 2014._
[11] I. D. Astuti, S. Rajab, and D. Setiyouji, “Cryptocurrency Blockchain Technology in the
Digital Revolution Era,” Aptisi Trans. Technopreneursh., vol. 4, no. 1, pp. 9–15, 2022.
[12] A. P. Balcerzak, E. Nica, E. Rogalska, M. Poliak, T. Klieštik, and O.-M. Sabie,
“Blockchain technology and smart contracts in decentralized governance systems,”
_Adm. Sci., vol. 12, no. 3, p. 96, 2022._
[13] U. Rahardja, A. S. Bist, M. Hardini, Q. Aini, and E. P. Harahap, “Authentication of Covid-19 Patient Certification with Blockchain Protocol”.
[14] B. Sundarakani, A. Ajaykumar, and A. Gunasekaran, “Big data driven supply chain
design and applications for blockchain: An action research using case study approach,”
_Omega, vol. 102, p. 102452, 2021._
[15] F. Völter, N. Urbach, and J. Padget, “Trusting the trust machine: Evaluating trust signals
of blockchain applications,” Int. J. Inf. Manage., p. 102429, 2021.
[16] R. Somya, D. Manongga, and M. A. I. Pakereng, “Service-oriented business intelligence
(SoBI) for academic and financial data integration in university,” in _2018 International_
_Seminar on Application for Technology of Information and Communication, 2018, pp. 1–_
5.
[17] R. T. Palar, D. Manongga, and W. H. Utomo, “An appropriate cloud computing business
model and its services for developing countries: A comparison of cloud computing
business model in Indonesia,” Int. J. Comput. Appl., vol. 43, no. 18, 2012.
[18] H. Jang, S. H. Han, and J. H. Kim, “User perspectives on blockchain technology: user-centered evaluation and design strategies for dapps,” IEEE Access, vol. 8, pp. 226213–226223, 2020.
[19] H. Latifah and Z. Fauziah, “Blockchain Teaching Simulation Using Gamification,” Aptisi
_Trans. Technopreneursh., vol. 4, no. 2, pp. 184–191, 2022._
[20] P. A. Christianto, E. Sediyono, and I. Sembiring, “Modification of Case-Based Reasoning
Similarity Formula to Enhance the Performance of Smart System in Handling the
Complaints of in vitro Fertilization Program Patients,” Healthc. Inform. Res., vol. 28, no.
3, p. 267, 2022.
[21] Y. Wang, Z. Li, G. Gou, G. Xiong, C. Wang, and Z. Li, “Identifying DApps and user
behaviors on ethereum via encrypted traffic,” in International Conference on Security and
Privacy in Communication Systems, 2020, pp. 62–83.
_Security Level Significance in DApps …_ - 304
-----
**Aptisi Transactions on Technopreneurship (ATT)** **P-ISSN: 2655-8807**
**Vol. 4 No. 3 November 2022** **E-ISSN: 2656-8888**
[22] C. Udokwu, P. Brandtner, A. Norta, A. Kormiltsyn, and R. Matulevičius, “Implementation
and evaluation of the DAOM framework and support tool for designing blockchain
decentralized applications,” Int. J. Inf. Technol., vol. 13, no. 6, pp. 2245–2263, 2021.
[23] S. Roopashree, J. Anitha, T. R. Mahesh, V. V. Kumar, W. Viriyasitavat, and A. Kaur, “An
IoT based authentication system for therapeutic herbs measured by local descriptors
using machine learning approach,” Measurement, vol. 200, p. 111484, 2022.
[24] C. Wang, D. Wang, G. Xu, and D. He, “Efficient privacy-preserving user authentication
scheme with forward secrecy for industry 4.0,” Sci. China Inf. Sci., vol. 65, no. 1, pp. 1–
15, 2022.
[25] Y. Zheng, W. Liu, C. Gu, and C.-H. Chang, “PUF-based mutual authentication and key
exchange protocol for peer-to-peer IoT applications,” _IEEE Trans. Dependable Secur._
_Comput., 2022._
[26] A. H. Sodhro, A. I. Awad, J. van de Beek, and G. Nikolakopoulos, “Intelligent
authentication of 5G healthcare devices: A survey,” Internet of Things, p. 100610, 2022.
[27] S. Shamshad, M. F. Ayub, K. Mahmood, S. Kumari, S. A. Chaudhry, and C.-M. Chen,
“An enhanced scheme for mutual authentication for healthcare services,” _Digit._
_Commun. Networks, vol. 8, no. 2, pp. 150–161, 2022._
[28] D. Apriliasari and B. A. P. Seno, “Inovasi Pemanfaatan Blockchain dalam Meningkatkan
Keamanan Kekayaan Intelektual Pendidikan,” J. MENTARI Manajemen, Pendidik. dan
_Teknol. Inf., vol. 1, no. 1, pp. 68–76, 2022._
[29] M. Shen, H. Lu, F. Wang, H. Liu, and L. Zhu, “Secure and Efficient Blockchain-assisted
Authentication for Edge-Integrated Internet-of-Vehicles,” _IEEE Trans. Veh. Technol.,_
2022.
[30] P. A. Sunarya, “Penerapan Sertifikat pada Sistem Keamanan menggunakan Teknologi
Blockchain,” J. MENTARI Manajemen, Pendidik. dan Teknol. Inf., vol. 1, no. 1, pp. 58–
67, 2022.
[31] S. R. Akhila, Y. Alotaibi, O. I. Khalaf, and S. Alghamdi, “Authentication and Resource
Allocation Strategies during Handoff for 5G IoVs Using Deep Learning,” _Energies, vol._
15, no. 6, p. 2006, 2022.
[32] T. Schmidbauer, J. Keller, and S. Wendzel, “Challenging channels: Encrypted covert
channels within challenge-response authentication,” in _Proceedings of the 17th_
_International Conference on Availability, Reliability and Security, 2022, pp. 1–10._
[33] B. Rawat, A. S. Bist, D. Supriyanti, V. Elmanda, and S. N. Sari, “AI and Nanotechnology
for Healthcare: A survey,” APTISI Trans. Manag., vol. 7, no. 1, pp. 86–91, 2023.
[34] V. Elmanda, A. E. Purba, Y. P. A. Sanjaya, and D. Julianingsih, “Efektivitas Program
Magang Siswa SMK di Kota Serang Dengan Menggunakan Metode CIPP di Era
Adaptasi New Normal Pandemi Covid-19,” ADI Bisnis Digit. Interdisiplin J., vol. 3, no. 1,
pp. 5–15, 2022.
[35] A. Adeel _et al., “A multi‐attack resilient lightweight IoT authentication scheme,”_ _Trans._
_Emerg. Telecommun. Technol., vol. 33, no. 3, p. e3676, 2022._
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.34306/att.v4i3.277?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.34306/att.v4i3.277, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://att.aptisi.or.id/index.php/att/article/download/277/187"
}
| 2,022
|
[] | true
| 2022-10-28T00:00:00
|
[
{
"paperId": "79230bf676e3cd4e8122816cf3b3d35dcf0820d1",
"title": "Secure and Efficient Blockchain-Assisted Authentication for Edge-Integrated Internet-of-Vehicles"
},
{
"paperId": "5187cdc8cb9cfb4a951e6c954ac92d238d932004",
"title": "Penerapan Sertifikat pada Sistem Keamanan menggunakan Teknologi Blockchain"
},
{
"paperId": "769226bb94e60a619c8c9678a1fc12797829cc2d",
"title": "Inovasi Pemanfaatan Blockchain dalam Meningkatkan Keamanan Kekayaan Intelektual Pendidikan"
},
{
"paperId": "af3e59774b372fa908342bb30acd079ccde16a21",
"title": "Intelligent authentication of 5G healthcare devices: A survey"
},
{
"paperId": "6fdf54d64b267c1264a0656d22ae8ae6b3bf8137",
"title": "Challenging Channels: Encrypted Covert Channels within Challenge-Response Authentication"
},
{
"paperId": "68ddc3e5330417db7718e913118907581725476e",
"title": "Blockchain Technology and Smart Contracts in Decentralized Governance Systems"
},
{
"paperId": "9840af5e308150bd0ea5ca13a4412fb72876fe90",
"title": "An IoT based authentication system for therapeutic herbs measured by local descriptors using machine learning approach"
},
{
"paperId": "c4ed0590e84627c29f1cd73d6cff9989c052e0d1",
"title": "Blockchain Teaching Simulation Using Gamification"
},
{
"paperId": "a66e91e03d4840ef8ce86ec1c01f0ec597cba26e",
"title": "Modification of Case-Based Reasoning Similarity Formula to Enhance the Performance of Smart System in Handling the Complaints of in vitro Fertilization Program Patients"
},
{
"paperId": "514e509296d9c2da25392e5686df56cc767c85fa",
"title": "Blockchain in logistics and production from Blockchain 1.0 to Blockchain 5.0: An intra-inter-organizational framework"
},
{
"paperId": "47670995f5340b979559676535dd3d437f5cda09",
"title": "Authentication and Resource Allocation Strategies during Handoff for 5G IoVs Using Deep Learning"
},
{
"paperId": "b690fe45b14a6ad33b0d08c1b28d632f0f6ab8dd",
"title": "AI and Nanotechnology for Healthcare: A survey"
},
{
"paperId": "b6f903066104d4546958013fca4ebd5c17025769",
"title": "Efektivitas Program Magang Siswa SMK di Kota Serang Dengan Menggunakan Metode CIPP di Era Adaptasi New Normal Pandemi Covid-19"
},
{
"paperId": "860098891b300e1088ae1f920983dc18587b3992",
"title": "Cryptocurrency Blockchain Technology in the Digital Revolution Era"
},
{
"paperId": "3e0dcdf167ad2a272b3619d1d70f5a6bf1add4c6",
"title": "Blockchain and the ‘Internet of Things' for the construction industry: research trends and opportunities"
},
{
"paperId": "1de325952eb93eaffa8c025d47eea9c249d2d529",
"title": "Implementation and evaluation of the DAOM framework and support tool for designing blockchain decentralized applications"
},
{
"paperId": "38feca91a929a1a30fe04218e23b5d91e52183a8",
"title": "Trusting the trust machine: Evaluating trust signals of blockchain applications"
},
{
"paperId": "b071917060a55ef90f7553fb4ff6f98a05f75c0c",
"title": "Blockchain applications in health care for COVID-19 and beyond: a systematic review"
},
{
"paperId": "77d938c9e243ee916cf2760043b649ea44629c7d",
"title": "Digital transformation in the healthcare sector through blockchain technology. Insights from academic research and business developments"
},
{
"paperId": "1948b5c517e0ee7ad095a797ee3d42cc74f1227f",
"title": "Efficient privacy-preserving user authentication scheme with forward secrecy for industry 4.0"
},
{
"paperId": "28874ccfd3b803b3c15ddf2325b1e67935477aca",
"title": "An enhanced scheme for mutual authentication for healthcare services"
},
{
"paperId": "28315f9910d41c0a6182d9d3f3acd7e7a34800a6",
"title": "Blockchain applications in management: A bibliometric analysis and literature review"
},
{
"paperId": "c61ee4eaf2e0510b00e60d7a2678624e91237062",
"title": "PUF-Based Mutual Authentication and Key Exchange Protocol for Peer-to-Peer IoT Applications"
},
{
"paperId": "f3bcce35c1fa11da40607abada40da5852c1a551",
"title": "Applications of Blockchain in the Medical Field: Narrative Review"
},
{
"paperId": "f9ccac18408d126daa864996723004b303076b2e",
"title": "Big data driven supply chain design and applications for blockchain: An action research using case study approach"
},
{
"paperId": "c920af2a1ef51a48e8e9fb4ea65181cab6862c10",
"title": "A multi‐attack resilient lightweight IoT authentication scheme"
},
{
"paperId": "bc0aa8fde23961a5ba46d072cde74a734c7df3f3",
"title": "Service-Oriented Business Intelligence (SoBI) for Academic and Financial Data Integration in University"
},
{
"paperId": "41eaa2210ca9e2b833470febca70d61d3a3fac20",
"title": "Using Social Networking Analysis (SNA) to Analyze Collaboration between Students (Case Study: Students of Open University in Kupang)"
},
{
"paperId": "5b44c6b6ce403421bac3444f8d1897023da0d316",
"title": "An Appropriate Cloud Computing Business Model and Its Services for Developing Countries: A Comparison of Cloud Computing Business Model in Indonesia"
},
{
"paperId": "cfe0b305ed5e1ccfb6f5c5bbf59014c1567a9631",
"title": "A literature review of blockchain technology applications in supply chains: A comprehensive analysis of themes, methodologies and industries"
},
{
"paperId": "63009b9c9392e044d30e0877899d43706b186298",
"title": "Blockchain-based authentication and authorization for smart city applications"
},
{
"paperId": "6e25bdbb42b80905d6f1fd835e86c5d04ccf5e87",
"title": "User Perspectives on Blockchain Technology: User-Centered Evaluation and Design Strategies for DApps"
},
{
"paperId": "0e23dd7c1b0ec963574f9ef97a2708b31acc6e15",
"title": "Identifying DApps and User Behaviors on Ethereum via Encrypted Traffic"
},
{
"paperId": null,
"title": "Blockchainbased education project"
},
{
"paperId": null,
"title": "Authentication of Covid - 19 Patient Certification with Blockchain Protocol ”"
}
] | 12,665
|
en
|
[
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Agricultural and Food Sciences",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02ca10957f0d791c05c4cd7808f129dd6d13607b
|
[] | 0.856693
|
Use of Blockchain to Improve Case Studies in Food Supply
|
02ca10957f0d791c05c4cd7808f129dd6d13607b
|
Blockchain Frontier Technology
|
[
{
"authorId": "2159810798",
"name": "Haryanto"
},
{
"authorId": "2209153309",
"name": "Rio Argi Fiananda"
},
{
"authorId": "2209164532",
"name": "Shilvia Wanri"
},
{
"authorId": "2148558301",
"name": "Sunarsih"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Blockchain Front Technol"
],
"alternate_urls": null,
"id": "3c3b9aa4-3fe3-4b90-b0df-edad76fea5c0",
"issn": "2808-0009",
"name": "Blockchain Frontier Technology",
"type": "journal",
"url": null
}
|
The current food supply is a linear economic model that directly or indirectly fulfills needs. However, this model has several weaknesses, such as the relationship between members of the supply chain or the lack of information to consumers about the origin of the product. In this paper we propose a new supply chain model via blockchain. This new model enables the concept of a circular economy and removes many of the disadvantages of the current supply chain. To coordinate all the transactions that occur in the food supply, a multi-agent system is created for this paper.
|
**Blockchain Frontier Technology (B-Front)** **P-ISSN: 2808-0831**
**Vol. 1 No. 2 January 2022** **E-ISSN: 2808-0009**
# Use of Blockchain to Improve Case Studies in Food Supply
**Haryanto[1], Rio Argi Fiananda[2], Shilvia Wanri[3], Sunarsih[4]**
Information Systems, University of Raharja[1,2]
Management Information System, University of Raharja[3,4]
Indonesia
e-mail: haryanto@raharja.info, rio.argi@raharja.info, shilvia@raharja.info, sunarsih@raharja.info
Haryanto, Fiananda, R. A., Shilvia Wanri, & Sunarsih. (2022). Use of Blockchain to Improve
Case Studies in Food Supply. Blockchain Frontier Technology, 1(2), 96–102.
**DOI: https://doi.org/10.34306/bfront.v1i2.256**
**_Abstract_**
_The current food supply is a linear economic model that directly or indirectly fulfills_
_needs. However, this model has several weaknesses, such as the relationship between_
_members of the supply chain or the lack of information to consumers about the origin of the_
_product. In this paper, we propose a new supply chain model via blockchain. This new model_
_enables the concept of a circular economy and removes many of the disadvantages of the_
_current supply chain. A multi-agent system is created for this paper to coordinate all the_
_transactions that occur in the food supply._
**Keywords: Blockchain Use, Improving, Food Supply**
**1. Introduction**
Blockchain is currently gaining interest from a wide variety of industries: finance,
healthcare, utilities, the government sector, and others [1]. One reason for this interest is that
applications which previously could work only through trusted intermediaries can now operate
in a decentralized way, without the need for a central verification system, while achieving the
same functionality with the same level of reliability [2]. This was not possible before the
blockchain was created; blockchain enables a trustless network, in which users can make
transfers without needing to trust one another. With fewer intermediaries, transactions between
users become faster [3].
Moreover, the use of cryptography in the blockchain ensures that the information is secure.
Blockchain is an accounting ledger that records all transactions made by users [4]. This has
researchers and developers in the Internet of Things (IoT) looking for ways to connect IoT with
the blockchain. Today the supply chain is a core area for companies concerned with the
transportation of products between parties. However, the problem with this sector is that its
scale can cause delays and failures in the delivery of goods as well as other problems [5]. In
addition, large distributors need large volumes of workers to fulfill all the demands of the
stores. All of this can lead to major delays in order processing and increase the chances of
lost orders. To solve this problem, companies have automated all of their processes,
contributing to a significant increase in the number of businesses and distributors in the supply
chain. However, the increasing amount of digital data and the expansion of Internet companies
mean that there is also a greater risk of attacks on their databases. Attackers may intend to
change, steal, or delete data [6].
We suggest an alternative way to solve this problem. In our case study (i.e., the agricultural
supply chain), we will consider two different scenarios [7]. First, we provide security to the data
of companies involved in the supply chain with the inclusion of blockchain. Second, a
multi-agent system will be used for organizational matters.
_Use Of Blockchain to Improve Case Studies in Food Supply_ ■ 96
-----
It has been proven that multi-agent
systems provide efficient solutions to a wide variety of problems. This includes, but is not
limited to, the use of agents for image classification, decentralized network control, real-time
troubleshooting, and Internet of Things applications. In this paper, we propose a new supply
chain model. This new model enables the use of a circular economy in supply chains [8]. In
addition, it coordinates everything that happens in the supply chain. In each supply chain
member, an agent is defined to coordinate all operations and transactions performed by that
supply chain member.
**2. Related Work**
Blockchain is a distributed data structure that is replicated and shared among network
members. It was introduced with Bitcoin to solve the problem of double-spending. As a result
of how the nodes in Bitcoin (which are called miners) mutually validate agreed transactions,
the Bitcoin blockchain establishes owners and declares what they own [9]. Blockchain is built
using cryptography. Each block is identified by its cryptographic hash and each block refers to
the hash of the previous block [10]. It establishes links between blocks, forming a blockchain.
For this reason, users can interact with the blockchain by using a pair of public and private
keys. Miners on the blockchain need to approve transactions and the order in which they
occur. Otherwise, individual copies of this blockchain can diverge resulting in forks; miners
then have different views of how transactions occurred, and it is not possible to maintain a
single blockchain until the fork is resolved [11]. To overcome this, a distributed consensus
mechanism is needed in every blockchain network. Blockchain resolves forks through mining:
to append the next block, a node must find a nonce such that the block's SHA-256 hash has
the number of leading zeros expected by the network [12]. Whichever node solves this puzzle
first has generated what is called a proof-of-work (PoW)
and forms the next block of the chain. Since a one-way cryptographic hash function is
involved, any other node can easily verify that the answer provided meets the requirements
[13]. Note that forks may still occur on a network when two competing nodes mine blocks
almost simultaneously. The fork is usually resolved automatically by the next block. With the
adoption of blockchain, smart contracts are included to make transactions between different
users faster and more effective. Nick Szabo introduced this concept in 1994 and defined a
smart contract as "a computerized transaction protocol that enforces the terms of a contract"
[14]. Szabo suggested that contractual clauses could be transferred to the code, thereby
reducing the need for intermediaries in transactions between parties. In the blockchain
context, smart contracts are scripts that are stored on the blockchain. Smart contracts have a
unique address on the blockchain (that is, they reside in a block with a hash that identifies
them) [15]. We can trigger a smart contract in a transaction by indicating an address on the
blockchain. It is executed independently and automatically in a defined manner on each node
in the network, according to the data contained in the transactions that are triggered. A
multi-agent system is a computerized system consisting of several intelligent agents that
interact with each other [16].
Multi-agent systems are used to solve complex problems with excellent results.
Multi-agent systems are used in a variety of applications. One author presents a multi-agent
system for the smart use of electricity in the smart home, thereby increasing its energy
efficiency. Another problem that multi-agent systems have solved effectively is voice
monitoring in various situations [17].
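The hash-linking and proof-of-work search described above can be illustrated with a short Python sketch. This is a toy model, not the actual Bitcoin implementation: the block layout, the JSON serialization, and the difficulty of two leading zeros are simplifying assumptions.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """SHA-256 hash of the block's canonical JSON serialization."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(prev_hash: str, transactions: list, difficulty: int = 2) -> dict:
    """Search for a nonce so the block hash starts with `difficulty` zeros
    (a toy proof-of-work); each block links to the previous block's hash."""
    block = {"prev_hash": prev_hash, "transactions": transactions, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block

genesis = mine("0" * 64, ["genesis"])
next_block = mine(block_hash(genesis), ["farmer -> processor: 100 kg wheat"])
```

Any node can cheaply verify the work by recomputing a single hash, while finding the nonce took many attempts; tampering with a block changes its hash and breaks the link from every later block.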
**Figure 1. Supply Models**
This is a linear model from producers and imports to retailers and food services.
Through the inclusion of the blockchain, the supply chain is now decentralized and all
transactions are placed on the blockchain [18]. Each member of the supply chain can write
their transactions on the blockchain. However, supply chain members can only read
blockchain blocks they have a direct connection with [19]. Logistics is not a new problem, and a
multi-agent system is proposed to provide solutions to logistics problems. In addition, another
successful application of multi-agent systems is the problem of distributed computing.
Therefore, several proposals that we found in the literature combine the advantages of
blockchain and multi-agent systems [20]. Various systems that integrate blockchain and
multi-agent systems are worth mentioning. One work proposes to use both technologies
to increase security and privacy in decentralized energy networks. The authors propose a
model that employs agents and blockchain for a ride-sharing system [21]. Apart from that,
there are other applications of blockchain and multi-agent systems. The authors propose an
innovative blockchain model for IoT. However, after seeing its sophistication, we believe that
the current models of blockchain and multi-agent systems have some drawbacks. We propose
a new model that utilizes smart contracts and a multi-agent system, which aims to improve
efficiency in the management of the logistics system. This paper describes a case study that
verifies the proposed model, focusing specifically on the agricultural supply chain sector [22].
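The read restriction mentioned above, where a member can only read blocks written by members it is directly connected with, can be sketched as follows. The member names, connection graph, and record layout are purely illustrative assumptions, not details from the paper.

```python
# Hypothetical connection graph between supply chain members.
CONNECTIONS = {
    "producer": {"processor"},
    "processor": {"producer", "retailer"},
    "retailer": {"processor"},
}

# A simplified ledger: every member writes records tagged with the author.
ledger = [
    {"author": "producer", "data": "harvested batch #17"},
    {"author": "processor", "data": "milled batch #17"},
    {"author": "retailer", "data": "sold batch #17"},
]

def readable_by(member: str) -> list:
    """Records a member may read: its own plus those of direct connections."""
    allowed = {member} | CONNECTIONS.get(member, set())
    return [rec for rec in ledger if rec["author"] in allowed]
```

In this sketch the producer can read its own and the processor's records, but not the retailer's.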
**3. Method**
A new model for farm tracking is presented in this paper. The proposed model
involves blockchain and smart contracts to coordinate food tracking in agricultural supply
chains [23]. Through the implementation of this new model, the agricultural supply chain gains
improvements from the addition of the blockchain. Figure 1 shows the current supply chain
and the supply chain architecture via blockchain. Both models are described below,
including the advantages provided by the new supply chain model. 1) Current supply chain:
The current model starts with manufacturers and imports [24]. These two members of the
supply chain transmit their products and data to the next layer of the supply chain. On the next
layer are exports, processors, and wholesalers. It is the middle layer that processes the basic
products that are received by the supply chain. Finally, in the last layer are retailers and food
services that sell products [25]. The main disadvantage of this model is that data is centralized
in each element of the supply chain and other elements cannot see transactions. The main
implication of this loss is that consumers have no way of verifying the source of the food to be
purchased. In addition, there is no way to ensure that consumer data is reliable. 2) Supply
chain via blockchain: The model changes with the addition of blockchain to the agricultural
supply chain. Now all members of the supply chain store all their transactions on the
blockchain. This allows for higher security in transactions. In addition, the new model corrects
current supply chain weaknesses. The data is decentralized and each member can read data
essential for their operations on the blockchain. For example, a manufacturer can view a
processor's product info and a transportation provider's pick-up details.
**4. Results and Discussion**
This allows for higher security in transactions. In addition, the new model corrects current
supply chain weaknesses. The data is decentralized and each member can read the data
essential for their operations on the blockchain. For example, a manufacturer can view a
processor's product info and a transportation provider's pick-up details. This new model,
available via the blockchain, coordinates all members of the supply chain.
**Figure 2. Supply chain via blockchain architecture.**
Each layer sends the data from its transactions to the blockchain. In addition, the layers that
handle goods communicate with each other through smart contracts. These smart contracts
are used for buying and selling goods.
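As a rough illustration of such a buy/sell contract between two layers, the following sketch records an agreed sale on a shared ledger. The field names, quantities, and pricing logic are hypothetical, invented for illustration rather than taken from the paper.

```python
ledger = []

def sale_contract(seller: str, buyer: str, item: str,
                  qty: int, unit_price: float) -> dict:
    """Record an agreed sale between two supply chain layers on the ledger."""
    tx = {
        "type": "sale",
        "seller": seller,
        "buyer": buyer,
        "item": item,
        "qty": qty,
        "total": round(qty * unit_price, 2),
    }
    ledger.append(tx)  # every member's transaction ends up on the shared ledger
    return tx

tx = sale_contract("processor", "retailer", "flour", 50, 1.20)
```

Because every sale is appended to the shared ledger, other layers with the right permissions can later verify what was bought and sold.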
**Figure 3. The figure shows the concept of the linear and circular economy.**
This change in the market model is made possible by the inclusion of the blockchain.
This model has 5 layers: 1) On the manufacturer layer is the manufacturer's agent. This
agent coordinates all the operations that producers have to perform (e.g. buying materials,
selling products, etc.). 2) At the processor layer is the processor agent. This agent coordinates
all the tasks performed at this layer (e.g. buying key materials, selling products, contracting
transportation providers, etc.). 3) In the transport layer are the transport provider agents.
These agents coordinate all transportation between members of the supply chain. 4) At the
retail layer is the retail agent. These agents coordinate the purchase of materials from
processors and sales to consumers. Lastly, 5) at the blockchain layer is the blockchain agent.
This new supply chain via the blockchain enables a new market model called the circular
economy. The changing market model is shown in Figure 3. Meanwhile, the current supply
chain follows the Take–Make–Dispose model.
With the supply chain via the blockchain, the circular economy model is enabled. This new
market model follows the Make–Use–Recycle model, which allows for a self-sufficient
economy. With the use of blockchain, all products can be traced from their origin to their sale
and subsequent recycling. The advantage of this model over a linear economy is that all
products are tracked on the blockchain; this traceability gives the final consumer confidence
about the product's origin: whether it is recycled, whether it is being used for the first time, and
so on.
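The traceability described above (following a product from Make through Use to Recycle) can be sketched as a simple walk over the ledger records for one product. The record layout and batch identifiers are illustrative assumptions.

```python
# A simplified shared ledger; each record names a product, a lifecycle stage,
# and the supply chain actor who wrote it.
ledger = [
    {"product": "batch-17", "stage": "make", "actor": "producer"},
    {"product": "batch-17", "stage": "use", "actor": "consumer"},
    {"product": "batch-99", "stage": "make", "actor": "producer"},
    {"product": "batch-17", "stage": "recycle", "actor": "recycler"},
]

def trace(product_id: str) -> list:
    """Return a product's history in ledger order: make -> use -> recycle."""
    return [rec["stage"] for rec in ledger if rec["product"] == product_id]
```

A consumer querying `trace("batch-17")` would see the full Make–Use–Recycle history, while `batch-99` has so far only been made.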
**5. Conclusions**
This research presents a new blockchain approach to improve the current supply
chain. The novelty of this paper lies in using the blockchain to store all transaction
information in the supply chain of the proposed case study. In addition, multi-agent systems
use smart contracts to manage the entire supply chain process more efficiently; this
eliminates the middleman and enables a circular economy market.
Our model can be used to improve any supply chain. The case study undertaken in this
proposal focuses on the agricultural sector. Our model has increased security and efficiency
because it is automated by the agent system. By incorporating the blockchain, we provide an
agricultural system with solid security features: shipments can be tracked, origin and
destination can be authenticated, and proof of all transactions can be stored and cannot be
manipulated. Another novelty of this paper is that the agent verifies that both parties comply
with the smart contract terms. If the agent detects that one of the parties does not meet the
conditions set, a penalty is imposed and the agent keeps the money with the controlling entity
until the agreed conditions are met. This makes our model more efficient than the current
model, and it can also track and authenticate orders. Furthermore, a rating and
reward system is introduced in the supply chain via blockchain to recognize and reward the
most fulfilling members of this new supply chain model. Future research lines include
enhancing the multi-agent system by introducing new agents for monitoring procedures. In
addition, our model can be enhanced by integrating a case-based reasoning (CBR)
system.
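The penalty mechanism described in the conclusion, in which the agent withholds funds until both parties meet the contract terms, could be sketched as follows. The 10% penalty rate and the term names are invented for illustration; the paper does not specify them.

```python
def settle_escrow(amount: float, terms_met: dict,
                  penalty_rate: float = 0.1) -> dict:
    """Release escrowed funds if every contract term is met; otherwise apply
    a penalty and keep the funds with the controlling entity."""
    if all(terms_met.values()):
        return {"released": amount, "penalty": 0.0, "held": 0.0}
    penalty = round(amount * penalty_rate, 2)
    return {"released": 0.0, "penalty": penalty, "held": amount}

ok = settle_escrow(100.0, {"delivered": True, "paid": True})
bad = settle_escrow(100.0, {"delivered": True, "paid": False})
```

In the compliant case the full amount is released; in the non-compliant case the agent keeps the funds and records a penalty until the agreed conditions are met.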
**References**
[1] U. Rahardja, “Meningkatkan Kualitas Sumber Daya Manusia Dengan Sistem Pengembangan
Fundamental Agile,” ADI Bisnis Digit. Interdisiplin J., vol. 3, no. 1, pp. 63–68, 2022.
[2] R. Yunita, M. S. Shihab, D. Jonas, H. Haryani, and Y. A. Terah, “Analysis of The Effect of
Servicescape and Service Quality on Customer Satisfaction at Post Shop Coffee Tofee in Bogor
City,” Aptisi Trans. Technopreneursh., vol. 4, no. 1, pp. 66–74, 2022.
[3] D. Rustiana, J. D. Pratama, T. Mudabbir, and M. A. Fahmi, “Adoption Computerized Certificate
Transparency And Confidentiality,” Int. J. Cyber IT Serv. Manag., vol. 2, no. 1, pp. 1–10, 2022.
[4] S. Azizah and B. P. K. Bintoro, “Determining Factors of Continuance Intention to Use QR Code
Mobile Payment on Urban Millennials in Indonesia Empirical Study on Mobile Payment
Funds,” ADI J. Recent Innov., vol. 3, no. 2, pp. 121–138, 2022.
[5] L. A. Faza, P. M. Agustini, S. Maesaroh, A. C. Purnomo, and E. A. Nabila, “Motives For
Purchase of Skin Care Product Users (Phenomenology Study on Women in DKI Jakarta),” ADI
_J. Recent Innov., vol. 3, no. 2, pp. 139–152, 2022._
[6] C. S. Bangun and N. A. Santoso, “Inovasi Pengembangan Kartu Ujian Online pada Web Portal
dengan Metode Waterfall,” J. MENTARI Manajemen, Pendidik. dan Teknol. Inf., vol. 1, no. 1,
pp. 1–8, 2022.
[7] H. Purwantih, Z. F. Rahayu, W. Amelia, R. Dwi, and H. M. Bilqis, “Rancang Bangun Sistem
Seleksi Rekrutmen Karyawan Dan Guru Berbasis Website Pada Sekolah Citra Bangsa
_Use Of Blockchain to Improve Case Studies in Food Supply_
**Blockchain Frontier Technology (B-Front)** **P-ISSN: 2808-0831**
**Vol. 1 No. 2 January 2022** **E-ISSN: 2808-0009**
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.34306/bfront.v1i2.256?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.34306/bfront.v1i2.256, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYSA",
"status": "HYBRID",
"url": "https://journal.pandawan.id/b-front/article/download/256/215"
}
| 2,022
|
[] | true
| 2022-01-13T00:00:00
|
[] | 6,190
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02ccb726813155385495d7c87ed939dccc6d5472
|
[
"Computer Science"
] | 0.846118
|
Distributed computation and control of robot motion dynamics on FPGAs
|
02ccb726813155385495d7c87ed939dccc6d5472
|
SN Applied Sciences
|
[
{
"authorId": "1835702",
"name": "Vinzenz Bargsten"
},
{
"authorId": "1816695029",
"name": "J. de Gea Fernández"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"SN Appl Sci"
],
"alternate_urls": null,
"id": "4766beb3-bec6-4b4a-a5fc-cf12ba2339b6",
"issn": "2523-3963",
"name": "SN Applied Sciences",
"type": null,
"url": "https://www.springer.com/engineering/journal/42452"
}
|
Driven by advances in miniaturized electronics, many robotic systems today consist of modular hardware components. This leads to numerous computing units that are distributed within such systems. In order to make better use of such hardware structures from a computational point of view, the implementation of classical control approaches should be reconsidered. Following this idea, a method for distributed computation of motion dynamics of robotic systems using Field Programmable Gate Arrays is discussed. In this approach, local low-level actuator controllers are regarded as interconnected nodes, which are aware of their actuated degree of freedom and the resulting motion of the attached rigid body link element. A modified recursive Newton–Euler algorithm is computed by these nodes, where each node only exchanges data with the neighboring nodes. In the computations, a linear dependency on the dynamic parameters is kept in order to simplify the direct use of dynamic parameters estimated from experimental data. Implementation details and experimental results using a robotic manipulator arm are presented. The experiments show that the method allows compliant motion control. Payloads and external forces acting on a link are considered in the distributed computation of the model by informing the node controlling the respective link of the additional forces.
|
**Research Article**
# Distributed computation and control of robot motion dynamics on FPGAs
**Vinzenz Bargsten[1] · José de Gea Fernández[1]** ([ORCID](http://orcid.org/0000-0001-8884-2103))
Received: 10 February 2020 / Accepted: 12 May 2020 / Published online: 17 June 2020
© The Author(s) 2020 OPEN
**Abstract**
Driven by advances in miniaturized electronics, many robotic systems today consist of modular hardware components.
This leads to numerous computing units that are distributed within such systems. In order to make better use of such
hardware structures from a computational point of view, the implementation of classical control approaches should be
reconsidered. Following this idea, a method for distributed computation of motion dynamics of robotic systems using
Field Programmable Gate Arrays is discussed. In this approach, local low-level actuator controllers are regarded as interconnected nodes, which are aware of their actuated degree of freedom and the resulting motion of the attached rigid
body link element. A modified recursive Newton–Euler algorithm is computed by these nodes, where each node only
exchanges data with the neighboring nodes. In the computations, a linear dependency on the dynamic parameters is
kept in order to simplify the direct use of dynamic parameters estimated from experimental data. Implementation details
and experimental results using a robotic manipulator arm are presented. The experiments show that the method allows
compliant motion control. Payloads and external forces acting on a link are considered in the distributed computation
of the model by informing the node controlling the respective link of the additional forces.
**Keywords Distributed control · FPGA · Robot motion dynamics · Recursive Newton–Euler · Actuator control · Dynamic**
control
## 1 Introduction
While it is still common for heavy industrial robotic systems
with a large control cabinet to perform the actuator control in a
central system, this has become unfavorable for lightweight
systems as used for human–robot collaboration or mobile
systems. In particular, availability of micro-controllers and
miniaturized power electronics has advanced greatly in
the recent decades, while at the same time costs have
dropped. Instead of routing every motor’s power lines
and every sensors connection through the structure to a
central control system, local control loops computed by
the electronics placed at each actuator are used. Consequently, a shared communication bus and a power supply line routed through the structure are sufficient. These
increasing local computational capabilities and decision
logic motivate researching distributed control approaches
and even question the centralized computation of classical algorithms. One aspect is to make better use of the
hardware structure. However, using decentralized control
techniques has a number of additional advantages, such
as simplification of the controller design problem, lower
latencies in local control loops, and support of modularity.
**[Electronic supplementary material The online version of this article (https://doi.org/10.1007/s42452-020-2898-6) contains](https://doi.org/10.1007/s42452-020-2898-6)**
supplementary material, which is available to authorized users.
- Vinzenz Bargsten, vinzenz.bargsten@dfki.de; José de Gea Fernández, jose.de_gea_fernandez@dfki.de | [1]DFKI GmbH Robotics
Innovation Center, Robert‑Hooke‑Str. 1, 28359 Bremen, Germany.
SN Applied Sciences (2020) 2:1239 | https://doi.org/10.1007/s42452-020-2898-6
In this work, the focus is on the distributed computation
of motion dynamics of a robotic system. A model of the
robot motion dynamics is the basis for many advanced
motion and force control approaches. Classically, it is
computed centrally in a fixed control cycle. Such model
relates the actuation forces or torques with the motion
of the system and external forces and is therefore fundamental for advanced motion and force control of robotic
systems. However, since these models depend on the
state of all degrees of freedom, the common approach is
to obtain the complete state of each actuator, compute
centrally the control loop including the motion dynamic
model, and send an updated motor command to each
actuator. This means that all the state and sensory information has to be messaged from the distributed units to a
central point prior to computation of the control update.
After the update, the new commands are messaged to the
actuators in return. This synchronizes the control actions
to a frequency constrained on one hand by the requirement to be computed online and on the other hand by
response time and thus control stability. A solution to this
problem applied in industrial robot manipulators is to use
a communication system specifically tailored to the complete system in order to meet the requirements of control
frequency and latency.
However, this approach does not scale well to robotic
systems which become more modular, more complex in
terms of number of degrees of freedom (DOFs), types of
actuators, and increasing amount of sensory information
coming from hardware distributed all over the system. For
these applications, a central control approach becomes
difficult to design in terms of complexity and reaction
time. Using local control loops can help to reduce the
design effort and support modularity.
### 1.1 Related work
Creating models and control systems in a distributed fashion is a principle being studied in many technical areas,
and similar principles can also be found in the non-technical domain. In political systems, for instance, this is known
as subsidiarity, “the principle that a central authority
should have a subsidiary function, performing only those
tasks which cannot be performed effectively at a more
immediate or local level” [28]. From the perspective of the
technical domain, there are some potential advantages:
(1) It can be simpler and a more structured approach
to define smaller models and interconnecting them
than dealing with complex, monolithic models. This
can be exploited in modeling of multi-body physical
systems. For instance, Eberhard et al. discuss hierarchical modeling approaches [18]. Also, the structure
of multi-body systems can be exploited to parallelize the forward dynamics computation in order to
simulate the system behavior more efficiently [17].
In robotics, decomposition of complex problems into
smaller subproblems of lower dimensional space is
a traditional approach, for instance, introduced by
Brooks [11] and extended later for behavior-based
control [2] in the area of behavior-based systems.
These design principles aim to reduce the complexity
of the tasks and allow to build a complex controller
incrementally. Similarly, in the area of reinforcement
learning, there exist approaches such as hierarchical
reinforcement learning, which is based on decomposing the robot’s task, either manually or automatically,
into a hierarchy of subtasks [9], being motivated by
the fact that standard reinforcement learning would
scale poorly with large state spaces.
(2) Subsequently, it can help to reduce the controller
design effort. D’Andrea et al. [15] discuss the control
of interconnected systems, in particular systems,
which “consist of similar units which directly interact
with their nearest neighbors.” For such kind of systems, the authors argue against centralized control
schemes, as “It is also not feasible to control these
systems with centralized schemes—the typical outcome of most optimal control design techniques—
as these require high levels of connectivity, impose
a substantial computational burden, and are typically more sensitive to failures and modeling errors
than decentralized schemes.” Similarly, Massioni et al. argue for investigation of decentralized or distributed controller architectures for large-scale systems
and present a procedure for “simplifying the computational complexity of the problem as well as for finding a controller with a distributed architecture” [25].
(3) Latencies can be reduced within actual implementations by parallelization or distribution of the computations in hardware. In [29], Paine et al. discuss an
actuator-level control approach. The aspect of distributed computation is focused on decoupling the
actuator dynamics of serial elastic actuators form the
link dynamics. This is reasonable, as on the one hand
the system boundaries of the dynamics of one actuator can be chosen such that the actuator dynamics
are not directly dependent on the other actuators’
states. And on the other hand, it is highly beneficial
for the responsiveness of the serial elastic actuators,
because latencies are reduced and control frequencies can be increased in dedicated local controllers.
Similarly, Zhao et al. motivate the use of distributed
controllers, particularly in view of the computational and communication requirements of complex
human-centered robotics. In [48], the authors study
the stability and performance of distributed controllers, where stiffness and damping servos are implemented in distinct processors. As experimental evaluation, an operational space controller is implemented
in a mobile base. A combination of these two studies
can be found in [49].
Using implementations based on Field Programmable Gate Arrays (FPGAs) is particularly interesting
for motion control applications, because they allow
high sampling rates and a very flexible programming of parallel tasks. For example, in [47] Shao &
Sun describe a motion control system using a FPGA
for computation of servo control loops. Regarding
the particular algorithm used for the computation of
the dynamics in the work presented here, in an early
work [33] Rajagopalan discusses the computational
viewpoint in a parallel computation setup. The author
describes a partitioning approach of the Newton–
Euler algorithm, such that a parallel computation on
multiple transputers is enabled. The focus is solely put
on the parallel computation of the dynamic model in
order to reduce the computation times, in contrast to
viewing actuator controllers as nodes interconnected
to each other. The approach significantly reduces
computation time by using parallel computation on
transputer processors. While nowadays a usual office
computer will suffice for computing a 6-DOF rigid
body dynamic model sufficiently fast, the complexity of robotic systems in terms of number of DOFs
and model complexity have increased in the meantime as
well. It is therefore reasonable to continue researching
methods which scale with this trend.
(4) Furthermore, some systems such as robots within a
swarm require distributed control approaches, because
these individual units need to function autonomously
and still achieve common goals by interacting with
each other. In fact, the distinction between complex robotic
systems and swarm robotics narrows down to the
mechanical connection of the links. However, a growing number of “smart” actuators, grippers, sensors,
platforms, and other modules used to build up robotic
systems highly adapted to the target purpose in a
modular way blur this distinction. These systems can
also be mechanically reconfigurable at runtime, such
as the modular robotic platform shown in Fig. 1 [10, 37].
And on the other hand, there are swarm robots which
do in fact connect mechanically in order to achieve a
common manipulation or locomotion goal [7, 22]. For
instance, Baele et al. state:
When it is advantageous to do so, these swarm robots can dynamically aggregate into one or more symbiotic organisms and collectively interact with the physical world via a variety of sensors and actuators.

**Fig. 1** Example of a modular robotic system, consisting of mobile robot Coyote III, manipulator arm, and payload modules, which can attach to each other using an electro-mechanical interface (EMI). One of the two EMIs of Coyote III provides additional 2 DOFs [10, 37]

**Fig. 2** Left: body of the humanoid robot RH5; right: partial view of the structure of the more than 50 freely programmable units distributed across the system
A robotic system composed of numerous actuators
and sensors can be viewed similarly, as this usually
involves freely programmable electronic devices distributed all over the system. An example for such kind
of systems is shown in Fig. 2; a more detailed description is given by Peters et al. in [30].
As we can see, distributed computation and control are
intensively investigated and in the field of robot motion
dynamics development is rather twofold—on the one
hand toward modular hardware and, on the other hand,
toward parallelization on the algorithmic level.
_Contributions The approach presented here advances_
into the direction of distributed control of a robotic system’s motion dynamics and aims at bringing the two
concepts closer together. In particular, the local actuator
controllers of a robotic manipulator arm are extended
such that a dynamic model of the system is incorporated
directly on a local level using a distributed computation of
the motion dynamics. This creates a network of interconnected nodes. Each node uses the knowledge of the kinematics and dynamics of its controlled link to propagate
the kinematic and dynamic quantities to the mechanically
neighboring nodes. For the evaluation, a robotic arm composed of six revolute actuators interconnected by rigid
links is used. Each of the arm’s actuators is controlled by a
dedicated electronic unit based on a FPGA, which is used
to compute the control law using the dynamic model. The
distributed dynamics computation enables the system to
compliantly react to disturbances while tracking a trajectory, to exert external forces on the environment, and to
take payload forces into account.
_Paper Organization The next section will describe the_
derivation of the dynamic model and the algorithm used
in this work. On the basis of these results, Sect. 3 describes
the approach to distribute the computations, including
the implemented software components, and the structure
of the implemented actuator controllers. Subsequently,
Sect. 4 presents the experimental results of the application using a robotic manipulator arm, which are discussed
in Sect. 5. Finally, Sect. 6 discusses the conclusions.
## 2 Modeling
In order to control the motion of a robotic system in a
compliant way based on the actuation torques or forces,
it is necessary to know the underlying relationship. For
the purpose of motion control of rigid body systems, the
most notable formulation is an inverse dynamics model,
computing the actuation forces and torques $\tau(t) \in \mathbb{R}^n$ as a function of the motion described in terms of the generalized joint positions $q \in \mathbb{R}^n$, velocities $\dot{q}$, and accelerations $\ddot{q}$:

$$\tau(t) = \mathbf{f}(q(t), \dot{q}(t), \ddot{q}(t)) \qquad (1)$$
The use of such models allows to compute the feed-forward torques and forces for a reference trajectory, and
to decouple the often highly nonlinear dynamics of the
robotic system. Consequently, linear feedback controllers
with significantly reduced gains can be used, since these
only need to compensate for the remaining model-plant mismatch, rather than treating the complete dynamics as an error, as is the case without model-based control. With
this reduction in feedback, accurate trajectory tracking in
an undisturbed case is much less dependent on the controller gains and the feedback controllers can rather be
tuned such that a desired compliance wrt. external forces
is obtained.
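As a hedged illustration of this control structure (our own sketch, not code from the paper), the feed-forward torque from an inverse dynamics model can be combined with a low-gain PD feedback term; `inverse_dynamics` is a placeholder for the model of Eq. (1), and all names and numeric values here are invented for illustration:

```python
# Sketch: model-based feed-forward plus low-gain PD feedback.
# `inverse_dynamics` stands in for the model tau = f(q, qd, qdd) of Eq. (1).

def pd_with_feedforward(inverse_dynamics, q, qd, q_ref, qd_ref, qdd_ref, kp, kd):
    """Return joint torques: feed-forward model torque plus PD correction."""
    tau_ff = inverse_dynamics(q_ref, qd_ref, qdd_ref)  # evaluated on the reference
    return [tau_ff[i] + kp[i] * (q_ref[i] - q[i]) + kd[i] * (qd_ref[i] - qd[i])
            for i in range(len(q))]

# Example with a trivial, hypothetical 1-DOF "model" (inertia + gravity-like term):
model = lambda q, qd, qdd: [2.0 * qdd[0] + 9.81 * q[0]]
tau = pd_with_feedforward(model, [0.1], [0.0], [0.2], [0.0], [0.0], [5.0], [1.0])
```

With an accurate model, the PD gains `kp`, `kd` only shape the residual error dynamics and can therefore be kept small, which is what yields the compliance described in the text.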
The derivation of equations of motion for rigid body
system as well as the development of controllers based on
them has been extensively studied and is covered in many
classical text books [5, 14, 23, 36]. The most widely used
algorithms are the Lagrange method and the recursive
Newton–Euler method. The Lagrange method is based on a balance of potential and kinetic energy and, in its classical formulation, derives the equations of motion explicitly from this balance.
On the other hand, the recursive Newton–Euler algorithm is based on the balance of forces and torques. Early
work describing the process of modeling the relationship
(1) for a robotic system using the Newton–Euler method
can be found in [27]. Due to the iterative computation, the
algorithm scales much better with the number of degrees
of freedom, i.e., with a computational complexity of about
$O(n)$ compared to about $O(n^3)$ for the Lagrange formulation [44]. While an iterative reformulation of the Lagrange
formulation exists that reduces the complexity to about
$O(n)$, it is not as efficient in terms of the number of multiplications and additions as the recursive Newton–Euler algorithm [21]. In addition, the recursive Newton–Euler algorithm
has a structure suitable for a distributed computation and is
therefore chosen in this work.
The above-mentioned methods derive the equations
of motion from physical insight. However, the numerical
values of the parameters contained in these equations are
yet unknown. Thus, the model equations are derived in a
way such that the unknown dynamic parameters are constant and linearly dependent on the rest of the model. The
following sections will describe the algorithm used for the
distributed dynamics computation in more detail which is
based on the definitions given in [36], with a reformulation
as in [1, 6] in order to obtain a linear dependence on the
dynamic parameters.
### 2.1 Recursive Newton–Euler algorithm revisited
Instead of taking the whole system with all its links into
account at once, a single link body and its actuated joints
are considered as sub-system. For such a sub-system, a balance of forces and torques acting on the i − th link’s body is
derived individually. This balance is described for the forces
by Newton’s second law (2), giving the relation to the linear
acceleration p̈ Ci of the center of mass of the body:
$$f_i - f_{i+1} + m_i g_0 = m_i \ddot{p}_{C_i}, \qquad (2)$$

where $f_i \in \mathbb{R}^3$ is the force acting on body $i$, $m_i$ is the mass of body $i$, and $g_0$ is the vector of gravity acceleration. Additionally, the Euler equation (3) gives the relation between the torques acting on the body and the respective rotational motion:

$$\mu_i - \mu_{i+1} + f_i \times r_{i-1,C_i} - f_{i+1} \times r_{i,C_i} = \mathbf{I}_i \dot{\omega}_i + \omega_i \times \mathbf{I}_i \omega_i, \qquad (3)$$

where $\mu_i \in \mathbb{R}^3$ is the torque acting on body $i$, $\omega_i$ is the angular velocity, $r_{i,C_i}$ denotes the vector from the frame-$i$ origin to the body's center of mass, and $\mathbf{I}_i \in \mathbb{R}^{3 \times 3}$ is the inertia tensor of body $i$,

$$\mathbf{I}_i = \begin{pmatrix} I_{i,xx} & I_{i,xy} & I_{i,xz} \\ I_{i,xy} & I_{i,yy} & I_{i,yz} \\ I_{i,xz} & I_{i,yz} & I_{i,zz} \end{pmatrix}. \qquad (4)$$
The dependence on joint states and thus time has been
omitted here to improve readability. To take advantage
of constant parameters, we further relate Eqs. (2) and (3)
to a reference coordinate frame attached to body i. The
balance of forces and torques for body i is then given by
$$f_i^i = \mathbf{R}_{i+1}^i f_{i+1}^{i+1} + m_i \ddot{p}_{C_i}^i \qquad (5)$$

$$\mu_i^i = -f_i^i \times \left( r_{i-1,i}^i + r_{i,C_i}^i \right) + \mathbf{R}_{i+1}^i \mu_{i+1}^{i+1} + \mathbf{R}_{i+1}^i f_{i+1}^{i+1} \times r_{i,C_i}^i + \mathbf{I}_i^i \dot{\omega}_i^i + \omega_i^i \times \mathbf{I}_i^i \omega_i^i, \qquad (6)$$

where $\mathbf{R}_{i+1}^i$ denotes the rotation matrix used to change a vector's reference coordinate system from $i+1$ to $i$. Thus, the superscript denotes the reference frame, e.g., $f_b^c$ denotes the force vector of body $b$ expressed in coordinate system $c$. The gravity acceleration has been joined into $\ddot{p}_{C_i}$. An additional vector $r_{i-1,i}^i$ is introduced, which points from the origin of coordinate system $i-1$ to the origin of coordinate system $i$. The algorithm solves these two expressions in two steps. The first step, known as the forward recursion, starts from the base link and calculates the kinematic quantities iteratively for each body, namely the angular velocity as well as the linear and angular acceleration. These quantities depend on the geometry of the kinematic chain and, depending on the application, on each joint state or reference state in terms of joint position $q_i$, velocity $\dot{q}_i$, and acceleration $\ddot{q}_i$. The second step is known as the backward recursion. In this step, starting from the distal link(s), the forces and torques for each body are calculated iteratively according to (5) and (6).

For a revolute joint, the resulting joint or actuation torque can then be computed by projecting the torque acting on the link onto the joint's axis of motion. Using the Denavit–Hartenberg convention, this is the z-axis of the previous body's coordinate system, $z_{i-1}^{i-1}$. Thus, the joint torque is given by

$$\tau_i = \left( \mu_i^i \right)^T \mathbf{R}_{i-1}^i z_{i-1}^{i-1}. \qquad (7)$$

A detailed description of the algorithm can be found in [36, p. 283ff].

### 2.2 Linearity of the dynamic parameters

If the dynamic parameters contained in Eq. (7) are not known beforehand, they can be estimated by means of experiments, also known as parameter identification [8, 42]. To simplify the parameter identification numerically, a notable property of the above equations can be exploited: when the terms of the equations of motion are referred to a coordinate system attached to the respective body, the dynamic parameters become independent of the joint configuration and are constant. In fact, the equations can be reordered such that a linear dependency is obtained. The vector of joint torques and forces is then defined by

$$\tau = \mathbf{Y}(q, \dot{q}, \ddot{q})\, \pi. \qquad (8)$$

Note that $\mathbf{Y}$ can still contain highly nonlinear terms wrt. the system state. This partitioning, and thus the parameter vector $\pi$, depend on the choice of coordinate systems. Referring the first and second moments of inertia to a coordinate system attached to body $i$, with its origin and z-axis aligned with the distal joint of the body, the following partitioning of (5) and (6) can be derived when written in matrix-vector form:

$$\begin{pmatrix} f_i^i \\ \mu_i^i \end{pmatrix} = \mathbf{W}_i \pi_i + \mathbf{T}_{W,i+1}^i \begin{pmatrix} f_{i+1}^{i+1} \\ \mu_{i+1}^{i+1} \end{pmatrix}. \qquad (9)$$

The matrix $\mathbf{T}_{W,i+1}^i$ describes the wrench transmission from the neighboring body $i+1$ of the chain onto body $i$ and is defined as

$$\mathbf{T}_{W,i+1}^i = \begin{pmatrix} \mathbf{R}_{i+1}^i & \mathbf{0} \\ \mathbf{S}(r_{i-1,i}^i)\, \mathbf{R}_{i+1}^i & \mathbf{R}_{i+1}^i \end{pmatrix}. \qquad (10)$$

The vector $\pi_i \in \mathbb{R}^{10}$ of dynamic parameters for body $i$, where the inertia tensor $\bar{\mathbf{I}}_i$ has been derived from $\mathbf{I}_i$ by referring it to the origin of the attached coordinate frame according to the parallel axis theorem, is determined as

$$\pi_i = \left( m_i \;\; m_i r_{i,C_i,x}^i \;\; m_i r_{i,C_i,y}^i \;\; m_i r_{i,C_i,z}^i \;\; \bar{I}_{i,xx}^i \;\; \bar{I}_{i,xy}^i \;\; \bar{I}_{i,xz}^i \;\; \bar{I}_{i,yy}^i \;\; \bar{I}_{i,yz}^i \;\; \bar{I}_{i,zz}^i \right)^T. \qquad (11)$$

The choice of the parameter vector (11) was also influenced by compatibility with a previously used implementation based on the Lagrange formalism, which allows for comparison and for reusing previously estimated parameters (further details are given in the Appendix). The corresponding matrix $\mathbf{W}_i \in \mathbb{R}^{6 \times 10}$ is given by

$$\mathbf{W}_i = \begin{pmatrix} \ddot{p}_i^i & \mathbf{S}(\dot{\omega}_i^i) + \mathbf{S}(\omega_i^i)\mathbf{S}(\omega_i^i) & \mathbf{0} \\ \mathbf{S}(r_{i-1,i}^i)\, \ddot{p}_i^i & \mathbf{S}(r_{i-1,i}^i)\left[ \mathbf{S}(\dot{\omega}_i^i) + \mathbf{S}(\omega_i^i)\mathbf{S}(\omega_i^i) \right] - \mathbf{S}(\ddot{p}_i^i) & \mathbf{L}(\dot{\omega}_i^i) + \mathbf{S}(\omega_i^i)\mathbf{L}(\omega_i^i) \end{pmatrix} \qquad (12)$$

using the following definitions:

$$\mathbf{S}(\omega) := \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix}, \qquad (13)$$

and

$$\mathbf{L}(\omega) := \begin{pmatrix} \omega_x & \omega_y & \omega_z & 0 & 0 & 0 \\ 0 & \omega_x & 0 & \omega_y & \omega_z & 0 \\ 0 & 0 & \omega_x & 0 & \omega_y & \omega_z \end{pmatrix}, \qquad (14)$$

such that the relation $\mathbf{I}\omega = \mathbf{L}(\omega) \left( I_{xx}, I_{xy}, I_{xz}, I_{yy}, I_{yz}, I_{zz} \right)^T$ holds.

By concatenation, the vector of all dynamic parameters $\pi = \left( \pi_1^T, \ldots, \pi_n^T \right)^T$ is obtained. Looking at the complete chain of interconnected bodies, an expression for all the wrench vectors stacked,

$$\begin{pmatrix} f_1^1 \\ \mu_1^1 \\ \vdots \\ f_n^n \\ \mu_n^n \end{pmatrix} = \mathbf{W}(q, \dot{q}, \ddot{q})\, \pi, \qquad (15)$$

is obtained with the matrix $\mathbf{W}$ of the form

$$\mathbf{W} = \begin{pmatrix} \mathbf{W}_1 & \mathbf{T}_{W,2}^1(\ldots) & & \\ 0 & \ddots & \ddots & \vdots \\ \vdots & & \mathbf{W}_{n-1} & \mathbf{T}_{W,n}^{n-1}\mathbf{W}_n \\ 0 & \cdots & 0 & \mathbf{W}_n \end{pmatrix}. \qquad (16)$$

External forces acting on a body can be taken into account here by addition in (9). In this case, the structure of the matrix $\mathbf{W}(q, \dot{q}, \ddot{q})$ can be kept by adding the external wrench into additional columns and extending the parameter vector $\pi$ by 1-elements.

Tree structures: In case body $i$ has more than one consecutive body, e.g., in a tree-structured mechanism, the algorithm can be applied similarly. Firstly, the forward recursion is computed for every following kinematic chain, i.e., for every branch. In the backward recursion, the wrenches of all branches are added in (9), such that for every consecutive body (child) $C_1 \ldots C_j$ a term is appended:

$$\begin{pmatrix} f_i^i \\ \mu_i^i \end{pmatrix} = \mathbf{W}_i \pi_i + \mathbf{T}_{W,C_1}^i \begin{pmatrix} f_{C_1}^{C_1} \\ \mu_{C_1}^{C_1} \end{pmatrix} + \ldots + \mathbf{T}_{W,C_j}^i \begin{pmatrix} f_{C_j}^{C_j} \\ \mu_{C_j}^{C_j} \end{pmatrix}. \qquad (17)$$

The parameter vector has to be adapted accordingly, in the same order as the added columns of $\mathbf{W}$. Analogously to (7), the projection onto the motion axis of a revolute joint can be rewritten as

$$\tau_i = \left( \mathbf{W}_{\mu,i}^T z_{i-1}^i \right)^T \pi, \qquad (18)$$

where $\mathbf{W}_{\mu,i}$ refers to the lower three rows of the matrix $\mathbf{W}_i$. To summarize, the dynamic parameters are separated from the rest of the model, and a linear relation has been obtained as defined in (8).

### 2.3 External forces and compliance

The mapping from an external force or torque $F_{\mathrm{ext}}$ acting on the system structure to the actuation forces and torques is given by

$$\tau_{\mathrm{ext}} = \mathbf{J}(q)^T F_{\mathrm{ext}}, \qquad (19)$$

where $\mathbf{J}(q)$ denotes the Jacobian wrt. the contact point and has to be evaluated for the joint configuration $q$. Different measures exist to characterize the velocity and force manipulability in the workspace of the system by analyzing the properties of the Jacobian matrix. Reaching a singular configuration results in a reduction of the rank of $\mathbf{J}(q)^T$, and in practice, configurations in which this matrix becomes ill-conditioned pose a problem as well [35, Sec. 1.10, 11.2.2, 29.5].

Mechanically this can mean, for example, that some elements of an external force vector are solely supported by the structure and cannot be influenced by the actuation forces. In such a case, the ability of the system to react compliantly to these external forces in the respective direction is lost. Equivalently, when reaching the mechanical limit of a joint, a further compliant reaction in the respective workspace direction is only possible using the remaining degrees of freedom. This is a general limitation of compliant control schemes based on compliance in the actuation system, since the workspace has to be mapped to the available joint space. The problem can be mitigated by providing a sufficient number of DOFs or additional flexibilities in the structure, by planning trajectories which avoid coming close to singular configurations, or even by covering the whole structure with soft materials.
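Before turning to friction, the two-pass structure of the recursive Newton–Euler algorithm from Sect. 2.1 can be made concrete with a small sketch. This is our own simplified illustration, not the paper's distributed FPGA implementation: it reduces the recursion to a planar chain of revolute joints, all names are invented, and gravity is included via the standard trick of giving the base a fictitious upward acceleration $g$:

```python
import math

def planar_rne(q, qd, qdd, L, c, m, Ic, g=9.81):
    """Planar recursive Newton-Euler sketch.
    Link i: length L[i], COM at distance c[i] along the link, mass m[i],
    rotational inertia Ic[i] about its COM. Returns the joint torques."""
    n = len(q)
    # --- forward recursion: propagate angles, velocities, accelerations ---
    theta = ang_vel = ang_acc = 0.0
    a_joint = (0.0, g)  # base acceleration including the gravity trick
    a_com, alphas, r_com, r_end = [], [], [], []
    for i in range(n):
        theta += q[i]; ang_vel += qd[i]; ang_acc += qdd[i]
        u = (math.cos(theta), math.sin(theta))            # link direction
        rc = (c[i] * u[0], c[i] * u[1])                   # joint -> COM
        re = (L[i] * u[0], L[i] * u[1])                   # joint -> next joint
        # planar form of a = a_joint + alpha x r + omega x (omega x r)
        ac = (a_joint[0] - ang_acc * rc[1] - ang_vel**2 * rc[0],
              a_joint[1] + ang_acc * rc[0] - ang_vel**2 * rc[1])
        a_joint = (a_joint[0] - ang_acc * re[1] - ang_vel**2 * re[0],
                   a_joint[1] + ang_acc * re[0] - ang_vel**2 * re[1])
        a_com.append(ac); alphas.append(ang_acc); r_com.append(rc); r_end.append(re)
    # --- backward recursion: propagate forces/torques from the distal link ---
    f = (0.0, 0.0)   # wrench transmitted by the (nonexistent) child of link n
    n_z = 0.0
    tau = [0.0] * n
    for i in reversed(range(n)):
        Fi = (m[i] * a_com[i][0] + f[0], m[i] * a_com[i][1] + f[1])
        n_z = (n_z + Ic[i] * alphas[i]
               + r_com[i][0] * m[i] * a_com[i][1] - r_com[i][1] * m[i] * a_com[i][0]
               + r_end[i][0] * f[1] - r_end[i][1] * f[0])
        tau[i] = n_z
        f = Fi
    return tau
```

Each loop body only needs quantities from the mechanically neighboring link, which is exactly the locality the distributed computation in Sect. 3 exploits: the forward pass flows base-to-tip, the backward pass tip-to-base.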
### 2.4 Friction
For robotic actuators, often using gear reduction mechanisms, friction can become a significant fraction of the
joint torque or force. If the torques or forces are measured directly at the output shaft, the joint and gear friction can be compensated for easily, i.e., by an additional
feedback loop controlling the torque at the output shaft.
However, often this measurement is not available such
as in the case presented in Sect. 4. In this case, a friction
model is required to estimate the additional motor torque
or even motor current required for a compensation. Several friction models are available. A first—in many cases
sufficient—approximation is to introduce Coulomb and
viscous friction terms such as 휏c,i(t) = FC,i sign(q̇i(t)) and 휏v,i(t) = Fv,i q̇i(t), respectively. To avoid numerical problems, it is reasonable to replace the signum function by a function fulfilling the Lipschitz continuity condition, such as atan(k ⋅ q̇). The design parameter k allows controlling the
steepness around zero velocity. Most often friction parameters can be estimated with the other dynamic parameters
simultaneously. Additionally, a measurement offset bi can
be necessary in practice, which can as well be estimated as
part of the identification procedure. Keeping the parameters separate, the model for the friction torque 휏F,i(t) of a
rotational joint i can be summarized as
$$
\tau_{F,i}(t) = \begin{pmatrix} \operatorname{atan}\!\big(k\,\dot{q}_i(t)\big) & \dot{q}_i(t) & 1 \end{pmatrix}
\begin{pmatrix} F_{C,i} \\ F_{v,i} \\ b_i \end{pmatrix}. \tag{20}
$$
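As a minimal sketch, the per-joint friction model of (20) can be evaluated as follows; the parameter names mirror the text, while the function name and default value are our own.

```python
import math

def friction_torque(qd, F_C, F_v, b, k=100.0):
    """Friction torque of Eq. (20) for one joint.

    The Lipschitz-continuous atan term replaces sign(qd); its +-pi/2
    range (rather than +-1) is absorbed by the identified F_C.
    qd: joint velocity, F_C/F_v: Coulomb/viscous coefficients,
    b: measurement offset, k: steepness around zero velocity.
    """
    return F_C * math.atan(k * qd) + F_v * qd + b
```

For qd = 0 only the offset b remains, since both the smoothed Coulomb term and the viscous term vanish.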
Using more complicated friction models such as Stribeck
friction models (see, e.g., [24]) usually requires a preliminary, separate identification per actuator, or training in
case of using machine learning approaches, in order to
properly distinguish friction effects from other effects. In
this case, a separate identification helps to avoid overfitting, e.g., undesired mapping of friction parameters onto
other effects, and to avoid introduction of further states
such as temperature into the identification procedure carried out for the mass and inertia parameters. Further discussion of friction effects from the perspective of control
can be found in [4].
### 2.5 Identification of the dynamic parameters
While the forward recursion is only dependent on the kinematic information of the system, i.e., the link geometries
(Figure 3: flowchart with blocks including robot kinematics and parameter estimation.)

**Fig. 3 Tasks and data flow used to model and identify robot dynamics experimentally**
and joint DOFs, the backward recursion depends on the
dynamic parameters for each body of the system. Usually
geometric information is accurately available from CAD
data or can be obtained by direct measurement. Dynamic
properties on the other hand are often only very approximately available from CAD data. For instance, the mass
distribution of more complex parts bought from external
suppliers could be inaccurately known, and others, such
as wires, are hard to approximate. An alternative is provided by experimental identification: A high-gain joint
controller is used to track a reference trajectory, while the
motion and the generated actuation torques and forces,
which were necessary, are measured. Using the previously
derived model, the dynamic parameters can be estimated
from the measurement data. Note that not all dynamic effects have an influence on the joint torques: some are transmitted directly by the structure, and others are mutually indistinguishable. Therefore, a reduction of the dynamic parameter vector can be performed. This means identifying the linear combinations and unidentifiable parameters, either by rules or numerically by inspecting a regression matrix based on 퐘(q, q̇, q̈). Figure 3 gives
an overview of an exemplary procedure from the initial
theoretical modeling of the robot motion dynamics to a
model having experimentally identified parameters.
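The numerical reduction mentioned above can be sketched with a greedy rank inspection of the stacked regressor: a column (parameter) is kept only if it increases the numerical rank. This toy sketch (numpy; names are ours) is illustrative and not the exact procedure used in the paper.

```python
import numpy as np

def base_parameter_columns(Phi, tol=1e-10):
    """Greedy rank inspection of a stacked regressor matrix Phi:
    keep a column only if it increases the numerical rank, dropping
    unidentifiable parameters (zero columns) and columns that are
    linear combinations of previously kept ones."""
    keep = []
    for j in range(Phi.shape[1]):
        if np.linalg.matrix_rank(Phi[:, keep + [j]], tol) == len(keep) + 1:
            keep.append(j)
    return keep

# Toy stacked regressor with 4 parameters: column 2 is identically
# zero (unidentifiable) and column 3 duplicates column 0 (a linear
# combination).
x = np.linspace(-1.0, 1.0, 200)
Phi = np.column_stack([x, x**2, np.zeros_like(x), x])
```

For the toy regressor, only the first two columns survive the reduction.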
In more detail, this type of classical identification procedure usually includes the following main steps.
The first step is to derive the system’s equations of
motion from physical insight as discussed in Sect. 2.1. With
this prior information, the second step is to design experiments, which will be used to generate the measurement
data. To reduce the experimental effort and to obtain usefully rich measurement data, a reference trajectory in joint space can be optimized to maximize the expected informational content of the resulting measurement. Assuming a trajectory sampled at K points in time
with sampling period Ts, evaluating (8) for each sampled
set of joint motion, we obtain a concatenated matrix for
the complete trajectory:
$$
\Phi = \begin{pmatrix}
\mathbf{Y}\!\left(q(T_s), \dot{q}(T_s), \ddot{q}(T_s)\right) \\
\vdots \\
\mathbf{Y}\!\left(q(kT_s), \dot{q}(kT_s), \ddot{q}(kT_s)\right) \\
\vdots \\
\mathbf{Y}\!\left(q(KT_s), \dot{q}(KT_s), \ddot{q}(KT_s)\right)
\end{pmatrix}. \tag{21}
$$
The equation to be solved by estimating the parameter vector 휋̂ is then given by

$$
\tau_{\mathrm{msr}} = \Phi \hat{\pi}, \tag{22}
$$
where 휏msr ∈ ℝ^(Kn) is the concatenated vector of measured joint torques or forces. A practical method to find a trajectory such that 휋̂ can be optimally estimated within the robot constraints is described by Swevers et al. in [41, 42]. It uses Fourier series as fundamental functions to generate the joint trajectories, which determine q(kTs). Specifically, the trajectories are based on the Fourier series function
$$
q_i(t) = q_{i,0} + \sum_{h=1}^{n_{\mathrm{harm}}} \left( a_{i,h} \sin(h \omega_f t) + b_{i,h} \cos(h \omega_f t) \right), \tag{23}
$$
which determines the joint angles qi(t) for each joint i as
function of the Fourier coefficients qi,0, ai,h, and bi,h . This
results in smooth excitation trajectories, whose frequency
spectrum is band limited by the choice of 휔f and the number of harmonics nharm and thus allows avoiding excitation
of unmodeled effects having higher resonant frequencies.
Joint velocities q̇ i(t) and accelerations q̈ i(t) can be computed by analytical differentiation of (23), which is especially useful to avoid a numerical differentiation for the
second derivative q̈ . The Fourier coefficients are optimized
using the d-optimality criterion as measure for the information content [39]. A number of alternate methods can be
found in the literature, which mainly differ in the way the
trajectory is parameterized and the criteria used for the
optimization (see, e.g., [3, 19, 31, 32, 40]). With the computational power available today, the algorithm efficiency is
not as critical as some decades ago, therefore greatly simplifying the implementation of an optimization routine.
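A sketch of the band-limited excitation trajectory (23), including the analytic first and second derivatives mentioned above (numpy; the function name and coefficient layout are our own choices):

```python
import numpy as np

def fourier_trajectory(t, q0, a, b, w_f):
    """Excitation trajectory of Eq. (23) for a single joint, with
    analytic first and second derivatives. a and b hold the Fourier
    coefficients a_{i,h}, b_{i,h} for harmonics h = 1..n_harm."""
    h = np.arange(1, len(a) + 1)              # harmonic numbers
    wt = np.outer(np.atleast_1d(t), h * w_f)  # phase per (sample, harmonic)
    s, c = np.sin(wt), np.cos(wt)
    q = q0 + s @ a + c @ b
    qd = (c * (h * w_f)) @ a - (s * (h * w_f)) @ b
    qdd = -(s * (h * w_f) ** 2) @ a - (c * (h * w_f) ** 2) @ b
    return q, qd, qdd
```

With 휔f = 2π/Tp, the trajectory is periodic with period Tp, which supports the fade-in/fade-out and multi-period averaging used later.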
The optimization is subject to a number of constraints. In
particular, joint limits such as joint motion ranges, velocity
limits, and acceleration limits have to be fulfilled. In addition, limits in the Cartesian workspace are necessary to avoid
collisions with the environment. In this work, a set of box constraints has been imposed on the coordinate origins of the last two links of a 6-DOF manipulator arm, such that the z-coordinate constraints prevent collisions with the table on which the arm is mounted. In an iterative procedure, it is
also possible to include approximate limits on the required
joint torques. Based on (23) and using the d-optimality criterion as measure of information content, the optimization
problem is as follows:
$$
\begin{aligned}
\underset{q_{i,0},\,a_{i,h},\,b_{i,h}\ \forall i,h}{\text{minimize}} \quad & -\log\!\big(\det(\bar{\Phi}^{T}\bar{\Phi})\big) \\
\text{subject to } \forall k: \quad & q_{\min} \le q(kT_s) \le q_{\max}, \\
& z_{\min} \le z_e(kT_s) \le z_{\max}, \\
& \dot{q}_{\min} \le \dot{q}(kT_s) \le \dot{q}_{\max}, \\
& \ddot{q}_{\min} \le \ddot{q}(kT_s) \le \ddot{q}_{\max}.
\end{aligned} \tag{24}
$$
Here 훷 has been reduced to a matrix 훷̄ having full column rank by removing columns not contributing to the rank and by merging columns of linear combinations.
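The d-optimality objective of (24) reduces to a single, numerically robust expression; a sketch (numpy; `slogdet` is used instead of forming the determinant directly, and the function name is ours):

```python
import numpy as np

def d_optimality_cost(Phi_bar):
    """Objective of Eq. (24): -log(det(Phi_bar^T Phi_bar)).

    Phi_bar is the reduced, full-column-rank trajectory regressor;
    slogdet avoids numerical overflow for large information matrices.
    """
    sign, logdet = np.linalg.slogdet(Phi_bar.T @ Phi_bar)
    if sign <= 0:
        raise ValueError("Phi_bar does not have full column rank")
    return -logdet
```

In an optimization routine, this cost would be minimized over the Fourier coefficients of (23), with the joint-space and Cartesian box constraints handled by the chosen nonlinear programming solver.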
The third step is then to carry out the experiments and to
obtain the measurement data, namely the measured actuation torques or forces 휏msr as well as the measured joint positions. Possibly, and often in the case of prototype robotic systems, the experimental results indicate that the constraints used for the experiment optimization should be decreased or increased. In such a case, a number of iterations is necessary to find suitable values. This equally applies to the fundamental frequency 휔f and the number of harmonics nharm.
Finally, using the obtained measurement data, the last
step is to estimate the dynamic parameters. On the basis of
(22), a linear estimation problem has to be solved. Without
further constraints, a simple least squares estimator can be
used. However, it is reasonable to also impose constraints
on the estimation problem, in order to ensure physical consistency of the model parameters such as positive mass
parameters and positive definite inertia tensors. Thus, the
parameter vector is estimated by
$$
\hat{\pi} = \underset{\hat{\pi}}{\arg\min}\; \left(\tau_{\mathrm{msr}} - \bar{\Phi}\hat{\pi}\right)^{T} \left(\tau_{\mathrm{msr}} - \bar{\Phi}\hat{\pi}\right), \tag{25}
$$
subject to the physical consistency constraints (see, e.g.,
[43, 46]).
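The unconstrained core of (25) is a linear least-squares problem; the following sketch recovers synthetic parameters from noise-free data (numpy; the physical-consistency constraints of [43, 46] would require a constrained solver and are omitted here):

```python
import numpy as np

def estimate_parameters(Phi_bar, tau_msr):
    """Unconstrained least-squares solution of Eq. (25). In practice,
    physical consistency (positive masses, positive definite inertia
    tensors) is enforced with a constrained solver instead."""
    pi_hat, *_ = np.linalg.lstsq(Phi_bar, tau_msr, rcond=None)
    return pi_hat

# Synthetic check: recover known parameters from noise-free data.
rng = np.random.default_rng(1)
Phi_bar = rng.standard_normal((120, 4))
pi_true = np.array([1.0, 0.2, -0.5, 3.0])
tau_msr = Phi_bar @ pi_true
```

With measurement noise, the estimate would be the least-squares fit rather than an exact recovery.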
**Fig. 4 SpaceClimber robotic actuator and FPGA electronics [20]**
## 3 Distributed dynamics computation
Let us consider a chain of rigid bodies interconnected by rotational joints, as is the case for the robotic arm shown in Fig. 8. Each of the rotational joints is actively actuated, and each actuator is controlled by a dedicated stack of electronics as shown in Fig. 4.

(Figure 5: the forward recursion per node computes 1. the angular velocity, 2. the angular acceleration, 3. the linear acceleration, and 4. the wrench matrix 퐖i; the backward recursion per node computes 1. the wrench vector for this body, 2. the resulting actuation torque 휏i, and 3. the wrench vector for the previous body; inputs are the link geometry ϑ, d, a, α, the joint state q̇, q̈, and the dynamic parameters 휋i.)

**Fig. 5 Operations to be carried out by each actuator controller in order to compute the inverse dynamics. Left: connections to proximal neighbor; right: connections to distal neighbor. Top and bottom connections: local state (or reference) variables and configuration variables (dashed)**

The structure of the algorithm presented in the previous section is mapped onto these computational units. In particular, each actuator controller has to carry out the same computations using the inputs provided by the neighboring actuator controllers and the state of the controlled joint itself. The operations to be carried out are summarized in Fig. 5.

For the first actuator controller, the kinematic quantities such as the angular velocity and the linear and angular accelerations are configured as constants or supplied by the central controller, e.g., in case of a moving base. The first actuator controller then calculates the resulting motion taking into account the actuator motion and the geometry of the connected body. This information is provided to the next adjacent actuator controller. This procedure
continues until the actuator controller of the most distal
link is reached, which does not have any further neighbors. It initiates the backward recursion, by calculating
the wrench vector and transmitting this information to
the previous proximal controller. When the first actuator
controller has calculated its wrench vector, the computation is finished and every controller knows the required
torque for the actuator as computed by the model.
This whole procedure can be carried out periodically
and at a high frequency, e.g., in the kHz range. Only a small amount of information needs to be exchanged, and only between directly neighboring actuator controllers.
In this way, a centralized controller is relieved from the
time-critical tasks to read all the joint states, to compute
the complete inverse dynamics model, and to write the
updated torque or force information back to every actuator controller. The following update policies are generally
feasible: 1) sequential updates of the actuator controllers; 2) parallel updates, i.e., each actuator controller computes the update and informs the neighboring controllers simultaneously; and 3) an internal loop running faster than the complete chain update, i.e., the local controllers transiently update using the local state obtained from local sensor information more often than they exchange state updates with the neighboring actuator controllers.
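The communication pattern (forward messages from base to tip, wrench messages from tip back to base) can be illustrated with a deliberately simplified toy model, in which each node's "dynamics" is a scalar placeholder rather than the full rigid-body recursion of Fig. 5; all class and function names are ours.

```python
class NodeController:
    """Toy actuator-controller node exchanging messages only with
    its direct neighbors (scalar placeholder dynamics)."""

    def __init__(self, mass, length):
        self.mass, self.length = mass, length

    def forward(self, acc_in, qdd):
        # Forward recursion: accumulate acceleration toward the tip.
        self.acc = acc_in + self.length * qdd
        return self.acc

    def backward(self, wrench_in):
        # Backward recursion: local share plus the distal wrench.
        self.tau = self.mass * self.acc + wrench_in
        return self.tau

def distributed_torques(nodes, qdd, base_acc=0.0):
    msg = base_acc
    for node, a in zip(nodes, qdd):      # base -> tip
        msg = node.forward(msg, a)
    msg = 0.0
    for node in reversed(nodes):         # tip -> base
        msg = node.backward(msg)
    return [n.tau for n in nodes]
```

Each call to `forward`/`backward` corresponds to one neighbor-to-neighbor message; no node ever needs the full joint state vector, which is the point of the distributed scheme.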
### 3.1 Implementation
Each actuator is controlled by a stack of electronics (Fig. 4)
which is equipped with a FPGA for the computations and
signal processing. This is the third of multiple generations
of these FPGA-based actuator electronics, which have
been developed for use in various robotic systems. The
advantages of using an FPGA-based approach are the inherent parallel processing capabilities and the prospect of switching to radiation-tolerant devices [38].
_Central/PC Implementation_ Initially, the model equations have been implemented as a C library. It allows computing the inverse dynamics model for central control
methods and is used for the identification procedure as
well as for simulation purposes. Additional software libraries and scripts have been developed for the identification
to solve the tasks shown in Fig. 3. The central controller
uses the component-based software framework ROCK [34]
to communicate with the hardware, to generate the reference trajectory, and to log the measurement data.
_FPGA LinkDyn Component In order to compute the_
dynamics according to Fig. 5 on FPGAs, a component
named LinkDyn has been programmed using VHDL. The
component is composed of modules as shown in Fig. 6.
This module expects the following inputs: sine and
cosine of the Denavit–Hartenberg (DH) parameter 훼, the
DH parameters a, d, and 휗, and the state or reference
of the controlled actuator in terms of qi, q̇i, q̈i. As output, the computed torque is provided. All other quantities, such as the linear acceleration, angular acceleration, and velocity of the proximal neighboring body, are exchanged via a shared memory (block RAM) interface. Triggers and acknowledge signals are used to start the forward or backward computation.

(Figure 6: modules for mode and access control, forward and backward recursion, sine/cosine, matrix multiplication, cross product, multipliers, adder, and value storage in block RAM, with trigger/acknowledge interfaces for both recursions and memory read/write access.)

**Fig. 6 Internals of the implemented VHDL component _LinkDyn_ to compute the dynamics of a single node on a FPGA**
The main process of the component consists of a
number of sub-routines. In addition to the two sub-routines for the forward and backward dynamics computation itself, these are matrix-vector multiplication, cross
product calculation, and handling of reading and writing
requests from the interface. Matrix-vector multiplication
and cross product calculation are shared routines for all
the steps required for the dynamics computation. These
are only triggered internally and take precedence over
all other sub-routines. Also, a running dynamics computation will not be interrupted, but will postpone additional
requests. The sine and cosine terms are calculated using
a look-up table approach, as the target FPGA (Xilinx
Spartan 6) provides sufficient block ram resources. By
mirroring and inverting, only one quarter of a sine period has to be stored. Alternatively, algorithms such as the CORDIC
(Coordinate Rotation Digital Computer) or a polynomial
approximation can be used. For all variables, a 32-bit fixed-point representation is used. It has been validated
by simulation that for a setup as the one used here the
differences to using floating point representations are
insignificant.
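The quarter-wave look-up idea can be sketched in a few lines (floating point here for brevity; the FPGA implementation uses 32-bit fixed point, and the table size and names are our own choices):

```python
import math

# Quarter-wave sine table: by mirroring and sign inversion, only the
# first quarter period [0, pi/2] needs to be stored (N+1 entries).
N = 256
QUARTER = [math.sin(math.pi / 2 * i / N) for i in range(N + 1)]

def lut_sin(phase):
    """Approximate sin(phase) from the quarter-wave table."""
    phase = phase % (2 * math.pi)
    sign = 1.0
    if phase >= math.pi:          # second half-wave: negate
        sign, phase = -1.0, phase - math.pi
    if phase > math.pi / 2:       # second quarter: mirror around pi/2
        phase = math.pi - phase
    return sign * QUARTER[round(phase / (math.pi / 2) * N)]
```

With nearest-entry lookup, the approximation error is bounded by the table step; interpolation between entries would reduce it further.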
_Actuator Controllers Each actuator controller is struc-_
tured as shown in Fig. 7. At the innermost position, a current controller controls the actuator’s motor current by
using pulse width modulation (PWM). The reference current is mainly computed from the local dynamics model
and a friction compensation. Additionally, a cascade of
high-gain feedback controllers are used. If no limitations
of position range and maximum velocity are violated, the
output of these controllers is highly limited (e.g., to 3–5%
of the nominal current). On the one hand, these controllers mainly have a guarding function and can switch to a
(Figure 7: cascaded high-gain position/velocity feedback with limiter, local dynamics computation, friction compensation, state MUX, 퐊퐩/퐊퐝 feedback, current control, and PWM driving the actual robot.)
**Fig. 7 Structure of the joint-level control loop including the dynamics model implemented on a FPGA**
higher output, in order to keep the actuator state within
the configured limits. On the other hand, the tracking
performance in the presence of small model-plant mismatches can be improved due to the high gain and integral action of the controller cascade.
The reference of the model computation is selectable
(see block State MUX), either using the actual measured
state or using a reference state. As in a classical computed
torque control scheme, the input to the model has an
additional parallel implementation of a position and velocity feedback controller (blocks 퐊퐩, 퐊퐝 ). Therefore, the reaction to disturbances can be configured assuming a second-order system, in analogy to a mass–spring–damper
system. These controllers, which provide the input to the
dynamics model computation, i.e., the linearized and
decoupled system, are limited individually. This way, the
reference torque is restricted to sensible ranges, under
the condition that the reference trajectory is feasible and
the dynamic parameters are physically consistent. On the
lowest level in the control loop, the actuation torques are
limited by the motor current controller to the permissible range. In effect, the implemented approach provides
multiple layers within the control architecture which deal
with the limitation of actuation torques/forces. As a last
resort, the system will also deactivate based on comparing the system state to maximum thresholds. This provides
the basic functionality of a compliant manipulator to a
higher-level robot control architecture, which can then
react—possibly in a larger time frame—to deviations and,
for instance, re-plan the reference trajectory in order to
avoid an obstacle.
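The parallel, individually limited feedback path into the model (blocks 퐊퐩, 퐊퐝 and the limiter in Fig. 7) can be sketched per joint as follows; the scalar gains, the limit value, and the function name are our own illustrative choices.

```python
def limited_reference_acceleration(q_d, qd_d, qdd_d, q, qd, kp, kd, lim):
    """Reference acceleration fed into the local dynamics model:
    feed-forward acceleration plus a saturated PD feedback share,
    so large tracking errors cannot command excessive torques."""
    fb = kp * (q_d - q) + kd * (qd_d - qd)
    fb = max(-lim, min(lim, fb))   # saturate the feedback share
    return qdd_d + fb
```

For example, `limited_reference_acceleration(1.0, 0.0, 0.0, 0.0, 0.0, 100.0, 10.0, 2.0)` saturates the feedback share at 2.0, so the torque computed by the model stays bounded even for a large position error.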
_Data Exchange Figure 8 shows exemplarily the com-_
munication for the COMPI manipulator arm. Initially, the
base actuator controller (at J1) receives the base accelerations from the central control PC, computes the forward
recursion (yellow blocks) based on the link’s geometry
and joint state, and sends the result to the next actuator
controller (at J2). Instead of further propagation of the kinematic quantities, the most distal actuator controller (J6)
will start the backward recursion (green blocks) by computing the wrench, using the dynamic parameters of the
link it controls. The base actuator controller will complete
the backward recursion by sending the computed base
wrench to the central control PC. This procedure has been
implemented for a frequency of 1kHz.
## 4 Experimental results
Using the 6-DOF COMPI manipulator arm (Fig. 8), experiments have been carried out to evaluate different aspects
of a compliant control scheme based on the distributed
computation of the dynamics according to the proposed
method. In particular, to evaluate the distributed dynamics computation within the control scheme incrementally, the evaluation starts in a static contact case and subsequently advances to position control at a static reference, compliant control when tracking a Cartesian space trajectory, and handling an additional force exerted by a payload. These experiments are described in more detail in the following paragraphs.

(Figure 8 labels: the forward recursion propagates link velocities and accelerations; the backward recursion propagates forces and moments.)

**Fig. 8 Left: COMPI manipulator arm (initially without cover). Right: communication procedure of the distributed dynamics computation, shown exemplarily for the system**

**Table 1 Joint limits used for the experiment optimization**

| Joint # | qmax [rad] | q̇max [rad/s] | q̈max [rad/s²] |
|---|---|---|---|
| J1 | 3.00 | 1.99 | 8.90 |
| J2 | 1.57 | 1.49 | 1.57 |
| J3 | 1.90 | 1.49 | 3.14 |
| J4 | 3.00 | 1.99 | 8.90 |
| J5 | 1.57 | 1.49 | 8.90 |
| J6 | 3.00 | 1.99 | 8.90 |
### 4.1 Parameter identification
Since for all of the following control experiments the basis
is a model of the motion dynamics of the test system, this
section gives an overview of the application of the methods described in Sect. 2.5 for an experimental identification of the system.
The model of the 6-DOF manipulator arm contains 10
dynamic parameters per link and thus a parameter vector
of 60 parameters. However, a further inspection reveals
unidentifiable parameters and linear combinations, resulting in a reduction of the full parameter vector from 60
parameters to a set of 36 combined parameters.
Using this prior model, the experiment optimization has been carried out with the number of harmonics nharm = 5, the fundamental frequency 휔f = 0.1 s⁻¹, and a period Tp = 10 s. The trajectory and the measurement data are sampled with a period of Ts = 1 ms. The joint limits used as constraints in the optimization problem are
given in Table 1. In addition, the coordinate origins of the 5th and 6th (last) links have been constrained to a minimum height of 0.25 m to avoid a collision with the table the arm is mounted on.
The experiments have then been carried out using high-gain position feedback controllers of the actuators in order to closely follow the desired trajectory. To avoid using any further interpolation techniques in these measurements, one period of fade-in and one period of fade-out between the initial position of the robot arm and the actual trajectory are prepended and appended, respectively, to allow for a smooth start and stop. This results in a joint trajectory of a similar type as the one shown in Fig. 9; apart from the 10 s of fade-in and fade-out each, only one period is shown there.
**Fig. 9 Exemplary identification trajectory for a single joint result-**
ing from the experiment optimization based on Fourier series.
One period (10s) to fade-in from the initial position has been prepended, and one period to fade-out to the final position has been
appended
However, it is reasonable to measure multiple periods and to average the measured torques to reduce random measurement noise.
With the measurement data of the actual motion and
the corresponding torques generated by the actuators,
the parameters have been estimated according to Eq. (25).
Prior to the estimation, the model has been extended with
a friction model as described in Eq. (20), adding 3 parameters to be estimated per actuator for the friction effects.
Including the coefficient k, controlling the steepness of the
Coulomb friction term, in the minimization problem, or
using more complex friction models did not significantly
improve the estimation result in this case. Thus, a fixed
value of k = 100 has been used. Since the manipulator arm
uses a mechanical spring to support the second actuator,
the forces generated by the spring have been computed
according to the spring constant and the deflection and
then added to the second actuator’s torque measurement.
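As a sketch, the spring correction described above amounts to a single line (a linear spring model; the names and the default rest position are our illustrative assumptions):

```python
def spring_compensated_torque(tau_measured, q, k_spring, q_rest=0.0):
    """Add the torque of the supporting mechanical spring (spring
    constant times deflection) to the raw torque measurement of the
    supported actuator."""
    return tau_measured + k_spring * (q - q_rest)
```

This corrected torque is then used in place of the raw measurement for the parameter estimation.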
Finally, a validation of the model is shown in Fig. 10.
The validation is based on the measured torques when
tracking an eight-shaped Cartesian space trajectory, also
used for the compliance tests described in Sect. 4.4. The
comparison shows that the model-computed torques are
predominantly in good agreement with the measured
torques, such that the model can be used for the further
developments.
**Fig. 10 Results from a validation experiment based on a Cartesian space trajectory; comparison of measured torques and torques computed by the model using the estimated parameters**

### 4.2 Static contact force

As a first test, a setup as shown in Fig. 11 has been used. In this scenario, the gripper holds a screw driver, which is pushed against a fixed board. Accelerations and velocities are therefore zero (neglecting elastic effects), and we can exclude effects such as the second moment of
inertia and most friction effects. Moreover, the reference
position is chosen such that the initial force is near zero,
and the influence of the position and velocity controllers can be neglected as well. Thus, the focus is on two
aspects: the forward recursion, testing the kinematic
transformations, and the distributed propagation of the
external force of the last link, including the gripper, from
the last node to the remaining nodes.
**Fig. 11 Experimental setup used in the static contact force experiment using COMPI manipulator**

By setting a nonzero force vector in the last actuator controller’s registers, the model will include this information in the model computation. Consequently,
all actuator controllers generate the necessary joint torques, creating the requested force. Since the manipulator is in contact, we can measure the generated force
exerted on the board using the 6-DoF force/torque sensor at the wrist as a ground truth. In Fig. 12, the commanded and measured forces are shown. The commanded
force is generated by a sinusoidal signal of 40N in x direction (horizontal). The measured data show that the force
in x direction resembles the commanded force, but saturates at ca. 30N. In addition, an influence on the measured force in z direction is visible. This can result from
kinematic inaccuracies, i.e., non-orthogonal orientation
of the gripper or the grasped screw driver wrt. the board.
Also, elastic effects were visible during the experiment,
in particular a bending of the board. The amplitude of
the commanded force has been chosen to approximately
cover the full range considering the nominal torque of
the actuators (28 Nm). A model-plant mismatch due to additional effects such as static friction would reduce the available torque and thus leads to a saturation of the generated force. In view of these effects and, in particular, taking into account that the generated torque is controlled via the motor currents, the measured force is still in good agreement with the open-loop commanded reference force and well within the range of expectable results.

(Figure 12 plot: measured end-effector forces in x, y, and z and the reference force in x [N] over time [s].)

**Fig. 12 End-effector forces generated using the open-loop model computation on the COMPI manipulator**

**Fig. 13 Cartesian position error of the end-effector of the COMPI manipulator for different controller gains when disturbed by an external force**
### 4.3 Point control
In a second experiment, compliant control at a static reference joint position has been evaluated. The focus in this
test was to determine the reaction of the system to disturbance forces applied to the end-effector. The experiment
has been carried out for different settings of the controller gains Kp and Kd of the position and velocity feedback
controllers, respectively. The outputs of these controllers
are added to the joint reference acceleration which is fed
into the model computation. This means a control error
in one joint state can influence the control action at the
other actuator controllers, as the model decouples joint
accelerations wrt. the actuator torques. For three different settings—high damping, medium damping, and low
damping—of the Kp and Kd parameters of the actuator
controllers, the reaction to disturbances is shown in Fig. 13.
The measurements show that the changes in the controller
parameters have the intended effect, e.g., in case of a high
Kd value the system’s reaction wrt. the position is much more strongly damped than for a low value of Kd, where some periods of a damped oscillation become visible.
**Fig. 14 Experimental setup (top) and deflection from reference**
path wrt. a disturbance force vector (bottom)
### 4.4 Trajectory tracking and compliance
Extending the previous experiment, in this experiment the
manipulator arm tracks a trajectory in the Cartesian work
space, while an external disturbance force is applied. The
reference trajectory is defined by a Lissajous figure, resulting in an eight-shaped trajectory as shown in Fig. 15. For
the joint space control used here, the inverse kinematics
are solved centrally in order to obtain the reference joint
positions, joint velocities, and joint accelerations. The
purpose of this experiment is to evaluate the complete
distributed controller setup, with a particular focus on the compliant reaction to disturbances. The experimental
setup is shown in Fig. 14.
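An eight-shaped reference of this kind can be generated as a 1:2 Lissajous figure; in this sketch the amplitudes and period are illustrative placeholders, not the values used in the experiment.

```python
import math

def figure_eight(t, A=0.2, B=0.1, T=20.0):
    """Cartesian figure-eight (Lissajous curve with a 1:2 frequency
    ratio): x traces one period while y traces two."""
    w = 2.0 * math.pi / T
    return A * math.sin(w * t), B * math.sin(2.0 * w * t)
```

The inverse kinematics then map this Cartesian reference to the joint-space references used by the distributed controllers.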
As ground truth, the end-effector force sensor is used again. This allows measuring the forces and mapping them to the positional displacement in order to obtain quantitative information on the compliance.

**Fig. 15 Cartesian reference (solid) and measured position (dashed) of the end-effector of the COMPI manipulator when disturbed while moving along a reference trajectory without payload. Time indicated by arrows at every second**

The experiment shows that
the manipulator arm reacts compliantly to disturbances
at the end-effector but also at any other link of the arm,
while tracking the reference trajectory. A high compliance,
i.e., low feedback gains, has been set. Thus, the arm reacts rather softly to disturbances. In Cartesian space, the end-effector compliance in this joint configuration results in
a value of about 10 N / 0.25 m (Fig. 17). However, in case
of large deviations from the reference trajectory, e.g., due
to contact, the feedback would lead to undesirably high contact forces. It is therefore reasonable to additionally
limit the outputs of the feedback controllers, thereby
implicitly saturating the contact forces. The manipulator
will thus give in to a disturbance above this saturation and
move back without further increasing the forces. This is an
important feature in the field of human–robot collaboration, where contact forces must not grow indefinitely, and
even large deviations from the reference trajectory should
be tolerated by the system for forces exerted by a human
operator.
Nonetheless, the experiment showed that when using
such additional saturations, the robot manipulator is still
able to recover immediately from large disturbances and
keeps tracking the trajectory accurately, e.g., comparable to using high-gain position controllers. On the other
hand, if additional external forces are actually desired, the
model should reflect that information as shown in the next
section.
### 4.5 Payload
Instead of considering the external force as a disturbance
resulting in a deviation from the reference trajectory due
to the compliance, in this experiment, the external force is
to be compensated for such that the reference trajectory is
tracked closely while the force is applied. Thus, this is a test
of whether an additional force is correctly considered in the model
computation, provided the respective actuator controller
node is informed about this additional force acting on the
link it controls. However, further forces should still result in
compliant reaction of the manipulator arm.
For this purpose, the previous experiment is carried out again, but with a payload added to the gripper. The payload is part of a car transmission and has a mass of approximately 0.8 kg. Without further modeling, the trajectory tracking accuracy is degraded due to the low feedback gains. In fact, in this setting the manipulator arm is not able to carry the weight at all, as shown in Fig. 19, which results in a severe deviation of more than 0.75 m in end-effector height. This is the expected reaction, as again a high compliance has been configured, and therefore, the feedback control loops are not capable of compensating for such a model-plant mismatch.
However, by informing the last actuator controller of
the gravity force introduced by the weight added to its
connected link, i.e., the gripper, the tracking accuracy is
improved again to the original level as in the tracking
experiment without payload, while the system still reacts
compliantly to external disturbances (Figs. 16 and 18).
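The compensation amounts to adding the payload's gravity wrench to the local model of the last actuator controller. A minimal sketch of the idea (the function name, the point-mass model, and the off-axis numbers are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

G_BASE = np.array([0.0, 0.0, -9.81])   # gravity vector in the base frame

def payload_wrench(mass, r_com, R_link_to_base):
    """Gravity wrench a point-mass payload adds to its carrying link.

    mass           : payload mass in kg
    r_com          : payload center of mass in link coordinates
    R_link_to_base : rotation of the link frame wrt. the base frame
    """
    f_link = R_link_to_base.T @ (mass * G_BASE)  # gravity force, link frame
    mu_link = np.cross(r_com, f_link)            # moment about the link origin
    return f_link, mu_link

# 0.8 kg payload slightly off-axis below the gripper, link aligned with base:
f, mu = payload_wrench(0.8, np.array([0.03, 0.0, -0.05]), np.eye(3))
# The actuator controller adds (f, mu) to its local model, so the feedforward
# torque carries the weight and the low-gain feedback no longer has to.
```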
## 5 Discussion
The purpose of this study was to reconsider the implementation of a classical control algorithm to make better use of the modular hardware modern robotic systems are composed of. In particular, the results show that it is feasible to compute an inverse dynamics model, which is usually a function of all joint states, in a distributed manner by partitioning the algorithm to match a robotic system's modular hardware structure. Thus, this work extends the approach
used in [29] where actuator-level dynamics are handled
locally; here, also the highly coupled model of rigid-body dynamics is computed locally by allowing the distributed nodes to communicate with their neighboring nodes.

**Fig. 16** Cartesian reference (solid) and measured position (dashed) of the end-effector of the COMPI manipulator when disturbed while moving along a reference trajectory with payload. Time indicated by arrows at every second

**Fig. 17** Cartesian position error and applied disturbance force during the tracking experiment without payload
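This neighbor-to-neighbor structure can be illustrated with a strongly simplified planar sketch (a 2-D chain with invented message fields; this is not the paper's 3-D FPGA implementation): each node receives kinematic quantities from its parent in an outward sweep and wrench messages from its child in an inward sweep.

```python
import numpy as np

def perp(v):
    """2-D equivalent of z x v (rotate v by +90 degrees)."""
    return np.array([-v[1], v[0]])

def cross2(a, b):
    """z-component of the planar cross product a x b."""
    return a[0] * b[1] - a[1] * b[0]

class LinkNode:
    """One actuator/link pair of a planar serial chain."""
    def __init__(self, mass, com, length, inertia):
        self.m, self.c, self.l, self.I = mass, com, length, inertia

    def forward(self, msg, q, dq, ddq):
        """Outward sweep: receive kinematics from the parent node,
        update the own link state, pass kinematics on to the child."""
        th = msg["th"] + q                  # absolute link angle
        w = msg["w"] + dq                   # angular velocity
        al = msg["al"] + ddq                # angular acceleration
        u = np.array([np.cos(th), np.sin(th)])
        self.r_com = self.c * u             # joint origin -> center of mass
        self.r_next = self.l * u            # joint origin -> next joint
        self.a_com = msg["a"] + al * perp(self.r_com) - w ** 2 * self.r_com
        a_next = msg["a"] + al * perp(self.r_next) - w ** 2 * self.r_next
        self.al = al
        return {"th": th, "w": w, "al": al, "a": a_next}

    def backward(self, msg):
        """Inward sweep: receive the child's wrench, store the own joint
        torque, and return the accumulated wrench to the parent."""
        f = msg["f"] + self.m * self.a_com
        self.tau = (msg["mu"] + self.I * self.al
                    + cross2(self.r_com, self.m * self.a_com)
                    + cross2(self.r_next, msg["f"]))
        return {"f": f, "mu": self.tau}

def inverse_dynamics(nodes, q, dq, ddq, g=9.81):
    # Gravity enters as a fictitious upward acceleration of the base.
    msg = {"th": 0.0, "w": 0.0, "al": 0.0, "a": np.array([0.0, g])}
    for node, args in zip(nodes, zip(q, dq, ddq)):
        msg = node.forward(msg, *args)       # parent -> child messages
    msg = {"f": np.zeros(2), "mu": 0.0}      # no external wrench at the tip
    for node in reversed(nodes):
        msg = node.backward(msg)             # child -> parent messages
    return [n.tau for n in nodes]

# Holding a horizontal 1-link pendulum at rest needs tau = m * g * com:
tau = inverse_dynamics([LinkNode(mass=1.0, com=0.5, length=1.0, inertia=0.1)],
                       q=[0.0], dq=[0.0], ddq=[0.0])
# tau[0] == 1.0 * 9.81 * 0.5 = 4.905 Nm
```

No node ever needs the full joint state vector; everything it uses arrives in the two messages from its immediate neighbors, which is the property that allows the computation to follow the hardware boundaries.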
From the computational perspective, the results show that using FPGA-based electronics for such model computations is possible with reasonable effort and beneficial, since model computation, controllers, communication, motor commutation, and sensor processing are handled in parallel. On the one hand, this
extends the work described in [47], where a single FPGA is used for the linear servo control loops only, while a digital signal processor (DSP) is additionally still used to carry out the nonlinear model computations at a lower sampling rate.

**Fig. 18** Cartesian position error and applied disturbance force during the tracking experiment with payload

**Fig. 19** Degradation of the tracking performance due to an unmodeled payload with a mass of 0.8 kg

On the other hand, this can be viewed as an adaptation of the approach in [33] to achieve a distribution better
suited to the hardware of a robotic system, i.e., allowing for
boundaries around each actuator/link pair.
The presented results also show how the model equations can be arranged such that a linear dependency on the dynamic parameters is kept in the distributed computation. This is beneficial since it allows direct use of experimentally identified parameters and can simplify future work on an online adaptation of the parameters.
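A practical consequence of this linearity is that parameter identification reduces to ordinary least squares over stacked regressor rows; a toy sketch of the pattern tau = Y(q, dq, ddq) pi (the basis functions below are invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# "True" dynamic parameters pi (stand-ins for masses, first moments, inertias)
pi_true = np.array([2.0, 0.3, 0.05])

def regressor(q, dq, ddq):
    """Toy regressor row Y(q, dq, ddq); the basis functions are invented,
    the point is only that the torque is linear in pi."""
    return np.array([ddq, np.sin(q), dq])

# Sample an excitation trajectory and stack the regressor rows
Q = rng.uniform(-np.pi, np.pi, 200)
DQ = rng.uniform(-1.0, 1.0, 200)
DDQ = rng.uniform(-5.0, 5.0, 200)
Y = np.array([regressor(*s) for s in zip(Q, DQ, DDQ)])
tau = Y @ pi_true + 0.01 * rng.standard_normal(200)  # noisy torque "measurements"

# Linearity in pi means ordinary least squares recovers the parameters
pi_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)
# pi_hat is approximately [2.0, 0.3, 0.05]
```

The same pattern carries over when the torque measurements are replaced by current-based estimates, which is how the experimentally identified parameters enter the distributed model directly.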
The experimental results show that a compliant behavior can be achieved for a lightweight robotic manipulator without the use of dedicated joint torque sensors, even if harmonic drive transmissions with a gear ratio of 1:100 are used. Instead of joint torque sensors, motor current measurements have been used, both to identify the model parameters from experiments and in the subsequent control implementation. The joint torques computed by the identified model showed a good agreement with the torques estimated from motor currents. Independent of
the proposed distributed control method, the accuracy
and sensitivity wrt. external forces could be increased by
additionally using more advanced estimation methods
for sensor-less torque control. For example, disturbance
observers can be applied [26, 45], or in combination with
models based on machine learning [13]. Other methods
use force–torque sensors of the system and inertial measurements to improve joint torque estimation [16], or use
dither signals to actively reduce friction effects [12]. An exact quantitative comparison of the methods is hardly possible, since the differences in hardware are substantial, i.e., between geared transmissions, direct-drive mechanisms [26], or highly backdrivable cable-driven systems [13], and, in addition, discrete levels of constant forces are considered in those works. As an approximate comparison, though, the experimental results showed that a sinusoidal force applied onto a surface in a static arm configuration was tracked well until saturation effects became visible.
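The core of such current-based torque estimation can be sketched as follows (the constants are illustrative placeholders, not the values identified for the COMPI arm):

```python
def joint_torque_from_current(i_motor, k_t=0.05, gear=100.0,
                              eta=0.8, tau_fric=0.2):
    """Joint-side torque estimated from the measured motor current.

    k_t      : motor torque constant in Nm/A (illustrative placeholder)
    gear     : transmission ratio (1:100 harmonic drives as on the COMPI arm)
    eta      : assumed gear efficiency
    tau_fric : crude constant friction term in Nm; a real implementation
               would subtract a velocity-dependent friction model instead
    """
    tau_motor = k_t * i_motor           # torque on the motor side
    return eta * gear * tau_motor - tau_fric

print(joint_torque_from_current(1.5))  # = 0.8 * 100 * (0.05 * 1.5) - 0.2, about 5.8 Nm
```

The friction and efficiency terms are exactly where the more advanced estimation schemes cited above (disturbance observers, learned models, dither signals) improve on such a naive mapping.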
As discussed in Sect. 2.3, a compliant control scheme relying on the compliance in the joint space of the system is limited by the available joint space. Thus, the achievable compliance depends on the joint configuration, in particular when joint limits are reached.
The results are valid for the case of rigid-body models and a small number of DOFs. Open questions are how the results scale with increasing system complexity, either in the number and type of DOFs or in view of more flexible systems which require more complex models. A possible future work is to evaluate how the method can be improved to handle larger communication latencies or a reduction in the frequency at which the neighboring nodes exchange their updates with each other, for instance, by implementing a prediction method which tracks the state changes of the neighboring nodes for intermediate episodes. Moreover, further experiments are needed in which the proposed method is combined with modular hardware that allows reconfiguration during operation, such as the systems shown in Fig. 1.
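One simple form of such a prediction is to extrapolate the last received neighbor state between updates; a minimal sketch (constant-velocity assumption, invented message fields):

```python
class NeighborStateTracker:
    """Keeps the last state received from a neighboring node and
    extrapolates it between (slow or delayed) updates."""

    def __init__(self):
        self.t, self.pos, self.vel = 0.0, 0.0, 0.0

    def update(self, t, pos, vel):
        """Store a freshly received neighbor message."""
        self.t, self.pos, self.vel = t, pos, vel

    def predict(self, t):
        """Constant-velocity extrapolation for the intermediate episode;
        a model-based predictor could be substituted here."""
        return self.pos + self.vel * (t - self.t)

tracker = NeighborStateTracker()
tracker.update(t=0.000, pos=0.10, vel=2.0)   # last received neighbor update
est = tracker.predict(0.0005)                # used until the next update
# est is approximately 0.101
```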
## 6 Conclusions
A method to compute the inverse dynamics model of a robotic system in a distributed manner and to use the model in local actuator controllers has been presented. An implementation using a robotic manipulator arm, utilizing distributed FPGA-based electronics controlling the individual joints, has been shown. The implementation enabled a robotic manipulator arm to be compliantly controlled and to cope with additional external forces. Using the described method, a central processing unit is relieved from the task of computing the dynamics, and thus, the associated data exchange of every actuator controller with a central point is not necessary either. In view of the increasing use of distributed, modular hardware in robotic systems, the functionality developed in this work can serve as a basis for composing modular robotic systems which directly support compliant control capabilities, i.e., without first modeling the composed system and implementing a central model-based controller.
**Acknowledgements** Open Access funding provided by Projekt DEAL. This work was performed as part of the projects BesMan (http://robotik.dfki-bremen.de/en/research/projects/besman.html) and TransFIT (http://robotik.dfki-bremen.de/en/research/projects/transfit.html), supported through grants of the German Federal Ministry for Economic Affairs and Energy (BMWi) (FKZ 50RA1216, 50RA1217, 50RA1701, 50RA1702, 50RA1703).
### Compliance with ethical standards
**Conflict of interest The authors declare that there is no conflict of**
interest.
**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
## Appendix
The following two paragraphs describe the intermediate steps to obtain two alternative partitions of the inverse dynamics model (8), depending on the choice of coordinate systems.

_Parameter vector wrt. last coordinate system_ Merging (5) into (6) cancels out $\mathbf{R}^i_{i+1}\mathbf{f}^{i+1}_{i+1} \times \mathbf{r}^i_{i,C_i}$. Furthermore, expressing the vector from the origin of frame $i-1$ to the center of mass via the origins of frames $i$ and $i-1$,

$$\mathbf{r}^i_{i-1,C_i} = \mathbf{r}^i_{i-1,i} + \mathbf{r}^i_{i,C_i}, \tag{26}$$

we obtain for $\boldsymbol{\mu}^i_i$:

$$\boldsymbol{\mu}^i_i = -(m_i\,\ddot{\mathbf{p}}^i_{C_i}) \times \mathbf{r}^i_{i-1,C_i} + \mathbf{I}^i_i\,\dot{\boldsymbol{\omega}}^i_i + \boldsymbol{\omega}^i_i \times \mathbf{I}^i_i\,\boldsymbol{\omega}^i_i + \mathbf{r}^i_{i-1,i} \times \mathbf{R}^i_{i+1}\mathbf{f}^{i+1}_{i+1} + \mathbf{R}^i_{i+1}\boldsymbol{\mu}^{i+1}_{i+1}. \tag{27}$$

Here, the center of mass' acceleration $\ddot{\mathbf{p}}^i_{C_i}$ depends on its location, which is to be estimated from experimental data. Expressing $\ddot{\mathbf{p}}^i_{C_i}$ as a function of the $(i-1)$-th origin's acceleration $\ddot{\mathbf{p}}^{i-1}_{i-1}$, known from the joint states and the robot geometry, using the vector $\mathbf{r}^i_{i-1,C_i}$ pointing from that origin to the center of mass, gives

$$\ddot{\mathbf{p}}^i_{C_i} = \mathbf{R}^i_{i-1}\ddot{\mathbf{p}}^{i-1}_{i-1} + \dot{\boldsymbol{\omega}}^i_i \times \mathbf{r}^i_{i-1,C_i} + \boldsymbol{\omega}^i_i \times \left(\boldsymbol{\omega}^i_i \times \mathbf{r}^i_{i-1,C_i}\right). \tag{28}$$

For the term $-\ddot{\mathbf{p}}^i_{C_i} \times \mathbf{r}^i_{i-1,C_i}$ in (27), we can exploit the following relations:

$$-\left(\dot{\boldsymbol{\omega}}^i_i \times \mathbf{r}^i_{i-1,C_i}\right) \times \mathbf{r}^i_{i-1,C_i} = \mathbf{S}^T\!\left(\mathbf{r}^i_{i-1,C_i}\right)\mathbf{S}\!\left(\mathbf{r}^i_{i-1,C_i}\right)\dot{\boldsymbol{\omega}}^i_i, \tag{29}$$

$$-\left(\boldsymbol{\omega}^i_i \times \left(\boldsymbol{\omega}^i_i \times \mathbf{r}^i_{i-1,C_i}\right)\right) \times \mathbf{r}^i_{i-1,C_i} = \boldsymbol{\omega}^i_i \times \mathbf{S}^T\!\left(\mathbf{r}^i_{i-1,C_i}\right)\mathbf{S}\!\left(\mathbf{r}^i_{i-1,C_i}\right)\boldsymbol{\omega}^i_i, \tag{30}$$

where $\mathbf{S}$ is defined as

$$\mathbf{S}(\boldsymbol{\omega}) := \begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix}, \tag{31}$$

the relation $\mathbf{S}(\boldsymbol{\omega}) = -\mathbf{S}(\boldsymbol{\omega})^T$ (skew symmetry) holds, and for the cross product of two vectors in $\mathbb{R}^3$ we have $\boldsymbol{\omega} \times \mathbf{r} = \mathbf{S}(\boldsymbol{\omega})\mathbf{r} = -\mathbf{S}(\mathbf{r})\boldsymbol{\omega}$. Expressing the inertia tensor $\mathbf{I}^i_i$ wrt. the origin of the coordinate system $i-1$ instead of the center of mass allows further terms in (27) to be combined. In particular, we get an additional term according to the parallel axis theorem (Steiner's theorem), such that the new inertia tensor $\hat{\mathbf{I}}^i_i$ is given by

$$\hat{\mathbf{I}}^i_i := \mathbf{I}^i_i + m_i\,\mathbf{S}^T\!\left(\mathbf{r}^i_{i-1,C_i}\right)\mathbf{S}\!\left(\mathbf{r}^i_{i-1,C_i}\right). \tag{32}$$

By application of (32), we get for $\boldsymbol{\mu}^i_i$ and $\mathbf{f}^i_i$:

$$\boldsymbol{\mu}^i_i = -\mathbf{R}^i_{i-1}\ddot{\mathbf{p}}^{i-1}_{i-1} \times \left(m_i\,\mathbf{r}^i_{i-1,C_i}\right) + \hat{\mathbf{I}}^i_i\,\dot{\boldsymbol{\omega}}^i_i + \boldsymbol{\omega}^i_i \times \hat{\mathbf{I}}^i_i\,\boldsymbol{\omega}^i_i + \mathbf{r}^i_{i-1,i} \times \mathbf{R}^i_{i+1}\mathbf{f}^{i+1}_{i+1} + \mathbf{R}^i_{i+1}\boldsymbol{\mu}^{i+1}_{i+1}, \tag{33}$$

$$\mathbf{f}^i_i = \mathbf{R}^i_{i+1}\mathbf{f}^{i+1}_{i+1} + \mathbf{R}^i_{i-1}\ddot{\mathbf{p}}^{i-1}_{i-1}\,m_i + \left(\mathbf{S}(\dot{\boldsymbol{\omega}}^i_i) + \mathbf{S}(\boldsymbol{\omega}^i_i)\mathbf{S}(\boldsymbol{\omega}^i_i)\right) m_i\,\mathbf{r}^i_{i-1,C_i}. \tag{34}$$

Rewriting Eqs. (33) and (34), we can express the force/torque vector of body $i$ as a single wrench vector:

$$\begin{pmatrix} \mathbf{f}^i_i \\ \boldsymbol{\mu}^i_i \end{pmatrix} = \mathbf{W}_i\,\boldsymbol{\pi}_i + \mathbf{T}^i_{W,i+1}\begin{pmatrix} \mathbf{f}^{i+1}_{i+1} \\ \boldsymbol{\mu}^{i+1}_{i+1} \end{pmatrix}. \tag{35}$$

The matrix $\mathbf{W}_i$ is consecutively given by

$$\mathbf{W}_i = \begin{pmatrix} \mathbf{R}^i_{i-1}\ddot{\mathbf{p}}^{i-1}_{i-1} & \mathbf{g}(\boldsymbol{\omega}^i_i) & \mathbf{0} \\ \mathbf{0} & -\mathbf{S}\!\left(\mathbf{R}^i_{i-1}\ddot{\mathbf{p}}^{i-1}_{i-1}\right) & \mathbf{h}(\boldsymbol{\omega}^i_i) \end{pmatrix}, \tag{36}$$

using the definitions

$$\mathbf{g}(\boldsymbol{\omega}^i_i) = \mathbf{S}(\dot{\boldsymbol{\omega}}^i_i) + \mathbf{S}(\boldsymbol{\omega}^i_i)\mathbf{S}(\boldsymbol{\omega}^i_i), \tag{37}$$

$$\mathbf{h}(\boldsymbol{\omega}^i_i) = \mathbf{L}(\dot{\boldsymbol{\omega}}^i_i) + \mathbf{S}(\boldsymbol{\omega}^i_i)\mathbf{L}(\boldsymbol{\omega}^i_i), \tag{38}$$

as well as $\mathbf{L}$ as defined by (14). Thus, the resulting vector $\boldsymbol{\pi}_i \in \mathbb{R}^{10}$ of dynamic parameters of body $i$ is defined by

$$\boldsymbol{\pi}_i = \left(m_i,\ m_i r^i_{i-1,C_i,x},\ m_i r^i_{i-1,C_i,y},\ m_i r^i_{i-1,C_i,z},\ \hat{I}^i_{i,xx},\ \hat{I}^i_{i,xy},\ \hat{I}^i_{i,xz},\ \hat{I}^i_{i,yy},\ \hat{I}^i_{i,yz},\ \hat{I}^i_{i,zz}\right)^T. \tag{39}$$

_Parameter vector wrt. next coordinate system_ It is also possible to choose a vector related to the next coordinate system origin $i$ for the first-moment-of-inertia parameters. While equally correct, some additional terms remain in the expressions for the force and torque vector of body $i$. For the implementation, this set of parameters is chosen in order to reuse the previously identified parameters of the Lagrange model and thus to allow a direct comparison of the results.

In particular, we express the linear acceleration of the center of mass of body $i$, $\ddot{\mathbf{p}}^i_{C_i}$, wrt. $\mathbf{r}^i_{i,C_i}$:

$$\ddot{\mathbf{p}}^i_{C_i} = \ddot{\mathbf{p}}^i_i + \dot{\boldsymbol{\omega}}^i_i \times \mathbf{r}^i_{i,C_i} + \boldsymbol{\omega}^i_i \times \left(\boldsymbol{\omega}^i_i \times \mathbf{r}^i_{i,C_i}\right). \tag{40}$$

For the inertia tensor, we shift to the coordinate system $i$:

$$\bar{\mathbf{I}}^i_i := \mathbf{I}^i_i + m_i\,\mathbf{S}^T\!\left(\mathbf{r}^i_{i,C_i}\right)\mathbf{S}\!\left(\mathbf{r}^i_{i,C_i}\right). \tag{41}$$

Again merging (5) into (6), and using (40) and (41), now gives:

$$\boldsymbol{\mu}^i_i = m_i\,\mathbf{S}(\mathbf{r}^i_{i-1,i})\,\ddot{\mathbf{p}}^i_i + m_i\,\mathbf{S}(\mathbf{r}^i_{i-1,i})\left(\mathbf{S}(\dot{\boldsymbol{\omega}}^i_i) + \mathbf{S}(\boldsymbol{\omega}^i_i)\mathbf{S}(\boldsymbol{\omega}^i_i)\right)\mathbf{r}^i_{i,C_i} - m_i\,\mathbf{S}(\ddot{\mathbf{p}}^i_i)\,\mathbf{r}^i_{i,C_i} + \bar{\mathbf{I}}^i_i\,\dot{\boldsymbol{\omega}}^i_i + \boldsymbol{\omega}^i_i \times \bar{\mathbf{I}}^i_i\,\boldsymbol{\omega}^i_i + \mathbf{r}^i_{i-1,i} \times \mathbf{R}^i_{i+1}\mathbf{f}^{i+1}_{i+1} + \mathbf{R}^i_{i+1}\boldsymbol{\mu}^{i+1}_{i+1}, \tag{42}$$

$$\mathbf{f}^i_i = \mathbf{R}^i_{i+1}\mathbf{f}^{i+1}_{i+1} + \ddot{\mathbf{p}}^i_i\,m_i + \left(\mathbf{S}(\dot{\boldsymbol{\omega}}^i_i) + \mathbf{S}(\boldsymbol{\omega}^i_i)\mathbf{S}(\boldsymbol{\omega}^i_i)\right) m_i\,\mathbf{r}^i_{i,C_i}. \tag{43}$$

Using the vector $\boldsymbol{\pi}_i \in \mathbb{R}^{10}$ of dynamic parameters of body $i$,

$$\boldsymbol{\pi}_i = \left(m_i,\ m_i r^i_{i,C_i,x},\ m_i r^i_{i,C_i,y},\ m_i r^i_{i,C_i,z},\ \bar{I}^i_{i,xx},\ \bar{I}^i_{i,xy},\ \bar{I}^i_{i,xz},\ \bar{I}^i_{i,yy},\ \bar{I}^i_{i,yz},\ \bar{I}^i_{i,zz}\right)^T, \tag{44}$$

the matrix $\mathbf{W}_i$ is consecutively given by

$$\mathbf{W}_i = \begin{pmatrix} \ddot{\mathbf{p}}^i_i & \mathbf{g}(\boldsymbol{\omega}^i_i) & \mathbf{0} \\ \mathbf{S}(\mathbf{r}^i_{i-1,i})\,\ddot{\mathbf{p}}^i_i & \mathbf{S}(\mathbf{r}^i_{i-1,i})\,\mathbf{g}(\boldsymbol{\omega}^i_i) - \mathbf{S}(\ddot{\mathbf{p}}^i_i) & \mathbf{h}(\boldsymbol{\omega}^i_i) \end{pmatrix}. \tag{45}$$

_Projection on motion axis_ Eq. (15) describes the forces and torques, i.e., a 6-DOF wrench, of the complete body $i$. To obtain (7), the following transformations can be used:

$$\tau_i = \left(\boldsymbol{\mu}^i_i\right)^T \mathbf{R}^i_{i-1}\mathbf{z}^{i-1}_{i-1} = \left(\mathbf{W}_{\mu,i}\,\boldsymbol{\pi}\right)^T \mathbf{z}^i_{i-1} = \boldsymbol{\pi}^T\,\mathbf{W}^T_{\mu,i}\,\mathbf{z}^i_{i-1} = \left(\mathbf{W}^T_{\mu,i}\,\mathbf{z}^i_{i-1}\right)^T \boldsymbol{\pi}, \tag{46}$$

where $\mathbf{W}_{\mu,i}$ refers to the lower three rows of the matrix $\mathbf{W}_i$.

For a chain of bodies interconnected with rotational joints, we can consecutively define

$$\mathbf{Y}(\mathbf{q}, \dot{\mathbf{q}}, \ddot{\mathbf{q}}) = \begin{pmatrix} \left(\mathbf{W}^T_{\mu,1}\,\mathbf{z}^1_0\right)^T \\ \vdots \\ \left(\mathbf{W}^T_{\mu,n}\,\mathbf{z}^n_{n-1}\right)^T \end{pmatrix}, \tag{47}$$

while keeping the linear relation to the dynamic parameters as in (8).

## References

1. An CH, Atkeson CG, Hollerbach JM (1988) Model-based control of a robot manipulator (Artificial Intelligence Series). MIT Press, New York
2. Arkin RC (1998) Behavior-based robotics, 1st edn. MIT Press, Cambridge
3. Armstrong BS (1988) Dynamics for robot control: friction modeling and ensuring excitation during parameter identification. Tech. rep., Dept. of Computer Science, Stanford University, Stanford, CA
4. Armstrong-Hélouvry B (1991) Control of machines with friction. OCLC: 990688490
5. Asada H, Slotine JJE (1986) Robot analysis and control. Wiley, New York
6. Atkeson CG, An CH, Hollerbach JM (1986) Estimation of inertial parameters of manipulator loads and links. Int J Robot Res 5(3):101–119. https://doi.org/10.1177/027836498600500306
7. Baele G, Bredeche N, Haasdijk E, Maere S, Michiels N, Van de Peer Y, Schmickl T, Schwarzer C, Thenius R (2009) Open-ended on-board evolutionary robotics for robot swarms. In: 2009 IEEE congress on evolutionary computation, pp 1123–1130. IEEE, Trondheim, Norway. https://doi.org/10.1109/CEC.2009.4983072
8. Bargsten V, Zometa P, Findeisen R (2013) Modeling, parameter identification and model-based control of a lightweight robotic manipulator. In: 2013 IEEE international conference on control applications (CCA), pp 134–139. IEEE, Hyderabad, India. https://doi.org/10.1109/CCA.2013.6662756
9. Botvinick MM, Niv Y, Barto AC (2009) Hierarchically organized behavior and its neural foundations: a reinforcement learning perspective. Cognition 113(3):262–280. https://doi.org/10.1016/j.cognition.2008.08.011
10. Brinkmann W, Cordes F, Roehr TM, Christensen L, Stark T, Sonsalla RU, Szczuka R, Mulsow NA, Bernhard F, Kuehn D (2018) Modular payload-items for payload-assembly and system enhancement for future planetary missions. In: 2018 IEEE aerospace conference, pp 1–12. IEEE, Big Sky, MT. https://doi.org/10.1109/AERO.2018.8396614
11. Brooks RA (1990) A robust layered control system for a mobile robot. In: Winston PH, Shellard SA (eds) Artificial intelligence at MIT. MIT Press, Cambridge, pp 2–27
12. Cho H, Kim M, Lim H, Kim D (2014) Cartesian sensor-less force control for industrial robots. In: 2014 IEEE/RSJ international conference on intelligent robots and systems, pp 4497–4502. IEEE, Chicago, IL, USA. https://doi.org/10.1109/IROS.2014.6943199
13. Colome A, Pardo D, Alenya G, Torras C (2013) External force estimation during compliant robot manipulation. In: 2013 IEEE international conference on robotics and automation, pp 3535–3540. IEEE, Karlsruhe, Germany. https://doi.org/10.1109/ICRA.2013.6631072
14. Craig JJ (2005) Introduction to robotics: mechanics and control, 3rd edn. Pearson/Prentice Hall, Upper Saddle River
15. D'Andrea R, Dullerud G (2003) Distributed control design for spatially interconnected systems. IEEE Trans Autom Control 48(9):1478–1495. https://doi.org/10.1109/TAC.2003.816954
16. Del Prete A, Mansard N, Ramos OE, Stasse O, Nori F (2016) Implementing torque control with high-ratio gear boxes and without joint-torque sensors. Int J Humanoid Rob 13(01):1550044. https://doi.org/10.1142/S0219843615500449
17. Duan S, Anderson K (2000) Parallel implementation of a low order algorithm for dynamics of multibody systems on a distributed memory computing system. Eng Comput 16(2):96–108. https://doi.org/10.1007/PL00007191
18. Eberhard P, Schiehlen W (1998) Hierarchical modeling in multibody dynamics. Arch Appl Mech 68(3–4):237–246. https://doi.org/10.1007/s004190050161
19. Gautier M, Khalil W (1992) Exciting trajectories for the identification of base inertial parameters of robots. Int J Robot Res 11(4):362–375. https://doi.org/10.1177/027836499201100408
20. Hilljegerdes J, Kampmann P, Bosse S, Kirchner F (2009) Development of an intelligent joint actuator prototype for climbing and walking robots. In: Mobile robotics: solutions and challenges, pp 942–949. Istanbul, Turkey
21. Hollerbach JM (1980) A recursive Lagrangian formulation of manipulator dynamics and a comparative study of dynamics formulation complexity. IEEE Trans Syst Man Cybern 10(11):730–736. https://doi.org/10.1109/TSMC.1980.4308393
22. Kernbach S, Hamann H, Stradner J, Thenius R, Schmickl T, Crailsheim K, van Rossum A, Sebag M, Bredeche N, Yao Y, Baele G, de Peer YV, Timmis J, Mohktar M, Tyrrell A, Eiben A, McKibbin S, Liu W, Winfield AF (2009) On adaptive self-organization in artificial robot organisms. In: 2009 computation world: future computing, service computation, cognitive, adaptive, content, patterns, pp 33–43. IEEE, Athens, Greece. https://doi.org/10.1109/ComputationWorld.2009.9
23. Khalil W, Dombre E (2002) Modeling, identification and control of robots. HPS, London. OCLC: 248269638
24. Klotzbach S, Henrichfreise H (2002) Entwicklung, Implementierung und Einsatz eines nichtlinearen Reibmodells für die numerische Simulation reibungsbehafteter mechatronischer Systeme [Development, implementation and application of a nonlinear friction model for the numerical simulation of mechatronic systems subject to friction]
25. Massioni P, Verhaegen M (2009) Distributed control for identical dynamically coupled systems: a decomposition approach. IEEE Trans Autom Control 54(1):124–135. https://doi.org/10.1109/TAC.2008.2009574
26. Murakami T, Yu F, Ohnishi K (1993) Torque sensorless control in multidegree-of-freedom manipulator. IEEE Trans Industr Electron 40(2):259–265. https://doi.org/10.1109/41.222648
27. Orin D, McGhee R, Vukobratović M, Hartoch G (1979) Kinematic and kinetic analysis of open-chain linkages utilizing Newton–Euler methods. Math Biosci 43(1–2):107–130. https://doi.org/10.1016/0025-5564(79)90104-4
28. Oxford English Dictionary: "Subsidiarity" (2018)
29. Paine N, Mehling JS, Holley J, Radford NA, Johnson G, Fok CL, Sentis L (2015) Actuator control for the NASA-JSC Valkyrie humanoid robot: a decoupled dynamics approach for torque control of series elastic robots. J Field Robot 32(3):378–396. https://doi.org/10.1002/rob.21556
30. Peters H, Kampmann P, Simnofske M (2017) Konstruktion eines zweibeinigen humanoiden Roboters [Construction of a two-legged humanoid robot]. In: Proceedings of the 2. VDI Fachkonferenz Humanoide Roboter, December 5–6, München, Germany
31. Presse C, Gautier M (1993) New criteria of exciting trajectories for robot identification. In: Proceedings of the IEEE international conference on robotics and automation, pp 907–912. IEEE Comput Soc Press, Atlanta, GA, USA. https://doi.org/10.1109/ROBOT.1993.292259
32. Rackl W, Lampariello R, Hirzinger G (2012) Robot excitation trajectories for dynamic parameter estimation using optimized B-splines. In: 2012 IEEE international conference on robotics and automation, pp 2042–2047. IEEE, St Paul, MN, USA. https://doi.org/10.1109/ICRA.2012.6225279
33. Rajagopalan R (1996) Distributed computation of inverse dynamics of robots. In: Proceedings of the 3rd international conference on high performance computing (HiPC), pp 106–112. IEEE Comput Soc Press, Trivandrum, India. https://doi.org/10.1109/HIPC.1996.565806
34. ROCK: The Robot Construction Kit. http://www.rock-robotics.org (2018)
35. Siciliano B, Khatib O (eds) (2008) Springer handbook of robotics. Springer, Berlin
36. Siciliano B, Sciavicco L, Villani L (2008) Robotics: modelling, planning and control. Advanced textbooks in control and signal processing, 1st edn, 2nd printing. Springer, Berlin
37. Sonsalla R, Akpo JB, Kirchner F (2015) Coyote III: development of a modular and highly mobile micro rover. In: Proceedings of the 13th symposium on advanced space technologies in robotics and automation (ASTRA-2015). ESA, Noordwijk, The Netherlands
38. Sonsalla R, Hanff H, Schöberl P, Stark T, Mulsow NA (2017) DFKI-X: a novel, compact and highly integrated robotics joint for space applications. In: Proceedings of the 17th European space mechanisms and tribology symposium. ESMATS, Hatfield, United Kingdom
39. Spall JC (2003) Introduction to stochastic search and optimization: estimation, simulation, and control. Wiley, New York
40. Swevers J, Ganseman C, De Schutter J, Van Brussel H (1996) Experimental robot identification using optimised periodic trajectories. Mech Syst Signal Process 10(5):561–577. https://doi.org/10.1006/mssp.1996.0039
41. Swevers J, Ganseman C, Tukel D, de Schutter J, Van Brussel H (1997) Optimal robot excitation and identification. IEEE Trans Robot Autom 13(5):730–740. https://doi.org/10.1109/70.631234
42. Swevers J, Verdonck W, De Schutter J (2007) Dynamic model identification for industrial robots. IEEE Control Syst Mag 27(5):58–71. https://doi.org/10.1109/MCS.2007.904659
43. Traversaro S, Brossette S, Escande A, Nori F (2016) Identification of fully physical consistent inertial parameters using optimization on manifolds. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS), pp 5446–5451. IEEE, Daejeon, South Korea. https://doi.org/10.1109/IROS.2016.7759801
44. Turney JL, Mudge TN, Lee C (1980) Equivalence of two formulations for robot arm dynamics
45. Wahrburg A, Bos J, Listmann KD, Dai F, Matthias B, Ding H (2018) Motor-current-based estimation of Cartesian contact forces and torques for robotic manipulators and its application to force control. IEEE Trans Autom Sci Eng 15(2):879–886. https://doi.org/10.1109/TASE.2017.2691136
46. Wensing PM, Kim S, Slotine JJE (2018) Linear matrix inequalities for physically consistent inertial parameter identification: a statistical perspective on the mass distribution. IEEE Robot Autom Lett 3(1):60–67. https://doi.org/10.1109/LRA.2017.2729659
47. Shao X, Sun D (2006) Development of an FPGA-based motion control ASIC for robotic manipulators. In: 2006 6th world congress on intelligent control and automation, pp 8221–8225. IEEE, Dalian, China. https://doi.org/10.1109/WCICA.2006.1713577
48. Zhao Y, Paine N, Kim KS, Sentis L (2015) Stability and performance limits of latency-prone distributed feedback controllers. arXiv:1501.02854 [cs]
49. Zhao Y, Sentis L (2018) Distributed impedance control of latency-prone robotic systems with series elastic actuation. arXiv:1811.11573 [cs]

**Publisher's Note** Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
},
{
"paperId": "42ece6df1ea4c8aa4ee700c1d1b112b8f4a03feb",
"title": "Control of machines with friction"
},
{
"paperId": "9d2d9e2d2b91c5d162ae755b68d998c1b340fad1",
"title": "Introduction to robotics - mechanics and control (2. ed.)"
},
{
"paperId": "ba9284b1810475b2a45b658f093b1fd01a58a1d7",
"title": "Robot analysis and control"
},
{
"paperId": "f0fa3fd5497369d8eb3b3986080cfd00f130595f",
"title": "Dynamics for robot control: friction modeling and ensuring excitation during parameter identification"
},
{
"paperId": "1cc920998208f988a873dbbfa0315274d0b51b57",
"title": "Introduction to Robotics Mechanics and Control"
},
{
"paperId": "0683c94441f62f28f61b112c38945bc3f257d6d8",
"title": "Equivalence of two formulations for robot arm dynamics"
},
{
"paperId": null,
"title": "poorly scale with large state spaces"
},
{
"paperId": null,
"title": "finding a controller with a distributed architecture” [25]"
}
] | 25,510
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Education",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02d3246795d7868df2555738157c5fd72c01221c
|
[
"Computer Science"
] | 0.926436
|
Towards Big Data in Education: The Case at the Open University of the Netherlands
|
02d3246795d7868df2555738157c5fd72c01221c
|
[
{
"authorId": "2673726",
"name": "H. Vogten"
},
{
"authorId": "1747579",
"name": "R. Koper"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
## Towards Big Data in Education: the case at the Open University of the Netherlands
Citation for published version (APA):
Vogten, H., & Koper, R. (2018). Towards Big Data in Education: the case at the Open University of the Netherlands. In M. Spector, V. Kumar, A. Essa, Y-M. Huang, R. Koper, R. Tortorella, T-W. Chang, Y. Li, & Z. Zhang (Eds.), Frontiers of Cyberlearning: Emerging Technologies for Teaching and Learning (pp. 125-143). Springer. [https://doi.org/10.1007/978-981-13-0650-1_7](https://doi.org/10.1007/978-981-13-0650-1_7)
**DOI:**
[10.1007/978-981-13-0650-1_7](https://doi.org/10.1007/978-981-13-0650-1_7)
**Document status and date:**
Published: 05/10/2018
**Document Version:**
Peer reviewed version
[Link to publication](https://research.ou.nl/en/publications/b833d463-4d66-46c8-b454-e538f232f4b5)
### Towards Big Data in Education: the case at the Open University of the Netherlands
**Hubert Vogten**
**Open University of the Netherlands**
**Valkenburgerweg 177**
**6419 AT Heerlen**
**[hubert.vogten@ou.nl](mailto:hubert.vogten@ou.nl)**
**+31 45 5762126**
**Rob Koper**
**Open University of the Netherlands**
**Valkenburgerweg 177**
**6419 AT Heerlen**
**[rob.koper@ou.nl](mailto:rob.koper@ou.nl)**
**+31 45 5762657**
#### Introduction
When reviewing technology developments over the past centuries, a pattern emerges: the rate of these developments is not evenly spread over time; rather, there seem to be pivotal moments when key developments and discoveries accelerate and fuel a whole range of derived advancements. Some examples of this flywheel effect are the harnessing of steam power, the introduction of electrical power, the discovery of the transistor, and the visionary work on user interfaces by Douglas Engelbart (Engelbart and English 1968) and his team.
We argue that we have reached such a pivotal moment in time again, although this time the field is data science. Data science is the emerging intersection of disciplines such as social science, statistics, and information and computer science. The internet, social networks, new devices such as mobile devices, and more recently the internet of things are responsible for an explosion of digital data, which is increasing exponentially each year. Some forecasts predict that we will produce and consume 40 zettabytes by 2020 (Gantz and Reinsel 2012). Data science is all about making sense of these vast amounts of partly unstructured data, so-called 'big data'. Three intertwined key developments have spurred on the data science field.
Firstly, there is the rise of cloud computing, which makes data storage increasingly cheap and ubiquitous while at the same time providing us with cheap, on-demand and virtually endless processing power. Cloud computing is a double-edged sword, as it is not only the backbone for the services that are the source of the big data in the first place, but it also provides the computing resources, processing and storage, needed for the data science services themselves. Secondly, there are the recent advances and developments in distributed computing technologies. Google's paper on their MapReduce algorithm (Dean and Ghemawat 2008) resulted in a whole range of distributed software systems, libraries and services with the common denominator that they scale very well and are therefore very suitable for processing big data. Thirdly, there have been impressive advances in the field of machine learning, to such a degree that nowadays artificial intelligence and machine learning are often considered synonymous. Especially deep learning, which builds on the relatively old idea of neural networks, reaching back as far as the 1950s with the Perceptron project (Rosenblatt 1958), has shown great promise, because large amounts of data combined with ample processing power made this old idea viable, albeit with some essential twists on the original.
All these developments, glued together via the internet, provide the necessary means to do 'clever stuff' with these big data or, phrased more eloquently, they enable the development of smart services. These smart services will affect all of society and hence also education. The idea of educational smart services is not entirely new: educational data mining and learning analytics have been around for a while. However, in practice the data primarily stem from the learning management system and are relatively limited. Solutions often use traditional and proven technologies, such as learning record stores that depend on relational databases. This approach may be appropriate for now, but it is in our view too limited for the next generation of smart services, as relevant data continue to grow exponentially and are not restricted to the LMS. We can expect that data will not merely be the result of human interactions but will also be generated by smart devices such as wearables and the internet of things. Research carried out at OUNL on the relation between biometric variables and learning effectiveness showed that traditional learning record stores could not cope with the large data streams produced in the experiment (Di Mitri et al. 2016).
In 2016, the Open University of the Netherlands (OUNL) launched a new project called 'Data Sponge' (DS) with the ambition to research and develop an enterprise-level big data infrastructure for OUNL that will enable and stimulate the development of educational smart services. OUNL is in a relatively good position to do so, as in 2015 it completed a major step in restructuring its educational model (Schlussmans et al. 2016), moving from a guided self-study model for distance education towards an activated learning model for distance education. This model change was accompanied by the introduction of a completely new learning management system (LMS) (Koper 2014) (Vogten and Koper 2014). The combination of this new educational model and new LMS was also a major step towards a fully digital university, and as a result OUNL has access to a fair amount of data.
Several departments at OUNL are already making use of these data: the data warehouse of OUNL captures data from various administrative systems, mainly to produce information for management; faculties use the LMS, which incorporates a proprietary data store to monitor students' and tutors' progress; the Welten Institute research center has developed an infrastructure for learning analytics that captures biometric data using Google services. What becomes clear is that these efforts are dispersed and therefore not as effective as they could be. Furthermore, these initiatives are bounded by their respective departments and, as a result, data are only sparsely available throughout the wider organization. In other words, OUNL has no "single integrated version of the truth" with respect to its data.
DS should overcome the typical obstructions encountered when trying to get hold of dispersed data across various source systems and departments. DS has the ambition to be the single integrated version of the 'truth' for researchers, developers of smart services and OUNL's management. As a consequence, DS should collect as much data as possible, even data that are not used yet. One could argue that it makes no sense to store unused data, as they can be retrieved later from their respective source systems. This is a faulty assumption, however: we have to be aware that the vast majority of today's databases reflect designs from decades ago, when memory and disks were very small and very expensive. Databases simply could not afford to keep a so-called change log. Rather, these databases typically only contain the last known state of an entity, which is the result of consecutively applying all incoming changes. As a consequence, if we don't take any measures, the history of these changes is lost forever. This change log can be essential when developing new smart services, so we need an infrastructure that keeps track of all these changes for a variety of data sources.
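The contrast between a last-state database and a change-log-oriented store can be sketched in a few lines. This is a toy illustration; the class names and fields are ours, not part of any OUNL system:

```python
# Minimal contrast between a last-state store and an append-only change log.
# All names here are illustrative, not part of any OUNL system.

class LastStateStore:
    """Keeps only the most recent value per key; history is lost."""
    def __init__(self):
        self._state = {}

    def update(self, key, value):
        self._state[key] = value  # overwrites: the previous value is gone

    def get(self, key):
        return self._state.get(key)


class ChangeLogStore:
    """Appends every change; current state is derived by replaying the log."""
    def __init__(self):
        self._log = []  # append-only sequence of (key, value) changes

    def update(self, key, value):
        self._log.append((key, value))

    def get(self, key):
        # Fold over the full history to reconstruct the latest state.
        current = None
        for k, v in self._log:
            if k == key:
                current = v
        return current

    def history(self, key):
        """Return every value this key has ever held, in order."""
        return [v for k, v in self._log if k == key]
```

Both stores answer "what is the current value?", but only the change log can still answer "what was it before?".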
Furthermore, some event data are currently not stored in any of OUNL's systems but are still very relevant when developing smart services. Examples are mouse clicks, browsing behavior, biometric data, etc. DS should be capable of capturing these fine-grained event data as well, which will not only result in large amounts of data, but will also affect the throughput requirements and characteristics of DS. The DS architecture should be capable of dealing with the back pressure arising from sudden bursts of vast amounts of incoming data.
These immutable event and change-log data resemble journal entries in a ledger for the enterprise. As these data are immutable, the amount of data will only grow, so DS should be capable of dealing with a very large ledger. Such an enterprise-wide ledger is also known as an Enterprise Data Lake. This ledger can be suitable for some statistical analytics, but most likely it is not suitable for direct use by most smart services. The ledger data have to be transformed into different, more suitable formats, sub-selections and aggregations for effective processing by most smart services. The prompt transformation of the event data in the ledger is an essential requirement for DS. The term 'prompt' is relevant here, as some smart services may have to provide virtually instantaneous feedback using the most recent data, while others are much more lenient and are perfectly fine working with data that is maybe a couple of days old. DS must be suited for both real-time and more batch-oriented smart services. The transformed data that can be queried by the smart services is called the data factory.
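As an illustration of this transformation step, the sketch below derives one hypothetical 'data factory' view, clicks per student, from a list of raw ledger events. The event fields and the aggregation are invented for this example:

```python
# Sketch: deriving a query-friendly "data factory" view from immutable
# ledger events. The event shape and the aggregation are invented examples.

from collections import defaultdict

def build_clicks_per_student(ledger):
    """Aggregate raw click events into a per-student count view."""
    view = defaultdict(int)
    for event in ledger:
        if event["type"] == "click":
            view[event["student"]] += 1
    return dict(view)

# A tiny immutable ledger of fine-grained events.
ledger = [
    {"type": "click", "student": "s1"},
    {"type": "click", "student": "s2"},
    {"type": "login", "student": "s1"},
    {"type": "click", "student": "s1"},
]
```

The ledger itself is never modified; a bug fix in `build_clicks_per_student` simply means rebuilding the view from the same events.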
Obviously, the development of such smart services is an ongoing process. New smart services will be developed while existing smart services have to be maintained, because, for example, the provided data formats have changed as a result of alterations in one of the source systems. Furthermore, it must be possible to repair bugs in the data transformations without losing any data as a result. The DS architecture should have provisions for updating existing smart services and adding new ones without the risk of losing any data or producing incorrect results.
**Fig. 7.1: High-level Data Sponge architecture.**
In addition, DS should facilitate the discovery and development of new smart services. It is crucial that data analysts can get a good understanding of the nature of the available data, so that they can develop new hypotheses and questions that could be answered via new smart services. It should also be possible to develop prototypes validating these assumptions in a very agile way. These are typically functions of a Data Lab. The DS architecture must provide the required agility to support this functionality as well.
Figure 7.1 depicts a high-level view of the resulting DS architecture. OUNL has partnered with SURFsara, which will provide the infrastructure for DS through its high-performance computing cloud platform (SURFsara 2017). In the remainder of this chapter we will derive the main non-functional requirements of DS and see how we can meet them. Finally, we will describe the resulting DS architecture in more detail.
#### Data Sponge requirements
From the discussion in the previous sections we can derive a set of non-functional requirements which DS and its underlying architecture have to meet. We define four major requirements: scalability, availability, reliability and flexibility.
_Scalability_ is the term we use to describe a system's ability to cope with increased load. Load can be parametrized by the size of the data and the number of data packages. We shall define 'coping' as being able to deliver similar performance even when the load metrics change. Performance can be measured by throughput, that is, how much data can be processed on average within a certain time period; this is a good indicator of the extent to which the data are up to date. For near real-time systems, such as online systems, latency is a very important performance indicator. We define latency as the time it takes from the start of a request until the delivery of the requested data. DS must guarantee scalability in both respects: DS throughput should not deteriorate when more, potentially much more, data is produced, and DS latency should not deteriorate when the data load increases.
_Availability_ will be rather intuitively defined as the ratio of the total time DS is operational during a given interval to the length of that interval. High availability of DS is of the utmost importance, as downtime would lead not only to inaccurate data for various smart services, but also to potential permanent loss of data, since incoming data cannot be processed. This is especially the case when these data are not stored by any other source system of OUNL or are fed exclusively into DS. The architecture of DS should ensure that disturbances, such as hardware failures, do not impact its availability.
_Reliability_ is the measure of how far we can trust the data in DS to be correct and up to date. There is an obvious relationship with scalability and availability. However, a scalable and highly available DS does not in itself guarantee that data are correct. We must expect incoming data to be erroneous from time to time, for example due to human error. The DS architecture is considered reliable when it provides the means to correct such errors once they have been detected.
_Flexibility_ is a measure of the extent to which DS can handle changes in the system. Data fed into DS will change over time as the source systems evolve. This applies not only to new data types, but also to changes in existing data types. Similarly, smart services may require different data types as they evolve over time. DS should be able to cope with these changes without compromising availability and reliability.
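The quantitative measures among these requirements can be stated in a few lines of code. This is a trivial sketch; the helper functions are ours, not part of DS:

```python
# Simple formulations of the performance and availability measures defined
# in the text. These helpers are illustrative, not part of DS.

def throughput(items_processed, interval_seconds):
    """Average amount of data processed per second over an interval."""
    return items_processed / interval_seconds

def latency(request_start, delivery_time):
    """Time from the start of a request until delivery of the data."""
    return delivery_time - request_start

def availability(operational_seconds, interval_seconds):
    """Ratio of the time DS is operational to the length of the interval."""
    return operational_seconds / interval_seconds
```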
In the next sections we will discuss different architectures that can meet these requirements and look in more detail at the proposed architecture for DS. We will address each requirement in more detail and discuss how these requirements influence the architectural choices. Finally, we will present a high-level overview of the DS architecture.
#### Consequences for Data Sponge architecture
The DS architecture must meet the scalability, reliability, availability and flexibility requirements. The first requirement, scalability, will impact the DS architecture most. A major decision is what type of technology stack to use to meet the scalability requirement, which to a large extent determines the DS architecture. One option is to use what we will call 'traditional' technologies, which typically include a relational database and one or more application and/or web servers. Such a three-tiered approach is very well understood, as it has been applied in numerous systems over the last decades. An ACID-compliant database (Haerder and Reuter 1983), usually SQL-compatible, is essential in this type of architecture, as the upper layers depend heavily on the transactions typically provided by these database management systems. This type of architecture typically scales well up to a certain point, when the underlying database system becomes too slow. For incoming data this causes back-pressure issues and, as a consequence, could eventually lead to permanent data loss. This typically occurs when the amount of input data is greater than the system can handle for a prolonged period of time. Another consequence is that database latency will be high, which could lead to a potentially unacceptable increase in overall system latency, simply because the data cannot be retrieved in due time. Both situations, back pressure on the input data and high latency in the data throughput, are obviously undesirable.

We could fix such a situation by upgrading the underlying database hardware, which is known as vertical scaling. Vertical scaling only goes as far as what the best hardware has to offer, while hardware costs increase exponentially when squeezing the last bit of performance out of the server hardware. However, there are alternative approaches that could help alleviate the database bottleneck. Probably the first step would be to shard the database, which basically means dividing the database into partitions hosted on different database servers. But there is a high price to pay when sharding a relational database: a lot of the logic behind the sharding has to be handled by the application layer, and ordinary operational tasks such as backups and schema changes become much more difficult. An example of the increased complexity introduced by sharding is the multi-write problem. As data will be distributed over multiple database servers, the application becomes responsible for data integration, meaning it must keep the databases up to date with the correct data. This data integration problem is complex, and race conditions can lead to faulty data that is very hard to detect and correct. In other words, we have lost the benefits of having an ACID-compliant database. Alternatively, we could introduce additional data caches and alternative storages to increase data throughput. However, such an architecture becomes very complex very quickly, and is ultimately very difficult to manage, maintain and understand.

In conclusion, using a 'traditional' three-tier approach has the advantage that the underlying technologies are very well understood and have proven to work well. Nevertheless, at a certain point the underlying database technology will not scale anymore without additional measures, which in turn will quickly lead to an architecture that is very complex, messy and very difficult to maintain.
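The extra application-layer logic that sharding forces on us can be hinted at with a toy routing function; the shard names and hash scheme below are invented for this example:

```python
# Sketch of application-level shard routing for a partitioned relational
# database, illustrating the logic the application layer must take over.
# Shard names and the hashing scheme are invented for this example.

import hashlib

SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(key):
    """Deterministically route a record key to one of the shards."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]
```

Every query path, backup and schema migration now has to be shard-aware, and a write touching keys on different shards is no longer covered by a single ACID transaction.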
An alternative to these 'traditional' technologies are distributed data systems, which are relatively new and received a lot of attention when Google published their paper 'MapReduce: Simplified Data Processing on Large Clusters'. Since then, an explosion of environments has emerged, including many NoSQL databases and numerous variations on the original MapReduce data processing model. What these applications have in common is the way they approach scalability. Rather than relying on more powerful computer hardware, as is typical of vertical scaling, they are built around the concept of horizontal scaling. Horizontal scaling is achieved by adding additional computing resources to a cluster of connected nodes, which allows the nodes in the cluster to work in parallel on the same tasks. The processing and data load is spread amongst the available nodes in the cluster by one or more supervisor nodes. This approach should, theoretically, scale limitlessly as long as additional computing resources are available. Cloud computing fits very nicely into this model, as it provides the means to increase and decrease the number of computing resources in the cluster as needed.
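To make the processing model concrete, the classic word-count example can be expressed in MapReduce style. The sketch below runs in a single process; on a cluster, each phase would be distributed over many nodes:

```python
# Minimal single-process illustration of the MapReduce model: map emits
# key/value pairs, a shuffle groups them by key, and reduce aggregates.
# In a real cluster each phase runs in parallel across many nodes.

from collections import defaultdict

def map_phase(documents):
    """Emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Group the emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Aggregate each key's values into a single result."""
    return {key: sum(values) for key, values in groups.items()}

def word_count(documents):
    return reduce_phase(shuffle(map_phase(documents)))
```

Because the map and reduce functions are pure and per-key, each can be farmed out to as many nodes as are available, which is exactly what makes the model horizontally scalable.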
Distributed data systems, having horizontal scalability in their DNA, are very well suited to processing large amounts of heterogeneous data. However, this does not imply that they are automatically suitable for real-time applications, which typically require low latencies. For example, many MapReduce implementations are rather batch oriented and therefore do not have the low latencies required for near real-time processing of data. We will discuss two different approaches that address this latency problem of batch-oriented distributed data processing frameworks. The first is known as the 'Lambda architecture', which we will discuss next.
#### The Lambda architecture in a nutshell
In 'Big Data: Principles and best practices of scalable realtime data systems' (Marz and Warren 2015), Marz and Warren describe an architecture that they dubbed the 'Lambda architecture'. This architecture not only addresses the issue of meeting low-latency requirements with batch-oriented distributed data processing frameworks such as Hadoop, but also addresses the reliability and flexibility requirements.
This architecture is made up of three distinct layers: a batch layer, a speed layer and a serving layer. The serving layer combines the outcomes of the batch layer and the speed layer into multiple up-to-date views on the input data. Up to date means that the latency of the serving layer is sufficiently low for the data in the views to act as input for real-time systems. Figure 7.2 depicts a high-level overview of the Lambda architecture.
**Fig. 7.2: The Lambda architecture.**
The batch layer uses an immutable master data set as input to re-compute, at regular intervals, the data in the batch layer views. This processing of the data may take minutes or even hours. Clearly the computed batch views are out of date by the time this processing has been completed, as in the meantime new data has been pouring into the master data set. For this reason the architecture also includes the speed layer. The speed layer is responsible for calculating exactly the same views as the batch layer does, with the distinction that the speed layer only processes the input data that has not already been processed by the batch layer. Because the batch layer regularly catches up with the speed layer, the amount of data to be processed by the speed layer at any given moment is fairly limited. This limited data set can easily be processed with sufficiently low latencies. The speed layer can use a variety of sub-architectures, such as micro-batch jobs, micro-batched streams or single-item streams.
Finally, the serving layer is responsible for merging the outcomes of the batch views and the real-time views into up-to-date views on the input data. The Lambda architecture solves two major problems. First, it provides the low latencies required by near real-time applications, while at the same time allowing the use of batch-oriented distributed technologies such as MapReduce to do the majority of the data processing. But perhaps as importantly, the architecture introduces the necessary resilience against faults in the data processing, which could be caused, for example, by changing requirements, modified data formats or programming errors. The key to this resilience is keeping the original input data in an immutable data store. This ensures that no original data is lost and that each view can be recomputed at any time. Updating the programming for both the batch and speed layers with the necessary changes and/or fixes, followed by reprocessing all input data in the master dataset, will return the system to a valid and correct state. This meets our reliability and flexibility requirements, as it allows us to deal with faults and changed requirements.
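The serving layer's merge step can be sketched as follows, assuming for illustration that both the batch view and the speed-layer view are per-key counters (the view shapes are our simplification):

```python
# Sketch of a Lambda-style serving layer: merge a (stale) batch view with
# the speed layer's delta covering events the batch has not yet absorbed.
# Representing both views as per-key counters is a simplifying assumption.

def merge_views(batch_view, speed_view):
    """Combine the batch view with the speed layer's recent increments."""
    merged = dict(batch_view)
    for key, delta in speed_view.items():
        merged[key] = merged.get(key, 0) + delta
    return merged
```

When the batch layer next catches up, the speed layer's delta is unloaded and the cycle repeats with a fresh, larger batch view.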
Although this architecture solves the low-latency demands of our scalability requirement, it also introduces additional complexity. First, we need to synchronize the speed layer with the batch layer at regular intervals, by unloading data from the speed layer once the batch layer views have been updated. Secondly, and more importantly, the speed layer uses a different technology stack from the batch layer, and as a consequence the programming code of the batch layer cannot be reused directly in the speed layer. Having two code bases increases the likelihood of interpretation differences and programming errors, while maintenance efforts are at least doubled because every piece of code has to be programmed twice.
The architecture and technologies used in the speed layer differ depending on whether the real-time views are updated synchronously or asynchronously. If the speed layer views are updated synchronously, the updating process is blocked until all processing has been completed. In most cases this is undesirable, and an asynchronous approach is therefore preferred, in which a stream processor acts as a buffer, avoiding back pressure in the data providers. The data provider continues immediately after the data is queued by the stream processor. This way, peaks and sudden bursts of data can easily be accommodated. There are many stream processing frameworks available, but in combination with big data processing Apache Kafka (Kreps et al. 2011) is a very popular choice. Kafka provides a unified, high-throughput, low-latency platform for handling real-time data feeds. Its persistent multi-subscriber message queue is built as a distributed transaction log. These features make Kafka an appealing choice as the streaming framework for the speed layer.
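The asynchronous hand-off can be illustrated with a bounded in-memory queue standing in for the stream processor. This is a toy sketch; in a real deployment Kafka itself would play this role:

```python
# Sketch of the asynchronous hand-off described above: a bounded queue
# decouples the data provider from processing, so bursts are buffered
# instead of creating back pressure in the provider. Names are illustrative.

import queue
import threading

buffer = queue.Queue(maxsize=10000)  # absorbs sudden bursts of events
results = []

def provider(events):
    """Data provider: returns as soon as each event is queued."""
    for event in events:
        buffer.put(event)
    buffer.put(None)  # sentinel: no more data

def consumer():
    """Stream processor stand-in: drains the buffer at its own pace."""
    while True:
        event = buffer.get()
        if event is None:
            break
        results.append(event * 2)  # placeholder for real stream processing

worker = threading.Thread(target=consumer)
worker.start()
provider(range(5))
worker.join()
```

The provider never waits on the processing itself, only on free space in the buffer, which is exactly the decoupling the stream processor provides.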
Interestingly, it is the main architect of Kafka, Jay Kreps, who questions the Lambda architecture (Kreps 2014) and proposes an alternative architecture exploiting the unique properties of Kafka while maintaining the resilience offered by the Lambda architecture.
#### The Kappa architecture in a nutshell
Jay Kreps argues in 'I Heart Logs' (Kreps 2014) that streaming microservices using Kafka's distributed persistent message bus could replace the batch layer of the Lambda architecture. By doing so, one of the main drawbacks of the Lambda architecture, the need to maintain two different application environments for the batch and speed layers, can be overcome. This approach is dubbed the 'Kappa architecture', with an obvious wink to the 'Lambda architecture'. Kreps recognizes that one of the strong points of the Lambda architecture is its resilience in coping with changes and bugs by exploiting its immutable master data set. The proposed 'Kappa' architecture also provides this resilience, albeit in a slightly different and more implicit fashion, by using Kafka's unique persistent multi-subscriber message streams.
**Fig. 7.3: The 'Kappa' architecture.**
Figure 7.3 depicts the 'Kappa architecture' based on Kafka. It is immediately obvious that the batch layer has disappeared in this architecture. A stream processing framework converts all input data, persisted in Kafka input topics, into the required views. This approach very much resembles a speed layer of the Lambda architecture tuned for asynchronous data processing. However, in the case of the Kappa architecture all input data is processed by the stream infrastructure, not only the most recent data, as is the case with the Lambda architecture.
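The persistence behind these Kafka input topics can be mimicked with a toy log-plus-offsets model (this is not Kafka's actual API); resetting a subscriber's offset is what later enables full reprocessing:

```python
# Toy model of a Kafka-style topic log with independent subscriber offsets.
# Not Kafka's actual API, only an illustration of the underlying idea.

class TopicLog:
    def __init__(self):
        self._messages = []  # immutable, append-only log of messages
        self._offsets = {}   # subscriber name -> next index to read

    def append(self, message):
        self._messages.append(message)

    def poll(self, subscriber):
        """Return all unread messages for this subscriber and advance it."""
        start = self._offsets.get(subscriber, 0)
        unread = self._messages[start:]
        self._offsets[subscriber] = len(self._messages)
        return unread

    def reset(self, subscriber):
        """Rewind the subscriber so the full log is replayed from the start."""
        self._offsets[subscriber] = 0
```

Because every subscriber keeps its own offset into the same immutable log, 'reprocessing' amounts to nothing more than resetting an offset and replaying.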
But how does this architecture achieve the resilience of the Lambda architecture? To answer this question, we have to look a little closer at the Kafka architecture. Kafka is a distributed messaging system, a real-time stream processor, and a
distributed data store in one closely integrated package. Kafka retains messages,
by topic, as an immutable log. The retention period can be configured per topic and
may be indefinite. Each topic can have multiple independent subscribers, meaning
that each subscriber receives all messages of the topic. Each subscriber maintains a pointer to the last read message, which is simply the index of the last message processed by that subscriber. The collection of immutable topic logs very
much resembles the immutable master data set of the Lambda architecture. So if
we must recalculate our output views as a result of programming errors or
emerging requirements, we can feed the complete topic log again to the stream
processing system by simply resetting the last-read index of the relevant topic subscribers. While this reprocessing is taking place, which may take many hours, the
system would be producing out-of-date, albeit correct, data. Depending on the type
of defect being fixed, it could be preferable to serve more up-to-date, but less correct, data as long as the reprocessing has not yet caught up. It therefore
makes sense not to overwrite the existing output views right away, but instead to rename the updated stream processes and the resulting output views by adding a
version number to them. This way the old views and the new, corrected views coexist for a period of time. Once the new streams are up to date, the consumers of
the old views can be configured to start using the latest versions of the output
views containing the corrected data. Because both versions of the stream processes
and the resulting views are constantly being updated with the latest input, there is no
immediate pressure to switch all consumers simultaneously, which is essential in
real-life situations where centralized release management of various sub-systems
is at best undesirable and more likely unrealistic. Once all consumers have been
adapted and configured to use the latest versions of the streams and views, we can
delete the old version with its corresponding data and thereby free the computing resources used. This way the Kappa architecture achieves a resilience
against erroneous data and programming bugs similar to that of the Lambda architecture. Hence,
the Kappa architecture also meets the reliability and flexibility criteria of DS.
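The replay mechanism described above can be modelled in a few lines. The following sketch is a toy stand-in, not actual Kafka client code: a topic is an immutable in-memory log, a subscriber is just an offset into that log, and resetting the offset of a freshly versioned subscriber rebuilds its view from the complete history while the old view keeps serving.

```python
# Minimal model of Kafka-style replay: an immutable topic log plus
# per-subscriber offsets. Resetting an offset replays the whole log.
class TopicLog:
    def __init__(self):
        self.messages = []   # append-only, never mutated in place
        self.offsets = {}    # subscriber name -> index of next message

    def append(self, msg):
        self.messages.append(msg)

    def poll(self, subscriber):
        """Yield all messages the subscriber has not yet processed."""
        start = self.offsets.get(subscriber, 0)
        for i in range(start, len(self.messages)):
            yield self.messages[i]
        self.offsets[subscriber] = len(self.messages)

    def reset(self, subscriber):
        """Rewind the subscriber to the beginning of the log."""
        self.offsets[subscriber] = 0


def build_view(topic, subscriber, transform):
    """(Re)compute an output view by streaming the topic through transform."""
    view = {}
    for msg in topic.poll(subscriber):
        key, value = transform(msg)
        view[key] = value
    return view


topic = TopicLog()
for event in [("user1", 3), ("user2", 5), ("user1", 7)]:
    topic.append(event)

# v1 of the view contains a bug: it discards the value entirely.
view_v1 = build_view(topic, "view_v1", lambda m: (m[0], 0))

# Fix the bug, reset the offset of a *new* versioned subscriber, and
# rebuild the view from the complete log; view_v1 keeps serving meanwhile.
topic.reset("view_v2")
view_v2 = build_view(topic, "view_v2", lambda m: (m[0], m[1]))
print(view_v2)   # latest value per key: {'user1': 7, 'user2': 5}
```

Once `view_v2` has caught up, consumers can be switched over one by one and `view_v1` deleted, exactly as described above.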
In the previous section we did not address another major difference between
the two architectures, which has to do with scalability. Although both architectures
can use a distributed message broker such as Kafka, the scalability demands on
this message broker are very different in the Lambda architecture compared to the
Kappa architecture. The Lambda architecture has a message broker in the speed
layer, if it has one at all. This speed layer only processes data not yet processed by
the batch layer, and therefore the required low latency is relatively easily achieved
when compared to the Kappa architecture, where the message broker is responsible
for processing all incoming data. In other words, the Kappa architecture depends
much more on the scalability of the message broker than the Lambda architecture does. Is Kafka up to this task? Because Kafka is a distributed message broker,
it allows horizontal scaling by adding additional nodes to the cluster. Kafka is also a persistent message broker: the message streams are persisted
as partitioned, append-only logs on the brokers’ local disks, and this storage
scales horizontally along with the cluster. In fact, the developers of
Kafka claim that a properly configured Kafka cluster is capable of handling millions of messages per second
with very low latencies. This should be
ample to meet the scalability requirement of DS. This leaves the availability requirement, which we will discuss next.
Kafka addresses the availability requirement by introducing a failover mechanism for each topic in the Kafka cluster. A Kafka topic is split into one or more
partitions, and each partition is responsible for processing a shard of the total message stream. The distribution is determined by the hash value of a unique message
key. The partitions themselves are distributed as evenly as possible over the available Kafka nodes in the cluster. Each partition is replicated across a configurable
number of Kafka nodes for fault tolerance, and each partition has one node that
acts as the ‘leader’ and zero or more nodes that act as ‘followers’. The leader
handles all read and write requests for the partition, while the followers passively
replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each node acts as a leader for some of its partitions and as a
follower for others, so the load and risks are well balanced within the cluster, guaranteeing the availability of the services provided by the cluster should one or more
nodes fail. In the case of a catastrophic failure where none of the replicas
survive, there are two alternative recovery scenarios: either wait for a
synchronized replica to come back to life and choose it as the leader, or
choose the first replica that comes back to life as the leader, even though it is
not necessarily fully synchronized. This is a trade-off between availability and reliability. Kafka can be configured either way, but by default reliability is sacrificed
in favor of availability.
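The sharding and failover behaviour described above can be illustrated with a small simulation. This is a conceptual model only: Kafka’s default partitioner hashes the message key with murmur2, for which `crc32` stands in here to keep the example deterministic, and real leader election is coordinated by the cluster rather than by the simple data structure below.

```python
import zlib

NUM_PARTITIONS = 3

def partition_for(key: str) -> int:
    # Stand-in for Kafka's key-hash partitioning: same key, same partition.
    return zlib.crc32(key.encode()) % NUM_PARTITIONS

class Partition:
    """A partition with an ordered replica list; the head acts as leader."""
    def __init__(self, replicas):
        self.replicas = list(replicas)

    @property
    def leader(self):
        return self.replicas[0]

    def fail(self, node):
        # Drop the failed node; the next in-sync follower takes over.
        self.replicas = [r for r in self.replicas if r != node]

# Three partitions replicated over three nodes, leadership spread evenly.
partitions = {
    0: Partition(["node-a", "node-b"]),
    1: Partition(["node-b", "node-c"]),
    2: Partition(["node-c", "node-a"]),
}

# Identical keys always land in the same partition, preserving per-key order.
assert partition_for("student-42") == partition_for("student-42")

assert partitions[1].leader == "node-b"
partitions[1].fail("node-b")             # node-b crashes ...
assert partitions[1].leader == "node-c"  # ... its follower is promoted
```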
So, when properly configured, we may conclude that Kafka also meets the availability requirement of DS and thereby meets all four requirements. This, combined
with the advantage of the reduced complexity of a single technology stack,
makes it an appealing choice for DS. However, the message broker is only one,
albeit very important, part of the overall Kappa architecture. The stream processing
system is the other part, and it must meet the scalability, availability, flexibility, and
reliability requirements as well.
#### The stream processing system
We have not paid much attention to the stream processing system so far, but it is an
essential component of the Kappa architecture. The stream processing system is
built around so-called microservices, which are responsible for small parts of
the transformation of the data, very similar to pipelines known from Unix
(Kleppmann and Kreps 2015). There are various implementations of these stream
processing frameworks, such as Apache Storm, Apache Samza, Spark Streaming, and,
more recently, Kafka Streams (KS). Having a native stream processing framework
integrated into Kafka makes an interesting proposition for DS, as this reduces the
learning curve and ensures optimal integration. Next we will have a more detailed
look at KS and review how it meets our requirements.
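The Unix-pipeline analogy of Kleppmann and Kreps can be made concrete with a chain of Python generators, a toy stand-in for stream processors wired together through topics (the log format and course names are invented for the illustration):

```python
def parse(lines):
    """First stage: turn raw log lines into (student, course) events."""
    for line in lines:
        student, course = line.split(",")
        yield student.strip(), course.strip()

def filter_course(events, course):
    """Second stage: keep only events for one course."""
    for student, c in events:
        if c == course:
            yield student

def count(stream):
    """Final stage: a simple stateful aggregation."""
    totals = {}
    for student in stream:
        totals[student] = totals.get(student, 0) + 1
    return totals

raw = ["ann, math", "bob, math", "ann, math", "bob, history"]
# Equivalent in spirit to: cat log | parse | grep math | count
result = count(filter_course(parse(raw), "math"))
print(result)  # {'ann': 2, 'bob': 1}
```

Each stage consumes one stream and produces another, which is exactly how microservices communicate through Kafka topics.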
Kafka stream processing applications are ordinary Java applications that can be
run anywhere without any special requirements. For packaging and deployment,
KS relies on external specialized tools such as Puppet, Docker, Mesos, Kubernetes,
or even YARN. So KS does not rely on a proprietary deployment manager. From
a deployment perspective, a Kafka Streams application is just another service that may have
some local state on disk, which is just a cache that can be recreated at any time if
it is lost or if the streaming application is moved to another node. Kafka will partition and balance the load over the running instances of the streaming application.
This partitioning is what enables data locality, scalability, high performance, and
fault tolerance.
So KS meets the scalability and availability requirements of DS, given that it has
been properly configured. How does KS meet our reliability and flexibility requirements? To answer this question, we must have a closer look at a concept
known as ‘stream-table duality’. We have seen that Kafka treats messages as an
immutable changelog. This changelog would therefore only keep growing, which
could become problematic. To keep the changelog manageable, Kafka has a feature called log compaction. Log compaction determines the most recent version of
a changelog entry for every key and discards all other changelog entries for that
key. The compacted changelog can effectively be regarded as a traditional state
table. KS uses this duality to the fullest by interpreting a stream
as the changelog of a table, and a table as a snapshot of a stream.
**Stream as Table:** A stream can be considered a changelog of a table, where
each data record in the stream captures a state change of the table. A stream is thus
a table in disguise, and it can easily be turned into a ‘real’ table by replaying the
changelog from beginning to end to reconstruct the table.
**Table as Stream:** A table can be considered a snapshot, at a point in time, of
the latest value for each key in a stream. A table is thus a stream in disguise, and it
can easily be turned into a ‘real’ stream by iterating over each key-value entry in
the table.
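Both directions of the duality, and the log compaction that links them, fit in a short sketch. This is a conceptual model only; Kafka performs compaction in the background, per partition:

```python
def compact(changelog):
    """Log compaction: keep only the latest entry per key, preserving order."""
    latest = {k: v for k, v in changelog}   # last write wins
    seen = set()
    compacted = []
    for k, _ in reversed(changelog):
        if k not in seen:
            seen.add(k)
            compacted.append((k, latest[k]))
    return list(reversed(compacted))

def stream_to_table(changelog):
    """Stream as table: replay the changelog from beginning to end."""
    table = {}
    for key, value in changelog:
        table[key] = value
    return table

def table_to_stream(table):
    """Table as stream: iterate over each key-value entry."""
    return list(table.items())

changelog = [("ann", "enrolled"), ("bob", "enrolled"), ("ann", "graduated")]
table = stream_to_table(changelog)
assert table == {"ann": "graduated", "bob": "enrolled"}
# Compaction and full replay yield exactly the same table state.
assert stream_to_table(compact(changelog)) == table
# And the table can be turned back into a (compacted) stream.
assert sorted(table_to_stream(table)) == sorted(compact(changelog))
```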
Because of this duality, the Kafka message broker can be used to replicate the local state stores across nodes in the cluster for fault tolerance. It also provides a
mechanism to correct mistakes, as the streaming applications maintain an index to the last processed changelog entry. Recalculating results is a matter of deleting some intermediate topics and resetting the corresponding indexes. The
framework will handle the rest automatically, and after the time it takes to catch
up, the results will be up to date again. So, not surprisingly, KS fits
well in the Kappa architecture and meets the reliability and flexibility requirements of DS.
#### Cold Start Problem, CDC to the rescue
Now that we have determined a basic architecture and corresponding implementation
framework for DS that meets our global requirements, we turn to something
we will call the cold start problem. The cold start problem refers to the initial lack of
data that can be fed directly into DS. In an ideal world, all of OUNL’s source systems would be extended with triggers, event listeners, and so forth that would provide DS with all event data from these systems. However, this is not very realistic,
as it would require a tremendous effort. More realistically, the required modifications will be implemented as these source systems develop over a prolonged period of time, a process that could take years to complete. How can we survive
this data drought in the meantime?
The most practical and least invasive approach is to develop applications that
monitor changes in the databases of the source systems, in effect creating
a simulated changelog on these databases. The advantage of this approach is that
the source systems do not have to be modified at all, while some of the most
relevant data becomes available for DS straight away with a minimum of effort.
This approach is known as Change Data Capture (CDC).
How we monitor database changes depends very much on the database
technologies available and the characteristics of the data involved. For example, some database management systems have out-of-the-box support for an actual changelog,
which is also used for replicating the databases for backup purposes. In these cases,
developing a proprietary change listener that feeds directly into DS is a realistic approach. If the database systems used do not support changelogs, other scenarios are possible as well. If data is not very volatile and is relatively limited in size,
such as student course registrations, it is possible to create a batch job
that determines the delta of the table values on a daily basis and sends its results to
DS.
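Such a daily delta job boils down to a set comparison between two snapshots of the same table. A minimal sketch, assuming a hypothetical table keyed on an `id` column:

```python
def table_delta(old_rows, new_rows, key="id"):
    """Compare two snapshots of a table and emit CDC-style change events."""
    old = {r[key]: r for r in old_rows}
    new = {r[key]: r for r in new_rows}
    events = []
    for k in new:
        if k not in old:
            events.append(("insert", new[k]))
        elif new[k] != old[k]:
            events.append(("update", new[k]))
    for k in old:
        if k not in new:
            events.append(("delete", old[k]))
    return events

yesterday = [{"id": 1, "course": "math"}, {"id": 2, "course": "law"}]
today     = [{"id": 1, "course": "stats"}, {"id": 3, "course": "ai"}]
print(table_delta(yesterday, today))
# [('update', {'id': 1, 'course': 'stats'}),
#  ('insert', {'id': 3, 'course': 'ai'}),
#  ('delete', {'id': 2, 'course': 'law'})]
```

The resulting change events can then be published to a Kafka topic, just as a native changelog listener would.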
Obviously, CDC cannot capture data that is not stored in any of the databases,
so this approach will inevitably miss relevant data. Besides implementing
CDC, effort must therefore also go towards capturing event data in the various systems themselves.
However, by establishing a basic DS infrastructure solely based on CDC data, we
can showcase DS and make a more informed case for the importance of
changing the various source systems to capture the missing data.
The Confluent platform extends Kafka with a number of very useful additions,
among which is a framework for implementing our CDC requirements,
called Kafka Connect (KC). KC defines two basic interfaces: source connectors,
which are producers that feed Kafka with new data, and sink connectors, which are
consumers that export data from Kafka to various other formats and systems. With
this framework it is possible to develop proprietary connectors. However, the
Confluent platform also ships with a number of standard connectors, among which are a
JDBC source and sink connector. These KC connectors can be configured to run
in stand-alone or in distributed mode. Distributed mode obviously targets
scalability and availability; whether these are required depends very much on
the characteristics of the data, such as volume and volatility. DS will make use of
these connectors to overcome the cold start problem by implementing a CDC solution for some of OUNL’s most essential source systems.
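As an illustration, a JDBC source connector for, say, the course registration table could be configured roughly as follows. The property keys are those of Confluent’s JDBC source connector; the connector name, connection URL, table, and column names are placeholders:

```json
{
  "name": "spil-registrations-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://spil-db:5432/spil",
    "table.whitelist": "course_registrations",
    "mode": "timestamp+incrementing",
    "incrementing.column.name": "id",
    "timestamp.column.name": "modified_at",
    "topic.prefix": "spil-",
    "poll.interval.ms": "60000"
  }
}
```

In `timestamp+incrementing` mode the connector polls the table periodically and publishes new and changed rows to the `spil-course_registrations` topic, which is exactly the simulated-changelog behaviour described above.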
#### Data Formats and schemas
The format and semantics of data will change over time as systems continue to
develop. This is a major challenge for any data transformation process, and therefore also for DS. Semantic changes can be very hard to track, and failure to do so
can lead to erroneous and unpredictable results in downstream consumers. Unfortunately, besides very tight change management procedures, there is little that
technology can offer to overcome this situation. However, there
are some solutions that help to keep track of changes in the data formats used.
Various standards have evolved that allow the formal definition of data structures in a programming-language-independent manner. Until recent years, XML,
and more specifically XML DTDs and XML schemas, were the representations
of choice. More recently, JSON has become very popular and is replacing XML as
the format of choice. While XML schemas and DTDs allow the formal definition
of data structures, JSON offers no possibility to define data structures
out of the box. Furthermore, both formats are very verbose and therefore not very
suitable for processing and streaming large amounts of data. To overcome this
issue, several language- and format-independent serialization frameworks have
emerged. Probably the best known are Apache AVRO, Apache Thrift, and
Protocol Buffers. These frameworks provide ways to compact rich data structures
into an efficient binary format and describe these data structures by some sort of
schema. Schemas not only play an important role in the definition of the data
structures, but also in their evolution. When applications
evolve, the data structures change, and thereby the schemas must evolve as well.
Merely detecting that data structures have changed is useful in itself, as it can trigger an alert that producers and consumers are no longer compatible. However,
by designing these schemas cleverly, we can achieve compatibility between older
and newer versions of these data structures. Schemas can be backward compatible,
meaning that consumers using the latest version of the schema can process data from producers using an older version. This can, for example, be achieved by defining default values for data elements that are added in the new version of the
schema. Forward compatibility is achieved when a consumer using an older schema version can still process data from a producer that uses a newer schema version. This can be achieved by simply ignoring data elements introduced by the
newer schema. Forward compatibility is very important when data is changed upstream and the downstream consumers cannot be updated simultaneously. It
helps to avoid the need for a big-bang release of the entire stack of
stream processing applications. In addition, schemas can be both forward and
backward compatible at the same time, which is obviously the most flexible situation. Figure 7.4 depicts the four cases of producer and consumer compatibility, or
the lack of it.
**Fig. 7.4: Schema evolution and compatibility**
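The two compatibility directions can be mimicked with plain dictionaries. This is a strong simplification of AVRO’s reader/writer schema resolution, with a schema modelled as a mapping from field name to default value (`None` meaning required):

```python
def read(record, schema):
    """Project a record onto a reader schema: ignore unknown fields
    (forward compatibility) and fill defaults for missing ones
    (backward compatibility). A simplified stand-in for AVRO's
    schema resolution rules."""
    out = {}
    for field, default in schema.items():
        if field in record:
            out[field] = record[field]   # field known to both sides
        elif default is not None:
            out[field] = default         # new field, filled from default
        else:
            raise ValueError(f"missing required field: {field}")
    return out

schema_v1 = {"student": None}                       # required field only
schema_v2 = {"student": None, "campus": "online"}   # added field + default

old_record = {"student": "ann"}                     # written with v1
new_record = {"student": "bob", "campus": "venlo"}  # written with v2

# Backward compatible: a v2 consumer reads a v1 record via the default.
assert read(old_record, schema_v2) == {"student": "ann", "campus": "online"}
# Forward compatible: a v1 consumer ignores the field added in v2.
assert read(new_record, schema_v1) == {"student": "bob"}
```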
Kafka does not support any of the aforementioned serialization frameworks out
of the box. However, Kafka does support some basic stream serializers and deserializers (SERDEs), which can be extended. The Confluent platform extends
Kafka’s standard SERDEs with an Apache AVRO SERDE. In addition, the Confluent platform provides a schema registry that allows the versioned storage
of AVRO schemas. This allows the efficient serialization and deserialization of
message data into the appropriate formats, while also guaranteeing data compatibility between producer and consumer. Incompatible data automatically triggers an
error.
Schema compatibility, and more specifically forward schema compatibility, is an essential component in satisfying our flexibility and reliability requirements. The data
structures in the source systems will evolve over time, and the downstream processing applications should nevertheless be able to keep performing their tasks correctly. This allows for a gradual upgrade of the downstream applications, enabling
them to start benefiting from the new schema.
#### Data Sponge Architecture
In the previous sections we discussed the general requirements DS has to meet
concerning scalability, availability, reliability, and flexibility. We saw that distributed data systems can overcome the scalability issues of more traditional multi-tier
systems. The low-latency issue, a scalability requirement for near-real-time systems, can be overcome by incorporating a distributed streaming server into our architecture. We reviewed two architectural approaches to overcome the low-latency
issue and concluded that the Kappa architecture using a Kafka-only solution will
meet our DS requirements. We argued that sticking to a single-framework solution
is enticing, as it reduces the learning curve and simplifies operations. We also concluded that DS is facing a cold start problem, and that it is not realistic to expect
OUNL systems to be adapted in the short term so they feed their data into DS.
CDC using data connectors can help overcome this cold start problem in a fairly
elegant manner. Finally, we reviewed schemas, schema evolution, and compatibility as a means to guarantee data correctness for producers and consumers.
For the first implementation of DS, we restrict ourselves to integrating only the most crucial of OUNL’s source systems. This first implementation
will act as a proof of concept, serving as a technical validator and pioneering platform on the one hand, and as a means of generating awareness of the importance of
data science within OUNL on the other.
**Fig. 7.5: the Data Sponge Architecture**
Figure 7.5 depicts the resulting DS architecture. The architecture is divided into
two distinct layers. The first layer contains the CDC infrastructure, which uses
Kafka Connect to keep track of changes in three source systems of OUNL:
- Student Administration: the administrative system of OUNL, known as SPIL, is
the source for student enrollments, course registrations, and student grades;
- yOUlearn: OUNL’s proprietary LMS, which handles all in-course processes and interactions between tutors and students;
- IDM: OUNL’s identity management system, which provides all users with a single
identity across the various OUNL subsystems. It also incorporates an access manager handling log-in and log-out at OUNL.
Integration of these three systems should provide DS with a first solid data set
that can be used for some interesting analyses. At a later stage, other systems can
be included in the CDC layer as well. The connectors will be hosted by OUNL itself, as the hardware required for running them is fairly limited and readily
available. Another part of the first layer handles user data stemming from external systems and devices, such as social networks and wearables. These systems
will be connected through their own proprietary connectors. Although these external
systems are important, they are out of scope for the first implementation iteration of DS.
The second layer of the architecture is formed by the stream processing framework, at the core of which is Kafka with some of the Confluent extensions. The Kafka
messaging component is the hub via which all other components communicate.
The Kafka message broker cluster is extended with a cluster of nodes that run the
Kafka stream processing jobs. Both clusters will be hosted by SURFsara as part of
their Big Data Services. An AVRO schema registry acts as schema service for the
various data formats used. After the necessary processing of the incoming data,
the results are exported to views that act as inputs for the smart services. These
views are referred to as ‘materialized views’ because they contain data from several sources, combined into a denormalized data store. A materialized
view might also contain aggregates or data stemming from some business logic.
The consumer of a materialized view determines which data should be available,
and the stream processing framework is responsible for a continuous, low-latency
delivery of these data to that view. A special materialized view is the
event store, which captures all input events in a standardized data
format, not necessarily the original format of the data. This event store can
act as input for the event streams in case of a cataclysmic failure of the total system.
In theory, we should be able to rebuild all materialized views based on this event
store.
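The role of the event store as recovery anchor can be phrased as follows: every materialized view is a fold over the standardized event log, so rebuilding a view is just replaying that log. A sketch with hypothetical event shapes:

```python
# Hypothetical standardized events, as they would sit in the event store.
event_store = [
    {"type": "enrollment", "student": "ann", "course": "math"},
    {"type": "grade", "student": "ann", "course": "math", "grade": 8},
    {"type": "enrollment", "student": "bob", "course": "math"},
]

def rebuild_enrollment_view(events):
    """Materialized view: number of enrolled students per course.
    It can be recreated at any time by replaying the event store."""
    view = {}
    for e in events:
        if e["type"] == "enrollment":
            view[e["course"]] = view.get(e["course"], 0) + 1
    return view

assert rebuild_enrollment_view(event_store) == {"math": 2}
```

After a cataclysmic failure, re-running every view-building fold over the surviving event store restores the second layer’s output state.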
#### Next steps
The proposed DS architecture is the result of a journey investigating various solutions for establishing an enterprise-level version of the data ‘truth’ for various target groups at OUNL. Practical experience so far is limited to a set of prototypes
that have shown the feasibility of various platforms. In this chapter we have presented the background and motivations for the proposed DS architecture. A prototype has been built that connects to a copy of the yOUlearn database via the
standard JDBC source connector. The resulting input stream has been processed
by a stream processing service that performs some very basic joins and counts. However, the proof of the pudding is in the eating. We are in the process of launching a
Kafka/Confluent cluster on the SURFsara big data infrastructure. The first streaming applications will process some basic data from OUNL’s source systems via
Kafka connectors, similar to the prototype, and will produce some basic materialized views. We intend to use the data from the materialized views to construct an
appealing infographic of all learning and teaching activities that are happening at
OUNL. This graphic will be projected on OUNL’s information screens, present
in several buildings, for all passing staff, students, and visitors to see. This serves a
twofold purpose. Firstly, for the first time in OUNL’s history, it will provide a
feeling of activity on the OUNL campus, which otherwise is a somewhat desolate environment characterized by a total lack of students. Remember that OUNL is a distance teaching university and students do not reside on campus. The secondary
goal is raising awareness of the importance and relevance of the DS project within
OUNL itself.
Real-life experience will tell whether the proposed architecture is up to the task, or
whether new insights will lead to adaptations. The whole data science field is still
very much in turmoil at the moment, as generally accepted practices are only just starting
to fall into place. Time will tell.
#### References
Di Mitri, D., Scheffel, M., Drachsler, H., Börner, D., Ternier, S., & Specht, M.
(2016). Learning Pulse: Using Wearable Biosensors and Learning Analytics
to Investigate and Predict Learning Success in Self-regulated Learning. In
_Proceedings of the First International Workshop on Learning Analytics_
_Across Physical and Digital Spaces, (pp. 34–39). CEUR._
Engelbart, D., & English, W. (1968). A research center for augmenting human
intellect. _Proceedings_ _of_ _the_ _December_ _9-11,_ _1968,._
http://dl.acm.org/citation.cfm?id=1476645. Accessed 4 May 2017
Gantz, J., & Reinsel, D. (2012). The digital universe in 2020: Big data, bigger
digital shadows, and biggest growth in the far east. IDC iView: IDC Analyze
_the future. https://www.emc-technology.com/collateral/analyst-reports/idc-_
the-digital-universe-in-2020.pdf. Accessed 4 May 2017
Haerder, T., & Reuter, A. (1983). Principles of transaction-oriented database
recovery. _ACM_ _Computing_ _Surveys_ _(CSUR)._
http://dl.acm.org/citation.cfm?id=291. Accessed 4 May 2017
Dean, J., & Ghemawat, S. (2008). MapReduce: simplified data processing on large
clusters. Communications of the ACM.
Kleppmann, M., & Kreps, J. (2015). Kafka, Samza and the Unix Philosophy of
Distributed Data. _IEEE_ _Data_ _Engineering_ _Bulletin,_ 1–11.
https://www.cl.cam.ac.uk/research/dtg/www/files/publications/public/mk42
8/streamproc.pdf
Koper, R. (2014). Towards a more effective model for distance education. _eleed,_
(10). https://eleed.campussource.de/archive/10/4010
Kreps, J. (2014). Questioning the Lambda Architecture. _O’Reilly._
https://www.oreilly.com/ideas/questioning-the-lambda-architecture
Kreps, J. (2014). _I Heart Logs: Event Data, Stream Processing, and Data_
_Integration._ O’Reilly Media.
https://books.google.nl/books?hl=en&lr=&id=gdiYBAAAQBAJ&oi=fnd&p
g=PR3&dq=I+heart+logs,+I+Heart+Logs,+Event+Data,+Stream+Processin
g,+and+Data+Integration&ots=3wV748ShbL&sig=-GnFj2Rq7vuy1hBamtw3NF0izo. Accessed 4 May 2017
Kreps, J., Narkhede, N., & Rao, J. (2011). Kafka: A distributed messaging system
for log processing. _Proceedings_ _of_ _the_ _NetDB._
http://people.csail.mit.edu/matei/courses/2015/6.S897/readings/kafka.pdf.
Accessed 4 May 2017
Marz, N., & Warren, J. (2015). _Big Data: Principles and best practices of scalable_
_realtime data systems (1st ed.). Greenwich: Manning Publications Co._
http://dl.acm.org/citation.cfm?id=2717065. Accessed 4 May 2017
Rosenblatt, F. (1958). The perceptron: A probabilistic model for information
storage and organization in the brain. _Psychological_ _review._
http://psycnet.apa.org/journals/rev/65/6/386/. Accessed 4 May 2017
Schlussmans, K., Van den Munckhof, R., & Nielissen, R. (2016). Active online
-----
21
education: a new educational approach at the Open University of the
Netherlands. In _The Online, Open and Flexible Higher Education_
_Conference (pp. 19–21). Rome._
SURFsara. (2017). Big Data Services. https://www.surf.nl/en/services-and
products/big-data-services/index.html. Accessed 4 May 2017
Vogten, H., & Koper, R. (2014). Towards a new generation of Learning
Management Systems. In _Proceedings of the 6th International Conference_
_on Computer Supported Education (Vol. 1, pp. 513–519). Barcelona:_
CSEDU. doi:10.5220/0004955805140519
# smart cities
_Article_
## BlendSPS: A BLockchain-ENabled Decentralized Smart Public Safety System
**Ronghua Xu** **, Seyed Yahya Nikouei, Deeraj Nagothu, Alem Fitwi** **and Yu Chen ***
Dept. of Electrical and Computer Engineering, Binghamton University, SUNY, Binghamton, NY 13905, USA;
rxu22@binghamton.edu (R.X.); snikoue1@binghamton.edu (S.Y.N.); dnagoth1@binghamton.edu (D.N.);
afitwi1@binghamton.edu (A.F.)
*** Correspondence: ychen@binghamton.edu; Tel.: +1-607-777-6133**
Received: 23 July 2020; Accepted: 27 August 2020; Published: 1 September 2020
**Abstract:** Due to the recent advancements in the Internet of Things (IoT) and Edge-Fog-Cloud
Computing technologies, the Smart Public Safety (SPS) system has become a more realistic
solution for seamless public safety services that are enabled by integrating machine learning
(ML) into heterogeneous edge computing networks. While SPS facilitates convenient exchanges
of surveillance data streams among device owners and third-party applications, the existing
monolithic service-oriented architecture (SOA) is unable to provide scalable and extensible services
in a large-scale heterogeneous network environment. Moreover, traditional security solutions rely on
a centralized trusted third-party authority, which not only can be a performance bottleneck or the
single point of failure, but it also incurs privacy concerns on improperly use of private information.
Inspired by blockchain and microservices technologies, this paper proposed a BLockchain-ENabled
Decentralized Smart Public Safety (BlendSPS) system. Leveraging the hybrid blockchain fabric,
a microservices based security mechanism is implemented to enable decentralized security
architecture, and it supports immutability, auditability, and traceability for secure data sharing
and operations among participants of the SPS system. An extensive experimental study verified the
feasibility of the proposed BlendSPS that possesses security and privacy proprieties with limited
overhead on IoT based edge networks.
**Keywords: Smart Public Safety (SPS); microservices; blockchain; smart contract; security; Internet of**
Things (IoT); Proof-of-Work (PoW); Byzantine Fault Tolerant (BFT)
**1. Introduction**
Advancements in artificial intelligence (AI) and Internet of Things (IoT) technology make the concept of Smart Cities realistic, and IoT-based smart applications have greatly improved citizens' quality of life and helped build safe and sustainable urban environments. However, the wide adoption of the IoT in smart communities and smart cities brings multiple new concerns. Resource-constrained IoT devices need a lightweight application mechanism to perform service tasks, while the distributed and heterogeneous network requires a scalable and flexible system infrastructure to support complicated and cooperative operations among participants in smart cities. To enhance the adoption of the IoT in smart communities and smart cities, researchers have been looking into lightweight IoT-based solutions that provide seamless services through integrating heterogeneous computing devices and different types of networks [1,2].
Considered among the top concerns in the development of smart cities, smart public safety (SPS) facilitates the easy exchange of surveillance data streams among data owners and third-party service providers. However, it also brings new challenges in architecture, performance, and security. The SPS relies on a distributed network environment consisting of a large number of IoT devices,
_Smart Cities 2020, 3_ 929
and all participants use their own domain-independent platforms with high heterogeneity, dynamics, and non-standard development technologies. Therefore, the system architecture should be scalable, flexible, and efficient to support fast development and easy deployment among participants [3,4]. In addition, to meet the requirements of instant decision making with high accuracy, geographically scattered edge devices collect data locally and share data among service providers of different domains. However, conventional security and management solutions utilize a centralized architecture, which can be a performance bottleneck and is susceptible to a single point of failure. Additionally, SPS combines online video stream data from cameras with data from offline sources to perform smart surveillance tasks. Thus, the data in use should be consistent, unaltered, and auditable throughout its entire lifetime. Given a trustless distributed IoT network environment, an ideal SPS framework should be able to support decentralization, immutability, and auditability to ensure security and privacy-preserving data sharing and service operations.
Blockchain, a distributed ledger technology (DLT) that evolved from Bitcoin [5], has been widely recognized for its great potential to revolutionize the fundamentals of information and communication technology. Blockchain is a natural candidate to enable the decentralized architecture for SPS, in which data can be securely stored and distributively verified over a peer-to-peer (P2P) network without relying on a centralized trust authority. Such a decentralized architecture provides a prospective option to improve system performance and mitigate the single point of failure issues existing in a centralized architecture. In addition, leveraging consensus protocols and public distributed ledgers, Blockchain provides a verifiable, traceable, and append-only chained data structure of transactions. Furthermore, integrating Blockchain into the SPS framework not only establishes trust connections among participants, but also guarantees immutability and auditability to ensure data availability, correctness, and provenance.
In this paper, a BLockchain-ENabled Decentralized Smart Public Safety (BlendSPS) system is proposed, which is able to support decentralized, efficient, and secure information sharing and service operations in SPS scenarios. Leveraging the advanced features of the microservices architecture, such as fine granularity and loose coupling, BlendSPS decouples the functionality into multiple containerized microservices. Those computationally affordable microservices can be deployed and run on resource-constrained IoT devices with limited overhead. A hybrid blockchain fabric is designed as a fundamental security infrastructure to enable a decentralized architecture and to support immutability, auditability, and traceability for data sharing in a trustless distributed IoT network environment.
The major contributions of this paper are as follows.
(1) A novel security-by-design system architecture named BlendSPS is proposed. Given the threat
models and security goals in SPS, a comprehensive description of the architecture is presented
and the underlying rationales are explained.

(2) To address the challenges resulting from the dynamics and heterogeneity of IoT-based SPS networks,
a microservices-enabled security framework is introduced and implemented in an edge-fog
computing paradigm.

(3) A hybrid blockchain network architecture is introduced to address the trade-offs of adopting
blockchain in the SPS system. Relying on a two-level consensus mechanism (intra-domain consensus
and inter-domain consensus), the hybrid blockchain fabric aims to improve scalability and
efficiency when integrating blockchain with the hierarchical multi-domain SPS system.

(4) A proof-of-concept security microservices prototype is implemented and tested on a physical
private blockchain setup including Ethereum and Tendermint. The comprehensive experimental
results demonstrate that it is practical to run the proposed BlendSPS on IoT-based networks with
good performance and security properties.
The remainder of this paper is organized as follows. Section 2 reviews background knowledge of SPS systems and related works on microservices and blockchain solutions. Section 3 discusses the threat model and security goals. Section 4 presents the architecture of BlendSPS, including
its design rationale, key components, and security features. Section 5 illustrates the blockchain-based security mechanism, including the microservices framework and the hybrid blockchain fabric. The prototype implementation and evaluation are discussed in Section 6. Finally, Section 7 concludes this paper with ongoing efforts and future directions.
**2. Background Knowledge and Related Work**
_2.1. Smart Surveillance Systems_
With the constant increase in the number of cameras deployed for surveillance purposes, the surveillance community has noticed the demand for human resources to process video stream data and make decisions in a timely manner [6,7]. Conventional solutions rely on a cloud computing platform for the pervasive deployment of networked cameras, either static or mobile, which create a huge amount of surveillance data and automate the video processing [8,9]. Object detection using machine learning (ML) [10] and statistical analysis [11] approaches have been of main interest in recent years. Owing to the onerous computation requirements of big data and contextual ML tasks, smart safety surveillance applications are implemented at the powerful server side.
To minimize the role of human agents, the second generation of surveillance solutions aims to implement various intelligent ML algorithms for decision-making tasks, like object detection [12] and abnormal behavior detection [13], at the centralized cloud. The centralized architecture, which needs to merge raw frames from cameras back to the cloud, brings a heavy burden on the communication network. To reduce the overhead on the communication channels, context information [14] and query languages [15] have been investigated to promote operators' awareness. Researchers have also proposed to improve the efficiency and throughput of the communication networks with better detection rates, such as reconfiguring the networked cameras [16], utilizing event-driven visualization [17], and mapping conventional real-time images to 3D camera images [18].
However, relying on a centralized architecture inevitably brings uncertain latency and scalability challenges. Decentralized surveillance systems are promising to address the aforementioned challenges, such as limited network access, and are more capable of handling mission-critical, delay-sensitive tasks. Advancements in the edge-fog-cloud hierarchical architecture enable real-time surveillance [19]. To support delay-sensitive, mission-critical tasks that often depend on efficient information fusion, quick decision-making, and situation awareness, an urban speeding traffic monitoring system using the Fog Computing paradigm was proposed [12]. Merging raw surveillance data streams from drones on near-site fog computing devices can reduce the network traffic created by sending the video to a remote cloud.
To support an object assessment method for an SPS system, an Instant Suspicious Activity identiFication at the Edge (I-SAFE) framework was designed leveraging the edge-fog-cloud hierarchy to detect loitering [20]. In I-SAFE, raw frames from surveillance cameras are fed to an edge device where low-level features are abstracted. The fog computing nodes collect features from the edge side and perform intermediate-level tasks, including movement recognition, behavior understanding, and anomaly detection [21]. Finally, the cloud focuses on high-level tasks of SPS, such as algorithm fine-tuning, historical pattern analysis, and global statistical analysis.
_2.2. Microservices in IoT_
The traditional IoT-based service-oriented architecture (SOA) utilizes a monolithic architecture, in which service and application software are developed as a single solution deployed on the cloud server. In a monolithic application, the service features are developed as distinguishable functionalities. Those functions are module-independent but interconnected by the same back-end with a dedicated set of technology stacks, such as databases. As a result, those earlier monolithic IoT-based applications exhibit low reusability and scalability owing to the tightly coupled dependencies among functions and components. Therefore, adopting a monolithic framework in a distributed IoT-based network inevitably
brings new challenges in terms of scalability, service extensibility, data privacy, and cross-platform
interoperability [22].
Considered an extension of SOA, the microservices architecture encapsulates only a minimal functional software module as a fine-grained and independently executable unit, which is self-contained and loosely coupled from the remaining system. Unlike the monolithic architecture, in which communication between service units relies on inter-process communication (IPC), microservices units are geographically scattered across the network, so they communicate with each other in a remote procedure call (RPC) manner, such as an HTTP RESTful API. Finally, multiple distributed microservices nodes cooperate with each other to perform the complex functionalities of the whole system. The microservices architecture achieves fine granularity by properly implementing a single dedicated function with minimal development resources. As those fine-grained microservices units are independent of each other's development technologies, the microservices architecture is loosely coupled and flexible enough to enable continuous development, efficient deployment, and easy maintenance.
Thanks to advanced properties like scalability, reusability, extensibility, and easy maintenance, the microservices architecture has been adopted by many smart application developments to meet the scalability and security requirements of distributed IoT-based systems. From the perspective of architecture performance and security, IoT-based applications are advancing from a “things”-oriented and centralized ecosystem to a widely and finely distributed microservices-oriented ecosystem [22]. To enable efficient and secure video surveillance services at the edge network, which includes large volumes of distributed IoT devices, a robust smart surveillance system was proposed by integrating the microservices architecture and blockchain technology [1,2,23]. The experimental results verified the feasibility of the prototype design to provide a decentralized and fine-grained access control solution for public safety scenarios. A similar design is also implemented by BlendSM-DDM [24] to provide a lightweight and decentralized security architecture in IoT-based data marketing systems.
_2.3. Blockchain and Smart Contract_
As a form of DLT, Blockchain was initially implemented as the enabling technology of Bitcoin [5]. Bitcoin aims to provide a cryptocurrency to record and verify commercial transactions among trustless entities without relying on any centralized third-party trust authority, such as financial institutes or government agencies. Blockchain relies on a decentralized architecture, in which all participants use a Peer-to-Peer (P2P) network to distributively store and verify data on a distributed ledger. To maintain the integrity, consistency, and total ordering of data on the distributed ledger, a consensus protocol is executed among a large number of distributed nodes, called miners or validators. Transactions are collected by miners, who record valid transactions in a time-stamped block according to the consensus algorithm. Finally, all transactions on a blockchain are organized into a verifiable, append-only chained public ledger, in which each new block is identified by a cryptographic hash and chained to its preceding block in chronological order. Thanks to the consensus protocol, participants can access and verify data on the public ledger that is distributively stored and maintained by “miner-accountants”, as opposed to having to establish and maintain a trust relationship with a transaction counter-party or a third-party intermediary [25].
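To make the chained data structure concrete, the following minimal sketch (a simplified illustration, not any specific blockchain's block format) shows how each block commits to its predecessor through a cryptographic hash, so that tampering with any recorded transaction invalidates every subsequent link:

```python
import hashlib
import json
import time

def block_hash(block):
    # Hash the block's canonical JSON encoding; changing any field changes the hash.
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    # Each new block commits to its predecessor via prev_hash,
    # forming a verifiable, append-only chain.
    prev_hash = block_hash(chain[-1]) if chain else "0" * 64
    block = {
        "index": len(chain),
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    chain.append(block)
    return block

def verify_chain(chain):
    # Recompute every link; tampering with any block breaks all later links.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )
```

Verifying the chain only requires recomputing hashes, which is why any participant can audit the ledger without trusting a counter-party.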
Emerging from the concept of intelligent property, the smart contract (SC) introduces programmability into blockchain to support a variety of customized transaction logic rather than simple cash transactions. An SC can be considered a self-executing procedure stored on the blockchain, so that users can achieve predefined agreements among trustless parties through a blockchain network. Leveraging cryptographic and security mechanisms, an SC combines protocols with user interfaces to formalize and secure relationships over computer networks [26]. Contract developers can use programming languages to encapsulate transaction logic and data types into an SC, which is saved at a specific address on a blockchain. Through exposing a set of public functions or application binary interfaces (ABIs), an SC can be invoked upon receiving a contract-invoking request from users. Finally, data in the SC is updated
according to the predefined transaction logic or contract agreements. Owing to properties like decentralization, autonomy, and self-sufficiency, the SC is an ideal solution to provide decentralized applications (DApps) in distributed IoT networks.
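As a rough illustration of this idea, the toy model below mimics an SC as a self-executing procedure stored at an address and invoked through public (ABI-like) functions. The class names and the owner-only, append-only rules are invented for illustration; this is plain in-memory Python, not Ethereum or Tendermint code:

```python
import hashlib

class StorageContract:
    """Toy smart contract: predefined write rules stored at an address.

    Illustrates self-executing, address-identified transaction logic only.
    """

    def __init__(self, owner):
        self.owner = owner
        self.state = {}

    def put(self, sender, key, value):
        # Predefined agreement: only the deploying owner may write,
        # and entries are append-only (no overwrites).
        if sender != self.owner:
            raise PermissionError("only owner may write")
        if key in self.state:
            raise ValueError("entries are append-only")
        self.state[key] = value

    def get(self, key):
        return self.state.get(key)


class Ledger:
    """Maps contract addresses to deployed contracts and dispatches calls."""

    def __init__(self):
        self.contracts = {}

    def deploy(self, contract):
        # Derive an address for the instance, mimicking a contract being
        # "saved at a specific address of a blockchain".
        address = hashlib.sha256(repr(contract).encode()).hexdigest()[:40]
        self.contracts[address] = contract
        return address

    def call(self, address, fn, *args):
        # A contract-invoking request routed to a public function.
        return getattr(self.contracts[address], fn)(*args)
```

The key point the sketch captures is that the rules execute at the contract, not at a trusted intermediary: any caller is subject to the same predefined logic.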
To enable decentralized security mechanisms for distributed IoT-based applications, leveraging blockchain and smart contracts has been a hot topic in academic and industrial communities. Some efforts have been reported recently, for example, smart surveillance systems [2,23,27], social credit systems [28,29], decentralized data marketing [24], space situation awareness [30], multidomain avionic systems [31], biomedical imaging data processing [32], and access control strategies [33,34]. Given the aforementioned works, blockchain and smart contracts are promising enablers of a completely decentralized mechanism to secure data sharing and access for distributed IoT-based SPS systems.
**3. Threat Model and Security Goals**
In this section, we first discuss threats to SPS systems, and then present the security goals through which BlendSPS tackles these threats. Figure 1 illustrates several potential threats to normal operations in an SPS. The players are defined by four roles: camera, edge device, fog server, and human user. The camera generates real-time video streams and transfers them to on-site/near-site edge devices. The edge devices extract lower-level features from raw frames, and then send the features to a more powerful fog layer for aggregation. The fog server uses the collected features to perform higher-level analytic tasks, like human behavior analysis and anomalous event detection. The user can query privacy-preserving information from surveillance visualization services based on his/her granted privileges. Note that an edge device can be a single board computer (SBC) mounted on the camera that generates the video streams.
**Figure 1. Threat model in smart public safety system.**
**Threat 1: False Frame Injection Attacks.**
Smart surveillance relies on the authenticity of raw video streams from cameras to fulfill feature extraction and decision-making tasks. However, an adversary can launch visual layer attacks that pose a potential threat to the safety and security of the infrastructure [35]. Through false frame injection, an attacker can feed fake frames to the edge to generate incorrect features. The attacker can also replace original frames with duplicate ones during the decision-making process to reduce detection accuracy, as shown in Figure 1.
**Security Goal 1: False frame detection and verification.**
For online frame duplication attack detection in an SPS system, an environmental fingerprint-based detection technique using the Electrical Network Frequency (ENF) was proposed [36]. The ENF is the power supply frequency, with a nominal value of 50/60 Hz depending on the
geographical location. The fluctuations in the ENF are caused by the power supply–demand balance and are similar throughout an electrical grid. The ENF is embedded in both audio and video recordings generated by devices running on the power grid. Since the similarity of two ENF signals can be measured using a correlation coefficient, the ENF can be used as a fingerprint to detect frame duplication attacks.
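A minimal pure-Python sketch of this similarity measure is shown below; the Pearson correlation is standard, but the 0.9 match threshold is an illustrative assumption, not a value from the cited work:

```python
from statistics import mean

def pearson(x, y):
    # Correlation coefficient between two equal-length ENF traces.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def enf_match(power_enf, media_enf, threshold=0.9):
    # A high correlation indicates the recording was captured under the same
    # grid fluctuations; the threshold value here is illustrative.
    return pearson(power_enf, media_enf) >= threshold
```

Because Pearson correlation is invariant to constant offsets, a media ENF trace shifted by a fixed bias still matches the grid reference, while an unrelated fluctuation pattern does not.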
This paper mainly focuses on false frame verification based on blockchain technology. During the false frame detection process, edge nodes not only execute online ENF-based false frame detection, but also claim the ENF fingerprints of frames through transactions that are recorded on the distributed ledger. Those ENF fingerprints are saved on an immutable distributed ledger that is available to participants in the blockchain network. Therefore, the fog nodes or surveillance visualization service providers can verify those checkpoint frames during the decision-making process.
**Threat 2: Extracted Features Data Tampering.**
In distributed SPS settings, lower-level video processing functions are deployed on edge devices, and only the extracted features are sent back to the fog nodes for further decision-making processing. By maliciously tampering with the feature data exchanged between edge and fog, an adversary can distort the feature contextualization or alter the behavior of the anomalous event detection.
**Security Goal 2: Immutability, Traceability, and Auditability for Data Sharing.**
During the video processing, edge devices not only send back extracted features as computation
results, but also claim correctness proofs of feature data through transactions that will be recorded in
the distributed ledger. The consensus protocol guarantees the immutability and integrity of data in
the ledger. The fog node will verify the received features before using them in contextualization and
decision-making tasks.
**Threat 3: Privacy Violation in Surveillance.**
The surveillance visualization provides a spectrum of advanced services, like monitoring traffic flows and deterring crime. However, it also makes people increasingly concerned about the invasion of their privacy, as mass surveillance is performed indiscriminately, irrespective of individuals' private information [37]. An adversary can breach an individual's privacy through unauthorized video access or improper data usage without permission.
**Security Goal 3: Decentralized privacy-preserving mechanism.**
To protect privacy-sensitive attributes that reveal a lot of information about individuals, like faces, a novel minor privacy protection scheme was proposed that uses a face object detector to process collected video streams in real time at the edge [38,39]. The localized regions of a frame are reversibly scrambled through a lightweight scrambling algorithm. We design a decentralized privacy-preserving mechanism by integrating blockchain with the existing sensitive privacy detection and scrambling solution. The privilege definitions and access control rules are encapsulated into separate SCs, which are deployed on the blockchain network. The surveillance service providers can grant service requests without relying on any third-party authority, and only authorized users are allowed to access the privacy-preserving information without violating the privacy of individuals.
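The authorization decision such a contract would encode can be sketched as below; the privilege levels, rule fields, and role names are hypothetical, serving only to show a clearance check against owner-specified privacy rules:

```python
def authorize_access(request, privileges, privacy_rules):
    """Decide a surveillance-visualization request the way a deployed
    access-control SC would. All names and levels here are illustrative.

    privileges:    user -> clearance level granted to that user
    privacy_rules: camera -> minimum clearance the data owner requires
    """
    user, camera = request["user"], request["camera"]
    required = privacy_rules.get(camera, 0)
    # Grant only if the requester's clearance meets the owner's rule.
    return privileges.get(user, 0) >= required
```

Because both mappings live in SC state on the ledger, every node evaluates the same rule and no third-party authority is consulted at request time.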
**4. BlendSPS: Rationale and System Design**

Leveraging the containerized microservices framework and a decentralized blockchain network architecture, our BlendSPS aims to enable efficient, privacy-preserving, and secure data sharing and operations in a heterogeneous SPS system. Figure 2 illustrates the BlendSPS architecture, which consists of (1) a _hierarchical SPS system framework_ that relies on an edge-fog computing network to support distributed smart surveillance as an edge service, (2) a _blockchain-enabled security service layer_ that enables lightweight and decentralized security policies, and (3) a _hybrid blockchain fabric_ that
ensures decentralization and security properties in the SPS system. The rationale behind the design of BlendSPS is described as follows.
- _Hierarchical SPS framework_ serves as the upper-level application layer that provides SPS services in a heterogeneous network environment, such as smart video surveillance, visual layer attack protection, and privacy-preserving video stream access.
- _Blockchain-enabled security service layer_ functions as the intermediate-level infrastructure that integrates containerized microservices with blockchain to support a scalable, flexible, and lightweight security mechanism. As a lightweight virtualization technology, containers offer platform independence, resource abstraction, and OS-level isolation. Therefore, a containerized microservices architecture is an ideal design for heterogeneous IoT-based SPS systems.
- _Hybrid blockchain fabric_ provides the fundamental networking and security infrastructure to ensure decentralized security enforcement for the SPS system. Leveraging a hybrid consensus mechanism and a secure public distributed ledger, the blockchain fabric brings security features, such as immutability, auditability, and traceability, to efficiently address the security issues of existing SPS systems.
**Figure 2. Architecture of BLockchain-ENabled Decentralized Smart Public Safety (BlendSPS).**
_4.1. Hierarchical SPS Framework_

The top left part of Figure 2 demonstrates a user scenario of an SPS, which includes three key elements: smart video surveillance, visual layer attack protection, and video stream privacy preservation. Video streams are collected by cameras and transmitted in real time to microservices on edge devices for feature extraction. The lower-level features extracted by edge devices are transferred to more powerful fog nodes, where data aggregation and higher-level analytic services, such as human behavior analysis and anomalous event detection, are conducted. To prevent visual layer attacks on raw video streams, ENF-based false frame detection microservices are responsible for authenticating online video streams. Moreover, the extracted environmental fingerprints of frames are securely recorded into the immutable distributed ledger for decentralized offline verification. The privacy-conserving surveillance visualization ensures that only authorized users can access
sensitive information, and that video frames containing users' private information, such as faces or living areas, are marked and hidden from the public according to the specified privacy policies of individuals.
_4.2. Smart Video Surveillance_
Unlike most conventional Deep Neural Network (DNN)-based smart video surveillance solutions, which are implemented in a monolithic architecture, we break the whole surveillance process into multiple smaller sub-tasks that can be deployed and executed independently of the rest of the system. The classification of human behavior is divided into two steps: extracting features for each individual from raw video streams, and then making a decision based on the handpicked features. Figure 3 shows the microservices-based architecture adopted by our video surveillance on an edge-fog computation hierarchy model.
**Figure 3. Smart Video Surveillance Microservices: (a) Services at Edge; (b) Services at Fog.**
The lower-level tasks, like video processing and object feature detection, are performed on the edge side. Figure 3a shows the connections of the microservices implemented on the edge node, along with their connections to the fog device for video processing. The raw frames are fed to a microservice that detects the objects of interest and extracts the location of each of them. Then another tracking algorithm, optimized for accuracy and speed, is executed to track the detected objects across frames of the video stream. Given an individual's tracking data, the edge node extracts a set of features to identify a pattern of the individual's movement based on the object position history. The extracted pattern features are then sent to the corresponding fog node for classification. To mitigate attacks on extracted features during the propagation and sharing process between edge and fog, the edge device also generates an authenticator of the features and records it on the blockchain. The fog server can verify the received features by querying the authenticator from the blockchain, and then use only the valid ones in decision-making.
The decision-making microservices are deployed on the more powerful fog side, which is responsible for feature contextualization and target behavior classification, as shown in Figure 3b. Contextualization helps to obtain a better feature representation when two sets of features have similar values. For example, in a campus environment it is considered normal to detect people in the hallways, while it is highly suspicious to detect anyone staying in the same hallway for hours. Thus, time may be a very valuable indicator to help interpret the features. Training a classifier to detect human behavior requires a huge amount of data, and the detection accuracy highly depends on the quality of the training set [20]. To reduce the need for a complete training data set, we use a fuzzy model to make a decision based on the walking pattern of each individual, as reported by the edge node. Readers interested in the details of Convolutional Neural Network (CNN)-based object tracking and fuzzy decision-making for suspicious activity identification are referred to the works in [20,21].
_4.3. ENF-Based False Frame Detection_

To protect against visual layer attacks in video surveillance, an ENF-based false frame detection method is designed to detect false frame injection at video stream generation time on the camera side. The presence of ENF fluctuation traces in multimedia recordings comes from the ENF signal of the power system. Thus, comparing the ENF signals between the power grid and multimedia recordings can authenticate original digital video or audio sources. Figure 4 illustrates the difference between the ENF fluctuation traces from the original video stream, the power grid, and the attacked video recording.
**Figure 4. False frame detection by comparing Electrical Network Frequency (ENF) signals from power,**
original, and forged recording.
The estimated signals from both the power recordings and the audio recordings from the surveillance system are compared using the correlation coefficient [40]. We adopt a sliding window-based mechanism to achieve an efficient online detection task. For each window, a 30 s recording is used for ENF estimation, and upon correlation comparison, a step of five seconds is used with an overlap of 25 s. The sliding window approach reduces the delay in detecting a replay attack and consumes less computational power. As Figure 4 shows, when the ENF of the duplicated recordings mismatches the ENF of the power recordings as the sliding window advances, the correlation coefficient for the duplicated recordings drops and the false frame injection attack is detected. Readers interested in a detailed study of ENF applications in digital media forensic analysis are referred to the work in [36].
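The sliding-window parameters above (30 s window, 5 s step, hence 25 s overlap) can be sketched as follows; the Pearson helper, the 1 Hz sampling rate, and the 0.9 alarm threshold are illustrative assumptions:

```python
from statistics import mean

def pearson(x, y):
    # Correlation coefficient between two equal-length ENF windows.
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den if den else 0.0

def detect_false_frames(power_enf, media_enf, fs=1, window_s=30, step_s=5,
                        threshold=0.9):
    """Return the start times (in seconds) of windows whose correlation with
    the power-grid ENF drops below the threshold, flagging a possible
    replay/duplication attack. fs is ENF samples per second."""
    win, step = window_s * fs, step_s * fs
    alarms = []
    for start in range(0, len(media_enf) - win + 1, step):
        c = pearson(power_enf[start:start + win], media_enf[start:start + win])
        if c < threshold:
            alarms.append(start // fs)
    return alarms
```

An untampered stream yields no alarms, while a replayed or flattened segment drives the windowed correlation down as soon as the window enters it.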
Because the ENF is considered an environmental fingerprint, the estimated ENF signals from multimedia recordings can also be used as an authenticator for offline verification of surveillance data, like video or audio streams. For false frame detection, sections of the recordings can be used as checkpoints, as Figure 4 shows. The ENF signals extracted from the checkpoints are then saved on the blockchain. Other users, like the fog server or surveillance visualization services, can query the ENF fingerprints from the blockchain and verify them before using the raw video streams.
_4.4. Privacy Preserving for Video Stream_
To enable surveillance visualization without violating the privacy of the individual persons captured in the videos, a privacy-preserving mechanism is designed by integrating lightweight sensitive privacy-attribute detection methods, a scrambling technique, and blockchain technology. A non-compute-intensive object detection algorithm is responsible for classifying and localizing those objects in video frames that are deemed sensitive. Then, the scrambling technique reversibly masks the sensitive objects or regions detected by the object detection scheme. To illustrate how the privacy-preserving mechanism works, we use a test picture from the images of groups dataset [41]. Figure 5 presents an example of children's privacy protection. Figure 5a shows that children's faces are accurately detected and bounded through a face-detection algorithm. In Figure 5b, the faces of the two
children are enciphered following the scrambling process to hide their identities in case of unwarranted disclosure. Readers interested in a detailed study of the face identification and scrambling technologies for privacy protection are referred to the work in [39].
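A minimal sketch of reversible region scrambling is given below; it uses a simple XOR keystream for illustration and is not the lightweight scrambling algorithm of the cited work [39]:

```python
import hashlib

def _keystream(key, length):
    # Expand a secret key into a pseudo-random byte stream via chained SHA-256.
    out, block = bytearray(), key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out.extend(block)
    return bytes(out[:length])

def scramble_region(pixels, region, key):
    """XOR-scramble the bytes of a detected sensitive region in place.

    `pixels` is a flat bytearray, `region` a (start, end) byte range.
    XOR is its own inverse, so applying the same key again unscrambles,
    which is what makes the masking reversible for authorized users.
    """
    start, end = region
    ks = _keystream(key, end - start)
    for i, k in enumerate(ks):
        pixels[start + i] ^= k
    return pixels
```

Only holders of the key (e.g., users authorized through the access-control SCs) can invert the mask; everyone else sees the scrambled region.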
To access videos or analytic results from surveillance visualization, users must interact with the blockchain-enabled security services to verify the privacy rules and obtain proof that they have the proper authorization, as Figure 2 shows. The data owners or service providers deploy SCs that define privileges and privacy-preserving rules on the blockchain. These decentralized security services ensure that only authorized users can successfully access privacy-preserving information, according to their access rights and privacy specifications.
**Figure 5. Sensitive attributes detection and privacy protection.**
**5. Blockchain-Based Security Mechanism**
The blockchain-based security mechanism leverages the lightweight microservices architecture and the hybrid blockchain fabric to ensure decentralization, security, and privacy properties for the BlendSPS system. This section provides detailed explanations of its two parts: the upper-layer microservices-enabled security service architecture and the underlying hybrid blockchain fabric.
_5.1. Microservices-Enabled Security Service_
The microservices-enabled security services layer functions as a fundamental microservices-oriented
service architecture to support the decentralized security and privacy properties of the BlendSPS
system, as shown in the bottom-left part of Figure 2. Each containerized microservices node
exposes a set of web-service APIs to the upper application layer and uses local RPC interfaces to trigger
self-executing procedures defined by SCs. The key design ideas and operations are described below.
5.1.1. Video Stream Fingerprint
The ENF-based false frame detection uses a sliding window-based method to obtain and estimate
the ENF fingerprint from video records online. This real-time detection mechanism ensures that
the raw data from the camera is authenticated. However, insecure communication channels or
modification by other service providers also jeopardize the integrity of the original data. Based on the
blockchain network, a decentralized video stream data auditing strategy is designed to protect against
false frame injection attacks in the BlendSPS system. By recording the extracted ENF fingerprint data
on a distributed ledger, an intra-domain committee that includes a small number of validators executes
a lightweight consensus protocol to ensure the sanctity of the data stored on the ledger. Therefore,
any participant from the intra-domain committee can verify the immutable ENF data without relying on a
centralized third-party trust authority.
The video stream fingerprint auditing procedures are presented in Algorithm 1. Of the two functions
used in real-time false frame detection, extract_ENF_signal() at line 2 is responsible for extracting the
ENF signal given an input of frame data, and correlation_ENF_signal() at line 8 outputs the similarity
coef_ENF_fingerprint by computing a correlation coefficient between the two sampled signals. The ENF
fingerprint recording and verification procedures use a set of RPC interfaces exposed by validators to
interact with the distributed ledger. In record_ENF_fingerprint(frames), the video stream owner first calls
extract_ENF_signal() to get the ENF signal from the checkpoint frames. Then, he/she launches a
transaction (tx) request by calling the transaction_commit() RPC to record ENF_fingerprint_tx on the distributed
ledger. Finally, tx_result is returned to notify the owner once ENF_fingerprint_tx
has been recorded and confirmed in a new block.
**Algorithm 1 The video stream fingerprint auditing procedures**
1: procedure: record_ENF_fingerprint(frames)
2:   ENF_fingerprint_tx ← extract_ENF_signal(frames)
3:   tx_result ← transaction_commit(ENF_fingerprint_tx)
4:   return tx_result
5: procedure: verify_ENF_fingerprint(frames)
6:   ENF_fingerprint_tx ← extract_ENF_signal(frames)
7:   query_ENF_fingerprint_tx ← transaction_query(ENF_fingerprint_tx['id'])
8:   coef_ENF_fingerprint ← correlation_ENF_signal(ENF_fingerprint_tx, query_ENF_fingerprint_tx)
9:   if coef_ENF_fingerprint ≥ coef_threshold then
10:    verify_ENF ← True
11:  else
12:    verify_ENF ← False
13:  end if
14:  return verify_ENF
Users who utilize the video stream in their tasks execute verify_ENF_fingerprint(frames) to
verify the checkpoint frames. By simply calling transaction_query(), the user can fetch the recorded
query_ENF_fingerprint_tx from the distributed ledger for the verification process. Given the comparison
between coef_threshold and coef_ENF_fingerprint, which is evaluated by calling the correlation_ENF_signal()
function, the user can verify whether or not the checkpoint frames were generated by the original owner,
so that false frame injection attacks can be detected.
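The record/verify flow of Algorithm 1 can be sketched in Python. Here the validator RPCs are mocked with an in-memory dictionary, extract_ENF_signal() is replaced by passing a pre-sampled signal directly, and the similarity measure is a plain Pearson coefficient, so this is an illustrative sketch rather than the project's implementation.

```python
import math

# Mock ledger standing in for the Tendermint-backed distributed ledger;
# in BlendSPS, transaction_commit()/transaction_query() would be validator RPCs.
LEDGER = {}

def transaction_commit(tx):
    LEDGER[tx["id"]] = tx
    return "confirmed"

def transaction_query(tx_id):
    return LEDGER[tx_id]

def correlation_ENF_signal(tx_a, tx_b):
    """Pearson correlation coefficient between two sampled ENF signals."""
    a, b = tx_a["signal"], tx_b["signal"]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - ma) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (norm_a * norm_b)

def record_ENF_fingerprint(tx_id, signal):
    # The owner commits the extracted ENF fingerprint as a transaction.
    return transaction_commit({"id": tx_id, "signal": signal})

def verify_ENF_fingerprint(tx_id, signal, coef_threshold=0.9):
    # Fetch the recorded fingerprint and compare it with the current one;
    # a high correlation indicates the frames come from the original owner.
    query_tx = transaction_query(tx_id)
    coef = correlation_ENF_signal({"id": tx_id, "signal": signal}, query_tx)
    return coef >= coef_threshold
```

A tampered or injected frame sequence yields an ENF signal that correlates poorly with the recorded fingerprint, so verification fails.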
5.1.2. Data Integrity
As feature data extracted at edge devices is transferred to fog nodes, it is necessary to ensure
data integrity for decision-making. Relying on the blockchain network, a data integrity scheme based
on hashed feature authentication is designed to enable decentralized and secure feature sharing
among participants in the BlendSPS system. The distributed ledger is ideal for enabling immutable and
traceable storage. However, directly putting a huge amount of feature data into a transaction increases
the communication and computation cost of transaction propagation and verification. In addition,
a larger transaction size means fewer committed transactions per block, which also reduces the transaction
rate given a fixed block size.
To ensure efficient data recording, access, and verification, only a fixed-length string of hashed
features is saved on the distributed ledger instead of the raw data. For a set of features F_i, the string of
hashed features is calculated as hash_F_i = H(F_i), where H(·) is a predefined collision-resistant hash
function that outputs a binary hash string h ∈ {0, 1}^λ of length λ. The hash_F_i is put into a
transaction that is recorded on the distributed ledger. Any participant can query hash_F_i from the
distributed ledger as an authenticator for the verification process.
The hashed features authentication procedures are presented in Algorithm 2. The hash_feature()
function is responsible for computing a hash string given the input features_set. First, all parameter
vectors of each feature line are converted to string format and concatenated into string_features,
as shown from line 3 to line 6. Then, line 7 converts string_features to a binary string bytes_features.
Finally, a cryptographic one-way hash function outputs a fixed-length hash string feature_hash
given the input bytes_features, and a dictionary {feature_id : feature_hash} is returned.
**Algorithm 2 The hashed features authentication procedures**
1: function: hash_feature(features_set)
2:   string_features ← [empty_string]
3:   for feature_vector in features_set.items do
4:     feature_string ← Convert_to_string(feature_vector)
5:     string_features ← (string_features ∥ feature_string)
6:   end for
7:   bytes_features ← Convert_to_bytes(string_features)
8:   feature_hash ← Convert_to_hash(bytes_features)
9:   feature_id ← features_set.name
10:  return {feature_id : feature_hash}
11: procedure: record_hashed_feature(features_set)
12:  feature_tx ← hash_feature(features_set)
13:  tx_result ← transaction_commit(feature_tx)
14:  return tx_result
15: procedure: verify_hashed_feature(features_set)
16:  feature_tx ← hash_feature(features_set)
17:  query_feature_tx ← transaction_query(feature_tx)
18:  if query_feature_tx == feature_tx then
19:    verify_hash ← True
20:  else
21:    verify_hash ← False
22:  end if
23:  return verify_hash
The hashed feature authentication procedures interact with a set of RPC interfaces of the
blockchain client to record and query data on the distributed ledger. In the record_hashed_feature()
procedure, the feature owner first computes a hashed feature dictionary feature_tx as the transaction data
by calling the hash_feature() function. Then, the transaction_commit() RPC is called to record feature_tx
on the distributed ledger. Once feature_tx has been recorded and confirmed on the distributed ledger,
tx_result is returned to notify the feature owner.
The verify_hashed_feature() procedure is performed by entities who utilize the feature data,
such as the fog layer service in the contextualization and decision-making processes. By executing
hash_feature(features_set), feature_tx of the features_set currently being verified is calculated as the key index
for querying data on the distributed ledger. The user simply calls the transaction_query() RPC to fetch
the recorded query_feature_tx as the proof used in verification. Given a comparison between query_feature_tx and
feature_tx, the user can verify whether or not the features_set is authentic.
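Algorithm 2 maps almost directly onto Python's hashlib. The sketch below follows its steps under the assumption that features_set is a list of feature vectors and that the ledger is an in-memory dictionary standing in for the SC-backed storage.

```python
import hashlib

LEDGER = {}  # stand-in for the SC-backed distributed ledger

def hash_feature(feature_id, features_set):
    """Concatenate stringified feature vectors and hash them with SHA-256."""
    string_features = ""
    for feature_vector in features_set:
        string_features += str(feature_vector)        # lines 3-6 of Algorithm 2
    bytes_features = string_features.encode("utf-8")  # line 7
    feature_hash = hashlib.sha256(bytes_features).hexdigest()  # line 8
    return {feature_id: feature_hash}

def record_hashed_feature(feature_id, features_set):
    # Only the fixed-length hash, not the raw feature data, goes on the ledger.
    LEDGER.update(hash_feature(feature_id, features_set))
    return "confirmed"

def verify_hashed_feature(feature_id, features_set):
    # Recompute the hash locally and compare it with the recorded authenticator.
    feature_tx = hash_feature(feature_id, features_set)
    return LEDGER.get(feature_id) == feature_tx[feature_id]
```

Any modification of the feature data between the edge and the fog changes the recomputed hash and makes verification fail.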
5.1.3. Identity Verification and Access Control
As each blockchain account is uniquely indexed by its address, which is derived from the public
key, the blockchain account address is ideal for the identity authentication needed by other security
microservices, like data integrity and access control. Identity authentication relies on a virtual trust
zone that is ensured by the blockchain network, and each entity records its account address on the
blockchain as a virtual identity (VID) for identity verification.
The identity verification procedure is presented in Algorithm 3. Identity verification is
triggered when a host receives a service request from a client, for example, for the access control or
privacy policies services. The host calls the RPC function get_VNodeInfo() to query the recorded Virtual Node (VNode)
information from the SC. Then, it checks whether the client has the same VZoneID as the host and returns
the identity verification result verify_ID. Readers interested in a detailed study of VID-based identity
authentication are referred to the work in [30].
**Algorithm 3 The identity and access control verification procedures**
1: procedure: identity_verification(client_ID)
2:   host_ID ← get_Account()
3:   json_VNode_host ← get_VNodeInfo(host_ID)
4:   json_VNode_client ← get_VNodeInfo(client_ID)
5:   if json_VNode_host['VZoneID'] == json_VNode_client['VZoneID'] then
6:     verify_ID ← True
7:   else
8:     verify_ID ← False
9:   end if
10:  return verify_ID
11: procedure: access_control_verification(client_ID)
12:  json_access_data ← get_AccessToken(client_ID)
13:  if json_access_data['isValid'] != True then
14:    return False
15:  end if
16:  if (json_access_data['issuedate'] > current_time) or (json_access_data['expireddate'] < current_time) then
17:    return False
18:  end if
19:  for access_right in json_access_data['authorization'] do
20:    if is_access_valid(access_right) != True then
21:      return False
22:    end if
23:  end for
24:  access to service or data is permitted
25:  return True
To enable decentralized access authorization and verification, a decentralized capability-based
access control mechanism is integrated [34]. Data owners can implement access control models and
policies as an SC-based access control (AC) microservices entity. Initially, a user must send an access
authorization request to the AC microservices to obtain a capability token before requesting services or
resources from the SPS service providers. Given the predefined access authorization policies, the AC microservices
put the authorized access rights into a capability token that is saved in the SC.
The access control verification procedure is explained in lines 11 to 24 of Algorithm 3. Once a
service request is received from a user, the service provider calls get_AccessToken() to fetch the user's
access token, then checks whether the AC token json_access_data satisfies the validity conditions. If the AC token
is valid, the service provider verifies whether the user's access request is permitted by comparing
every access right item saved in the AC token; otherwise, the verification process aborts and the service
request is denied. If the service request is permitted by the AC token, access_control_verification() outputs
True, and the service provider then grants the user access to the data or service. Otherwise, the service
request is denied.
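Both procedures of Algorithm 3 can be sketched as follows. The VNode registry and capability-token store are in-memory stand-ins for the SC queries (get_VNodeInfo(), get_AccessToken()), and the token fields mirror those used in the algorithm; the account addresses in the usage are hypothetical.

```python
import time

# In-memory stand-ins for the SC-stored VNode registry and capability tokens.
VNODE_REGISTRY = {}   # account address -> {"VZoneID": ...}
TOKEN_STORE = {}      # account address -> capability token

def identity_verification(host_id, client_id):
    """Both parties must belong to the same virtual trust zone (VZone)."""
    host = VNODE_REGISTRY.get(host_id)
    client = VNODE_REGISTRY.get(client_id)
    return bool(host and client and host["VZoneID"] == client["VZoneID"])

def access_control_verification(client_id, requested_right, now=None):
    """Check the token's validity flag, time window, and authorized rights."""
    now = now if now is not None else time.time()
    token = TOKEN_STORE.get(client_id)
    if token is None or token["isValid"] is not True:
        return False
    if token["issuedate"] > now or token["expireddate"] < now:
        return False
    # The request is granted only if it matches an authorized access right.
    return requested_right in token["authorization"]
```

In the real system, the registry and token store live in smart contracts, so these checks execute against ledger state rather than local dictionaries.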
5.1.4. Privacy Policies
The privacy-preserving microservices are applied mostly for privacy-sensitive data management,
so that privacy-sensitive data is neither accessible nor even visible to unauthorized entities. The data
integrity service ensures that only hash strings of sensitive data are recorded on the blockchain
for authenticity checking, while the raw data is encrypted and stored off chain. Hence, data
privacy is protected during transmission and storage. In addition, the AC service encodes
access control rules as SCs, so that access authorization and verification can be executed
automatically. The decentralized AC service can effectively prevent unauthorized access to sensitive
data. Furthermore, the privacy policies can be securely stored on the blockchain, from which a
data or service requester learns his/her privileges to access the sensitive data.
With the above mechanisms, data owners can adjust their access control and privacy
policies flexibly. Only authorized users are granted access to surveillance services without violating
the privacy of individuals. The privacy policies service relies on the existing security services, like AC
and identity verification, to enable decentralized privacy-preserving surveillance visualization.
As Figure 2 shows, the surveillance visualization first interacts with the privacy policies microservices,
which query privacy rules based on the user's identity. Then, the user can fetch the surveillance
service data, like video streams or detection results, according to the user's permissions defined by the AC
services. Finally, given the user's privacy policies, the scrambled contents and objects in video streams are
visualized to users without revealing information pertinent to individuals' privacy.
_5.2. Hybrid Blockchain Network Architecture_
The BlendSPS utilizes a hierarchical edge-fog computing paradigm, in which each layer has
different performance, security, and privacy requirements. The cameras and edge devices are deployed
on a synchronous and permissioned edge network that is managed by a domain administrator;
therefore, a lightweight design and high throughput become the key metrics when running the consensus
protocol. Meanwhile, the decision-making tasks and surveillance services are deployed on the fog
computing layer, which requires data sharing and operations across domain boundaries and relies on
an asynchronous network environment; thus, scalability and security are the key metrics when choosing
a consensus algorithm. It is hard to optimize the trade-offs among performance, scalability, and security
by integrating a single consensus mechanism into the BlendSPS system.
To handle the aforementioned issues when performing consensus in a distributed
BlendSPS network that is highly heterogeneous and dynamic, the hybrid blockchain fabric adopts a
two-level consensus mechanism: intra-domain consensus and inter-domain consensus, as the right
part of Figure 2 shows. Considering a local domain that includes a small number of cameras and edge
devices, a lightweight but efficient intra-committee consensus protocol is executed among specified
committee members to validate transactions within the domain and maintain a local distributed ledger.
For multidomain operations, like recording hashed features and updating access tokens, a scalable and
secure inter-domain consensus protocol is responsible for finalizing those inter-domain transactions on
a global distributed ledger. The design rationale and workflows are explained as follows.
5.2.1. Permissioned Network
The SPS system is deployed on a permissioned network, where every entity must register its
unique identity information with the system administrator; thus, only authorized nodes can join the
network. As the permissioned network provides basic security primitives, like a public key infrastructure
(PKI) and digital signatures, it enhances the security properties of the blockchain from a network infrastructure
perspective. For a local domain, the domain owner chooses a subset of the nodes as an intra-domain
committee, and only validators from the committee can execute the intra-domain consensus protocol,
launch transactions, and maintain the shared local ledger in the private blockchain network.
To securely record cross-domain transactions in the SPS system, a consortium blockchain
network is used by participants from different domains. Given the computation capacity of the devices
and the security policy, the SPS system administrator specifies each participant as a miner or a node
(non-miner). Unlike in the private blockchain networks adopted by the local domains, both miners and nodes
in the SPS system can send transactions and access data on the global ledger. However, only authorized
participants can work as miners to execute the inter-domain consensus protocol.
5.2.2. Intra-Domain Consensus
Given a private blockchain network managed by a local domain owner, a lightweight Byzantine
Fault Tolerant (BFT) based consensus protocol is executed by validators of the intra-domain committee.
The BFT consensus originates from the classical Byzantine Generals Problem [42], and it aims to achieve agreement
on a single value among n geographically distributed and inter-connected participants despite the failure
of partners or conflicting information. Considering a Byzantine failure scenario, there are f dishonest
nodes who attempt to break the consensus agreement by sending contradicting values to other nodes.
If a super-majority of the participants (n − f > 2f) are honest, they can still agree on consistent actions.
Thus, BFT consensus ensures the ultimate goal of agreement if a network includes n ≥ 3f + 1 total
nodes of which only f are Byzantine.
The BFT consensus protocol can achieve high throughput in a small-scale consensus network, and
it introduces a low computation overhead when executing the consensus algorithm on hosts. Therefore,
the BFT-based consensus protocol is suitable for the intra-domain committee in a distributed edge
network. To record data on the local ledger, a user sends data transactions to a validator within
the intra-domain committee. Then, the validator verifies the received transactions and broadcasts valid
ones to the other validators. Each validator collects valid transactions and records them in a new block
given a block generation algorithm. If the proposed block is verified and confirmed by no less than
2/3 of the validators, consensus agreement is achieved by finalizing the data recorded in the block on the
distributed ledger. As the intra-domain consensus protocol is only executed among a small set of
committee members, the communication cost incurred by message propagation is reduced.
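The fault-tolerance bound n ≥ 3f + 1 and the 2/3 confirmation rule are easy to check numerically. The helpers below are purely illustrative; for example, they give the bounds for a 20-validator committee like the Tendermint test network used later in the experiments.

```python
def max_byzantine_faults(n: int) -> int:
    """Largest f a BFT committee of n validators can tolerate (n >= 3f + 1)."""
    return (n - 1) // 3

def quorum_size(n: int) -> int:
    """Smallest number of confirmations strictly exceeding 2/3 of n validators.

    Equals n - f, which reduces to 2f + 1 when n = 3f + 1.
    """
    return n - max_byzantine_faults(n)
```

For n = 4 validators the committee tolerates f = 1 Byzantine node with a quorum of 3; for n = 20, it tolerates f = 6 with a quorum of 14.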
5.2.3. Inter-Domain Consensus
Inter-domain operations rely on a consortium blockchain network, which inevitably runs into
critical issues in open-access and asynchronous network environments. The inter-domain consensus
protocol adopts a scalable Proof-of-Work (PoW) mechanism to enable probabilistic finality on
inter-domain transactions. In PoW consensus, each miner must exhaustively query a cryptographic
hash function to gain a hash code as a proof of work for new block generation. The PoW mining process
can be formally defined by the following equation:
hash_block = H(block_data ∥ nonce) ≤ D(h),   (1)
where nonce is a random number used to calculate the candidate hash_block, D(h) = 2^(λ−h) is a difficulty
condition specified by a certain number of bits h as a parameter, and H(·) is a predefined collision-resistant
cryptographic hash function that outputs a fixed-length binary hash string in {0, 1}^λ.
If the hash_block of a candidate block satisfies the pre-defined difficulty condition in
Equation (1), the miner broadcasts the candidate block to its peers and appends it to its local chain.
Every node follows a message gossiping rule to multicast valid transactions and blocks to its peers rather
than to the whole network. All honest nodes only accept valid blocks and always extend the longest
chain that they have synchronized from the network. Such a longest-chain rule can
effectively mitigate fork issues in an asynchronous network and ensures that all honest miners are
working on a common main chain. The security of the inter-domain consensus is ensured if a majority
(51%) of the miners are honest and correctly execute the consensus protocol.
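A toy version of the PoW search in Equation (1) can be written with SHA-256 and a deliberately low difficulty; real Ethereum mining differs substantially, so this is only a sketch of the hash-below-target condition.

```python
import hashlib

def pow_mine(block_data: bytes, h: int, max_nonce: int = 1_000_000):
    """Search for a nonce such that H(block_data || nonce) <= D(h).

    With a 256-bit hash, D(h) = 2^(256 - h), i.e., the digest must have
    roughly h leading zero bits. Expected work is about 2^h hash queries.
    """
    target = 1 << (256 - h)
    for nonce in range(max_nonce):
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") <= target:
            return nonce, digest
    raise RuntimeError("no valid nonce found within max_nonce attempts")

def pow_verify(block_data: bytes, nonce: int, h: int) -> bool:
    # Verification is a single hash query, which is what makes PoW asymmetric:
    # finding a nonce is expensive, checking one is cheap.
    digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") <= (1 << (256 - h))
```

Raising h doubles the expected mining work per additional bit while leaving verification cost unchanged.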
**6. Experimental Results**
To verify the feasibility of the BlendSPS scheme, a proof-of-concept prototype is built and
tested in a real physical network environment. Docker is adopted to develop the microservices
framework, and the containerized microservices units can be deployed both on the edge (Raspberry
Pi) devices and the fog (desktop) server. The security services are implemented in Python with
Flask [43] as the web-service framework. For the blockchain network, we use Ethereum [44] to build the
inter-domain blockchain network and Tendermint [45] to develop the intra-domain blockchain
network. Solidity [46], a contract-oriented, high-level language, is used for developing
SCs. We use RSA for asymmetric cryptography, like digital signatures, and SHA-256 as the hash function,
both implemented with the Python cryptography library [47]. All documents and source code
are available on the BlendSPS project repository [48].
_6.1. Experimental Setup_
As the prototype implementation and test case design mainly aim to verify the performance and
security properties provided by BlendSPS, the experimental setup focuses on the configuration related
to the security functions. For the private Ethereum network, six miners are deployed on six separate desktops,
and all nodes use Go-Ethereum [49] as the client application to interact with the Ethereum network.
Tendermint runs on a 20-validator test network, where each validator is hosted on a
Raspberry Pi (RPi) device. All desktops and RPi devices are connected through a local area network
(LAN). Table 1 shows the configurations of the devices used for the experimental study.
**Table 1. Configuration of experimental devices.**

| Device  | Redbarn HPC | Dell Optiplex 760 Desktop | Raspberry Pi 3 Model B+ |
| CPU     | 3.4 GHz Intel(R) Core(TM) i7-2600K (8 cores) | 3 GHz Intel(R) Core(TM) E8400 (2 cores) | 1.4 GHz Broadcom ARM Cortex-A53 (ARMv8) |
| Memory  | 16 GB DDR3 | 4 GB DDR3 | 1 GB SDRAM |
| Storage | 500 GB HDD | 250 GB HDD | 32 GB (microSD card) |
| OS      | Ubuntu 18.04 | Ubuntu 16.04 | Raspbian GNU/Linux (Jessie) |
In the security service simulation tests, the Redbarn HPC acts as a system oracle that provides security
basics, like PKI and registration, for network management. The oracle only manages the permissioned
network by adding and removing participants. The validators can only update and verify data and
transactions on the distributed ledger rather than change the permissioned network configuration.
All desktops can work as fog computing nodes, and the RPi devices run as edge computing nodes.
The security microservices are deployed on both the edge and fog layers for the experimental tests.
_6.2. Performance Evaluation_
To evaluate the performance of operating the microservices-based security schemes, a set of
experiments is conducted on our private prototype blockchain networks by simulating service
transactions, like access right token generation and identity verification. The cost of message encryption
and decryption is not considered during the test.
6.2.1. Microservices Overhead: Computation Overhead and Network Latency
A service request experiment, which includes five RPi devices and four desktops, is designed
to evaluate the overhead incurred by the security microservices on the host machines. Four types of
microservices are deployed on four RPi devices and four desktops separately, and each machine only
runs a single microservices node. One RPi device functions as a client that sends service requests to
the security service providers. One-hundred test runs were performed in total based on the proposed
test scenario, where the client initiates the connection by sending a service request to the server side
for an access permission.
Figure 6 shows the computation overhead of hosting an individual microservices node on the edge
and fog platforms. The results reveal that the computation overhead increases as the complexity of the
task grows. As the video stream fingerprint service relies on a lightweight Tendermint network to record and verify
the ENF fingerprint data, it needs less processing time than the other security services, which require
more computational resources for SC operations. Compared to data integrity, the identity verification and AC
microservices need more cryptographic computations and authentication operations; therefore, they
require higher processing times both on the RPi device and on the desktop. Due to multiple SC interactions,
the identity verification microservice takes the largest processing time for querying data on the blockchain.
**Figure 6. Processing time of security microservices on different host platform.**
The end-to-end delay is evaluated based on the test case in which a client sends multiple service
transactions per second (TPS) and waits until all responses are received. Figure 7 shows the network
latency of running the security microservices as the transaction send rate varies from 1 to 100 TPS. In terms of
the bandwidth of the network and the capacity of the server side, the latency of sending transactions
and receiving all acknowledgments scales almost linearly with the transaction rate. Considering the same
networking environment and transaction data size, the influence of the communication cost is almost
negligible. Therefore, the computation cost on the server side becomes dominant when scaling to multiple
transactions in the single-microservices-node scenario.
**Figure 7. Network latency of accessing security microservices with different transaction rate.**
6.2.2. Throughput Evaluation: Microservices vs. Monolithic Framework
To evaluate the network delay of the microservices versus the monolithic framework when
scaling to multiple transactions, a set of comparative experiments is conducted on two service demo
applications: Micro_App and Mono_App. Micro_App uses the microservices framework, in which five
containerized security microservices are deployed on five RPi devices separately, while Mono_App
relies on a monolithic framework that encapsulates all security functions into one container that is
deployed on an RPi device. Another RPi device works as a client to send service transactions to the Micro_App
and Mono_App service providers, which are deployed on a desktop.
Figure 8 shows the network latency of launching multiple identity verification requests on the
microservices and monolithic frameworks. On receiving transactions from the client, the Micro_App service
provider can evenly distribute the service workload into subgroups that are assigned to the microservices nodes
in the network. Therefore, the total network delay is reduced, improving the quality of service (QoS) in
terms of response time. As the bottom line in Figure 8 shows, Micro_App with full microservices
capacity achieves a lower network delay than the other scenarios, amounting to 23% of that of Mono_App
when the TPS is 100. Compared to Mono_App, the dropout of a certain fraction of microservices nodes
does not disturb service access. However, the network delay increases significantly as the fraction of
dropouts increases, as Figure 8 shows.
**Figure 8. Network latency of service requests on microservices and monolithic frameworks.**
Figure 9 presents the transaction throughput that Micro_App and Mono_App can achieve as the TPS
increases. The transaction throughput is greatly influenced by the network communication capacity and
the service processing capability that a security microservices host machine can provide. As Mono_App
uses a single monolithic application node to handle all security service transactions, its transaction
throughput saturates more easily than Micro_App's as the TPS increases. Figure 9 shows that the
transaction throughput of Micro_App with 0% dropout flattens when the TPS is about 60,
whereas Mono_App cannot dramatically increase its transaction throughput once the TPS is larger than 20.
**Figure 9. Transaction throughput of service requests on microservices and monolithic frameworks.**
Figure 9 also indicates that the transaction throughput of Micro_App is greatly impacted by
the microservices dropout rate. As each microservices node has a limited service processing capacity,
the service load of dropped-out nodes is transferred to the remaining working nodes. As a result,
the transaction throughput of Micro_App declines owing to the system capacity lost to
dropout nodes. Micro_App can tolerate a certain fraction of microservices node dropouts, and it is more
robust than Mono_App, which is vulnerable to performance bottlenecks. Moreover, by properly
deploying security microservices nodes, Micro_App is more scalable and flexible than Mono_App on
dynamic and heterogeneous IoT-based networks.
6.2.3. Blockchain Fabric Performance: Tendermint vs. Ethereum
The security microservices utilize a set of transaction_commit() RPC functions to save data to the
distributed ledger, and the consensus protocol is responsible for guaranteeing the security of the recorded data on
the distributed ledger. Thus, executing the consensus protocol and recording data on the distributed ledger
inevitably introduce extra delays beyond normal service operation. One-hundred test runs were
carried out based on the proposed test scenario, in which a video stream fingerprint microservices
node saves an ENF fingerprint into Tendermint and an AC microservices node updates the access token SC
on Ethereum.
Figure 10 shows the time delay from when a node launches a blockchain transaction (bc_tx) until
it has been committed on the distributed ledger. The bc_tx committed time is closely related to
the block confirmation time defined by the consensus algorithm. Tendermint utilizes a BFT
consensus protocol to achieve deterministic finality on a new block in each voting round, so that the
bc_tx committed time is almost stable (~2.9 s), as shown in Figure 10. However, Ethereum relies on
a random block generation mechanism defined by the PoW consensus protocol, and it achieves
probabilistic finality on the data committed to the distributed ledger. Therefore, the bc_tx committed time
of Ethereum varies greatly owing to the variable block confirmation time, as illustrated by Figure 10.
**Figure 10. Network latency for committing data transactions in blockchain.**
Table 2 provides a comprehensive evaluation of running the intra-domain (Tendermint) and
inter-domain (Ethereum) consensus protocols with regard to several key performance metrics. The bc_tx
committed time is calculated by averaging the 100 test results in Figure 10. Ethereum achieves a 4.6 s
latency for committing a transaction on the distributed ledger, which is 28% longer than Tendermint's.
The SC-based security services are generally used by either non-time-sensitive operations,
like verifying an access token, or offline tasks, like checking the integrity of contextual features. Thus, a 4.6 s
latency for updating SC data meets the service requirements in SPS applications. The ENF-based false
frame detection relies on a minimal 5 s sliding window to obtain a constant correlation coefficient for
dissimilar ENF signal estimations. Thus, a 3.6 s bc_tx committed time is enough for finalizing an ENF
fingerprint on the distributed ledger within one detection cycle.
**Table 2. Comparative evaluation on blockchain fabric.**

|                          | Ethereum (Miner) | Ethereum (Node) | Tendermint (Validator) |
| bc_tx committed time (s) | 4.6              | 4.6             | 3.6                    |
| CPU usage (%)            | 103              | 5               | 27.5                   |
| Memory usage (MB)        | 1232             | 45              | 64                     |
| Gas/bc_tx (Ether)        | 0.001            | 0.001           | ×                      |
Considering resource consumption in terms of CPU and memory usage, Tendermint demonstrates
advantages over Ethereum. Ethereum uses a computation-intensive PoW consensus algorithm,
and the mining process almost occupies the full CPU capacity and consumes about 1.2 GB of memory.
Therefore, it is not feasible to deploy miners on resource-constrained IoT devices. However, Ethereum
can be deployed as a light node on RPi devices, which only synchronizes and validates blockchain
data instead of mining new blocks. Table 2 shows that an Ethereum node only needs 5% CPU capacity
and 45 MB of memory to support data recording and querying on the SC. Tendermint uses a lightweight
BFT consensus algorithm that is efficient in CPU and memory usage, so it is suitable for
deploying validators on resource-constrained IoT devices.
In an Ethereum network, transaction commitment requires gas, which is used to pay the miners. The average gas fee for each transaction is 0.001 Ether, which amounts to approximately $0.24 given the Ether price on the public Ethereum market ($236.23/Ether on 20 July 2020). Compared to Ethereum, Tendermint does not require a transaction fee. Therefore, it is more suitable for the intra-domain scenario, which requires a large volume of data transactions without introducing additional financial cost.
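The per-transaction cost follows from simple arithmetic over the figures in Table 2 and the quoted market price; the 10,000-transaction daily volume below is an illustrative assumption, not a measurement from the paper.

```python
gas_fee_ether = 0.001      # average gas fee per bc_tx (Table 2)
ether_price_usd = 236.23   # public market price, 20 July 2020

cost_per_tx = gas_fee_ether * ether_price_usd
print(f"${cost_per_tx:.2f} per transaction")   # ≈ $0.24

# High-volume intra-domain workloads would accumulate fees quickly
# on a public, fee-charging network.
daily_cost = cost_per_tx * 10_000
print(f"${daily_cost:.2f} for 10,000 transactions")
```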
_6.3. Discussions_
Our experimental results verified the feasibility of the proposed BlendSPS solution. It has the potential to enable a practical IoT-based SPS system featuring decentralized, secure, and privacy-preserving services. Compared to centralized security solutions implemented in a monolithic framework, BlendSPS has the following advantages.
_(1) Decentralized network architecture:_ The BlendSPS system leverages blockchain and smart contract technology to provide decentralized security services. Therefore, geographically scattered data owners and service providers maintain control over their own resources and securely share information without relying on a centralized third-party authority to ensure a trust relationship. This promises to improve system performance and reduce the risk of a single point of failure.
_(2) Flexible and fine-grained SOA framework:_ BlendSPS uses fine-grained and loosely coupled microservices to enable a flexible and robust service architecture. As the whole system can be decoupled into multiple fine-grained microservices units, each microservices unit is only responsible for a dedicated task according to domain-related performance and security requirements. Moreover, running microservices units is independent of the remaining parts of the system. Therefore, users and service providers can increase or decrease the number of serving microservices nodes to achieve the expected QoS without disturbing system functionality.
_(3) Security properties:_ Given the partially synchronous network environment of SPS settings, persistence and termination are two security properties provided by the hybrid blockchain fabric to enable a robust distributed ledger. Persistence ensures that finalized hashed model strings are immutable and traceable, and can be audited by other participants. Termination ensures liveness, so that all valid hashed model transactions by honest nodes are finalized in the distributed ledger after a sufficient amount of time.
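Persistence makes such audits straightforward: any participant can recompute the hash of locally held model data and compare it with the fingerprint finalized on the ledger. A minimal sketch, in which the on-chain lookup is mocked with a dictionary (the real system queries a smart contract):

```python
import hashlib

# Mocked ledger state: tx_id -> hashed model string committed on-chain.
ledger = {}

def commit_fingerprint(tx_id: str, model_bytes: bytes) -> str:
    """Finalize the SHA-256 fingerprint of model data on the (mock) ledger."""
    digest = hashlib.sha256(model_bytes).hexdigest()
    ledger[tx_id] = digest
    return digest

def audit(tx_id: str, model_bytes: bytes) -> bool:
    """Any participant verifies integrity by recomputing the hash."""
    return ledger.get(tx_id) == hashlib.sha256(model_bytes).hexdigest()

model = b"cnn-weights-v1"          # illustrative payload
commit_fingerprint("tx-001", model)
print(audit("tx-001", model))       # True: untampered
print(audit("tx-001", b"tampered")) # False: modification detected
```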
**7. Conclusions**
This paper introduces BlendSPS, a blockchain-enabled decentralized smart public safety system, to enhance the security and privacy-preserving properties of distributed SPS networks. Leveraging a lightweight microservices-based SOA framework and a hybrid blockchain fabric, BlendSPS supports decentralized, secure, and efficient data sharing and multi-domain operations in SPS settings. Moreover, BlendSPS incurs low computation and communication cost on the edge network, making it ideal for IoT-based SPS applications.
While the experimental results on the prototype are encouraging, there are still open questions to answer before deploying a practical solution in real-world SPS scenarios. By integrating blockchain with a heterogeneous IoT-based SPS network, a hybrid blockchain fabric is promising for addressing the trade-offs caused by consensus mechanisms, like scalability, efficiency, and security. However, it also brings new performance and security concerns during cross-chain transactions. In addition, the transparency of the distributed ledger may exacerbate the privacy problem when users record sensitive data on the blockchain. Furthermore, each node needs a full replica of the public ledger to join the blockchain network; hence, it inevitably increases the bootstrapping time for new participants and incurs huge storage space consumption on the host machine. Part of our ongoing effort is focused on further exploration of an efficient and privacy-preserving blockchain protocol that can be executed at the edge networks with lower communication, computation, and storage overhead.
**Author Contributions: Conceptualization, R.X. and Y.C.; Methodology, R.X., S.Y.N., D.N., and A.F.; Software,**
R.X., S.Y.N., D.N., and A.F.; Validation, R.X., D.N., and Y.C.; Formal analysis, R.X. and Y.C.; Investigation, R.X.;
Resources, R.X., S.Y.N., D.N., and A.F.; Data Curation, R.X.; Writing—Original Draft Preparation, R.X., S.Y.N.,
D.N., and A.F.; Writing—Review and Editing, R.X. and Y.C.; Visualization, R.X.; Supervision, Y.C.; Project
Administration, Y.C. All authors have read and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
The following abbreviations are used in this manuscript.
ABI Application Binary Interfaces
AC Access Control
BFT Byzantine Fault Tolerant
CNN Convolutional Neural Networks
DLT Distributed Ledger Technology
DNN Deep Neural Networks
ENF Electrical Network Frequency
IoT Internet of Things
IPC Inter Process Communication
ML Machine Learning
PKI Public Key Infrastructure
PoW Proof-of-Work
QoS Quality of Service
RPC Remote Procedure Call
SC Smart Contract
SOA Service-oriented Architecture
SPS Smart Public Safety
VID Virtual Identity
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/smartcities3030047?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/smartcities3030047, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2624-6511/3/3/47/pdf?version=1598944816"
}
| 2,020
|
[] | true
| 2020-07-28T00:00:00
|
[
{
"paperId": "f1dc31ae5b41035d79c87b51642854d4822f85a0",
"title": "Minor Privacy Protection by Real-time Children Identification and Face Scrambling at the Edge"
},
{
"paperId": "00323f5d22c03fe67fdfc1ba688f456ad14e397b",
"title": "Hybrid blockchain-enabled secure microservices fabric for decentralized multi-domain avionics systems"
},
{
"paperId": "7947fe4227fa9e7cdc97bde8da3dffbdfacfd4ee",
"title": "BlendSM-DDM: BLockchain-ENabled Secure Microservices for Decentralized Data Marketplaces"
},
{
"paperId": "a21896724ecc528772d83679f5093025f8483d42",
"title": "I-SAFE: instant suspicious activity identification at the edge using fuzzy decision making"
},
{
"paperId": "40622ca50be2c7d6391fa749a2168fc92a033881",
"title": "No Peeking through My Windows: Conserving Privacy in Personal Drones"
},
{
"paperId": "d27f8515ad1349e2a05fbfe66ed08c1b9c099f42",
"title": "A Lightweight Blockchain-Based Privacy Protection for Smart Surveillance at the Edge"
},
{
"paperId": "78e659acf6a85409d2764826fbec7363f8454eef",
"title": "A Blockchain-Enabled Decentralized Time Banking for a New Social Value System"
},
{
"paperId": "0f1ac5a62e5f59da828e95724a01f8a37836d5cd",
"title": "Detecting Malicious False Frame Injection Attacks on Surveillance Systems at the Edge Using Electrical Network Frequency Signals"
},
{
"paperId": "9de105b956eca2c6ea8db171be032543a371d1e8",
"title": "Decentralized smart surveillance through microservices platform"
},
{
"paperId": "407210fb2d4c5f6ecd904a5737ed4c9606560737",
"title": "A study on smart online frame forging attacks against Video Surveillance System"
},
{
"paperId": "d1156ab2d59e6c5f9cf8c4098ec2b1a294bc7ce5",
"title": "BlendMAS: A Blockchain-Enabled Decentralized Microservices Architecture for Smart Public Safety"
},
{
"paperId": "915e97adcccc230c3273a3e1236465a2643fbc3b",
"title": "Decentralized autonomous imaging data processing using blockchain"
},
{
"paperId": "78150d0f113679dcd7525c244488148ba762b12c",
"title": "VStore: A Data Store for Analytics on Large Videos"
},
{
"paperId": "799139ed091b1316b62b7cee3185b023751fd19b",
"title": "Exploration of blockchain-enabled decentralized capability-based access control strategy for space situation awareness"
},
{
"paperId": "78f5fd4c1db1e493e72b050d67adb27606c1747c",
"title": "Constructing Trustworthy and Safe Communities on a Blockchain-Enabled Social Credits System"
},
{
"paperId": "9a5801cd5d96fac18df0c0abfe276ec251c66146",
"title": "Kerman: A Hybrid Lightweight Tracking Algorithm to Enable Smart Surveillance as an Edge Service"
},
{
"paperId": "1e96dbe3e74a303143847f25a5880bab86fe6e38",
"title": "A Microservice-enabled Architecture for Smart Surveillance using Blockchain Technology"
},
{
"paperId": "733335154c907d49062945fad5d67d7909fb9cf5",
"title": "Real-Time Index Authentication for Event-Oriented Surveillance Video Query using Blockchain"
},
{
"paperId": "83bc879120207d575fa92dbbc1d34f40351e6085",
"title": "BlendCAC: A Smart Contract Enabled Decentralized Capability-Based Access Control Mechanism for the IoT"
},
{
"paperId": "5897f14e9c923f4b178da1c7e57fc95cd3adbb3c",
"title": "Next-Generation, Data Centric and End-to-End IoT Architecture Based on Microservices"
},
{
"paperId": "78c5dcc0e93b0ff44036841f7709ef15a8edef19",
"title": "BlendCAC: A BLockchain-Enabled Decentralized Capability-Based Access Control for IoTs"
},
{
"paperId": "900f2d5986c402a729c20a6a6d43e56db5449ac5",
"title": "Real-Time Human Detection as an Edge Service Enabled by a Lightweight CNN"
},
{
"paperId": "5825589f9edd72ba2a7346bbbe0d597c3626557d",
"title": "Smart Surveillance as an Edge Network Service: From Harr-Cascade, SVM to a Lightweight CNN"
},
{
"paperId": "e0f73e991514450bb0f14f799878d84adc8601f9",
"title": "A study of deep convolutional auto-encoders for anomaly detection in videos"
},
{
"paperId": "e8e3e8b779640b08e3bb9845118520e7c02e3ec3",
"title": "Heterogeneous Information Fusion and Visualization for a Large-Scale Intelligent Video Surveillance System"
},
{
"paperId": "977c105f207df5cd6a18e5ea6133af1fc57da75f",
"title": "Statistical Anomaly Detection in Human Dynamics Monitoring Using a Hierarchical Dirichlet Process Hidden Markov Model"
},
{
"paperId": "264ef105920d13faf6aa358a2e03b2041e384f79",
"title": "A Survey of Video-Based Crowd Anomaly Detection in Dense Scenes"
},
{
"paperId": "def302ca1310c19d72cbf7a7fd25876749bf5251",
"title": "A Container-Based Elastic Cloud Architecture for Pseudo Real-Time Exploitation of Wide Area Motion Imagery (WAMI) Stream"
},
{
"paperId": "e2103d99fe61d9f2de835239aeb023170d2ec6e4",
"title": "Fog Computing: A Taxonomy, Survey and Future Directions"
},
{
"paperId": "e66304bfd5338fef9dcfb63d4b9df6622c98e4d3",
"title": "Dynamic Reconfiguration in Camera Networks: A Short Survey"
},
{
"paperId": "bf3fda7c67d79585537e2172838c7858aa5c90fa",
"title": "Dynamic Urban Surveillance Video Stream Processing Using Fog Computing"
},
{
"paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db",
"title": "Blockchain: Blueprint for a New Economy"
},
{
"paperId": "627c32f2b091f818e5990ae7344b1e518a56fdfe",
"title": "Context aided video-to-text information fusion"
},
{
"paperId": "e3d297c72f56e1643febb9fde5299cdcf8c3c739",
"title": "Spectrum Combining for ENF Signal Estimation"
},
{
"paperId": "5b4cf1e37954ccd1ca6b315986d45904f9d2f636",
"title": "Formalizing and Securing Relationships on Public Networks"
},
{
"paperId": "1689f401f9cd18c8fd033d99d1e2ce99b71e6047",
"title": "The Byzantine Generals Problem"
},
{
"paperId": "433561f47f9416a6500c8350414fdd504acd2e5e",
"title": "Bitcoin Proof of Stake: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "80d8b10a63f86e05b940a94a4d0156f9475abcdc",
"title": "Multi-INT Query Language for DDDAS Designs"
},
{
"paperId": "df62a45f50aac8890453b6991ea115e996c1646e",
"title": "Tendermint : Consensus without Mining"
},
{
"paperId": "12138be732a2aa10e4eef460979bec64eb8e4f4c",
"title": "Intelligent multi-camera video surveillance: A review"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": null,
"title": "Consensus without mining"
},
{
"paperId": null,
"title": "Peer-reviewed version available at Smart Cities"
},
{
"paperId": null,
"title": "Cloud Based Video Detection and Tracking System"
},
{
"paperId": null,
"title": "A Peer-to-Peer Electronic Cash System; Technical Report; Manubot, 2019"
},
{
"paperId": null,
"title": "A Pyhon Microframework"
},
{
"paperId": null,
"title": "Mobility-enhanced public safety surveillance system using 3d cameras and high speed broadband networks"
},
{
"paperId": null,
"title": "The Images of Groups Dataset"
},
{
"paperId": null,
"title": "Cloud based video detection and tracking system, 2016"
},
{
"paperId": null,
"title": "Flask : A Pyhon Microframework Ethereum Homestead Documentation"
},
{
"paperId": null,
"title": "Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted: 28 July 2020 Peer-reviewed version available at Smart Cities"
},
{
"paperId": null,
"title": "Mobility-enhanced public safety surveillance system using 3d cameras and high speed broadband networks. GENI NICE Evening Demos"
},
{
"paperId": null,
"title": "Blueprint for a new economy"
},
{
"paperId": null,
"title": "Preprints (www.preprints.org) | NOT PEER-REVIEWED | Posted"
},
{
"paperId": null,
"title": "Ethereum Homestead Documentation"
}
] | 19,556
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02d7700c6829e4ab72d225e8b4ecdc0a862f2384
|
[
"Computer Science"
] | 0.870863
|
Decentralized Machine Learning for Intelligent Health-Care Systems on the Computing Continuum
|
02d7700c6829e4ab72d225e8b4ecdc0a862f2384
|
Computer
|
[
{
"authorId": "1887201",
"name": "Dragi Kimovski"
},
{
"authorId": "1746860",
"name": "Sasko Ristov"
},
{
"authorId": "100533173",
"name": "R.-C. Prodan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Computer",
"IEEE Comput"
],
"alternate_urls": [
"https://ieeexplore.ieee.org/servlet/opac?punumber=2",
"http://www.computer.org/portal/site/ieeecs/index.jsp",
"https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=2"
],
"id": "f6572f66-2623-4a5e-b0d9-4a5028dea98f",
"issn": "0018-9162",
"name": "Computer",
"type": "journal",
"url": "http://www.computer.org/computer"
}
|
The introduction of electronic personal health records (EHRs) enables nationwide information exchange and curation among different health-care systems. However, current EHR systems are centrally orchestrated, which could potentially lead to a single point of failure.
|
Many societies and cultures perceive that sexually transmitted diseases (STDs) only affect “others” who follow “sinful” lifestyles and practices. Therefore, discrimination and stigmatization are common outcomes of such distorted depictions of STDs, especially related to the acquired immunodeficiency syndrome/human immunodeficiency virus. On a global level, the fear of stigmatization prohibits effective disease identification, prevention, care, and treatment adherence, negatively influencing many communities’ quality of life.[1]
_Digital Object Identifier 10.1109/MC.2022.3142151_

The introduction of electronic personal health record (EHR) systems is a first step toward addressing these issues, especially for illness-related stigmatization. The purpose of EHR systems is to provide information exchange and curation among different national health-care systems, and support diagnosis quality, safety monitoring, or medical research. Although EHR systems promise substantial benefits for improved care and reduced health-care costs, there are unforeseen difficulties related to privacy and limited diagnosis-related functionalities. The essential barriers that limit the usability and application of the EHR systems are:
- utilization of central control and orchestration, which could potentially lead to a single point of failure, exposure of private information, and hindered interoperability
- storage of personal data in the custody of a single institution, hindering data privacy and limiting knowledge extraction and processing
- limited integration with available personal medical Internet of Things (IoT) devices, such as heart rate monitors or blood sugar sensors
- the inability to utilize a highly heterogeneous set of computing resources.
Consequently, EHRs do not allow intelligence to be injected into the process of medical data analysis. This is primarily due to a lack of basic approaches for supporting decentralized management and transparent integration with medical IoT devices.

Recently, the so-called computing continuum[2] that federates cloud services with emerging fog and edge resources presented a relevant computing alternative for supporting next-generation EHR systems. The computing continuum provides a vast heterogeneity of computational and communication resources, which can allow low-latency communication for fast decision making close to data sources, and substantial computing resources for a complex data analysis. The distributed nature of the computing continuum further embraces the utilization of machine learning (ML) for the creation of intelligent systems and their federation through distributed ledger technologies (DLTs). It therefore promises [...] IoT data, coming from various personal medical sensors. These technologies promise to be the next disruptive ones and will eventually enable intelligently controlled health-care systems with better societal involvement. The execution of ML over the computing continuum and integration with DLTs can make an EHR system personalized, enable transparent integration of IoT devices, and urge active participation of patients and medical professionals in the health-care system. Ultimately, it opens the possibility for training ML algorithms for predictive analysis with medical data belonging to one patient and using the trained models to aid another patient’s treatment. We therefore discuss in this article intelligent computational approaches for an anonymous analysis of medical information across the computing continuum by creating a decentralized ML overlay for model training in a DLT network.
To support such a decentralized EHR system, we explore the possibility of

- creating a decentralized ML network with multiparty computations for secure nonproprietary model training
- a cross-patient predictive analysis for therapy assessment and research with data acquired from medical IoT devices
- a transparent orchestration of the EHR system over heterogeneous resources across the computing continuum.
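One common building block for such multiparty computation is additive secret sharing: each institution splits its private update into random shares, so that only the sum, never an individual contribution, is ever reconstructed. A toy sketch; the prime modulus, three-party setup, and integer-scaled gradient values are illustrative assumptions, not the article's protocol:

```python
import random

P = 2**61 - 1  # prime modulus for the illustrative finite field

def share(secret: int, n_parties: int):
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Three hospitals secretly contribute gradient components (scaled ints).
secrets = [120, 305, 75]
all_shares = [share(s, 3) for s in secrets]

# Each party sums the shares it received; only these share-sums are
# exchanged, so no single secret is ever revealed.
partial_sums = [sum(col) % P for col in zip(*all_shares)]
print(reconstruct(partial_sums))   # 500 = 120 + 305 + 75
```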
As a proof of concept, we propose a decentralized conceptual EHR system that uses ML models for anonymous predictive analysis and evaluate it on a real-world computing continuum.
###### RELATED WORK
DLTs and decentralized ML for health-care applications
Support for future decentralized plat
forms for medical data analysis with
autonomous practices is being researched
extensively. In the literature, Kuo and
Ohno-Machado propose a cross-insti
tutional, health-care predictive model
for quality-improvement initiatives
by predicting the risk of readmission
of a group of patients using data from
multiple institutions.[3] This approach
sets the groundwork for developing
privacy-preserving ML technology in
a DLT. Furthermore, Mettler provides
an initial medical data management
approach through DLT, empower
ing patients and fighting counterfeit
drugs in the pharmaceutical indus
try.[4] Recently, a feasibility study presented in the work of Sheller et al. explores the idea of applying federated learning for secure multi-institutional data analysis with multiple local models coordinated by a centralized aggregation server.[5] Although the concept is promising, it still requires a centralized model to gather all the updates, which is prone to failures and centralized decisions.
Roehrs et al. propose a novel DLT
based architecture, OmniPHR, for a
distributed and interoperable EHR
architecture.[6] The approach allows
for a unified viewpoint of the per
sonal medical information between
patients and health-care providers.
Furthermore, Roehrs et al. describe
a prototype implementation of the
OmniPHR architectural model and
present an evaluation of the scalability of the approach in terms of integrated, production-ready databases with information from 40,000 patients.[7]
The industry has also explored
using blockchain for private data stor
age and management. The GemOS sys
tem provides a platform for discover
ing and sharing disparate data tied to
unique identifiers, enabling connec
tions of data sources from different
systems on a common ledger and cre
ating proofs of existence with verifi
able data integrity.[8]
###### Decentralized ML across the computing continuum
Limitations in hospital infrastruc
tures’ computational capabilities pose
serious problems for deploying ML
systems in a decentralized manner.
Therefore, the computing continuum
has recently been considered as a suit
able computing infrastructure, capa
ble of meeting the conflicting require
ments of EHR systems.
More concretely, Wang et al. intro
duce a novel gradient-based training
concept for distributed ML models
without external computing services
over multiple edge resources.[9] The
edge devices train local models with
local data coming from multiple IoT
and medical devices, which are finally
aggregated on one device. The work of Kumar et al. presents Bonsai, a novel tree-based ML algorithm for efficient prediction on edge and fog devices close to the IoT devices, which maintains acceptable prediction accuracy while minimizing model size and prediction costs.[10] Furthermore,
Osia et al. present a distributed learn
ing approach that complements the
cloud for providing privacy-aware
and efficient analytics.[11] The algo
rithm divides the deep learning model
into multiple smaller ones, which can
be placed on available edge devices
while maintaining a central part of the model in the cloud. Wang et al. present the application of
a deep learning algorithm for medical
image analysis that uses fixed-point
arithmetic, which can fine-tune the
analytic algorithm based on a medical
image segment and the available com
puting resources on the device.[12]
###### TECHNOLOGICAL GAPS
Based on the related-work analysis, we identified three technological research gaps.

###### Gap 1: Centralized control of medical data and ML models
Due to privacy concerns, medical
institutions manage their data locally,
which often leads to inefficient data
propagation. This hinders the possi
bility of training ML algorithms for
predictive analysis with medical data
belonging to one patient and using the
trained models to aid another patient’s
treatment. Current attempts to integrate ML and IoT with EHR systems, such as federated learning, which enables training across multiple geographically distributed devices, have already yielded promising results.[13] The role of
federated learning for EHR systems is
twofold: 1) it allows distribution of the
training over the computing contin
uum resources and 2) it brings privacy
in combination with the multiparty
computing approach. However, even
though these algorithms are distributed, they are centrally controlled and rely on a global model, periodically updated by multiple local algorithms and potentially exposing private information. In such
an environment, malicious federated
learning actors can compromise the ML
model by mimicking a local or contrib
uting learner/model’s role. Even worse,
the central actor gathering the model updates (such as those used by Google or Amazon) may steer them toward its own interests, which may differ from those of the contributors, that is, introduce biases into the model.
To overcome the identified issues, it is essential to research permissioned DLT protocols for federating a set of ML models, with no need for centralized training and later inference, for three key benefits. First, to improve users' control of the models with a secure relation to their data (or training information). Second, to enable control of information ownership, shared further down the federation of ML models through adaptive state-transition modeling. Third, to enable anonymous sharing of parts of the ML models (sets of rules or weights) between ML systems with similar characteristics in the overlay for further improvements.
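The federated-learning setting discussed in this gap can be sketched as a weight-averaging step: each institution trains locally, and only model parameters, never raw patient records, leave the site. The following is a minimal illustration with hypothetical weights and sample counts, not the STIGMA protocol:

```python
# Sketch of federated averaging (FedAvg-style): hypothetical local model
# weights are combined without any raw patient data leaving a site.

def federated_average(local_weights, sample_counts):
    """Weighted average of per-institution model weights.

    local_weights: list of weight vectors (one per institution)
    sample_counts: number of local training samples per institution
    """
    total = sum(sample_counts)
    dim = len(local_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(local_weights, sample_counts):
        for i in range(dim):
            global_w[i] += w[i] * (n / total)
    return global_w

# Three hypothetical hospitals with two-parameter local models.
hospitals = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
counts = [100, 300, 600]
print(federated_average(hospitals, counts))  # weighted toward larger sites
```

Note that whichever party computes this average controls the global model, which is exactly the centralization risk described above.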
###### Gap 2: Constrained predictive data analysis with limited IoT integration
Training decentralized ML models for the analysis of medical information across DLTs also faces serious challenges. One essential issue is to transparently classify one anonymous patient's medical data among various others through a decentralized collaboration of a set of local ML models (with similar characteristics/algorithms), logically located at different medical institutions. In addition, it is difficult to correlate and analyze millions of anonymous, noncontextualized medical records produced by various devices, distributed into different locations with different attributes. In this scenario, it is difficult to determine whether the data come from different patients (or even from different sensors belonging to the same patient), affecting predictive analysis. Furthermore, the feasibility of training decentralized ML models for medical information analysis and research, and their integration with IoT devices, has never been explored.

Therefore, it is important to explore decentralized approaches for the federation of ML training with guided analytics. The approach should address the problem of noncontextualized training data aggregation, knowledge extraction, and cognitive learning about users' medical and personal data in an anonymous manner. This could enable running ML predictive analysis algorithms on noncontextualized and anonymous medical information.

###### Gap 3: Insufficient computing resources and computationally inefficient DLT and ML solutions for edge training
DLT and ML approaches are known to be computationally demanding.[14] However, in large-scale heterogeneous and fragmented environments where patient data span geographical boundaries, the important limiting factors are insufficient computational resources and technical expertise. Concretely, hospitals do not own a vast infrastructure, and the utilization of high-performance computing services is not always feasible. Furthermore, the employment of local hospital infrastructure for decentralized ML training can lead to reduced accuracy of the ML model and errors during predictive analysis, especially if the medical data for training are generated by IoT devices.

Therefore, we should address scalable approaches for efficient model updates in an ML overlay with an increasing number of learners/algorithms distributed across various physical locations. In practice, we should approach the utilization of resources across the computing continuum from various angles, such as the latency for reaching a consensus and the transaction validation time (for example, for a model update). It is therefore essential to explore whether we can sacrifice ML model accuracy to allow for execution on computing continuum resources connected directly to personal medical IoT devices (such as heart rate or blood saturation monitors) or other medical equipment, which might not be directly accessible over the network.
###### DECENTRALIZED EHR SYSTEM ARCHITECTURE
We propose a conceptual EHR system architecture, named _STIGMA_ (see Figure 1). With the STIGMA system, medical data always stay at medical institutions and form local ML models, but only after performing anonymization. Medical professionals interact with the system through multimodal diagnosis equipment, enriched with sensor data from personal IoT devices. Medical institutions can register in the STIGMA EHR system by utilizing strictly defined protocols for interoperability, as defined by Roehrs et al.[6] The STIGMA EHR system performs in the following manner:

###### ›› A data analysis instance receives a direct multimodal data stream (magnetic resonance imaging, computed tomography scans, IoT heart rate sensors, electroencephalogram sensors, and so on) of medical procedures.
###### ›› Afterward, the data stream is analyzed on the available computing continuum resources.
###### ›› Data analysis filters anonymize the data stream, which is then sent to the model training instance.
###### ›› The model training instance applies ML algorithms to train a model on the available computing continuum resources.
###### ›› After the model is trained, the model training instance utilizes the distributed ledger to register the model (only as a pointer, without exposing the data) and checks for other suitable registered models.
###### ›› Thereafter, if suitable models are found in the distributed ledger registry, model training contacts the model owners directly, namely, other medical institutions, to receive rolling updates or exchange (share) relevant data for model improvement.
###### ›› The STIGMA EHR system can perform the rolling updates and the data sharing only after a consensus (by voting) is reached among all the medical institutions, federated by the distributed ledger.
###### ›› The information is then used for improving the model, which is used to provide real-time support for diagnosis and therapy assessment and is again registered in the distributed ledger.

All of the aforementioned steps are continuously managed and synchronized in a decentralized manner by the STIGMA EHR network. It logically forms a peer-to-peer group that maintains records on all the transactions (model updates, inference performance data, and accuracy). The STIGMA EHR network also contains information on the available computing continuum resources (in terms of computing power and available ML models) at each medical institution. The EHR network, therefore, allows the federated medical institutions to confirm or reject any piece of data added to it, while no data can be deleted from it. This provides a full history of all transactions appearing on the DLT, giving EHRs a method to ensure the correctness of retrieved information.

###### ML overlays with decentralized medical data control
To support the creation of a decentralized STIGMA EHR, as depicted in Figure 1, we research a DLT-based overlay for the federation of multiple medical institutions through the following actions, directly related to the identified technological gaps.

**DLT for a decentralized federation of ML models in an overlay.** The STIGMA EHR system uses a permissioned[15] protocol to create an appropriate, configurable, and modular federating architecture addressing EHR systems' requirements for anonymous ML model updates with full control of the private data, which do not leave hospital infrastructure. The EHR system relies on scalable DLT management approaches capable of reaching a consensus with a minimal number of communication steps with a limited number of ledgers in a permissioned environment.

**Model provenance for decentralized ML.** Another important aspect of the STIGMA EHR is data provenance, a key concept for supporting ML-based analysis over decentralized networks, especially when data from IoT devices are used. Data provenance enables efficient access approaches that allow all the participating ML models in the ML overlay to maintain a copy of the DLT and ensure the availability of the same version of truth. The DLT contains only the transaction logs of the ML model updates, while the underlying data remain in hospital computing infrastructures, to reduce replication and network throughput.

**Data immutability and secure propagation of decentralized ML model updates with multiparty computation.** This aspect addresses the immutability and propagation of ML models without violating data privacy during updates. It is used to publish, update, and activate anonymous information exchange among the ML algorithms during model training across the overlay. Unfortunately, current technologies are computationally inefficient, thus not allowing straightforward utilization of DLTs for complex data sets. We therefore utilize the concept of multiparty computation[16] to enable computation on data from different providers. The other participating actors gain no additional information about each other's inputs, except what they learn from the ML model's collaborative output, that is, decoupling the model from the training data.

###### Predictive data analysis with IoT integration
Another important enabler for deployment of the STIGMA EHR system is cross-medical data analysis for improved diagnosis and therapy assessment through distributed ML with IoT medical device integration. Therefore, we rely on the following solutions.

**Predictive analysis with decentralized ML.** The STIGMA EHR system utilizes approaches for automated ML reasoning with distributed non- and cross-referenced data received from professional medical equipment and personal medical IoT devices. This process reduces uncertainty and mistrust in the information and its sources in potentially unpredictable environments. The approach enables shared knowledge and improves data acquisition from IoT devices.
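The multiparty computation used in this section for privacy-preserving model-update propagation can be illustrated with additive secret sharing: each institution splits its contribution into random shares, so the aggregate is recoverable while no single party sees another's input. A toy sketch under these assumptions (integer-encoded updates, arithmetic modulo a prime; not a production MPC protocol):

```python
import random

PRIME = 2**61 - 1  # field modulus for additive secret sharing (toy choice)

def share(secret, n_parties):
    """Split an integer secret into n additive shares mod PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def aggregate(all_shares):
    """Each party sums the shares it received; combining the partial sums
    reveals only the total of all secrets, never any individual input."""
    partial = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partial) % PRIME

# Three institutions secret-share their (integer-encoded) model updates.
updates = [42, 17, 99]
shares = [share(u, 3) for u in updates]
print(aggregate(shares) == sum(updates) % PRIME)  # True
```

Real MPC protocols additionally handle malicious parties and fixed-point encoding of model weights, which this sketch omits.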
###### Scalable ML and a consensus on the computing continuum
Multiple consensus approaches, such as Paxos and Raft,[17] agree on a single majority value (that is, a state transition) with reduced overhead and power requirements. Unfortunately, they still require
the large computational resources that
a resource-constrained hospital infra
structure cannot provide, thus making
deployment of the STIGMA EHR sys
tem challenging. Therefore, we modify
the current approaches to make them
suitable for execution on computing
continuum devices. To achieve this,
the STIGMA EHR system assesses the
complexity of ML algorithms and the
training data structure to select suit
able resources in the computing contin
uum with higher computational capa
bilities, close to where the data reside in
terms of network distance. Then, based
on the available hospital computational
infrastructure, a decision is made about
where to conduct the training, and the
accuracy level is identified.
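The selection step described above (match the model's demands against nearby continuum resources) can be expressed as a simple scoring function. The attribute names and capacity values below are illustrative assumptions, not measurements from the STIGMA system:

```python
# Toy resource selector: prefer resources that are computationally capable
# enough for the model and close (in network distance) to the data.

def select_resource(resources, required_gflops):
    """Pick the closest resource whose capacity covers the model's needs;
    fall back to the most capable one otherwise."""
    capable = [r for r in resources if r["gflops"] >= required_gflops]
    if capable:
        return min(capable, key=lambda r: r["latency_ms"])
    return max(resources, key=lambda r: r["gflops"])

continuum = [
    {"name": "edge-rpi4", "gflops": 10, "latency_ms": 1},
    {"name": "fog-exoscale", "gflops": 100, "latency_ms": 12},
    {"name": "cloud-aws", "gflops": 1000, "latency_ms": 40},
]
print(select_resource(continuum, 50)["name"])    # fog-exoscale
print(select_resource(continuum, 5000)["name"])  # cloud-aws (fallback)
```

A production scheduler would also weigh current utilization and the accuracy level the chosen device can sustain, as discussed above.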
###### REAL TESTBED EVALUATION
To validate the proposed conceptual
EHR system, we deployed DLT-based
ML systems on a real-world experimen
tal testbed. We emulated the computing
infrastructure of medical institutions
by using adequate cloud, fog, and edge
resources, as described in the “Physical
Testbed” section. For the evaluation, we
implemented the Paxos three-phase
commit protocol, where each institu
tion in the DLT network kept track of
current changes. To allow for execution
on multiple heterogeneous systems, we
developed a Paxos protocol in Java 11.0.
###### Physical testbed
We utilized Carinthian Computing Con
tinuum (C[3]),[18] a real computing contin
uum testbed, located at the University
of Klagenfurt, to emulate a network of
multiple medical institutions with lim
ited computing capacities. C[3] encom
passes heterogeneous resources, pro
vided as containers or virtual machines,
in multiple performance categories. We
have therefore identified a subset of
resources usually available in hospitals
(such as fog and private cloud infra
structures), and user-specific devices
(such as ECs composed of low-powered,
portable devices), to conduct the concep
tual evaluation (see Table 1).
Centralized computing infrastruc
tures (CCIs) consist of virtualized
instances provisioned on demand from
Amazon Web Services. For evaluation
purposes, we selected m5a.xlarge and
c5.large as general-purpose instances
powered by an AMD EPYC 7000 proces
sor at 2.5 GHz and an Intel Xeon Plati
num 8000 series processor at 3.6 GHz,
respectively.
A fog cluster (FC) comprises reso
urces from the local Exoscale (ES) cloud
provider, which enables communication
latency of ≤12 ms and a maximal band
width of ≤10 Gb/s. For evaluation pur
poses, we identified medium and large
ES instances, as described in Table 1.
An edge cluster (EC) includes five
NVIDIA Jetson Nano (NJN) and 32 Rasp
berry Pi-4 (RPi4) single-board comput
ers. We installed a Raspberry Pi operat
ing system (version 2020-05-27) on the
RPi devices. We used Linux for Tegra
for the NJN resources. We utilized a
managed, 48-port, three-layer HP Aruba switch to interconnect all resources in the EC. The switch supports 1 Gb/s per port, with a latency of 3.8 µs and an aggregate data transfer rate of 104 Gb/s.

**TABLE 1. The C[3] testbed configuration** (m5a.xlarge and c5.large form the CCI; the Exoscale instances form the FC).

| Instance/device | m5a.xlarge | c5.large | Exoscale large | Exoscale medium | EGS | NJN | RPi4 |
|---|---|---|---|---|---|---|---|
| CPU type | AMD EPYC 7000 | Intel Xeon Platinum 8180 | Intel Xeon Platinum 8180 | Intel Xeon Platinum 8180 | AMD Ryzen 2920 | Tegra X1 and ARM Cortex A57 | ARM Cortex A72 |
| CPU clock (GHz) | 2.5 | 3.6 | 3.6 | 3.6 | 3.5 | 1.43 | 1.5 |
| Memory (GB) | 32 | 8 | 8 | 4 | 32 | 4 | 4 |
| Storage (GB) | 120 | 120 | 120 | 120 | 1,000 | 64 | 64 |
| BW (Mb/s) | 27 | 26 | 65 | 65 | 813 | 450 | 800 |
The EC is managed by the Edge Gateway
System (EGS), based on a 12-core AMD
Ryzen Threadripper 2920X processor
at 3.5 GHz with 32 GB of random-ac
cess memory, which is easily available
in many medical and business envi
ronments. For cases when there are not
sufficient resources available at the EC,
the EGS is responsible for partially off
loading the execution of the compute
processes to other computing contin
uum resources, including FC or CCI.
###### Experimental design
We designed the following five sets of experiments according to the characteristics of the conceptual decentralized EHR system and averaged the results over 10 runs for statistical significance:
1. The DLT network initialization
time evaluates the initializa
tion time of the EHR network,
encompassing multiple medical
institutions in the range of {3,
5, 7, 10}. The medical institu
tion that initializes the EHR
network is considered the
first leader, where the leader
interval is 30 ms and the delay
between voting rounds is
100 ms. The medical institu
tions join the EHR network in
regular intervals of 10 s.
2. The consensus time evaluates
the time needed for the net
work encompassing all medical
institutions in the range of {3,
5, 7, 10} to reach a consensus
on a single value. Similar to
the previous experiment, the
leader interval is 30 ms and the
delay between voting rounds
is 100 ms. The consensus time
is measured only after the net
work is fully initialized with all
participating institutions.
3. The ML training time evalu
ates the training process of a
convolutional neural network
for object detection with med
ical multimodal data from
laparoscopic procedures[19]
limited to 500 samples. The
convolutional network has
three layers, with a kernel size
in the range of {32, 64, 128} and
an accuracy of 97%. The ML
training time also included the
overhead required for transfer
ring the trained model to the
device where inference will be
performed.
4. Edge accuracy evaluates the
tradeoff between accuracy and
training time for the afore
mentioned convolutional
neural network on the com
puting continuum devices.
This experiment compares the
execution time for training the
neural network with an aver
age accuracy of 85 and 70%,
respectively.
5. The data transfer time mea
sures the time needed for
transfer of 1 MB of raw data
between an IoT device, con
nected to the C[3] infrastructure,
and the corresponding des
tination resource. The trans
fer time was measured using
the Prometheus monitoring
system.
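Measurements such as those above are typically reported as a mean with a relative standard deviation over the repeated runs. A generic timing harness of this kind might look as follows (the timed workload here is a placeholder, not one of the actual experiments):

```python
import statistics
import time

def benchmark(fn, runs=10):
    """Time fn over several runs; report the mean and the standard
    deviation as a percentage of the mean (the workload is a stand-in)."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    mean = statistics.mean(samples)
    rel_stdev = statistics.stdev(samples) / mean * 100  # in percent
    return mean, rel_stdev

mean_s, rel = benchmark(lambda: sum(range(100_000)))
print(f"mean: {mean_s * 1e3:.2f} ms, stdev: {rel:.0f}% of mean")
```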
###### Results
Figure 2(a) shows that current consen
sus algorithms have limited scalability,
considering network initialization. We
observe that initialization of the EHR
network with 10 medical institutions
**FIGURE 2. A consensus evaluation of the STIGMA EHR system. (a) The DLT network initialization time and (b) the DLT consensus time.**
can take up to 28 times more time com
pared to the small network of three
institutions, which limits the number
of participating institutions in a single
decentralized EHR system. However, the
standard deviation ranges from 29% for
10 participating institutions to 58% for
three. The reason for the scalability lim
itation is that all consensus messages
must be relayed through a single coordi
nator, which, although not a single point
of failure, is a potential performance
bottleneck. This is evident during net
work initialization for a large number of
institutions. However, this experiment
proves that up to 10 medical institutions
can be federated in a single overlay with
minimal initialization overhead.
Furthermore, in Figure 2(b), we ob
serve a similar trend related to the
time needed to reach a consensus. The
EHR network composed of 10 institu
tions required nearly 19 times more
time to reach a consensus compared to
the small network of three institutions.
However, we observe a much lower
standard deviation, which ranges from
18% for seven participating institutions
to 31% for three. Furthermore, compared to the proof-of-work approach implemented in the blockchain protocol, our approach is more efficient in terms of computing resources.
Figure 3(a) evaluates the suitabil
ity of the most commonly available
resources for performing ML train
ing over multimodal medical data. We
observe that specialized devices for ML,
such as the NJN device, are very suit
able for performing these tasks and can
be easily afforded by medical institu
tions. In addition, available EC devices,
extended with other resources from the
computing continuum, can achieve very
low model-training times, making them
suitable for supporting decentralized
EHR systems, especially in cases when
the system utilization is low. The reason
for this is that resources across the com
puting continuum can meet the conflict
ing requirements of EHR systems (such
as close proximity to the data source and
high-performance analysis) due to their
high heterogeneity.
Figure 3(b) evaluates the relation
ship between accuracy of the ML
model and execution time on the com
puting continuum. We observe that by
reducing the accuracy from 97 to 85%,
we can reduce the execution time
**FIGURE 3. ML training in the STIGMA EHR system. AWS: Amazon Web Services. (a) The average training time needed to achieve 97% accuracy and (b) the average training time needed for lower model accuracy.**
**FIGURE 4. The effective time for transferring 1 MB of data. AWS: Amazon Web Services.**
by more than 60%. Furthermore, by
reducing the accuracy to 70%, we can
reduce the execution time on the con
strained devices by 90%. However, the
tradeoff between accuracy and execu
tion time depends on requirements of
the EHR system and the specific med
ical procedure. In general, this allows
for various proximity computing
techniques to be applied to improve
the performance of ML training with
out any significant accuracy penalty.
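The tradeoff above can be framed as choosing the fastest training configuration that still satisfies the procedure's accuracy requirement. The relative-time factors below mirror the reductions reported for this evaluation but are illustrative, not measured constants:

```python
# Illustrative accuracy/time profiles based on the reductions reported
# above (time relative to full-accuracy training on the same device).
PROFILES = [
    {"accuracy": 0.97, "relative_time": 1.0},
    {"accuracy": 0.85, "relative_time": 0.4},  # >60% faster
    {"accuracy": 0.70, "relative_time": 0.1},  # 90% faster
]

def cheapest_profile(min_accuracy):
    """Return the fastest profile that still meets the required accuracy."""
    feasible = [p for p in PROFILES if p["accuracy"] >= min_accuracy]
    return min(feasible, key=lambda p: p["relative_time"])

print(cheapest_profile(0.80))  # {'accuracy': 0.85, 'relative_time': 0.4}
```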
Finally, Figure 4 analyzes the net
work performance of raw medical
data exchanges among the different
resources available across the comput
ing continuum. We observe that the
RPi4 and EGS devices can each achieve
very low data transfer times compared
to the CCI and FC instances, which
could significantly reduce any com
puting performance advantage the CCI
alone can provide.
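A back-of-the-envelope model for such transfer times divides the payload size by the link bandwidth from Table 1; it ignores protocol overhead and latency, so it is a lower bound:

```python
def transfer_time_ms(payload_mb, bandwidth_mbps):
    """Lower-bound time to move payload_mb megabytes over a link of
    bandwidth_mbps megabits per second (protocol overhead ignored)."""
    return payload_mb * 8 / bandwidth_mbps * 1000

# Bandwidths (Mb/s) taken from Table 1.
links = {"m5a.xlarge": 27, "Exoscale large": 65, "EGS": 813, "RPi4": 800}
for name, bw in links.items():
    print(f"{name}: {transfer_time_ms(1, bw):.1f} ms for 1 MB")
```

Even this crude estimate reproduces the ordering in Figure 4: the EGS and RPi4 links move 1 MB an order of magnitude faster than the CCI instances.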
To unleash the potential of decentralized EHR systems and their devices, in this article we explored the need for the creation of an ecosystem that supports the complete lifecycle of medical data sharing and processing. The presented approach enables knowledge extraction for improved medical diagnosis, therapy, and stigma reduction on top of decentralized heterogeneous infrastructures as a part of the computing continuum and IoT environments. We therefore identified critical research gaps and, based on the identified considerations, defined concrete research and technical actions required for their implementation. Finally, we discussed the implementation of STIGMA, a conceptual, decentralized EHR system, as a proof of concept. The system yielded promising results in terms of scalability, which indicate that up to seven different medical institutions can be integrated in a decentralized overlay with a consensus latency of 8 s or lower. In terms of ML learning time, we observed that edge devices can perform similarly to cloud resources, and some of them can even reduce the training time by 60% compared to the cloud.

Based on the evaluation results of the conceptual STIGMA EHR system, we conclude that decentralized ML over the computing continuum for medical data analysis can be achieved through the utilization of scalable consensus algorithms over a permissioned DLT network with transparent integration of personal IoT devices. In the future, we plan to explore further how we can identify the optimal tradeoff between training accuracy and execution time on low-performance devices across the computing continuum.

###### REFERENCES
1. S. E. Stutterheim et al., “Patient and provider perspectives on HIV and HIV-related stigma in Dutch health care settings,” AIDS Patient Care STDs, vol. 28, no. 12, pp. 652–665, 2014, doi: 10.1089/apc.2014.0226.
2. P. Beckman et al., “Harnessing the computing continuum for programming our world,” Fog Comput., Theory Pract., pp. 215–230, Apr. 2020, doi: 10.1002/9781119551713.ch7.
3. T.-T. Kuo and L. Ohno-Machado, “ModelChain: Decentralized privacy-preserving healthcare predictive modeling framework on private blockchain networks,” 2018, arXiv:1802.01746.
4. M. Mettler, “Blockchain technology in healthcare: The revolution starts here,” in Proc. 2016 IEEE 18th Int. Conf. e-Health Netw., Appl. Services (Healthcom), pp. 1–3, doi: 10.1109/HealthCom.2016.7749510.
5. M. J. Sheller, G. A. Reina, B. Edwards, J. Martin, and S. Bakas, “Multi-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation,” in Proc. MICCAI Brainlesion Workshop, 2018, pp. 92–104.
6. A. Roehrs, C. A. Da Costa, and R. da
Rosa Righi, “OmniPHR: A distrib
uted architecture model to integrate
personal health records,” J. Biomed.
_Inf., vol. 71, pp. 70–81, Jul. 2017, doi:_
10.1016/j.jbi.2017.05.012.
7. A. Roehrs, C. A. da Costa, R. da Rosa
Righi, V. F. da Silva, J. R. Goldim,
and D. C. Schmidt, “Analyzing the
performance of a blockchain-based
personal health record implemen
tation,” J. Biomed. Inf., vol. 92, Art. no. 103140, Apr. 2019, doi: 10.1016/j.
jbi.2019.103140.
8. “The GemOS System,” Gem, 2021.
https://enterprise.gem.co/gemos/
(Accessed: May 5, 2021).
9. S. Wang et al., “When edge meets
learning: Adaptive control for
resource-constrained distributed
machine learning,” in Proc. IEEE
_INFOCOM 2018-IEEE Conf. Comput._
_Commun., pp. 63–71, doi: 10.1109/_
INFOCOM.2018.8486403.
10. A. Kumar, S. Goyal, and M. Varma,
“Resource-efficient machine learn
ing in 2 kb ram for the internet
of things,” in Proc. 34th Int. Conf.
_Mach. Learn., 2017, vol. 70, pp._
1935–1944.
11. S. A. Osia, A. S. Shamsabadi, A.
Taheri, H. R. Rabiee, and H. Haddadi,
“Private and scalable personal data
analytics using hybrid edge-to-cloud
deep learning,” Computer, vol. 51,
no. 5, pp. 42–49, 2018, doi: 10.1109/
MC.2018.2381113.
12. G. Wang et al., “Interactive medical
image segmentation using deep
learning with image-specific fine
tuning,” IEEE Trans. Med. Imag., vol.
37, no. 7, pp. 1562–1573, 2018, doi:
10.1109/TMI.2018.2791721.
13. T. S. Brisimi, R. Chen, T. Mela,
and W. Shi, “Federated learning of
predictive models from federated
electronic health records,” Int. J. Med.
_Inf., vol. 112, pp. 59–67, Apr. 2018, doi:_
10.1016/j.ijmedinf.2018.01.007.
14. T.-T. Kuo, H.-E. Kim, and L. Ohno-
Machado, “Blockchain distributed
ledger technologies for biomedical
and health care applications,” J.
_Amer. Med. Inf. Assoc., vol. 24, no._
6, pp. 1211–1220, 2017, doi: 10.1093/
jamia/ocx068.
15. E. Gaetani, L. Aniello, R. Baldoni, F.
Lombardi, A. Margheri, and V. Sas
sone, “Blockchain-based database to
ensure data integrity in cloud com
puting environments,” in Proc. Ital
_ian Conf. Cybersecurity, 2017, pp. 1–10._
16. O. Goldreich, “Secure multi-party
computation,” Weizmann Inst.
Available: https://citeseerx.ist.psu.
edu/viewdoc/download?doi=10.1.1.
11.2201&rep=rep1&type=pdf
17. D. Ongaro and J. K. Ousterhout, “In
search of an understandable con
sensus algorithm,” in Proc. USENIX
_Annu. Tech. Conf., 2014, pp. 305–319._
18. D. Kimovski, R. Mathá, J. Hammer,
N. Mehran, H. Hellwagner, and R.
Prodan, “Cloud, fog or edge: Where
to compute?” IEEE Internet Comput.,
vol. 25, no. 4, pp. 30–36, 2021, doi:
10.1109/MIC.2021.3050613.
19. A. Leibetseder, S. Kletz, K. Schoeff
mann, S. Keckstein, and J. Keckstein,
“GLENDA: Gynecologic laparoscopy
endometriosis dataset,” in Proc.
_26th Int. Conf., MultiMedia Model_
_ing (MMM), Daejeon, South Korea,_
Jan. 5–8, 2020, pp. 439–450.
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02d91005853c54564e84d0139534f3e832487c78
|
[
"Computer Science"
] | 0.861926
|
C2MP: Chebyshev chaotic map-based authentication protocol for RFID applications
|
02d91005853c54564e84d0139534f3e832487c78
|
Personal and Ubiquitous Computing
|
[
{
"authorId": "2109086281",
"name": "Zhihua Zhang"
},
{
"authorId": "49528383",
"name": "Huanwen Wang"
},
{
"authorId": "2260643",
"name": "Yanghua Gao"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Pers Ubiquitous Comput"
],
"alternate_urls": [
"https://link.springer.com/journal/volumesAndIssues/779"
],
"id": "68fd8242-3be0-4b1e-8b5d-ad0a8a02db12",
"issn": "1617-4909",
"name": "Personal and Ubiquitous Computing",
"type": "journal",
"url": "http://www.springer.com/computer/hci/journal/779"
}
|
Radio frequency identification (RFID) is a promising wireless sensor technology in the Internet of Things and can be applied for object identification. However, the security issues are still open challenges and should be addressed to achieve enhanced safeguard. Existing security solutions mainly apply logical operators, hash function, and other cryptographic primitives to design authentication schemes. In this paper, we propose a Chebyshev chaotic map-based authentication protocol (C2MP) for the RFID applications. Thereinto, Chebyshev polynomial’s semigroup and chaotic properties are introduced for identity authentication and anonymous data transmission. The proposed C2MP owns the security properties including data integrity, authentication, anonymity, and session freshness. According to the BAN logic, security formal analysis is performed based on the messages formalization, initial assumptions, anticipant goals, and logic verification. It indicates that the proposed C2MP is suitable for universal RFID applications.
|
DOI 10.1007/s00779-015-0876-6
ORIGINAL ARTICLE
# C2MP: Chebyshev chaotic map-based authentication protocol for RFID applications
Zhihua Zhang¹ · Huanwen Wang¹ · Yanghua Gao¹
Received: 15 December 2014 / Accepted: 1 May 2015 / Published online: 1 September 2015
© The Author(s) 2015. This article is published with open access at Springerlink.com
Abstract Radio frequency identification (RFID) is a
promising wireless sensor technology in the Internet of
Things and can be applied for object identification. However,
the security issues are still open challenges and should be
addressed to achieve enhanced safeguard. Existing security
solutions mainly apply logical operators, hash function, and
other cryptographic primitives to design authentication
schemes. In this paper, we propose a Chebyshev chaotic
map-based authentication protocol (C2MP) for the RFID
applications. Thereinto, Chebyshev polynomial’s semigroup
and chaotic properties are introduced for identity authentication and anonymous data transmission. The proposed
C2MP owns the security properties including data integrity,
authentication, anonymity, and session freshness. According
to the BAN logic, security formal analysis is performed
based on the messages formalization, initial assumptions,
anticipant goals, and logic verification. It indicates that the
proposed C2MP is suitable for universal RFID applications.
Keywords Radio frequency identification (RFID) · Authentication · Chebyshev chaotic map · Protocol · Security
### 1 Introduction
Radio frequency identification (RFID) is a promising
wireless sensor technology in the Internet of Things (IoT)
and can be applied for object identification in various
✉ Zhihua Zhang
yhgao633@sohu.com

1 China Tobacco Zhejiang Industrial Co., Ltd, Hangzhou, China
applications such as supply chain, logistics, and asset
management. Due to the open wireless communication
channels, the reader–tag air interface is suffering from
several security threats and attacks [1, 2]. Consequently,
security issues become key concerns with the increasing
popularity of RFID systems. It is necessary to propose an
authentication scheme for security protection in the RFID
applications.
In RFID systems, readers are deployed for distributed
tag data acquirement, collection and extraction in wireless
radio environments. The open environments during the
system operations bring serious security challenges, because tags carry sensitive data in a wide variety of applications, from transportation and logistics to asset management [3, 4]. RFID systems therefore differ from traditional wireless systems: they face more insecure situations and may be subject to more attacks motivated by commercial gain.
Several security solutions have been proposed to
address potential security problems in RFID systems,
including physical mechanisms, authentication protocols,
access control protocols, and encryption algorithms.
Thereinto, authentication protocols are the principal
schemes that own ubiquitous applicability. There are
three main categories of authentication protocols
according to the weight of cryptographic primitives [13].
Concretely, the ultra-lightweight protocols mainly apply
the bitwise logical operators and pseudo-random number
generator (PRNG) to achieve safeguard [5–7]. The
lightweight protocols mainly use cyclic redundancy code
(CRC) operator, message authentication code (MAC),
and hash function to realize identity authentication [8–11]. The middleweight protocols introduce full-fledged symmetric/asymmetric encryption [e.g., elliptic
curve cryptography (ECC)] for the applications with
## 1 3
-----
higher security requirements (e.g., finance, and military)
[12–15]. However, several complicated protocols may be
limited by the tag hardware requirements such as power
consumption, storage space, and computational capacity.
Hence, it is necessary to propose a suitable authentication
scheme to achieve improved robustness, reliability and
security. Existing security schemes are mainly based on modern cryptography for different RFID applications. Recently, chaos-based encryption has become an attractive direction for addressing security issues. In particular, the Chebyshev chaotic map generates chaotic sequences with good randomness and possesses semigroup and chaotic properties, which can be exploited for identity authentication and anonymous transmission.
A sound security solution should achieve three main
security requirements in RFID applications [16]. (1)
Authentication: The readers and tags should pass the
verification by the backend database so that any illegal
reader cannot access the system for resource abuse, and
any illegal tag cannot pass the verification for information
cheat. (2) Anonymity: Both readers and tags should protect
their own identifiers during ongoing communications, and
attackers cannot obtain any sensitive information with
privacy considerations. (3) Session freshness: Each interactive session can be regarded as fresh due to the random operators; an attacker can neither correlate two communication sessions nor derive previous or subsequent interrogations from the current session.
In this work, we propose a Chebyshev chaotic map-based authentication protocol (C2MP) for RFID applications; the main contributions are as follows.
- The semi-group property of Chebyshev chaotic map is
introduced for authentication. The defined algebraic
relationships of the Chebyshev polynomials are
adopted to realize mutual trust relationship among the
legal entities.
- The chaotic property of Chebyshev chaotic map is
applied to enhance anonymous message transmission.
An attacker cannot obtain any sensitive information of
the ongoing session due to irregular message flows.
- Pseudo-random numbers are adopted to enhance the randomization and forward security of the interactions, and session freshness is achieved to resist typical attacks such as the replay attack.
The remainder of the paper is organized as follows.
Section 2 introduces the related works in RFID security.
Section 3 presents the proposed authentication protocol.
Sections 4 and 5 present the security formal analysis and
performance analysis. Finally, Sect. 6 draws a
conclusion.
### 2 Related work
Tian et al. [5] proposed an ultralightweight RFID authentication protocol with permutation (RAPP), which avoids applying the unbalanced OR and AND operations for
authentication. In the RAPP, the tags only perform the
bitwise XOR, left rotation and permutation operations.
Meanwhile, de-synchronization attacks are addressed by
the unique message transmission mode. According to the
security analysis, the RAPP satisfies the main security
properties to defend the various attacks.
Liu et al. [6] proposed a grouping-proofs-based authentication protocol (GUPA) to address the security issue of simultaneously identifying multiple readers and tags in distributed RFID systems. In the GUPA, a distributed authentication mode with independent subgrouping proofs is adopted to enhance hierarchical protection, an asymmetric denial scheme is applied to grant fault-tolerance against an illegal reader or tag, and a sequence-based odd-even alternation group subscript is presented to define a function for secret updating. The analysis indicates that the GUPA, which realizes both secure and simultaneous identification, is efficient for resource-constrained distributed RFID systems.
Liu and Ning [7] proposed a zero-knowledge authentication protocol (ZKAP) based on alternative mode for
RFID systems. In the ZKAP, dual zero-knowledge proofs
are randomly chosen to provide anonymity and mutual
authentication without revealing any sensitive identifiers.
Pseudo-random flags and access lists are employed for
quick check to ensure high efficiency and scalability. It
indicates that the ZKAP owns no obvious design defects
theoretically and is robust enough to resist the forgery,
replay, Man-in-the-Middle (MITM), and tracking attacks.
Liu et al. [8] proposed a lightweight mutual authentication protocol based on variable linear feedback shift
registers for EPC Gen2 standard systems. An application
specific integrated circuit (ASIC) implementation of the
protocol is performed with low-power consumption.
Yao et al. [9] proposed a multiple tags privacy-preserving authentication protocol (MAP) for authenticating a
batch of tags with strong privacy and high efficiency. The
MAP applies batch-type authentication pattern, and leverages the collaboration among multiple tags for accelerating
the authentication speed. Both security protection and
privacy preservation are achieved in terms of confidentiality, cloning resistance, tracking resistance, timing-based
attack resistance, and forward secrecy.
Morshed et al. [11] proposed an efficient mutual
authentication protocol by using individual secret values
for each tag. This protocol avoids complex hash operations
in the database to reduce the computation overhead. The
evaluation indicates that the protocol requires a low tag
storage, computation and communication cost for lightweight RFID applications.
Toward Chebyshev chaotic map-based authentication
protocols, Ning et al. [17] proposed an aggregated-proof
based hierarchical authentication scheme (APHA) for the
unit IoT and ubiquitous IoT. In the APHA, aggregated proofs are established for multiple targets to achieve
backward and forward anonymous data transmission; and
directed path descriptors, homomorphism functions, and
Chebyshev chaotic maps are jointly applied for mutual
authentication. Particularly, Chebyshev chaotic maps are
applied to describe the mapping relationships between the
shared secrets and the path descriptors for mutual
authentication.
In this work, we propose an RFID authentication protocol that absorbs the merits of former schemes based on lightweight bitwise operations. Compared with existing research, the proposed C2MP, built on the semigroup and chaotic properties of the Chebyshev chaotic map, differs from conventional security schemes that apply complex hash functions and cryptographic algorithms. Considering the limitations of tags, the proposed C2MP, based on algebraic and bitwise operations, is suitable for ubiquitous systems in pervasive computing environments. The combination of Chebyshev chaotic map, hash function, pseudo-random numbers, and a mutual authentication mechanism has received little attention in previous studies.
### 3 The proposed authentication protocol
3.1 System initialization
In the RFID system, there are readers, tags, and a backend
database. The communication between a reader and the
database can be regarded as a secure channel, while the
communication link between a reader and a tag is suffering
from various security attacks and threats. Assume that a reader ($R$) and a tag ($T$) own the corresponding pseudonyms $PID_R$ and $PID_T$, and that the database ($DB$) holds the information of all legal readers and tags as well as a pre-shared value $Q \equiv T_x(S) \pmod{p}$, in which $x \in \mathbb{Z}^{*}$ is a secret value, $S$ is a pre-shared value, and $p$ is a large prime. Both $T$ and $R$ hold the values $\{Q, S, p\}$. The detailed notations are introduced in Table 1.
In the system initialization, hardware and software
requirements are given as follows [13].
- Tags considered in the system are smart cards comprising an intelligent micro-processor unit (MPU),
storage units and chip operating system (COS). Assume
Table 1 Notations

| Notation | Description |
|---|---|
| $R$, $T$, $DB$ | The reader, tag, and database |
| $PID_R$, $PID_T$ | The reader's/tag's pseudonym |
| $r_R$, $r_T$ | The reader's/tag's pseudo-random numbers |
| $TID_R$, $TID'_R$ | The reader's temp pseudonyms |
| $TID_T$, $TID'_T$, $TID''_T$ | The tag's temp pseudonyms |
| $S$, $Q$ | The pre-shared values for the legal entities |
| $x$, $y$, $z$ | The random integers |
| $T_{*}(\cdot)$ | The Chebyshev polynomial |
| $H(\cdot)$ | The hash function |
| $\Vert$ | The cascade (concatenation) operator |
| $\rightarrow$ | The transition operator |
that tags have the basic crypto-operational and storage
capabilities to realize data transmission in the open
channels.
- Readers are static or mobile active devices distributed
to cover the areas where tags exit. Both the readers and
the database are not power constrained, and besides the
database is regarded as the credible entity.
- The communication channel between a reader and the
back-end database is assumed to be secure, while the
wireless channel between a reader and a tag is
vulnerable.
Note that physical destruction, such as physically removing a tag from a tagged item, is not considered, since there is no technical method to discriminate between intentional and unintentional behaviors.
Chebyshev chaotic maps are available for authentication [18, 19]. Suppose that $T_x(m)$ is a Chebyshev polynomial in $m$ of degree $x$, where $T_x(m): [-1, 1] \to [-1, 1]$ is defined as follows:

$$T_x(m) = \cos(x \cdot \arccos(m)).$$

The Chebyshev polynomials satisfy the following relationships:

$$T_0(m) = 1,$$
$$T_1(m) = m,$$
$$T_x(m) = \cos(x \cdot \arccos(m)), \quad (x \geq 2).$$

Define the degrees $\{x_1, x_2\}$ as positive integer numbers. The Chebyshev polynomials $T_{x_1}(m)$ and $T_{x_2}(m)$ ($m \in [-1, 1]$) are assigned with the semigroup and chaotic properties:

$$T_x(m) \equiv 2mT_{x-1}(m) - T_{x-2}(m) \pmod{p}, \quad (x \geq 2),$$
$$T_{x_1}(T_{x_2}(m)) = T_{x_1 x_2}(m) = T_{x_2}(T_{x_1}(m)).$$
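As a sanity check, the recurrence and the semigroup identity can be verified numerically. This sketch is not from the paper; the prime, seed value, and degrees are illustrative assumptions.

```python
# Sketch: Chebyshev polynomials over Z_p via the recurrence
# T_n(m) = 2*m*T_{n-1}(m) - T_{n-2}(m) (mod p), plus a numeric check of
# the semigroup property T_a(T_b(m)) = T_b(T_a(m)) = T_{a*b}(m) (mod p).
# All concrete values here are illustrative, not from the protocol.

def chebyshev_mod(n: int, m: int, p: int) -> int:
    """Evaluate the degree-n Chebyshev polynomial T_n(m) mod p."""
    if n == 0:
        return 1 % p
    t_prev, t_cur = 1 % p, m % p          # T_0(m), T_1(m)
    for _ in range(2, n + 1):
        t_prev, t_cur = t_cur, (2 * m * t_cur - t_prev) % p
    return t_cur

p = 10007                                  # small illustrative prime
S = 1234                                   # illustrative pre-shared value
a, b = 17, 29                              # illustrative degrees

lhs = chebyshev_mod(a, chebyshev_mod(b, S, p), p)   # T_a(T_b(S)) mod p
rhs = chebyshev_mod(b, chebyshev_mod(a, S, p), p)   # T_b(T_a(S)) mod p
assert lhs == rhs == chebyshev_mod(a * b, S, p)     # semigroup property
```

The semigroup identity holds over the integers as a polynomial identity, so it carries over to arithmetic modulo $p$, which is what the protocol exploits.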
In the trust model, DB is the only entity trusted by all the readers and tags. There are no other direct trust relationships
between readers and tags. Thereinto, a reader is assigned
with default access authority on a set of tags.
3.2 The protocol descriptions
An interaction among {R, T, DB} is introduced to describe
the protocol process. Figure 1 shows the proposed Chebyshev chaotic map-based authentication protocol, and the
main message exchanges among R, T, and DB are as
follows.
3.2.1 Challenge–response between a reader and a tag
The reader R generates a pseudo-random number rR and
transmits rR to T as an access challenge to launch a new
session. Upon receiving the message, T first generates a
pseudo-random number rT, and a random integer z. Thereafter, T extracts the pre-shared values {Q, S} and its
pseudonym PIDT to compute the authentication operators
{AT, BT}, a temp identifier TIDT, and a hash value HT.
$$A_T = T_z(S) \pmod{p},$$
$$B_T = T_z(Q) \pmod{p},$$
$$TID_T = PID_T \oplus H(B_T \| r_T),$$
$$H_T = H(TID_T \| B_T \| r_R).$$
T transmits the cascade messages $r_T \| A_T \| TID_T \| H_T$ to R as a response. Upon receiving the messages, R also
generates a random integer y. Afterward, R computes its
authentication operators {AR, BR}, a temp identifier TIDR,
and a hash value HR.
$$A_R = T_y(S) \pmod{p},$$
$$B_R = T_y(Q) \pmod{p},$$
$$TID_R = PID_R \oplus H(B_R \| r_R),$$
$$H_R = H(TID_R \| B_R \| r_T).$$
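A minimal sketch of the tag's side of this exchange, with SHA-256 standing in for $H(\cdot)$, XOR as the wrapping operator, and byte concatenation as the cascade; all parameter sizes and encodings are assumptions, not the paper's reference implementation.

```python
# Sketch of the tag's response in step 3.2.1. SHA-256 plays the role of
# H(.), byte concatenation the role of ||; sizes/encodings are assumed.
import hashlib
import secrets

def T(n, m, p):
    """Chebyshev polynomial T_n(m) mod p via the recurrence."""
    if n == 0:
        return 1 % p
    prev, cur = 1 % p, m % p
    for _ in range(2, n + 1):
        prev, cur = cur, (2 * m * cur - prev) % p
    return cur

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

p, S, x = 10007, 1234, 57                  # illustrative system secrets
Q = T(x, S, p)                             # pre-shared Q = T_x(S) mod p
PID_T = hashlib.sha256(b"tag-0").digest()  # illustrative tag pseudonym

r_R = secrets.token_bytes(16)              # reader's challenge
r_T = secrets.token_bytes(16)              # tag's pseudo-random number
z = secrets.randbelow(100) + 2             # tag's random degree

A_T = T(z, S, p)                           # A_T = T_z(S) mod p
B_T = T(z, Q, p)                           # B_T = T_z(Q) mod p
TID_T = xor(PID_T, H(B_T.to_bytes(8, "big"), r_T))
H_T = H(TID_T, B_T.to_bytes(8, "big"), r_R)
# The tag's response is the cascade r_T || A_T || TID_T || H_T.
```

The database, which knows $x$, can later recompute $B_T$ as $T_x(A_T) \bmod p$ and unwrap $TID_T$ back to $PID_T$.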
3.2.2 Authentication on both reader and tag
by the database
R transmits the cascade messages $r_T \| A_T \| TID_T \| H_T$ and $r_R \| A_R \| TID_R \| H_R$ to the database DB for authentication. Upon receiving the messages, DB extracts the locally stored pseudonyms $\{PID_T, PID_R\}$ to compute the values $\{B'_T, B'_R\}$ and the temp identifiers $\{TID'_T, TID'_R\}$ for T and R, respectively:

$$B'_T = T_x(A_T) \pmod{p},$$
$$B'_R = T_x(A_R) \pmod{p},$$
$$TID'_T = PID_T \oplus H(B'_T \| r_T),$$
$$TID'_R = PID_R \oplus H(B'_R \| r_R).$$

According to $Q \equiv T_x(S) \pmod{p}$, it turns out that $B'_T = B_T$ will hold:

$$B'_T = T_x(A_T) \pmod{p} = T_x(T_z(S)) \pmod{p},$$
$$B_T = T_z(Q) \pmod{p} = T_z(T_x(S)) \pmod{p}.$$

Similarly, $B'_R = B_R$ can also be obtained, since $T_y(T_x(S)) \pmod{p}$ theoretically equals $T_x(T_y(S)) \pmod{p}$.

DB checks the validity of {T, R} by computing the hash values $H(TID'_T \| B'_T \| r_R)$ and $H(TID'_R \| B'_R \| r_T)$, and compares the received values $\{H_T, H_R\}$ with the computed values. If $H_T = H(TID'_T \| B'_T \| r_R)$ and $H_R = H(TID'_R \| B'_R \| r_T)$ hold, DB will regard T and R as legal entities; otherwise, the protocol will terminate.

DB further computes a value $PID'_T$ and transmits $PID'_T$ to R:

$$PID'_T = TID'_T \oplus H(B'_R \| r_R).$$
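The equalities $B'_T = B_T$ and $B'_R = B_R$ that the database relies on can be checked numerically; the prime, seed, and degrees in this sketch are illustrative assumptions (the two degrees stand for the tag's $z$ and the reader's $y$).

```python
# Numeric check that DB's recomputed operators match the senders':
# T_x(T_d(S)) == T_d(T_x(S)) (mod p) for each secret degree d.
# The prime, seed, and degrees are illustrative values.

def T(n, m, p):
    if n == 0:
        return 1 % p
    prev, cur = 1 % p, m % p
    for _ in range(2, n + 1):
        prev, cur = cur, (2 * m * cur - prev) % p
    return cur

p, S, x = 10007, 1234, 57
Q = T(x, S, p)                    # pre-shared Q = T_x(S) mod p
for d in (29, 31):                # stands for the tag's z and reader's y
    A = T(d, S, p)                # A_T (or A_R), sent over the air
    B = T(d, Q, p)                # B_T (or B_R), held by the sender
    B_db = T(x, A, p)             # B'_T (or B'_R), recomputed by DB
    assert B_db == B              # so the hash comparison can succeed
```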
3.2.3 Authentication on the reader by the tag

Upon receiving the messages, R computes the temp identifier $TID''_T$, an authentication operator $S_R$, and a hash value $M_R$, and transmits the cascade messages $A_R \| M_R$ to T for further authentication:

$$TID''_T = PID'_T \oplus H(B_R \| r_R),$$
$$S_R = T_y(A_T) \pmod{p},$$
$$M_R = H(TID''_T \| S_R \| r_T).$$

Thereafter, T computes the value

$$S_T = T_z(A_R) \pmod{p}$$

and checks the validity of R by re-computing the hash value $H(TID_T \| S_T \| r_T)$. According to $Q \equiv T_x(S) \pmod{p}$, it turns out that $S_T = S_R$ since $T_z(T_y(S)) \pmod{p} = T_y(T_z(S)) \pmod{p}$. If $M_R = H(TID_T \| S_T \| r_T)$ holds, T will regard R as a legal reader; otherwise, the protocol will terminate.

Fig. 1 The Chebyshev chaotic map-based authentication protocol
Till now, R, T and DB have established the trusting
relationships, and DB has authenticated {R, T} as legal
entities. The Chebyshev chaotic map is applied for
authentication, and the main authentication phases can be
described as follows:
- $R \to T$: $r_R$;
- $T \to R$: $r_T \| A_T \| TID_T \| H_T$;
- $R \to DB$: $r_T \| A_T \| TID_T \| H_T$, $r_R \| A_R \| TID_R \| H_R$;
- $DB \to R$: $PID'_T$;
- $R \to T$: $A_R \| M_R$.
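The five exchanges can be traced end-to-end in a short simulation. SHA-256 stands in for $H(\cdot)$, 8-byte big-endian encoding for integers, and all concrete parameters are illustrative assumptions; variable names such as `Bp_T` denote the primed values $B'_T$, etc.

```python
# End-to-end trace of the five message exchanges listed above.
# SHA-256 stands in for H(.); all concrete values are illustrative.
import hashlib
import secrets

def T(n, m, p):                       # T_n(m) mod p via the recurrence
    if n == 0:
        return 1 % p
    prev, cur = 1 % p, m % p
    for _ in range(2, n + 1):
        prev, cur = cur, (2 * m * cur - prev) % p
    return cur

def H(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def i2b(v):                           # fixed-width integer encoding
    return v.to_bytes(8, "big")

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

p, S, x = 10007, 1234, 57             # system secrets (illustrative)
Q = T(x, S, p)                        # Q = T_x(S) mod p
PID_T = hashlib.sha256(b"tag").digest()
PID_R = hashlib.sha256(b"reader").digest()

# (1) R -> T: challenge r_R
r_R = secrets.token_bytes(16)
# (2) T -> R: r_T || A_T || TID_T || H_T
r_T, z = secrets.token_bytes(16), 29
A_T, B_T = T(z, S, p), T(z, Q, p)
TID_T = xor(PID_T, H(i2b(B_T), r_T))
H_T = H(TID_T, i2b(B_T), r_R)
# (3) R -> DB: the tag's cascade plus r_R || A_R || TID_R || H_R
y = 31
A_R, B_R = T(y, S, p), T(y, Q, p)
TID_R = xor(PID_R, H(i2b(B_R), r_R))
H_R = H(TID_R, i2b(B_R), r_T)
# DB recomputes B'_T, B'_R and authenticates both parties
Bp_T, Bp_R = T(x, A_T, p), T(x, A_R, p)
TIDp_T = xor(PID_T, H(i2b(Bp_T), r_T))
TIDp_R = xor(PID_R, H(i2b(Bp_R), r_R))
assert H_T == H(TIDp_T, i2b(Bp_T), r_R)
assert H_R == H(TIDp_R, i2b(Bp_R), r_T)
# (4) DB -> R: PID'_T = TID'_T xor H(B'_R || r_R)
PIDp_T = xor(TIDp_T, H(i2b(Bp_R), r_R))
# (5) R -> T: A_R || M_R; the tag verifies the reader
TIDpp_T = xor(PIDp_T, H(i2b(B_R), r_R))   # TID''_T, equals TID_T
S_R = T(y, A_T, p)
M_R = H(TIDpp_T, i2b(S_R), r_T)
S_T = T(z, A_R, p)
assert S_T == S_R and M_R == H(TID_T, i2b(S_T), r_T)
```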
3.3 Security properties
The proposed C2MP builds on Chebyshev polynomials and adopts authentication, anonymity, and session freshness mechanisms to enhance security protection in RFID systems.
3.3.1 Authentication
The authentication mechanism is applied to establish the
mutual trusting relationships between interactive entities.
Thereinto, the database DB can be regarded as a trusted
entity in the system. Note that the semigroup property of
the Chebyshev polynomials is introduced for authentication, and the detailed authentication includes the following
aspects:
- The database DB performs authentication on both reader R and tag T by checking whether the received values $\{H_T, H_R\}$ equal the computed hash values $H(TID'_T \| B'_T \| r_R)$ and $H(TID'_R \| B'_R \| r_T)$. According to $Q \equiv T_x(S) \pmod{p}$, it turns out that $B'_T = B_T$ and $B'_R = B_R$ will hold.
- The tag T performs authentication on the reader R by checking the consistency of the received $M_R$ against the re-computed hash value $H(TID_T \| S_T \| r_T)$. It turns out that $S_T = S_R$ since $T_z(T_y(S)) \pmod{p} = T_y(T_z(S)) \pmod{p}$ holds.
3.3.2 Anonymity
The pseudonyms $\{PID_R, PID_T\}$ are wrapped with the hash function, and the temp identifiers $\{TID_T, TID'_T, TID''_T\}$ and $\{TID_R, TID'_R\}$ are transmitted instead of the pseudonyms. This anonymous transmission mode ensures that no attacker can obtain the real identifiers during the authentication process. Moreover, the polynomial's chaotic property enhances the anonymity due to the irregular message flow.

Meanwhile, data integrity is also achieved by one-way hash functions, which guarantee that the interactive data cannot be modified during the authentication process.

- The tag's temp identifiers $\{TID_T, TID'_T, TID''_T\}$ and the reader's temp identifiers $\{TID_R, TID'_R\}$ are computed by wrapping the pseudonyms $PID_T$ and $PID_R$ with the hash values $H(B_{*} \| r_{*})$ and $H(B'_{*} \| r_{*})$.
- The values $\{H_T, H_R, M_R\}$ are computed by hashing the values $TID_T \| B_T \| r_R$, $TID_R \| B_R \| r_T$, and $TID''_T \| S_R \| r_T$, respectively.
Such hash values realize that any attacker cannot derive
the sensitive information even if it obtains the exchanged
messages via the open channels. The authentication protocol considers the channel limitations and applies lightweight hash functions in the wireless networks to realize
the trade-off of security and efficiency.
3.3.3 Session freshness
Session freshness is achieved by introducing pseudo-random numbers, which also enhance the randomization and
forward security.
- The pseudo-random numbers rR and rT are generated
by the pseudo-random number generator (PRNG) and
are used to compute the temp identifiers and hash
values such as $\{TID_{*}, TID'_{*}, H_{*}, M_R\}$.
- The random integers x, y and z are generated to determine the degree of the Chebyshev polynomial $T_{*}(\cdot)$, which is applied for further authentication.
Owing to the pseudo-random numbers, a compromise of the current session cannot be correlated with previous interactions.
### 4 Security formal analysis with BAN logic
In this section, Burrows–Abadi–Needham (i.e., BAN) logic
[20] is applied to analyze the design correctness of the
C2MP. The BAN logic is a rigorous evaluation method to
detect subtle defects for authentication protocols. The
security formal analysis focuses on belief and freshness,
and involves the following steps:
1. Formalization of the protocol messages;
2. Declaration of initial assumptions;
3. Declaration of anticipant goals;
4. Verification by logical rules and formulas.
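For reference, the four BAN-logic inference rules invoked in the verification below (the seeing rule R1, the message-meaning rule RM3, the freshness rule F1, and the jurisdiction rule J1) can be written compactly; these are the standard forms from [20], with the rule labels as used in this paper.

```latex
% BAN-logic inference rules used in the proofs (standard forms).
\begin{align*}
\text{R1 (seeing):}\quad &
  \frac{P \triangleleft (X, Y)}{P \triangleleft X} \\[4pt]
\text{RM3 (message meaning):}\quad &
  \frac{P \mid\equiv P \xleftrightarrow{\;Y\;} P', \quad
        P \triangleleft \langle X \rangle_Y}
       {P \mid\equiv P' \mid\sim X} \\[4pt]
\text{F1 (freshness):}\quad &
  \frac{P \mid\equiv \#(X)}{P \mid\equiv \#(X, Y)} \\[4pt]
\text{J1 (jurisdiction):}\quad &
  \frac{P \mid\equiv P' \Rightarrow X, \quad
        P \mid\equiv P' \mid\equiv X}
       {P \mid\equiv X}
\end{align*}
```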
The main reasoning process applies the belief postulates and definitions to determine whether the protocol goals can be derived from the initial assumptions and message exchanges. If such a derivation exists, the protocol
will be regarded as reasonable. Table 2 shows the formal notations in the BAN logic.

Table 2 The formal notations [20]

| Notation | Description |
|---|---|
| $P \mid\equiv X$ | P believes X, or P would be entitled to believe X |
| $P \triangleleft X$ | P sees X: a party has sent a message containing X to P, who can read and repeat X |
| $P \mid\sim X$ | P once said X: P sent a message including the statement X before, and P believed X when he sent the message |
| $P \Rightarrow X$ | P has jurisdiction over X: P is an authority on X and should be trusted on this matter |
| $\#(X)$ | X is fresh: X has not been sent in a message at any time before the current run of the protocol |
| $P \xleftrightarrow{X} P'$ | X is a secret known only to P and P', and trusted by them; only P and P' may use X to prove their identities to each other |
| $\langle X \rangle_Y$ | X combined with the formula Y: Y is a secret, and its presence proves the identity of whoever utters $\langle X \rangle_Y$ |
4.1 Message formalization
According to the authentication phases of the C2MP, the
formalized messages (M) delivered among R, T and DB can
be described in the following forms.
- M1 ($R \to T$): $T \triangleleft r_R$.
  T receives $r_R$ from R and can repeat $r_R$.
- M2 ($T \to R$): $R \triangleleft r_T$, $R \triangleleft A_T$, $R \triangleleft TID_T$, $R \triangleleft H_T$.
  R receives $r_T \| A_T \| TID_T \| H_T$ from T and can repeat the messages.
- M3 ($R \to DB$): $DB \triangleleft r_T$, $DB \triangleleft A_T$, $DB \triangleleft TID_T$, $DB \triangleleft H_T$, $DB \triangleleft r_R$, $DB \triangleleft A_R$, $DB \triangleleft TID_R$, $DB \triangleleft H_R$.
  DB receives $r_T \| A_T \| TID_T \| H_T$ and $r_R \| A_R \| TID_R \| H_R$ from R and can repeat the messages.
- M4 ($DB \to R$): $R \triangleleft PID'_T$.
  R receives $PID'_T$ from DB and can repeat the message.
- M5 ($R \to T$): $T \triangleleft A_R$, $T \triangleleft M_R$.
  T receives $A_R \| M_R$ from R and can repeat the messages.
4.2 Initial assumptions
The initial possessions and abilities of each participant are
defined, and the initial assumptions (IA) can be obtained
as follows.
- For T:
  - IA1.1: $T \mid\equiv T \xleftrightarrow{\{S,Q,p\}} R$,
  - IA1.2: $T \mid\equiv T \xleftrightarrow{PID_T} DB$,
  - IA1.3: $T \mid\equiv \#(r_T, z)$,
  - IA1.4: $T \mid\equiv DB \Rightarrow (PID_T, x)$.

  IA1.1: T believes that the secrets {S, Q, p} are shared with R;
  IA1.2: T believes that the pseudonym PIDT is shared with DB;
  IA1.3: T believes that the values {rT, z} are fresh and have never been sent before the current session;
  IA1.4: T believes that DB has jurisdiction over the values {PIDT, x}.
- For R:
  - IA2.1: $R \mid\equiv T \xleftrightarrow{\{S,Q,p\}} R$,
  - IA2.2: $R \mid\equiv R \xleftrightarrow{PID_R} DB$,
  - IA2.3: $R \mid\equiv \#(r_R, y)$,
  - IA2.4: $R \mid\equiv DB \Rightarrow (PID_R, x)$.

  IA2.1: R believes that the secrets {S, Q, p} are shared with T;
  IA2.2: R believes that the pseudonym PIDR is shared with DB;
  IA2.3: R believes that the values {rR, y} are fresh and have never been sent before the current session;
  IA2.4: R believes that DB has jurisdiction over the values {PIDR, x}.
- For DB:
  - IA3.1: $DB \mid\equiv T \xleftrightarrow{PID_T} DB$,
  - IA3.2: $DB \mid\equiv R \xleftrightarrow{PID_R} DB$,
  - IA3.3: $DB \mid\equiv T \Rightarrow (PID_T, z)$,
  - IA3.4: $DB \mid\equiv R \Rightarrow (PID_R, y)$.

  IA3.1: DB believes that the pseudonym PIDT is shared with T;
  IA3.2: DB believes that the pseudonym PIDR is shared with R;
  IA3.3: DB believes that T has jurisdiction over the values {PIDT, z};
  IA3.4: DB believes that R has jurisdiction over the values {PIDR, y}.
4.3 Anticipant goals
The main objectives are data belief and freshness among R, T and DB, guaranteeing that the messages come from trustable entities and were not used in former sessions. The anticipant goals (G) can be obtained as follows.

G1: $T \mid\equiv R \mid\sim PID_T$,
G2: $T \mid\equiv \#(M_R)$,
G3: $T \mid\equiv DB \xleftrightarrow{PID_R} R$,
G4: $R \mid\equiv DB \xleftrightarrow{PID_T} T$,
G5: $R \mid\equiv \#(H_T)$,
G6: $DB \mid\equiv T \mid\sim Q$,
G7: $DB \mid\equiv R \mid\sim Q$.

G1: T believes that R once sent a message including the statement PIDT;
G2: T believes that the message MR is fresh, i.e., T believes that MR has not been sent in a message at any time before the current run of the protocol;
G3: T believes the pseudonym PIDR is shared as a secret by DB and R;
G4: R believes the pseudonym PIDT is shared as a secret by DB and T;
G5: R believes that the message HT is fresh;
G6: DB believes that T once sent a message including the statement Q;
G7: DB believes that R once sent a message including the statement Q.
Thereinto, G1, G3, G4, G6 and G7 refer to the belief
requirements, and messages are sent from the legal participants instead of malicious attackers. G2 and G5 indicate
freshness requirements. The received messages were not
used by malicious attackers in the previous sessions.
4.4 Logic verification
The logic verification is performed based on the message
formalization, initial assumptions, and BAN logic rules.
Theorem 1 T believes that R once sent a message including the statement $PID_T$.

Proof According to M5: $T \triangleleft A_R$, $T \triangleleft M_R$, it turns out that T has received the messages $A_R$ and $M_R$. Thereinto, $A_R$ is a Chebyshev polynomial $T_y(\cdot)$ containing S, and $M_R$ is a hash value involving $TID''_T$, $S_R$ and $r_T$.

Here, $TID''_T$ is a temp value computed by introducing $TID'_T$, which theoretically equals $TID_T = PID_T \oplus H(B_T \| r_T)$. Thus, $T \triangleleft M_R$ can be regarded as follows, in which $\ast$ denotes omitted parameters:

$$T \triangleleft (\langle PID_T \rangle_Q, \ast).$$

Applying the seeing rule (R1): $\dfrac{P \triangleleft (X, Y)}{P \triangleleft X}$, we obtain that a party has sent a message containing $\langle PID_T \rangle_Q$ to T:

$$T \triangleleft \langle PID_T \rangle_Q.$$

According to IA1.1: $T \mid\equiv T \xleftrightarrow{\{S,Q,p\}} R$, T believes that the secrets {S, Q, p} are shared with R. Applying the message meaning rule (RM3): $\dfrac{P \mid\equiv P \xleftrightarrow{Y} P',\ P \triangleleft \langle X \rangle_Y}{P \mid\equiv P' \mid\sim X}$, we obtain that:

$$T \mid\equiv R \mid\sim PID_T.$$

If T believes that Q is a shared secret with R, and T receives $\langle PID_T \rangle_Q$, T will believe that R once conveyed the message $PID_T$. Till now, G1 has been proven.
Theorem 2 T believes that the message $M_R$ is fresh.

Proof According to M5: $T \triangleleft M_R$, it turns out that T has received the message $M_R$, which is a hash value involving $TID''_T$, $S_R$ and $r_T$. Thus, $T \triangleleft M_R$ can be regarded as follows:

$$T \triangleleft (r_T, \ast).$$

According to IA1.3: $T \mid\equiv \#(r_T)$, T believes that $r_T$ is fresh. Applying the freshness rule (F1): $\dfrac{P \mid\equiv \#(X)}{P \mid\equiv \#(X, Y)}$, we obtain that:

$$T \mid\equiv \#(r_T, \ast).$$

If one part of $M_R$ (marked as $(r_T, \ast)$) is known to be fresh, then $M_R$ is also fresh. Thus, T will believe that the message $M_R$ is fresh, and G2 has been proven.
Theorem 3 T believes the pseudonym $PID_R$ is shared as a secret by DB and R.

Proof DB can be regarded as a secure entity during the interactions, and we obtain that:

$$T \mid\equiv DB \Rightarrow (DB \mid\equiv \ast),$$
$$T \mid\equiv DB \mid\equiv (DB \mid\equiv \ast).$$

According to IA3.2: $DB \mid\equiv R \xleftrightarrow{PID_R} DB$, DB believes that the pseudonym $PID_R$ is shared with R. It also means that $DB \mid\equiv DB \xleftrightarrow{PID_R} R$, and we obtain that:

$$T \mid\equiv DB \Rightarrow \Big(DB \xleftrightarrow{PID_R} R\Big),$$
$$T \mid\equiv DB \mid\equiv \Big(DB \xleftrightarrow{PID_R} R\Big).$$

T believes that DB is honest and competent, and DB believes that the pseudonym $PID_R$ shared by DB and R is honest.

Applying the jurisdiction rule (J1): $\dfrac{P \mid\equiv P' \Rightarrow X,\ P \mid\equiv P' \mid\equiv X}{P \mid\equiv X}$, we obtain that:

$$T \mid\equiv DB \xleftrightarrow{PID_R} R.$$

If T believes that DB has jurisdiction over a statement, then T trusts DB on the truth of the statement. Thus, T believes the pseudonym $PID_R$ is shared as a secret by DB and R, and G3 has been proven.
Theorem 4 R believes the pseudonym $PID_T$ is shared as a secret by DB and T.

Proof According to the secure communication channel between R and DB, we obtain that:

$$R \mid\equiv DB \Rightarrow (DB \mid\equiv \ast),$$
$$R \mid\equiv DB \mid\equiv (DB \mid\equiv \ast).$$

Similarly, according to IA3.1 and J1, we obtain that:

$$DB \mid\equiv DB \xleftrightarrow{PID_T} T,$$
$$R \mid\equiv DB \Rightarrow \Big(DB \xleftrightarrow{PID_T} T\Big),$$
$$R \mid\equiv DB \xleftrightarrow{PID_T} T.$$

Thus, R believes the pseudonym $PID_T$ is shared by DB and T, and G4 has been proven.
Theorem 5 R believes that the value $H_T$ is fresh.

Proof According to M2: $R \triangleleft H_T$, it turns out that R has received the message $H_T$, which is a hash value involving $TID_T$, $B_T$, and $r_R$. Thus, $R \triangleleft H_T$ can be regarded as follows:

$$R \triangleleft (r_R, \ast).$$

According to IA2.3: $R \mid\equiv \#(r_R)$, R believes that $r_R$ is fresh. Applying the freshness rule (F1): $\dfrac{P \mid\equiv \#(X)}{P \mid\equiv \#(X, Y)}$, we obtain that:

$$R \mid\equiv \#(r_R, \ast).$$

If one part of $H_T$ (marked as $(r_R, \ast)$) is known to be fresh, then $H_T$ is also fresh. Thus, R will believe that the message $H_T$ is fresh ($R \mid\equiv \#(H_T)$), and G5 has been proven.
Theorem 6 DB believes that T once sent a message including the statement Q.

Proof According to M3: DB ◁ TID_T, DB receives the message TID_T. Here, TID_T is computed involving PID_T, B_T, and r_R, in which B_T = T_z(Q). Thus, DB ◁ TID_T can be regarded as follows:

DB ◁ (⟨Q⟩_{PID_T}, ∗)

Applying the seeing rule (R1): P ◁ (X, Y) / P ◁ X, we obtain that:

DB ◁ ⟨Q⟩_{PID_T}

According to IA3.1: DB |≡ (T ↔^{PID_T} DB), DB believes that the secret PID_T is shared with T. Applying the message-meaning rule (RM3): (P |≡ P′ ↔^Y P, P ◁ ⟨X⟩_Y) / P |≡ P′ |~ X, we obtain that:

DB |≡ T |~ Q

If DB believes that PID_T is a secret shared with T, and DB receives ⟨Q⟩_{PID_T}, DB will believe that T once conveyed the message Q. Till now, G6 has been proven, and G7 can be achieved via similar procedures.
In summary, a BAN logic-based security proof has been demonstrated for formal analysis. In the C2MP, R, T, and DB can each establish the required beliefs through the authentication, so the C2MP is proved to be correct and free of obvious design defects.
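For reference, the four inference rules invoked in the proofs above can be restated compactly in standard BAN notation; this is only a restatement of the known rules from Burrows, Abadi, and Needham [20], not additional material from the protocol itself:

```latex
% BAN-logic inference rules used in the C2MP proofs (standard forms).
\begin{align*}
\text{R1 (seeing):}\quad
  & \frac{P \triangleleft (X, Y)}{P \triangleleft X} \\[4pt]
\text{F1 (freshness):}\quad
  & \frac{P \mid\equiv \#(X)}{P \mid\equiv \#(X, Y)} \\[4pt]
\text{RM3 (message meaning):}\quad
  & \frac{P \mid\equiv P' \overset{Y}{\leftrightarrow} P,
          \;\; P \triangleleft \langle X \rangle_{Y}}
         {P \mid\equiv P' \mid\sim X} \\[4pt]
\text{J1 (jurisdiction):}\quad
  & \frac{P \mid\equiv P' \mid\Rightarrow X,
          \;\; P \mid\equiv P' \mid\equiv X}
         {P \mid\equiv X}
\end{align*}
```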
### 5 Performance analysis
In the performance analysis, the C2MP is investigated from three aspects: storage requirement, communication overhead, and computation load.
- Storage requirement In the C2MP, T/R stores the tag/reader real identifier ID_T/ID_R, the pseudonym PID_T/PID_R, and the shared secrets {Q, S, p}. A 64-bit length is assumed for ID and PID according to the related ISO/IEC standards. Additional memory is consumed by the PRNG and the Chebyshev polynomial computations during protocol execution. In the C2MP, DB can be regarded as a resource-rich entity, which stores the real identifiers and pseudonyms of all legal tags and readers. Note that an efficient implementation of a hash function (e.g. MD5, SHA-1, SHA-256) can be realized with a requirement of 16.0K–23.0K gates [21].
- Communication overhead Communication overhead is the number of bits transmitted in each phase, or in a full run of the protocol. In the C2MP, the number of transmitted frames depends on the message exchanges in the authentication phases, and the communication overhead is the sum of the signaling loads during each authentication session. Suppose the Chebyshev polynomial values have an L-bit length, the pseudonyms of readers and tags have the same 64-bit length, the pseudo-random numbers have a 16-bit length, and the hash values have a 128-bit length. The total length of the messages exchanged between a reader and a tag is then (52 + L/4) bytes. The full authentication process, completed in five phases, is acceptable in practical applications.
- Computation load During an entire round, T performs two PRNG operations, three Chebyshev polynomial evaluations T_z(·), one bitwise XOR operation, and three hash functions. R performs two PRNG operations, three Chebyshev polynomial evaluations T_y(·), two bitwise XOR operations, and four hash functions. There are no complex encryption operations in the C2MP. Based on existing technology, smart cards (e.g. MIFARE Plus and MIFARE DESFire) [22] comprise a microprocessor unit (MPU), storage units, and a chip operating system (COS), and can efficiently support the required algebraic algorithms. A power-saving module should be considered to handle multiple rounds of Chebyshev chaotic maps.
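The semigroup property T_r(T_s(x)) = T_{rs}(x) that underlies the T_z(·) and T_y(·) evaluations above can be checked numerically with the standard three-term recurrence. This is a floating-point sketch for illustration only; an actual implementation would evaluate enhanced Chebyshev maps over a large finite field, and the function below is not taken from the paper.

```python
# Sketch: verify the Chebyshev semigroup property T_r(T_s(x)) = T_{rs}(x)
# using the recurrence T_0(x) = 1, T_1(x) = x,
# T_n(x) = 2x * T_{n-1}(x) - T_{n-2}(x).

def chebyshev(n, x):
    """Evaluate the degree-n Chebyshev polynomial T_n at x."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, x
    for _ in range(n - 1):
        prev, cur = cur, 2 * x * cur - prev
    return cur

# Composing T_3 after T_5 equals evaluating T_15 directly.
x = 0.3
assert abs(chebyshev(3, chebyshev(5, x)) - chebyshev(15, x)) < 1e-9
```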
### 6 Conclusion
In this paper, a Chebyshev chaotic map-based authentication protocol (C2MP) is proposed to address the security issues in RFID systems. The proposed C2MP adopts authentication, anonymity, and session-freshness mechanisms to enhance security and privacy protection. In particular, the semigroup and chaotic properties of Chebyshev polynomials are exploited for identity authentication and anonymous transmission. Dual random numbers are generated to achieve session freshness and forward security, and one-way hash functions are adopted for data integrity. The C2MP is verified by BAN logic, which shows that it has no obvious design flaws or security errors. This indicates that the C2MP is suitable for universal RFID applications.
Open Access This article is distributed under the terms of the
[Creative Commons Attribution 4.0 International License (http://crea](http://creativecommons.org/licenses/by/4.0/)
[tivecommons.org/licenses/by/4.0/), which permits unrestricted use,](http://creativecommons.org/licenses/by/4.0/)
distribution, and reproduction in any medium, provided you give
appropriate credit to the original author(s) and the source, provide a
link to the Creative Commons license, and indicate if changes were
made.
### References
1. Ning H, Liu H, Yang LT (2013) Cyberentity security in the
Internet of things. Computer 46(4):46–53
2. Ning H (2013) Unit and ubiquitous Internet of things. CRC Press,
Taylor & Francis Group, Boca Raton
3. Zeng H, Zhang J, Dai G, Gao Z, Haiyang Hu (2014) Security
visiting: RFID-based smartphone indoor guiding system. Int J
Distrib Sens Netw 2014:1–13
4. Xie L, Yin Y, Vasilakos AV, Lu S (2014) Managing RFID data:
challenges, opportunities and solutions. IEEE Commun Surv
Tutor 16(3):1294–1311
5. Tian Y, Chen G, Li J (2012) A new ultralightweight RFID
authentication protocol with permutation. IEEE Commun Lett
16(5):702–705
6. Liu H, Ning H, Zhang Y, He D, Xiong Q, Yang LT (2013)
Grouping-proofs-based authentication protocol for distributed
RFID systems. IEEE Trans Parallel Distrib Syst 24(7):1321–1330
7. Liu H, Ning H (2011) Zero-knowledge authentication protocol
based on alternative mode in RFID systems. IEEE Sens J
11(12):3235–3245
8. Liu Z, Liu D, Li L, Lin H, Yong Z (2015) Implementation of a
new RFID authentication protocol for EPC Gen2 standard. IEEE
Sens J 15(2):1003–1011
9. Yao Q, Han J, Qi S, Liu Z, Chang S, Ma J (2013) MAP: towards
authentication for multiple tags. Int J Distrib Sens Netw
2013:1–14
10. Avoine G, Kim CH (2013) Mutual distance bounding protocols.
IEEE Trans Mob Comput 12(5):830–839
11. Morshed MM, Atkins A, Yu H (2012) Efficient mutual authentication protocol for radio frequency identification systems. IET
Commun 6(16):2715–2724
12. Lu L, Han J, Hu L, Ni LM (2012) Dynamic key-updating: privacy-preserving authentication for RFID systems. Int J Distrib
Sens Netw 2012:1–12
13. Ning H, Liu H, Mao J, Zhang Y (2011) Scalable and distributed
key array authentication protocol in radio frequency identification-based sensor systems. IET Commun 5(12):1755–1768
14. Jiang Y, Cheng W, Du X (2014) Group-based key array
authentication protocol in radio frequency identification systems.
IET Inf Secur 8(6):290–296
15. Avoine G, Bingol MA, Carpent X, Yalcin SBO (2013) Privacyfriendly authentication in RFID systems: on sublinear protocols
based on symmetric-key cryptography. IEEE Trans Mob Comput
12(10):2037–2049
16. Hermans J, Peeters R, Preneel B (2014) Proper RFID privacy:
model and protocols. IEEE Trans Mob Comput 13(12):
2888–2902
17. Ning H, Liu H, Yang LT (2014) Aggregated-proof based hierarchical authentication scheme for the internet of things. IEEE
Trans Parallel Distrib Syst. [http://ieeexplore.ieee.org/stamp/](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6767153)
[stamp.jsp?tp=&arnumber=6767153](http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6767153)
18. Mason JC, Handscomb DC (2003) Chebyshev polynomials.
Chapman & Hall/CRC Press, Boca Raton
19. Zhang L (2008) Cryptanalysis of the public key encryption based
on multiple chaotic systems. Chaos Solitons Fractals 37(3):
669–674
20. Burrows M, Abadi M, Needham R (1990) A logic of authentication. ACM Trans Comput Syst 8(1):18–36
[21. http://www.heliontech.com/core.htm. Accessed Dec (2014)](http://www.heliontech.com/core.htm)
[22. http://www.nxp.com. Accessed Dec (2014)](http://www.nxp.com)
JOURNAL OF MEDICAL INTERNET RESEARCH Zhang et al
##### Original Paper
# Benefits of Blockchain Initiatives for Value-Based Care: Proposed Framework
##### Rongen Zhang[1]; Amrita George[2], PhD; Jongwoo Kim[3], PhD; Veneetia Johnson[1], DBA; Balasubramaniam Ramesh[1], PhD
1Georgia State University, Atlanta, GA, United States
2Marquette University, Milwaukee, WI, United States
3University of Massachusetts Boston, Boston, MA, United States
**Corresponding Author:**
Balasubramaniam Ramesh, PhD
Georgia State University
35 Broad Street
Atlanta, GA, 30303
United States
Phone: 1 404 413 7372
[Email: bramesh@gsu.edu](mailto:bramesh@gsu.edu)
### Abstract
**Background:** The potential of blockchain technology to achieve strategic goals, such as value-based care, is increasingly being
recognized by both researchers and practitioners. However, current research and practices lack comprehensive approaches for
evaluating the benefits of blockchain applications.
**Objective:** The goal of this study was to develop a framework for holistically assessing the performance of blockchain initiatives
in providing value-based care by extending the existing balanced scorecard (BSC) evaluation framework.
**Methods:** Based on a review of the literature on value-based health care, blockchain technology, and methods for evaluating
initiatives in disruptive technologies, we propose an extended BSC method for holistically evaluating blockchain applications in
the provision of value-based health care. The proposed method extends the BSC framework, which has been extensively used to
measure both financial and nonfinancial performance of organizations. The usefulness of our proposed framework is further
demonstrated via a case study.
**Results:** We describe the extended BSC framework, which includes five perspectives (both financial and nonfinancial) from
which to assess the appropriateness and performance of blockchain initiatives in the health care domain.
**Conclusions:** The proposed framework moves us toward a holistic evaluation of both the financial and nonfinancial benefits
of blockchain initiatives in the context of value-based care and its provision.
**_(J Med Internet Res 2019;21(9):e13595)_** [doi: 10.2196/13595](http://dx.doi.org/10.2196/13595)
**KEYWORDS**
blockchain; balanced scorecard; evaluation; value-based care
### Introduction
##### Background
The health care sector has recently been focused on two related
challenges: the transition to value-based care and the use of
innovative technologies (such as blockchain technology) to
facilitate the delivery of health care. The transition to
value-based care, which aims to improve the value of care while
providing it at a lower cost, places new demands on health care
information systems (IS) [1] that current health information
technology infrastructure is not designed to support [1].
Adler-Milstein et al [1] identified three major stakeholder groups
that must be supported in achieving value-based care: patients,
providers, and researchers. Disruptive technologies such as blockchain have the potential to provide these currently underserved stakeholder groups with the health information technology (IT) infrastructure they need.
Blockchain technology, widely celebrated as a technological
revolution, is creating unprecedented hype and optimism [2].
Blockchain is a distributed database that maintains a
continuously growing list of data records that are secured from
tampering and revision [3,4]. A global survey documents the
widespread application of blockchain in domains such as health
care, manufacturing, legal, government, not for profit, retail,
real estate, tourism, and media [5]. The potential of this
technology to aid organizations in achieving strategic goals like
value-based care is increasingly being recognized by health care
providers and other stakeholders (eg, payers, shareholders,
accreditation agencies) [6]. However, Iansiti and Lakhani [7]
note that practitioners are uncertain about the impact that
disruptive technologies such as blockchain might have on
organizational performance. Current research and practice lack
comprehensive approaches to evaluating the benefits of
blockchain and developing appropriate use cases of blockchain
applications for value-based care [8].
As IT is increasingly becoming a strategic necessity for
improving services and reducing medical errors [9],
comprehensive approaches to evaluating the appropriateness
and value of disruptive technologies such as blockchain are
needed. An evaluation approach should facilitate the assessment
of both technical and nontechnical (eg, legal, data ownership
and privacy, security) implications. To address this need, we
assessed two sets of existing evaluation frameworks: technology
evaluation methods (the Zachman framework, human-computer
interaction [HCI] guidelines, and the technology-centric
framework) and comprehensive evaluation methods (total
quality management [TQM], the European Foundation for Quality Management Excellence Model [EFQMEM], the performance pyramid, and the performance prism). Based on this assessment,
we identified deficiencies in the existing evaluation methods
and subsequently developed an approach that extends the
balanced scorecard (BSC) framework that addresses these
deficiencies.
The BSC, developed by Robert Kaplan and David Norton nearly
two decades ago [10], provides organizations with a structured
approach to assessing both the financial and nonfinancial
dimensions of organizational initiatives and processes in terms
of strategic outcomes. Beyond the purely accounting-based
measures traditionally used, the BSC is balanced in that it
provides a comprehensive view of organizational performance.
It translates high-level organizational vision and strategy into
a holistic set of performance and action measures [11]. The BSC
is a practical method that is applicable within the health care
service sector and health care organizations, and it has
previously been used to assess clinical outcomes, for example
[12]. However, it has not yet been used to evaluate disruptive
innovations, such as blockchain, that can improve patient care
and reduce costs but that have regulatory, financial, and
operational implications.
A myriad of seemingly promising blockchain projects are being
implemented in the health care domain, often without careful
consideration of the applicability of the technology [13].
Moreover, questions still linger for early adopters of this
technology: “How does an organization holistically assess the
performance of blockchain technology in the health care
domain?” and “Does the introduction of blockchain technology
align with the strategic priorities of a health care organization?”.
Answering these questions is critical for health care
organizations to achieve the health care IT mission identified
by the US federal government, namely: "Improve the health
and well-being of individuals and communities through the use
of technology and health information that is accessible when
and where it matters most" [14].
We sought to answer the above questions through our
assessment of existing evaluation frameworks and the
development of a new framework that can guide the
comprehensive evaluation of the value of blockchain initiatives
that seek to enable the delivery of value-based care.
In the sections below, we first discuss the relevant literature on
value-based health care and blockchain technology. We then
assess existing evaluation frameworks and present our
framework, which extends the BSC by addressing some of its
limitations. Further, we customize the framework to the context
of blockchain applications in health care settings. We then
present an illustrative case study on the application of the
framework in a pharmaceutical supply chain organization.
Finally, we discuss the implications of our framework for both
researchers and practitioners.
##### Information Technology Support for Transitioning to Value-Based Health Care
Health care value, defined as health outcomes (including quality
of care achieved per dollar spent), has become a cornerstone of
the strategy to restructure the US health care system [15-17].
One of the proposed frameworks for improving health care
value is the value-based care model [18]. Value-based care
attempts to advance the triple aim of providing better care for
individuals, improving population health management strategies,
and reducing health care costs. Value-based care models center
on patient outcomes and how well health care providers can
improve quality of care using measures such as reduced hospital
readmissions, improved timeliness and safety of care, more
equitable care, shared decision-making, and improved
preventative care [17]. This model ties payments for care
delivery to the quality of care provided, and rewards providers
for both efficiency and effectiveness [19].
Unlike more traditional approaches, value-based care is driven
by data because providers must report to payers on specific
metrics and demonstrate improvement. Providers are required
to use IT systems to track and report metrics such as hospital
readmissions, adverse events, population health, and patient
engagement. Further, providers are incentivized to use
evidence-based medicine, engage patients, upgrade health care
IT, use data analytics, and receive payments electronically.
When patients receive more coordinated, appropriate, and
effective care, providers are rewarded. To achieve these goals,
health care organizations need a digital infrastructure that
facilitates the provision of comprehensive, affordable,
accessible, effective, and error-free care.
While significant progress has been made in digitizing the US
health care system, today’s health IT infrastructure largely
remains a collection of systems that were not designed to support
the transition to value-based care [1]. In fact, prior literature
has identified a health IT chasm, which refers to the gaps
between the current health IT ecosystem (see Multimedia
Appendix 1) and the system that is needed for value-based care
[1,20-42].
In fact, a recent study identified several gaps from the
perspectives of three stakeholder groups. From the patient
perspective, patients are unable to access electronic medical
records from most providers, and most care providers do not
provide functionalities for patients to submit patient-generated
data. Only a small percentage of patients receive clinical trial
information from their primary physician, and an even smaller
percentage participate in biobanks [1]. From the provider
perspective, due to the lack of standardized application interfaces
providers have difficulty accessing external data, which hinders
the advanced analytics on which personalized assistance is based
[43]. In addition, manual credentialing (which typically takes more than 120 days) and contract administration are complicated and inefficient. Further, pharmaceutical providers find it
challenging to ensure the authenticity of pharmacy products
due to a lack of transparency in current supply chain systems.
Finally, from the researchers’ perspective, it is difficult for them
to track investigational products to ensure data authenticity, and
payments to investigators are delayed due to manual processing
[33]. The health IT environment is immature, provides few
safeguards for safety and effectiveness, and provides very
limited integration of applications used in clinical care or
research.
Prior literature has also identified specific goals (eg, improving
patients’ access to clinical data, improving patient’s ability to
submit and access data via mobile health technology, more
readily engaging patients in clinical research) for addressing
the needs of each of these stakeholder groups [1]. Blockchain
technology may help achieve these goals.
##### Blockchain for Enabling Value-Based Care
Blockchain consists of blocks that hold batches of individual
transactions. Each block contains a timestamp and a link to a
previous block [3,4]. The most salient benefit of blockchain is
decentralization and the elimination of a trusted centralized
third party in distributed applications. Thus, multiple parties
can conduct transactions in a distributed environment without
the need for a centralized authority, thereby avoiding a single
point of both trust and failure. The absence of a centralized
processing entity may reduce time and costs. A consensus
mechanism is used to reconcile any discrepancies that may arise
between participants in a blockchain network.
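The block structure described above (a batch of transactions, a timestamp, and a link to the previous block) can be sketched in a few lines. The field names and hashing scheme below are illustrative assumptions, not those of any particular blockchain platform:

```python
# Minimal sketch of a hash-linked chain of blocks: each block records a
# timestamp, a batch of transactions, and the hash of the previous block,
# so altering an earlier block invalidates every later link.
import hashlib
import json
import time

def make_block(transactions, prev_hash):
    """Build a block and seal it with a SHA-256 hash of its contents."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["genesis"], "0" * 64)
second = make_block(["tx1", "tx2"], genesis["hash"])
assert second["prev_hash"] == genesis["hash"]  # the chain link
```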
Iansiti and Lakhani [7] summarized five basic principles
underlying blockchain technology: a distributed database,
peer-to-peer transmission, transparency with pseudonymity,
irreversibility of records, and computational logic. These unique
characteristics of blockchain technology enable the development
of solutions that reduce uncertainty and ambiguity and enhance
security of stored transactional information by providing full
transparency and a single truth for all network participants [44].
Although blockchain technology enjoys the benefits of
decentralization, it often comes at the cost of scalability.
Blockchains are typically incapable of processing large numbers
of transactions in a timely manner [1]. The trustless peer-to-peer
network infrastructure, which requires information to be
propagated to and validated at each node, is the root of this
problem. Several solutions (eg, off-chain transactions, sharding,
and a provably neutral cloud) have been proposed to address
this issue. For example, Leung et al proposed a design that
minimizes storage, bootstrapping costs, and bandwidth costs of
joining a network by 90% [45]. Such advances are essential for
blockchain to realize its disruptive potential [46]. However,
effective management of personal health records using
blockchain technology still requires improvements such as
reduced data size, strengthened personal information protection,
and reduced operational costs [47].
Despite its technological infancy, experimental adoption and
customization of blockchain technology appears to be fully
underway in the health care domain [8]. One of the most
impactful health care applications is expected to be the
management of electronic health records (EHRs). The
decentralization, immutability, traceability, security, and privacy
of blockchain make it well suited for the storing, managing, and
sharing of patient-centric data among stakeholders [48-50].
Aligning with the requirements of the European General Data
Protection Regulation (GDPR), blockchain can be used to build
health care platforms that empower patients to control how their
data are used and ensure that sensitive personal data are not
revealed without the patients’ consent [2,22,51]. Guardtime
[2,51], MedRec [23], the Gem Health Network [44], Patientory,
and IBM’s Watson [21] are some of the key projects in this
ecosystem.
Another salient application domain of blockchain is supply
chain management in the pharmaceutical industry. Because of
the immutability and traceability of blockchain, any modification
of a prescription by any party in the supply chain can be
detected, which, in turn, can help address the severe problem
of counterfeit medications [2,44,49]. In addition, in biomedical
research and education, blockchain could facilitate the
elimination of falsification of data or the exclusion of
undesirable results from clinical trials [31]. Benchoufi [38] and
Nugent et al [37] illustrated the ability to trace patients’ consent
and provide data transparency in clinical trials. Moreover,
insurance claim processing is a promising area for blockchain
applications because of its transparency, decentralization,
immutability, and auditability; a few prototype implementations,
such as MIStore [52] and PokitDok’s initiative partnered with
Intel [53], have been reported [44]. Other promising areas
include remote patient monitoring [24] and precision medicine
[54].
Blockchain technology has the potential to address some of the
gaps in the current health IT ecosystem, thereby supporting the
three important stakeholder groups involved in value-based care
[1]. Multimedia Appendix 1 identifies these gaps and highlights
what blockchain can do to address these gaps. Based on a careful
study of the needs of the three stakeholder groups, we further
outline in the appendix how specific characteristics of
blockchain technology may help meet these needs. We also list
some proof-of-concept systems that provide some of the desired
functionalities.
### Methods
##### Overview
While blockchain offers the potential to address issues (eg,
interoperability, difficulty in providing optimal personalized
care due to lack of comprehensive medical records, and
maintaining integrity of records) that are critical for effective
value-based care [55], there is limited research comprehensively
evaluating the financial and nonfinancial benefits of blockchain
solutions in health care [56]. A review of the literature on
value-based health care strongly suggests the need for a
framework to holistically evaluate the impacts of technologies
such as blockchain. Existing evaluation mechanisms (such as
the Level of Information System interoperability reference
model [56]) have focused on the operational aspects of
blockchain. Motivated by the need for a framework to guide
the strategic evaluation of blockchain applications within a
health care organization, we extend the BSC approach, which
is an already well-established performance evaluation system.
Specifically, our approach integrates the financial and nonfinancial perspectives of the original BSC (ie, internal processes, learning and growth, and customer perspectives) with an added external perspective that incorporates the viewpoints of external stakeholders and regulators, especially because of the significant role these parties play in health care delivery. In the following section, we
illustrate the use of our framework with a blockchain application
for managing a pharmaceutical supply chain.
##### Performance Evaluation of Health Care Blockchain Implementations and the Balanced Scorecard
Traditional performance measurement systems have either
focused purely on financial factors, ignoring the value of
nonfinancial factors [12], or have focused solely on the
effectiveness of the technical system without considering the
external or financial implications. Health care organizations
have been using economic evaluations for health care
decision-making for several decades. During this period,
increased pressure on health care budgets has necessitated the
consideration of cost-effectiveness in addition to clinical
effectiveness. Economic evaluation approaches have also been
applied to other health care–related decision-making in terms
of funding, reimbursement, and new technologies [57,58]. Even
comprehensive evaluation approaches that include
cost-consequences analysis, cost-minimization analysis,
cost-effectiveness analysis, cost-utility analysis, and cost-benefit
analysis [59] are focused on financial factors and give limited
consideration to nonfinancial aspects of evaluation targets. For
example, Zachman’s framework [60] evaluates business-IT
alignment in detail but lacks a holistic governance framework.
Similarly, the human-computer interaction [61] and
technology-centric frameworks [62] provide insights into
developing intuitive and interactive IS, but they do not focus
on assessing the impact of these systems from external and
financial perspectives. Additionally, the interrelationships
between the various functional areas in an organization are
overlooked in these frameworks. For example, a blockchain
implementation in one functional area, such as improving
patients’ access to their own medical records, may have major
impacts in other areas, such as customer service management,
internal processes for quality assurance, security checks, or
external partnerships (with, say, insurance companies or
pharmacies). Finally, the knowledge that results from the
long-term growth of organizations or the ability to deal with
future threats also needs to be factored into the performance
evaluation [12]. The BSC has dual functions as a performance
framework and a management methodology, and thus can tackle
the shortcomings of traditional performance measurement
systems. These shortcomings include the lack of consideration
of nonfinancial factors and the lack of strategic focus. Our
evaluation suggests that the BSC addresses both shortcomings
and is well suited for the evaluation of disruptive technologies,
especially in the dynamic environment in which health care
organizations operate.
Our comparison of the various performance measurement
systems, as presented in Multimedia Appendix 2, suggests that
BSC is an appropriate approach for evaluating blockchain
initiatives in achieving value-based care for the following
reasons. We compared BSC with two sets of existing methods,
namely, technology evaluation methods (the Zachman
framework, HCI, and the technology-centric framework) and
comprehensive performance evaluation methods (total quality management [TQM], the
European foundation quality management excellence model,
the performance pyramid, and the performance prism) [63-65].
Technology evaluation methods typically do not provide a
holistic view (such as the consideration of external or customer
perspectives) and therefore are not appropriate in our setting.
Among the comprehensive evaluation methods, TQM’s narrow
focus on internal process is inadequate, and the European
foundation quality management excellence model, designed to
improve TQM, lacks a strategic focus. Although both the BSC
and the performance pyramid use strategic mapping to link
strategy to operational metrics, prior research suggests that the
performance pyramid is less effective and harder to understand
than the BSC [64]. Moreover, although the performance prism
considers stakeholders’ perspectives, it does not provide
adequate guidelines and neglects to show how the proposed
measures can be operationalized [65]. Thus, our comparison of
the various technical and comprehensive performance evaluation
methods suggests that, among them, the BSC is the most suited
to evaluate the performance of disruptive technologies (such as
blockchain) in value-based care initiatives.
Organizations in multiple domains, including health care, have
adopted the BSC [66,67]. In increasingly dynamic business
environments, traditional performance evaluation approaches
may not work well due to the uncertainty involved in
ascertaining both the costs and benefits of new technologies,
such as blockchain. However, both theoretical research and
practitioner articles support the use of the BSC for evaluating
IT initiatives in such contexts. For example, Gartner [68] notes
that performance measurement solutions deployed within an
organization should include a spectrum of leading measures
rather than focusing on lagging financial indicators. To provide
a holistic assessment, Gartner [29] recommends using the BSC
to measure return on investment (ROI) and the business value
of IT services because it enables the consideration of both
financial and nonfinancial perspectives and helps develop
relevant metrics [68]. Researchers also recognize the BSC
framework as a holistic approach that provides managers with
a structure to develop metrics that reflect performance from
various perspectives [69], hence our selection of the BSC as
the basis for the development of our approach to evaluating
blockchain applications.
The BSC measures the performance of organizations from the
following four linked and balanced perspectives:
1. Financial: How do we increase value for our shareholders
(or providers of financial resources)?
2. Customer: How well do we satisfy our customers’ needs?
3. Internal: How well do we perform key internal operational
processes? To satisfy our customers, in what processes must
we excel?
4. Learning and growth: Are we able to sustain innovation,
change, and continual improvement? Do we have the basic
infrastructure in place to improve, create value, and achieve
our mission?
Some limitations of the traditional BSC have received attention
in the literature [70,71]. One major concern is that the
environment external to the organizations, including key groups
of stakeholders, is not represented in the framework. For
example, Mohobbot [72] points out that the BSC is unable to
answer questions concerning the impact of external competitors.
Moreover, the BSC does not consider the extended value chain,
in which supplier and employee contributions are very
significant [73]. This issue is exacerbated in the health care
domain due to the complex interactions among the wide variety
of organizations and stakeholders that are part of the ecosystem.
For example, Norreklit [30] identifies crucial stakeholders like
public authorities and suppliers, but other external stakeholders
may include insurers, physicians, hospitals, clinics, laboratories,
clinical research organizations, supply chain logistics
stakeholders (such as pharmaceutical manufacturers, distributors,
and retailers), government and regulatory agencies, and charities.
To account for the impact of external stakeholders, we extended
the BSC with an additional perspective, namely the external
and regulatory perspective (see Figure 1). This perspective seeks
to answer the following question: "How well does the
organization improve value creation through external
partnerships while ensuring regulatory compliance?"
**Figure 1.** Proposed framework for evaluating blockchain initiatives for value-based care.
By integrating financial measures with other crucial performance
indicators concerning patients, organizational learning, growth
and innovation, internal processes, and external perspectives,
this extended BSC framework offers health care organizations
a comprehensive view of the performance of blockchain
applications.
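To make the extended framework concrete, the five perspectives and their weighted aggregation can be sketched as a simple data structure. This is an illustrative sketch only: the perspective names follow the framework in Figure 1, but the KPI names, scores, and weights are hypothetical placeholders, not metrics prescribed by this study (see Multimedia Appendix 3 for the actual metrics).

```python
from dataclasses import dataclass, field

@dataclass
class Perspective:
    """One perspective of the extended BSC, holding illustrative KPIs scored 0-100."""
    name: str
    weight: float                              # relative importance (placeholder values)
    kpis: dict = field(default_factory=dict)   # KPI name -> score (0-100)

    def score(self) -> float:
        # Unweighted mean of this perspective's KPI scores
        return sum(self.kpis.values()) / len(self.kpis) if self.kpis else 0.0

def overall_score(perspectives) -> float:
    """Weighted average of per-perspective scores across the scorecard."""
    total_weight = sum(p.weight for p in perspectives)
    return sum(p.weight * p.score() for p in perspectives) / total_weight

# Hypothetical scorecard for a blockchain initiative; KPI names are examples only.
scorecard = [
    Perspective("Financial", 0.25, {"ROI": 60, "Adjusted cost per discharge": 70}),
    Perspective("Customer", 0.20, {"Patient satisfaction": 80}),
    Perspective("Internal", 0.20, {"Readmission rate (inverted)": 75}),
    Perspective("Learning and growth", 0.15, {"Staff blockchain training": 50}),
    Perspective("External and regulatory", 0.20, {"Regulatory compliance": 90}),
]

print(round(overall_score(scorecard), 1))
```

A structure like this also makes the interrelationships discussed later easy to probe, for example by varying one perspective's scores and observing the effect on the overall assessment.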
### Results
##### Summary
In this study we adopted a for-profit health care organization’s
view, as a majority of current blockchain implementations are
in for-profit organizations.
##### Financial Perspective
From a value-based perspective, one of the key questions health
care organizations should ask is: "How do health care
organizations use blockchain applications to generate more
profits at lower cost?" Typically, the focus of the financial
perspective in the BSC has been on traditional financial metrics
such as ROI and net income. In the context of value-based health
care, patient-centric metrics such as gross revenue, adjusted
cost per discharge, in-patient or out-patient revenue mix,
contract allowances, discounts as a percentage of operating
patient revenue [74], patient-payer mix, Medicare or Medicaid
mix, average length of stay, and occupancy rate all deserve
consideration.
The auditability and traceability features of blockchain enable
more secure and efficient revenue management. As it does not
require an intermediary, blockchain can support health care
financing tasks, such as automatic claims processing using smart
contracts [48,75], preauthorization of payments [36], and
alternative payment models [76]. A distributed ledger makes
claims processing and payment transactions more efficient and
cost-effective. Replacing redundant health care intermediaries
(namely, organizations that operate between stakeholders and
institutions but that add little value to the health care value chain
[54]) with transparent blockchain technology could facilitate
processes like real-time claims adjudication [75]. With the data
provenance benefits offered by blockchain, providers and
patients could have enhanced accessibility to patient data.
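The automatic claims processing described above can be illustrated with a minimal sketch of the rule-based logic a smart contract could encode. This is not an actual Solidity contract and is not drawn from the cited implementations [48,75]; the preauthorization codes, coverage limit, and field names are hypothetical.

```python
# Hypothetical rule set a payer could encode in a claims-processing smart contract.
PREAUTHORIZED = {"MRI-HEAD", "CT-CHEST"}   # procedures preapproved by the payer
COVERAGE_LIMIT = 5000                       # maximum payable amount per claim

def adjudicate(claim: dict) -> str:
    """Return PAY / REVIEW / REJECT for a submitted claim, with no intermediary."""
    if claim["procedure"] not in PREAUTHORIZED:
        return "REVIEW"                     # route unlisted procedures to manual review
    if claim["amount"] > COVERAGE_LIMIT:
        return "REJECT"                     # exceeds the encoded coverage limit
    return "PAY"                            # settle automatically

print(adjudicate({"procedure": "MRI-HEAD", "amount": 1200}))   # PAY
```

Because such rules execute identically for every party on the ledger, adjudication outcomes are reproducible and auditable, which is the basis of the real-time claims adjudication benefit noted above.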
Blockchain technology can also help eliminate information
asymmetry and mistrust between stakeholders in the health care
ecosystem. With the innate immutability, transparency, and
traceability provided by blockchain technology, medical
products can be traced from manufacturer to patient, thereby
reducing medication and medical equipment fraud. However,
in the short-term, the adoption of blockchain technology will
likely involve significant investments in application
development, and integrating these applications with legacy
systems might initially undermine the financial benefit to
shareholders.
##### Customer (Patient) Perspective
From a value-based care perspective, one of the key questions
health care organizations should ask is: "How can we improve
our service to customers and satisfy customer needs via
blockchain applications?" Improving the performance of health
care information systems that support the provision of effective
and efficient care to patients is critical for achieving this goal.
The patient-centric care paradigm requires the sharing of
patients’ EHRs, which raises issues such as privacy,
confidentiality, integrity, availability, and security [77]. As a
valuable personal asset, health care data should be owned and
controlled by customers (patients) easily and securely without
violating their privacy [41]. With blockchain-supported
applications such as FHIRChain and Blockchain-Based
Multi-level Privacy-Preserving Location Sharing Scheme
(BMPLS), which simplify data authentication and authorization,
patients can control access to their medical data easily and
quickly. According to a seminal paper on IS success [78], user
satisfaction is affected by information quality, system quality,
and service quality. Blockchain enables health care stakeholders
to access complete, relevant, and secure data on patients, thereby
improving information quality. Health care organizations can
overcome common challenges, such as data segregation, and
achieve better integration of patients’ medical data. Blockchain
supports data immutability and auditability, thereby improving
service quality (eg, reliability, responsiveness, and rapport) of
medical IS [79], and as a result health care organizations can
enhance their medical service quality and thereby patient
satisfaction. Blockchain can help health care organizations easily
integrate various elements of clinical data, which can enable
medical professionals to make accurate diagnoses at low cost.
##### Internal Perspective
From a value-based care perspective, one of the key questions
that health care organizations should ask is: "What internal
processes can blockchain improve to satisfy our customers and
the population in general?" Effective internal business processes
are critical for providing products and services that satisfy health
care organizations’ customers’ needs in a fiscally responsible
manner. These effective processes can be reliable indicators of
future financial and operational success [12]. With blockchain
applications, health care organizations can build time-stamped,
tamper-proof, immutable ledger systems that will improve
organizations’ auditing and reporting capabilities. These
capabilities are crucial for identifying failures in internal
processes and remedying those failures. Some benefits that
accrue with improvements to internal processes include reduced
length of patient stay, accuracy of services provided (both
primary and ancillary), optimal surgical capacity utilization,
and timeliness of services [12].
A variety of internal processes are candidates for improvements
using blockchain technology. Using smart contracts,
organizations can encode internal logic (eg, validating identity
and tracking the participation of various stakeholders, such as
patients and health providers), which will enhance service
quality. Service quality can be reflected in measures such as
reductions in diagnostic errors, readmission rates, and data
security incidents, all of which lower costs. Value for customers
can also be improved by instituting newer internal processes,
such as Hitech service (eg, digitization of wellness checks in
Mount Sinai’s Lab 100 [80]). Access to longitudinal medical
charts using blockchain applications (such as those implemented
in FHIRChain) can help health care organizations achieve
optimal results with Hitech services, thereby enabling effective
long-term care for chronic illnesses (eg, diabetes). Further, such
charts can be useful in designing effective population outreach
programs. Finally, using peer-to-peer network-enabled
blockchain applications (eg, BMPLS), health care organizations
can leverage newer mechanisms of health care delivery, such
as telecare, to increase their reach, thereby improving health
equity while providing care at a reduced cost.
##### External Perspective
From a value-based care perspective, one of the key questions
health care organizations should ask is: "How can we leverage
external partnerships to create value while ensuring regulatory
compliance, thus satisfying our customers and the general
population?" Creating effective partnerships with external
stakeholders (eg, payers, accreditation bodies) while remaining
compliant with regulations is critical for value creation. These
partnerships enable health care organizations to supply products
and services that satisfy customer needs in a fiscally and legally
responsible manner.
Some multi-level, privacy-preserving, location-sharing
blockchain applications (eg, BMPLS) enable interoperability
with external systems, thereby enabling access to
multi-dimensional medical charts from various stakeholders
that can improve long-term medical care at a low cost. Through
external partnerships, these health care organizations can seek
to create value by taking a proactive role in providing care to
their customers (say, by tracking customers’ lifestyle and
suggesting changes). Naturally, such partnerships can enable
future financial and operational success through service
innovation, which can help build deeper long-term relationships
with customers. Additionally, having access to
multi-dimensional population health data will enable health
care providers to design outreach services that benefit the
community as a whole. Blockchain solutions may also include
smart contracts that help meet security and privacy mandates.
Further, through the standardization of smart contracts at both
the provider’s and the external partner’s end, interoperability
of medical systems for value creation can be achieved.
##### Learning and Innovation Perspective
From a value-based care perspective, one of the key questions
health care organizations should ask is: "How can we use
blockchain applications to improve the learning capabilities that
lead to growth and innovation?" Blockchain applications can
help health care organizations reassess their resources, from
employee capabilities to health care delivery processes, and
align them to the organization’s strategy.
Blockchain enables health care stakeholders to learn and to
improve their services, thereby enhancing their competitiveness
and sustainability. The systems interoperability enabled by
blockchain technology can help health care professionals learn
about opportunities to innovate their services. Blockchain
technology also supports organizations in reassessing existing
processes and resources and identifying opportunities for
improvement. For example, auditability and traceability
improved by blockchain can help streamline insurance claim
processes and make them easier to manage. Blockchain can also
significantly reduce administration costs and potentially
eliminate some intermediaries that were previously needed for
data integration. Aggregated health care data can help health
care organizations reconfigure their procedures and innovate
medical services for patients. With enhanced traceability and
transparency supported by blockchain, organizations can learn
how to optimize the health care supply chain.
##### Interrelationships Among Perspectives
The BSC does not explicitly consider the interactions and
trade-offs between perspectives. In dynamic environments,
correctly identifying and addressing trade-offs between
perspectives can help organizations accurately evaluate the
target system and develop effective incentives to improve overall
organizational performance. Focusing on the financial
perspective alone may motivate organizations to reduce
nonfinancial investments that could produce long-term benefits.
In particular, if a nonfinancial perspective has no
contemporaneously congruent relationship with financial
perspective, managers may reduce investments that improve
performance in other areas for short-term benefits.
Our approach suggests that in addition to evaluating value-based
care with respect to each perspective, health care organizations
need to examine the interrelationship among the five
perspectives. For example, efforts to improve the efficiency of
internal processes (eg, improving quality process within a unit)
with blockchain applications can help health care organizations
enhance their learning capabilities (eg, creating quality
management processes at the organizational level).
While developing relevant key metrics for each perspective (see
Multimedia Appendix 3) is crucial for the effective use of the
BSC, it is also important to carefully examine the relationships
among the perspectives to understand how focus on one affects
performance in others in both the short and long term (see
Multimedia Appendix 4 for some of the tradeoffs that merit
consideration). These relationships depend on case
characteristics and are therefore not conclusive. For example, as
health care organizations learn how to better use blockchain
applications, they can use this knowledge to improve their
internal processes. Efficient and effective processes can lead to
improved service quality, thereby increasing customer
satisfaction and revenue in the long term. In turn, organizations
can invest more resources in identifying opportunities to learn
and develop blockchain applications across the various units.
Similarly, an existing health care system may provide a
moderate level of data protection that can be achieved with
minimal investment, moderate levels of customer satisfaction,
and minimal changes to internal processes and learning
capabilities. When providing more secure protection of patients’
medical data becomes a top priority for compliance with external
and regulatory requirements, organizations may consider
adopting a blockchain solution. From the financial perspective,
adopting blockchain applications may have a negative impact
on organizations as it increases costs in the short term. In
addition, blockchain adoption may decrease customer
satisfaction in the short term until customers become familiar
with the new systems and realize value through capabilities
such as ease of access and control. These short-term negative
impacts on the customer and financial perspectives may
delay the adoption of improvements to internal processes. In
the long term, however, improvements to internal processes
that are facilitated by the technology may positively affect
customer satisfaction. In addition, process improvements can
facilitate learning capabilities, which, in turn, positively affect
internal processes and organizational finances in the long term.
##### Case Study: Analysis of the Proposed Extended Balanced Scorecard with a Blockchain Implementation in Health Care
**Outline**
What follows is a case study applying the BSC framework to
the implementation of a blockchain in health care. PharmaChain
Inc is a business unit that manages aviation and trucking
transportation within the supply chain journey for
pharmaceuticals. PharmaChain Inc prides itself on maintaining
pharmaceutical supply chain industry certification to handle
high-value, temperature-sensitive cargo. The highest impact of
blockchain implementation is providing greater visibility and
transparency, thereby ensuring the safe transportation of
life-saving pharmaceuticals. Business leaders suggest that this
blockchain application use case, managing aviation and trucking
of pharmaceutical products from manufacturers to health care
providers, serves as an example of PharmaChain Inc’s
commitment to pursuing high impact innovation.
While stakeholders often have varying perspectives and goals,
this use case illustrates significant benefits for two important
stakeholder groups, namely, customers and providers. The
varying stakeholder goals within supply chains result in
operational complexity when the process is desynchronized.
Blockchain technology helps standardize stakeholder
interactions, contributing mutual benefit to the provider and the
customer. Standardization of interactions, in turn, reduces human
intervention and results in accrual of added business value to
all stakeholders. See Figure 2 for a summary of the case study.
**Figure 2.** Case study: Application of developed framework in the pharmaceutical supply chain.
##### Customer Perspective
Customers are at the center of all decisions at PharmaChain Inc.
PharmaChain Inc is committed to customer service and
innovation, and these two values guide its decision to strengthen
its pharmaceutical transportation services. The blockchain
solution enables customers to track and trace their
temperature-regulated pharmaceutical products, thereby
increasing consumer confidence. As an additional benefit, the
organization receives positive media attention regarding its
commitment to safely transporting pharmaceutical products,
which positively strengthens the company’s relationship with
its customer base. Thus, the customer gains real-time access
via mobile device or desktop computer to trustworthy
information via the blockchain, and the need to contact customer
service, which can be time consuming for the customer and
costly for the company, is removed.
##### Internal Perspective
PharmaChain Inc explores various parts of the internal and
external process to solve customer challenges. Blockchain aids
in the reduction of lags in the internal processes between
temperature measurement and timely corrective actions. Those
lags may have otherwise resulted in increased liability, loss of
product efficacy, and product destruction. This implementation
facilitates the monitoring of the pharmaceutical products’
exposure to undesirable conditions (such as temperature
extremes and delays in transit).
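The monitoring process described above can be sketched as a tamper-evident log of sensor readings. The hash chain below is a simplified stand-in for the blockchain ledger in the case study, and the temperature thresholds, shipment identifiers, and field names are assumptions for illustration, not details of PharmaChain Inc's implementation.

```python
import hashlib
import json
import time

SAFE_RANGE = (2.0, 8.0)   # assumed cold-chain range in degrees Celsius
ledger = []               # in-memory stand-in for a shared, append-only ledger

def append_reading(shipment_id: str, temp_c: float) -> dict:
    """Append a timestamped temperature reading to a hash-chained log."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry = {"shipment": shipment_id, "temp_c": temp_c,
             "ts": time.time(), "prev": prev_hash}
    # The hash covers the reading and its link to the previous entry,
    # making retroactive edits detectable.
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    # Derived flag (not part of the hash): excursion triggers corrective action.
    entry["alert"] = not (SAFE_RANGE[0] <= temp_c <= SAFE_RANGE[1])
    ledger.append(entry)
    return entry

append_reading("SHP-001", 4.5)                    # within range
print(append_reading("SHP-001", 9.2)["alert"])    # excursion -> True
```

Because every reading is chained to its predecessor, an excursion cannot be quietly deleted after the fact, which is what shortens the lag between measurement and corrective action.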
##### External Perspective
Externally, blockchain delivers process improvements that can
be leveraged to resolve legal and compliance issues more rapidly,
ultimately allowing lifesaving medicine to reach patients more
quickly by eliminating customary hold times in customs. A
blockchain initiative was selected to improve visibility and
facilitate trust among stakeholders (eg, manufacturers,
distributors, transporters, government agencies, and pharmacies).
If the freight forwarders do not produce and submit customs
approval forms in a timely fashion, the pharmaceutical products
cannot be released, with the duration of the hold potentially
affecting the quality of the product and negatively affecting the
customer experience. A trusted blockchain minimizes the
standard 4- to 8-hour hold duration necessary to verify the
validity of the customs approval submitted by the freight
forwarder, and improved compliance also helps increase trust
among external partners.
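The customs verification step above can be illustrated by anchoring a hash of the approval form on a shared ledger so that customs can check authenticity immediately instead of holding the shipment. This is a hedged sketch: the dict-based registry stands in for a blockchain, and the shipment identifiers and form contents are hypothetical.

```python
import hashlib

registry = {}  # shipment id -> registered document hash (stand-in for a ledger)

def register_form(shipment_id: str, form_bytes: bytes) -> None:
    """Freight forwarder anchors a fingerprint of the customs approval form."""
    registry[shipment_id] = hashlib.sha256(form_bytes).hexdigest()

def verify_form(shipment_id: str, form_bytes: bytes) -> bool:
    """Customs recomputes the hash and compares it with the registered one."""
    return registry.get(shipment_id) == hashlib.sha256(form_bytes).hexdigest()

register_form("SHP-001", b"customs approval form v1")
print(verify_form("SHP-001", b"customs approval form v1"))   # True
print(verify_form("SHP-001", b"tampered form"))              # False
```

Verification reduces to a hash comparison, which is why a trusted ledger can compress the standard 4- to 8-hour manual hold described in the case study.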
##### Learning and Growth (Innovation) Perspective
Blockchain applications support PharmaChain Inc in improving
its learning capabilities by enabling it to analyze its business
processes and optimize them. The learning capabilities can be
extended beyond pharmaceutical products, resulting in
organizational efficiencies. The growth opportunity within
blockchain applications is enabling traceability along the supply
chain journey. Traceability helps reduce fraud in the
pharmaceutical supply chain, which is a major societal benefit.
Encouraged by the success of the initiative, the organization is
deploying blockchain across multiple business products,
especially for high value activities and products like pet
transportation and food items.
##### Financial Perspective
For PharmaChain Inc, pharmaceuticals represent one of its
highest grossing revenue centers among all its shipping products.
With a supply chain industry ripe for innovation, PharmaChain
Inc accepts that a financial investment must be made to realize
the key benefits of blockchain technology. Blockchain
technology reduces the risk of theft and fraud as pharmaceutical
products move through multiple warehouses along the supply
chain, thus justifying the financial investment. The blockchain
solution implemented by PharmaChain Inc positively impacts
customer service and internal and external processes, increasing
reliability and thereby reducing long-term costs. The risk of
theft is minimized due to the automation of security controls,
facilitated by the blockchain implementation. In addition, the
cost of physical tracking of shipments is also minimized. The
organization anticipates a lift of 10% in pharmaceutical sales
over an 18-month period due to the initiative.
##### Tradeoffs Between the Perspectives
Transparency is one of the key characteristics of blockchain
that helps to facilitate value within health care. Transparency
helps ensure the authenticity of the pharmaceutical products
while providing a single source of truth for the pharmacy
supply chain network. However, transparency comes with
tradeoffs between the value-based perspectives. For example,
transparency replaces the concept of _need to know_ that
previously existed between the internal operational perspective
and the customer perspective. Prior to the adoption of the
blockchain solution, process improvements that were necessary
to address internal operational failure were implemented only
when the benefits outweighed the costs. With the introduction
of blockchain, increased transparency may increase the exposure
of failures in internal operations to the entire supply chain
network, which, in turn, may reduce confidence in PharmaChain
Inc. Therefore, any deficiencies identified in internal processes
will be addressed more rapidly. While this increases the cost of
the pharmacy product in the short term, it is likely to improve
performance in the long term. Since blockchain in
pharmaceuticals is transformational in providing trusted
information, positive media attention that results from being an
innovator in the industry provides additional opportunities for
expanding the customer base.
Thus, PharmaChain Inc needs to continuously balance
competing demands to improve internal operations and to
innovate. Blockchain innovations require financial and human
capital investments, which compete with the demands to
improve existing internal systems. Thus, at least in the short
term, increased quality of services provided to the customer (for
example, via the ability to track and trace pharmaceutical
products) may negatively affect the financial metrics. However,
the benefits are expected to significantly increase financial
performance in the long term as the blockchain technology
enables PharmaChain Inc to offer superior services in
comparison to its competition, thus providing PharmaChain Inc
the opportunity to strengthen its competitive position in the
industry.
### Discussion
In this study, we provide a comprehensive framework that can be used
to evaluate blockchain implementation in the value-based health
care context, and our study contributes to research streams on
blockchain technology, the balanced scorecard framework, and
value-based care.
First, our framework can help decision makers in health care
organizations evaluate the feasibility and utility of various
blockchain proposals that seek to address the health IT chasm
reported in prior research [1]. We examined the health IT chasm
from three stakeholder perspectives to identify how
blockchain-based solutions can resolve these issues based on
existing use cases (Multimedia Appendix 1). However, because
this disruptive technology is still in its infancy, having a holistic
view of the value of blockchain applications is critical to making
informed strategic investment decisions [55,81]. Our framework
will aid health care organizations in holistically considering the
implications of blockchain technology from five critical
perspectives. While prior literature has identified three groups
of stakeholders central to the delivery of value-based care [1],
our study additionally highlights the critical role of external
stakeholders and regulations.
In addition, our study extends the BSC framework by
emphasizing the importance of the external perspective within
the health care domain. The health care domain is a dynamic
environment marked by changing regulations as well as
competitive forces that are charting the course of the industry
more rapidly than ever before. Regulatory compliance and
value-based provision of services and products are two salient
considerations in the health care industry. While value can be
created through external partnerships, interoperability among
IT systems and regulatory compliance are two areas of concern
that constrain such partnerships. Blockchain’s inherent
characteristics, such as transparency, immutability, and
traceability, facilitate interoperability and enable health care
organizations to both cocreate value with their external
stakeholders and comply with regulations. Considering the
influence of the external environment on a health care
organization’s existence, our framework enables the examination
of the external perspective when evaluating the performance of
blockchain-based HIT solutions.
Third, with their emphasis on value-based care, health care
organizations need to develop integrated health care IT
infrastructure that can improve services and reduce medical
errors. Blockchain, with its inherent trust- and
security-promoting qualities, has the potential to significantly
affect various areas of value provision for patients in health
care. While many performance evaluation solutions exist, our
study demonstrates the unique aspects of BSC in evaluating IT
initiatives for enabling value-based care. The BSC framework
enables the consideration of both financial and nonfinancial
dimensions of IT initiatives in the short term as well as the long
term. When compared with other performance evaluation
solutions (such as Zachman’s framework, the HCI framework,
or the technology-centric framework), our extended BSC
framework facilitates consideration of the external perspective.
It also defines and assesses performance against operational
metrics for each of the five critical perspectives. In addition,
our approach highlights the importance of the interrelationships
among the perspectives, thus offering another critical extension
of the BSC approach. The BSC, however, is limited in its ability
to build intuitive and interactive systems like those that HCI
and other frameworks provide. Thus, we recommend combining
the BSC approach with other appropriate framework(s) to meet
an organization’s unique needs.

Finally, our case study illustrates how the proposed framework
can be utilized to evaluate a health care blockchain application
in the for-profit sector. Our approach can also be extended to
not-for-profit organizations, which prioritize social goals over
financial goals. In such organizations, the financial perspective
can be modified to focus on financial sustainability by
establishing metrics such as cost reduction, revenue growth,
and cost of stakeholder engagement. Similarly, the customer
perspective may be widened to include additional stakeholders,
such as donors, funding sources, community, volunteers, and
employees, that are critical to such organizations [82].

##### Conflicts of Interest
None declared.

##### Multimedia Appendix 1
How blockchain can empower value-based care.
[PDF File (Adobe PDF File), 230 KB](https://jmir.org/api/download?alt_name=jmir_v21i9e13595_app1.pdf&filename=66b60e5b6a3a633d5ac78a555dbde572.pdf)

##### Multimedia Appendix 2
Assessment of performance evaluation frameworks.
[PDF File (Adobe PDF File), 188 KB](https://jmir.org/api/download?alt_name=jmir_v21i9e13595_app2.pdf&filename=323c70e8c0d159e0f46ca1af7aed80bd.pdf)

##### Multimedia Appendix 3
Metrics (KPIs) per perspectives.
[PDF File (Adobe PDF File), 182 KB](https://jmir.org/api/download?alt_name=jmir_v21i9e13595_app3.pdf&filename=9e4df6e94e54c06fe88580c85b622bb7.pdf)

##### Multimedia Appendix 4
Relationship matrix among perspectives.
[PDF File (Adobe PDF File), 104 KB](https://jmir.org/api/download?alt_name=jmir_v21i9e13595_app4.pdf&filename=b3736dce7bcfeb07ee2d99df26c9c38a.pdf)

##### References
1. Adler-Milstein J, Embi PJ, Middleton B, Sarkar IN, Smith J. Crossing the health IT chasm: considerations and policy
recommendations to overcome current challenges and enable value-based care. J Am Med Inform Assoc 2017 Sep
01;24(5):1036-1043. [doi: 10.1093/jamia/ocx017] [Medline: 28340128]
2. Mettler M. Blockchain technology in healthcare: the revolution starts here. 2016 Presented at: IEEE 18th International
Conference on e-Health Networking, Applications and Services (Healthcom); 14-16 Sept 2016; Munich, Germany p. 1-3
URL: https://ieeexplore.ieee.org/document/7749510 [doi: 10.1109/healthcom.2016.7749510]
3. Nakamoto S. Bitcoin: a peer-to-peer electronic cash system. 2008. URL: https://bitcoin.org/bitcoin.pdf [accessed 2018-12-10]
4. Kim HM, Laskowski M. Toward an ontology-driven blockchain design for supply-chain provenance. Intell Sys Acc Fin
[Mgmt 2018 Mar 28;25(1):18-27. [doi: 10.1002/isaf.1424]](http://dx.doi.org/10.1002/isaf.1424)
5. [Marr B. Forbes. 2018. 35 Amazing real world examples of how blockchain is changing our world URL: https://www.](https://www.forbes.com/sites/bernardmarr/2018/01/22/35-amazing-real-world-examples-of-how-blockchain-is-changing-our-world/#1f18c5943b56)
[forbes.com/sites/bernardmarr/2018/01/22/35-amazing-real-world-examples-of-how-blockchain-is-changing-our-world/](https://www.forbes.com/sites/bernardmarr/2018/01/22/35-amazing-real-world-examples-of-how-blockchain-is-changing-our-world/#1f18c5943b56)
[#1f18c5943b56 [accessed 2018-11-10]](https://www.forbes.com/sites/bernardmarr/2018/01/22/35-amazing-real-world-examples-of-how-blockchain-is-changing-our-world/#1f18c5943b56)
6. Lagasse J. Healthcare Finance. 2017. Curisium raises $3.5 million to bring blockchain to value-based contracting URL:
[https://www.healthcarefinancenews.com/news/curisium-raises-35-million-bring-blockchain-value-based-contracting](https://www.healthcarefinancenews.com/news/curisium-raises-35-million-bring-blockchain-value-based-contracting)
[accessed 2018-11-20]
7. [Iansiti M, Lakhani KR. The truth about blockchain. Harvard Business Review 2017;95(1):118-127 [FREE Full text]](https://hbr.org/2017/01/the-truth-about-blockchain)
8. Glaser F. Pervasive decentralisation of digital infrastructures: a framework for blockchain enabled system and use case
analysis. 2017 Presented at: 50th Hawaii International Conference on System Sciences; Jan 1 2017 - Jan 7 2017; Hawaii
[URL: https://aisel.aisnet.org/hicss-50/da/open_digital_services/4/ [doi: 10.24251/hicss.2017.186]](https://aisel.aisnet.org/hicss-50/da/open_digital_services/4/)
9. LeRouge C, Mantzana V, Wilson EV. Healthcare information systems research, revelations and visions. European Journal
[of Information Systems 2017 Dec 19;16(6):669-671. [doi: 10.1057/palgrave.ejis.3000712]](http://dx.doi.org/10.1057/palgrave.ejis.3000712)
10. Kaplan RS, Norton DP. The balanced scorecard: measures that drive performance. Harvard Business Review 1992;70(1):71-79
[[FREE Full text]](https://hbr.org/1992/01/the-balanced-scorecard-measures-that-drive-performance-2)
11. Jain R, Benbunan-Fich R, Mohan K. Assessing Green IT Initiatives Using the Balanced Scorecard. IT Prof 2011
[Jan;13(1):26-32. [doi: 10.1109/mitp.2010.139]](http://dx.doi.org/10.1109/mitp.2010.139)
12. Voelker KE, Rakich JS, French GR. The balanced scorecard in healthcare organizations: a performance measurement and
[strategic planning methodology. Hosp Top 2001;79(3):13-24. [doi: 10.1080/00185860109597908] [Medline: 11794940]](http://dx.doi.org/10.1080/00185860109597908)
13. Slabodkin G. A work in progress: blockchain. Health Data Management 2017;25(3):37-39.
14. ONC HHS. The Office of the National Coordinator for Health Information Technology. 2015. Federal health IT strategic
[plan URL: https://www.healthit.gov/sites/default/files/9-5-federalhealthitstratplanfinal_0.pdf [accessed 2018-11-20]](https://www.healthit.gov/sites/default/files/9-5-federalhealthitstratplanfinal_0.pdf)
15. Patel MS, Davis MM, Lypson ML. The VALUE Framework: training residents to provide value-based care for their patients.
[J Gen Intern Med 2012 Sep;27(9):1210-1214 [FREE Full text] [doi: 10.1007/s11606-012-2076-7] [Medline: 22573146]](http://europepmc.org/abstract/MED/22573146)
16. [Porter ME. What Is Value in Health Care? N Engl J Med 2010 Dec 23;363(26):2477-2481. [doi: 10.1056/nejmp1011024]](http://dx.doi.org/10.1056/nejmp1011024)
17. Tseng EK, Hicks LK. Value Based Care and Patient-Centered Care: Divergent or Complementary? Curr Hematol Malig
[Rep 2016 Aug;11(4):303-310. [doi: 10.1007/s11899-016-0333-2] [Medline: 27262855]](http://dx.doi.org/10.1007/s11899-016-0333-2)
18. Porter ME. A strategy for health care reform--toward a value-based system. N Engl J Med 2009 Jul 09;361(2):109-112.
[[doi: 10.1056/NEJMp0904131] [Medline: 19494209]](http://dx.doi.org/10.1056/NEJMp0904131)
19. [LaPointe J. Revcycle Intelligence. 2016 Jun 07. What is value-based care, what it means for providers? URL: https:/](https://revcycleintelligence.com/features/what-is-value-based-care-what-it-means-for-providers)
[/revcycleintelligence.com/features/what-is-value-based-care-what-it-means-for-providers [accessed 2018-12-10]](https://revcycleintelligence.com/features/what-is-value-based-care-what-it-means-for-providers)
20. Office of the National Coordinator for Health Information Technology. 2015. U.S. hospital adoption of patient engagement
[functionalities URL: https://dashboard.healthit.gov/quickstats/pages/](https://dashboard.healthit.gov/quickstats/pages/FIG-Hospital-Adoption-of-Patient-Engagement-Functionalities.php)
[FIG-Hospital-Adoption-of-Patient-Engagement-Functionalities.php [accessed 2018-11-20]](https://dashboard.healthit.gov/quickstats/pages/FIG-Hospital-Adoption-of-Patient-Engagement-Functionalities.php)
21. Zhang J, Xue N, Huang X. A Secure System For Pervasive Social Network-Based Healthcare. IEEE Access
[2016;4:9239-9250. [doi: 10.1109/ACCESS.2016.2645904]](http://dx.doi.org/10.1109/ACCESS.2016.2645904)
22. Engelhardt MA. Hitching Healthcare to the Chain: An Introduction to Blockchain Technology in the Healthcare Sector.
[TIM Review 2017 Oct 27;7(10):22-34. [doi: 10.22215/timreview/1111]](http://dx.doi.org/10.22215/timreview/1111)
23. Ekblaw A, Azaria A, Halamka J, Lippman A. A case study for blockchain in healthcare:. 2016 Presented at: Proceedings
of IEEE Open & Big Data Conference; Aug 22, 2016 - Aug 24, 2016; Vienna, Austria.
24. Dey T, Jaiswal S, Sunderkrishnan S, Katre N. HealthSense: a medical use case of Internet of Things and blockchain. 2017
Presented at: International conference on intelligent sustainable systems (ICISS); Dec 7, 2017 - Dec 8, 2017; Palladam,
[India. [doi: 10.1109/iss1.2017.8389459]](http://dx.doi.org/10.1109/iss1.2017.8389459)
25. Ichikawa D, Kashiyama M, Ueno T. Tamper-Resistant Mobile Health Using Blockchain Technology. JMIR Mhealth
[Uhealth 2017 Jul 26;5(7):e111 [FREE Full text] [doi: 10.2196/mhealth.7938] [Medline: 28747296]](https://mhealth.jmir.org/2017/7/e111/)
26. Saravanan M, Shubha R, Marks A, Iyer V. SMEAD: a secured mobile enabled assisting device for diabetics monitoring.
2017 Presented at: IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS); Dec
[17, 2017 - Dec 20, 2017; Bhubaneswar, India p. 1-6. [doi: 10.1109/ants.2017.8384099]](http://dx.doi.org/10.1109/ants.2017.8384099)
27. Carson B, Romanelli G, Walsh P, Zhumaev A. McKinsey & Company. 2018. Blockchain beyond the hype: what is the
[strategic business value? URL: https://cybersolace.co.uk/CySol/wp-content/uploads/2018/06/](https://cybersolace.co.uk/CySol/wp-content/uploads/2018/06/McKinsey-paper-about-Blockchain-Myths.pdf)
[McKinsey-paper-about-Blockchain-Myths.pdf [accessed 2018-11-20]](https://cybersolace.co.uk/CySol/wp-content/uploads/2018/06/McKinsey-paper-about-Blockchain-Myths.pdf)
28. Genestier P, Zouarhi S, Limeux P, Excoffier D, Prola A. Blockchain for consent management in the ehealth environment:
a nugget for privacy and security challenges. Journal of the International Society for Telemedicine and eHealth 2017;5:1-4
[[FREE Full text]](https://journals.ukzn.ac.za/index.php/JISfTeH/article/view/269)
29. [Chandler N. Gartner. 2010 Apr 02. Solutions for scorecards and strategy management URL: https://www.gartner.com/en/](https://www.gartner.com/en/documents/1334316/solutions-for-scorecards-and-strategy-management)
[documents/1334316/solutions-for-scorecards-and-strategy-management [accessed 2018-12-10]](https://www.gartner.com/en/documents/1334316/solutions-for-scorecards-and-strategy-management)
30. Nørreklit H. The Balanced Scorecard: what is the score? A rhetorical analysis of the Balanced Scorecard. Accounting,
[Organizations and Society 2003 Aug;28(6):591-619. [doi: 10.1016/s0361-3682(02)00097-1]](http://dx.doi.org/10.1016/s0361-3682(02)00097-1)
31. Glover D, Hermans J. Improving the traceability of the clinical trial supply chain. Applied Clinical Trials 2017;26(12):36-39
[[FREE Full text]](http://www.appliedclinicaltrialsonline.com/improving-traceability-clinical-trial-supply-chain)
32. Funk E, Riddell J, Ankel F, Cabrera D. Blockchain Technology: A Data Framework to Improve Validity, Trust, and
Accountability of Information Exchange in Health Professions Education. Acad Med 2018 Dec;93(12):1791-1794. [doi:
[10.1097/ACM.0000000000002326] [Medline: 29901658]](http://dx.doi.org/10.1097/ACM.0000000000002326)
33. Moreira C, Farley M, Gebreyes K, McDonnell W. PwC. 2018. A prescription for blockchain and healthcare: reinvent or
[be reinvented URL: https://www.pwc.com/us/en/health-industries/health-research-institute/assets/pdf/](https://www.pwc.com/us/en/health-industries/health-research-institute/assets/pdf/pwc-hri-a-prescription-for-blockchain-in-healthcare_27sept2018.pdf)
[pwc-hri-a-prescription-for-blockchain-in-healthcare_27sept2018.pdf [accessed 2018-12-10]](https://www.pwc.com/us/en/health-industries/health-research-institute/assets/pdf/pwc-hri-a-prescription-for-blockchain-in-healthcare_27sept2018.pdf)
34. Androulaki E, Barger A, Bortnikov V, Cachin C, Christidis K, De Caro A, et al. Hyperledger fabric: a distributed operating
system for permissioned blockchains. ACM; 2018 Presented at: Thirteenth EuroSys Conference; April 23-26, 2018; Porto,
Portugal. [doi: 10.1145/3190508.3190538]
35. Bocek T, Rodrigues B, Strasser T, Stiller B. Blockchains everywhere - a use-case of blockchains in the pharma supply-chain.
2017 Presented at: IFIP/IEEE Symposium on Integrated Network and Service Management; May 8, 2017 - May 12, 2017;
[Lisbon, Portugal. [doi: 10.23919/inm.2017.7987376]](http://dx.doi.org/10.23919/inm.2017.7987376)
36. HealthIT. IBM; 2016. Blockchain: the chain of trust and its potential to transform healthcare – our point of view URL:
https://www.healthit.gov/sites/default/files/8-31-blockchain-ibm_ideation-challenge_aug8.pdf [accessed 2018-12-11]
37. Nugent T, Upton D, Cimpoesu M. Improving data transparency in clinical trials using blockchain smart contracts. F1000Res
[2016;5:2541 [FREE Full text] [doi: 10.12688/f1000research.9756.1] [Medline: 28357041]](https://f1000research.com/articles/10.12688/f1000research.9756.1/doi)
38. Benchoufi M, Ravaud P. Blockchain technology for improving clinical research quality. Trials 2017 Jul 19;18(1):335
[[FREE Full text] [doi: 10.1186/s13063-017-2035-z] [Medline: 28724395]](https://trialsjournal.biomedcentral.com/articles/10.1186/s13063-017-2035-z)
39. Hollan J, Hutchins E, Kirsh D. Distributed cognition: toward a new foundation for human-computer interaction research.
[ACM Trans. Comput.-Hum. Interact 2000;7(2):174-196. [doi: 10.1145/353485.353487]](http://dx.doi.org/10.1145/353485.353487)
40. Zhang A, Lin X. Towards Secure and Privacy-Preserving Data Sharing in e-Health Systems via Consortium Blockchain.
[J Med Syst 2018 Jun 28;42(8):140. [doi: 10.1007/s10916-018-0995-5] [Medline: 29956061]](http://dx.doi.org/10.1007/s10916-018-0995-5)
41. Yue X, Wang H, Jin D, Li M, Jiang W. Healthcare Data Gateways: Found Healthcare Intelligence on Blockchain with
[Novel Privacy Risk Control. J Med Syst 2016 Oct;40(10):218. [doi: 10.1007/s10916-016-0574-6] [Medline: 27565509]](http://dx.doi.org/10.1007/s10916-016-0574-6)
42. Cichosz SL, Stausholm MN, Kronborg T, Vestergaard P, Hejlesen O. How to Use Blockchain for Diabetes Health Care
[Data and Access Management: An Operational Concept. J Diabetes Sci Technol 2019 Mar;13(2):248-253 [FREE Full text]](http://europepmc.org/abstract/MED/30047789)
[[doi: 10.1177/1932296818790281] [Medline: 30047789]](http://dx.doi.org/10.1177/1932296818790281)
43. Carson B, Romanelli G, Walsh P, Zhumaev A. McKinsey & Company. 2018. Blockchain beyond the hype: what is the
[strategic business value? URL: https://cybersolace.co.uk/CySol/wp-content/uploads/2018/06/](https://cybersolace.co.uk/CySol/wp-content/uploads/2018/06/McKinsey-paper-about-Blockchain-Myths.pdf)
[McKinsey-paper-about-Blockchain-Myths.pdf [accessed 2018-12-10]](https://cybersolace.co.uk/CySol/wp-content/uploads/2018/06/McKinsey-paper-about-Blockchain-Myths.pdf)
44. Agbo C, Mahmoud Q, Eklund J. Blockchain Technology in Healthcare: A Systematic Review. Healthcare (Basel) 2019
[Apr 04;7(2):56 [FREE Full text] [doi: 10.3390/healthcare7020056] [Medline: 30987333]](http://www.mdpi.com/resolver?pii=healthcare7020056)
45. Leung D, Suhl A, Gilad Y, Zeldovich N. Network and Distributed Systems Security (NDSS) Symposium. 2019. Vault:
[fast bootstrapping for the algorand cryptocurrency URL: https://people.csail.mit.edu/nickolai/papers/leung-vault.pdf](https://people.csail.mit.edu/nickolai/papers/leung-vault.pdf)
[accessed 2019-05-20]
46. [Kuzmanovic A. Net neutrality. Commun. ACM 2019 Apr 24;62(5):50-55. [doi: 10.1145/3312525]](http://dx.doi.org/10.1145/3312525)
47. Park YR, Lee E, Na W, Park S, Lee Y, Lee J. Is Blockchain Technology Suitable for Managing Personal Health Records?
[Mixed-Methods Study to Test Feasibility. J Med Internet Res 2019 Feb 08;21(2):e12533 [FREE Full text] [doi:](https://www.jmir.org/2019/2/e12533/)
[10.2196/12533] [Medline: 30735142]](http://dx.doi.org/10.2196/12533)
48. Kuo T, Kim H, Ohno-Machado L. Blockchain distributed ledger technologies for biomedical and health care applications.
[J Am Med Inform Assoc 2017 Nov 01;24(6):1211-1220 [FREE Full text] [doi: 10.1093/jamia/ocx068] [Medline: 29016974]](http://europepmc.org/abstract/MED/29016974)
49. Liu W, Zhu S, Mundie T, Krieger U. Advanced block-chain architecture for e-health systems. IEEE; 2017 Presented at:
IEEE 19th International Conference on e-Health Networking, Applications and Services (Healthcom); Oct 12-15, 2017;
Dalian, China p. 37-42. [doi: 10.1109/healthcom.2017.8210847]
50. Roman-Belmonte JM, De la Corte-Rodriguez H, Rodriguez-Merchan EC. How blockchain technology can change medicine.
[Postgrad Med 2018 May;130(4):420-427. [doi: 10.1080/00325481.2018.1472996] [Medline: 29727247]](http://dx.doi.org/10.1080/00325481.2018.1472996)
51. Angraal S, Krumholz HM, Schulz WL. Blockchain Technology: Applications in Health Care. Circ Cardiovasc Qual
[Outcomes 2017 Sep;10(9):1-3. [doi: 10.1161/CIRCOUTCOMES.117.003800] [Medline: 28912202]](http://dx.doi.org/10.1161/CIRCOUTCOMES.117.003800)
52. Zhou L, Wang L, Sun Y. MIStore: a Blockchain-Based Medical Insurance Storage System. J Med Syst 2018 Jul 02;42(8):149
[[FREE Full text] [doi: 10.1007/s10916-018-0996-4] [Medline: 29968202]](http://europepmc.org/abstract/MED/29968202)
53. [Miller R. TechCrunch. PokitDok teams with Intel on healthcare blockchain solution URL: https://techcrunch.com/2017/](https://techcrunch.com/2017/05/10/pokitdok-teams-with-intel-on-healthcare-blockchain-solution/)
[05/10/pokitdok-teams-with-intel-on-healthcare-blockchain-solution/ [accessed 2018-12-11]](https://techcrunch.com/2017/05/10/pokitdok-teams-with-intel-on-healthcare-blockchain-solution/)
54. Schumacher A. Blockchain Research Institute.: Brightline; 2018. Reinventing healthcare on the blockchain - toward a new
[era in precision medicine URL: https://www.brightline.org/resources/reinventing-healthcare-on-the-blockchain/ [accessed](https://www.brightline.org/resources/reinventing-healthcare-on-the-blockchain/)
2018-12-11]
55. Vazirani AA, O'Donoghue O, Brindley D, Meinert E. Implementing Blockchains for Efficient Health Care: Systematic
[Review. J Med Internet Res 2019 Feb 12;21(2):e12439 [FREE Full text] [doi: 10.2196/12439] [Medline: 30747714]](https://www.jmir.org/2019/2/e12439/)
56. Meinert E, Alturkistani A, Brindley D, Knight P, Wells G, Pennington ND. The technological imperative for value-based
[health care. Br J Hosp Med (Lond) 2018 Jun 02;79(6):328-332. [doi: 10.12968/hmed.2018.79.6.328] [Medline: 29894248]](http://dx.doi.org/10.12968/hmed.2018.79.6.328)
57. Sullivan SD, Watkins J, Sweet B, Ramsey SD. Health technology assessment in health-care decisions in the United States.
[Value Health 2009 Jun;12 Suppl 2:S39-S44 [FREE Full text] [doi: 10.1111/j.1524-4733.2009.00557.x] [Medline: 19523183]](https://linkinghub.elsevier.com/retrieve/pii/S1098-3015(10)60060-5)
58. Chernew ME, Rosen AB, Fendrick AM. Value-based insurance design. Health Aff (Millwood) 2007;26(2):w195-w203.
[[doi: 10.1377/hlthaff.26.2.w195] [Medline: 17264100]](http://dx.doi.org/10.1377/hlthaff.26.2.w195)
59. Drummond M, Sculpher M, Claxton K, Stoddart G, Torrance G. Methods for the Economic Evaluation of Health Care
Programmes. Oxford: Oxford University Press; 2015.
60. Sowa JF, Zachman JA. Extending and formalizing the framework for information systems architecture. IBM Syst. J
[1992;31(3):590-616. [doi: 10.1147/sj.313.0590]](http://dx.doi.org/10.1147/sj.313.0590)
61. O'Brien MA, Rogers WA, Fisk AD. Developing a Framework for Intuitive Human-Computer Interaction. Proc Hum Factors
[Ergon Soc Annu Meet 2008 Sep;52(20):1645-1649 [FREE Full text] [doi: 10.1177/154193120805202001] [Medline:](http://europepmc.org/abstract/MED/25552895)
[25552895]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=25552895&dopt=Abstract)
62. Cheng LC, Gibson ML, Carrillo EE, Fitch G. A technology‐centric framework for investigating business operations.
[Industr Mngmnt & Data Systems 2011 Apr 26;111(4):509-530. [doi: 10.1108/02635571111133524]](http://dx.doi.org/10.1108/02635571111133524)
63. Hoque Z. Total quality management and the balanced scorecard approach: a critical analysis of their potential relationships
and directions for research. Critical Perspectives on Accounting 2003 Jul;14(5):553-566. [doi:
[10.1016/s1045-2354(02)00160-0]](http://dx.doi.org/10.1016/s1045-2354(02)00160-0)
64. Hasan N. University of Birmingham. 2006. Developing a balanced scorecard model for evaluation of project management
[and performance URL: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494604 [accessed 2018-12-10]](https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494604)
65. Metawie M, Gilman D. Problems with the implementation of performance measurement systems in the public sector where
performance is linked to pay: A literature review drawn from the UK. 2005 Presented at: 3rd Conference on Performance
[Measurement and Management Control; Sept 22, 2005 - Sept 23, 2005; Nice, France URL: http://citeseerx.ist.psu.edu/](http://citeseerx.ist.psu.edu/viewdoc/citations?doi=10.1.1.104.9267)
[viewdoc/citations?doi=10.1.1.104.9267](http://citeseerx.ist.psu.edu/viewdoc/citations?doi=10.1.1.104.9267)
66. Enwere EN, Keating EA, Weber RJ. Balanced scorecards as a tool for developing patient-centered pharmacy services.
[Hosp Pharm 2014 Jun;49(6):579-584 [FREE Full text] [doi: 10.1310/hpj4906-579] [Medline: 24958976]](http://europepmc.org/abstract/MED/24958976)
67. Zelman WN, Pink GH, Matthias CB. Use of the balanced scorecard in health care. J Health Care Finance 2003;29(4):1-16.
[[Medline: 12908650]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=12908650&dopt=Abstract)
68. Bandopadhyay T. Gartner. 2013 Dec 6. I&O Teams Can Build Balanced Scorecards for Business Cases on IT Services
Transitioning to the Cloud.
69. Schelp J, Stutz M. Via Nova Architectura. 2007. A balanced scorecard approach to measure the value of enterprise architecture
[URL: https://www.alexandria.unisg.ch/213190/1/SchelpStutz.2007-BalancedScorecardApproach.pdf [accessed 2018-12-15]](https://www.alexandria.unisg.ch/213190/1/SchelpStutz.2007-BalancedScorecardApproach.pdf)
70. Rillo M. Limitations of balanced scorecard. 2004 Presented at: Proceedings of the 2nd Scientific and Educational Conference,
Business Administration: Business in a Globalizing Economy; Jan 30, 2004 - Jan 31, 2004; Parnu p. 155-161.
71. Banker R, Chang H, Janakiraman S, Konstans C. A balanced scorecard analysis of performance metrics. European Journal
[of Operational Research 2004 Apr;154(2):423-436 [FREE Full text] [doi: 10.1016/S0377-2217(03)00179-6] [Medline:](http://dx.plos.org/10.1371/journal.pone.0163477)
[27695049]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=27695049&dopt=Abstract)
72. Mohobbot A. The balanced scorecard (BSC): a critical analysis. Journal of Humanities and Social Sciences 2004;18:219-232.
73. Akkermans H, Van Oorschot K. Developing a balanced scorecard with system dynamics. Journal of the Operational Research
[Society 2002;40(56):931-941 [FREE Full text]](https://www.researchgate.net/publication/228595884_Developing_a_balanced_scorecard_with_system_dynamics)
74. Grigoroudis E, Orfanoudaki E, Zopounidis C. Strategic performance measurement in a healthcare organisation: A multiple
[criteria approach based on balanced scorecard. Omega 2012 Jan;40(1):104-119. [doi: 10.1016/j.omega.2011.04.001]](http://dx.doi.org/10.1016/j.omega.2011.04.001)
75. Culver K. HealthIT. 2016. Blockchain technologies: a whitepaper discussing how the claims process can be improved URL:
[https://www.healthit.gov/sites/default/files/3-47-whitepaperblockchainforclaims_v10.pdf [accessed 2019-04-12]](https://www.healthit.gov/sites/default/files/3-47-whitepaperblockchainforclaims_v10.pdf)
76. [Yip K. HealthIT. 2017. Blockchain and alternative payment models URL: https://www.healthit.gov/sites/default/files/](https://www.healthit.gov/sites/default/files/15-54-kyip_blockchainapms_080816.pdf)
[15-54-kyip_blockchainapms_080816.pdf [accessed 2019-09-14]](https://www.healthit.gov/sites/default/files/15-54-kyip_blockchainapms_080816.pdf)
77. Pankomera R, van Greunen D. Privacy and security issues for a patient-centric approach in public healthcare in a resource
constrained setting. IEEE; 2016 Presented at: IST-Africa Week Conference; May 11-13, 2016; Durban, South Africa.
78. DeLone WH, McLean ER. The DeLone and McLean model of information systems success: a ten-year update. Journal of
[Management Information Systems 2003;19(4):9-30. [doi: 10.1080/07421222.2003.11045748]](http://dx.doi.org/10.1080/07421222.2003.11045748)
79. Kettinger, Lee. Zones of Tolerance: Alternative Scales for Measuring Information Systems Service Quality. MIS Quarterly
[2005;29(4):607-623. [doi: 10.2307/25148702]](http://dx.doi.org/10.2307/25148702)
80. Hu C. Business Insider. 2018 Aug 10. Mount Sinai teamed up with the designers who created projects for Nike and Beyoncé
to build a futuristic, new clinic, and it's reimagining how healthcare is delivered URL:
https://www.businessinsider.com.au/inside-mount-sinais-lab100s-clinic-of-the-future-2018-8 [accessed 2018-12-10]
81. Zhang P, Walker M, White J, Schmidt D, Lenz G. Metrics for assessing blockchain-based healthcare decentralized apps.
2017 Presented at: IEEE 19th International Conference on E-Health Networking, Applications and Services (Healthcom);
[Oct 12, 2017 - Oct 15, 2017; Dalian, China. [doi: 10.1109/healthcom.2017.8210842]](http://dx.doi.org/10.1109/healthcom.2017.8210842)
82. Somers AB. Shaping the balanced scorecard for use in UK social enterprises. Social Enterprise Journal 2005 Mar;1(1):43-56.
[[doi: 10.1108/17508610580000706]](http://dx.doi.org/10.1108/17508610580000706)
##### Abbreviations
**BMPLS:** Blockchain-Based Multi-level Privacy-Preserving Location Sharing Scheme
**BSC:** Balanced Scorecard
**EHR:** electronic health record
**EFQMEM:** European Foundation Quality Management Excellence Model
**GDPR:** European General Data Protection Regulation
**HCI:** human-computer interaction
**HIT:** Health Information Technology
**IS:** Information Systems
**IT:** Information Technology
**ONC:** Office of the National Coordinator for Health Information Technology
**ROI:** return on investment
**TQM:** total quality management
_Edited by K Clauson, P Zhang; submitted 01.02.19; peer-reviewed by H Oh, C Reis, K McLeroy; comments to author 01.04.19; revised_
_version received 27.05.19; accepted 16.07.19; published 27.09.19_
_Please cite as:_
_Zhang R, George A, Kim J, Johnson V, Ramesh B_
_Benefits of Blockchain Initiatives for Value-Based Care: Proposed Framework_
_J Med Internet Res 2019;21(9):e13595_
_[URL: https://www.jmir.org/2019/9/e13595](https://www.jmir.org/2019/9/e13595)_
_[doi: 10.2196/13595](http://dx.doi.org/10.2196/13595)_
_[PMID: 31573899](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=31573899&dopt=Abstract)_
©Rongen Zhang, Amrita George, Jongwoo Kim, Veneetia Johnson, Balasubramaniam Ramesh. Originally published in the
Journal of Medical Internet Research (http://www.jmir.org), 27.09.2019. This is an open-access article distributed under the terms
of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use,
distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet
Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/,
as well as this copyright and license information must be included.
},
{
"paperId": "064fa0481199642834b1d17fe22ce2313524a747",
"title": "Blockchain for Consent Management in the eHealth Environment: A Nugget for Privacy and Security Challenges"
},
{
"paperId": "859d0535e16095f274df4d69df54954b21258a13",
"title": "Pervasive Decentralisation of Digital Infrastructures: A Framework for Blockchain enabled System and Use Case Analysis"
},
{
"paperId": "4bd5d941233633054090db5236ae01bd4402487a",
"title": "A Secure System For Pervasive Social Network-Based Healthcare"
},
{
"paperId": "6e20b49c64d68d50929e8968237a1eb0227d013e",
"title": "Improving data transparency in clinical trials using blockchain smart contracts"
},
{
"paperId": "208735a6c437b8ae3efba01693c3e8a06289c3dd",
"title": "Healthcare Data Gateways: Found Healthcare Intelligence on Blockchain with Novel Privacy Risk Control"
},
{
"paperId": "310e677ce23004fdf0a549c2cfda2ef15420d6ec",
"title": "Blockchain technology in healthcare: The revolution starts here"
},
{
"paperId": "3e70f5efeddf7c61bbf1999093a2e04b5e9073c0",
"title": "Value Based Care and Patient-Centered Care: Divergent or Complementary?"
},
{
"paperId": "0c17c30a49748db7013994200add395c6d6c5b9f",
"title": "Privacy and security issues for a patient-centric approach in public healthcare in a resource constrained setting"
},
{
"paperId": "fe23cff6baf796a230f06ed515657799deb7d364",
"title": "Net Neutrality"
},
{
"paperId": "3b4ad876ac962af4eb20fcf0bc53f5ba4de2ddd3",
"title": "Balanced Scorecards as a Tool for Developing Patient-Centered Pharmacy Services"
},
{
"paperId": "9fab6a3803b8b2effe18c7d4c3f6560c8dab07c0",
"title": "The VALUE Framework: Training Residents to Provide Value-Based Care for their Patients"
},
{
"paperId": "e561f25203e370b937423548088e5f5a604aaf61",
"title": "A Technology-Centric Framework for Investigating Business Operations"
},
{
"paperId": "8a96d849c67853c29ff282b27ffbcc8ebb96b781",
"title": "What is value in health care?"
},
{
"paperId": "fd3815d46afc72d078cd55b5837278a2e839049e",
"title": "A strategy for health care reform--toward a value-based system."
},
{
"paperId": "ace83c84ae457d21ba79c5b054b19f2aa4746fc5",
"title": "Health technology assessment in health-care decisions in the United States."
},
{
"paperId": "dda7bb126a9744954df3ea17bd2d15595a962f7b",
"title": "Developing a Framework for Intuitive Human-Computer Interaction"
},
{
"paperId": "290d36e6ebf81bf567e7a41c324559345c0b86c0",
"title": "Healthcare information systems research, revelations and visions"
},
{
"paperId": "9c8e16200b38a0ad7e5b22f7506979e93a915411",
"title": "A Balanced Scorecard Approach to Measure the Value of Enterprise Architecture"
},
{
"paperId": "f6086bd626e06817b934cde52ac24561f408c3c6",
"title": "Value-based insurance design."
},
{
"paperId": "14ba70839a2ab0dc636c8cfa1635a1294fb93731",
"title": "Shaping the balanced scorecard for use in UK social enterprises"
},
{
"paperId": "6437d1a38f46d312f074641d72d3b0b6a53fd065",
"title": "A balanced scorecard analysis of performance metrics"
},
{
"paperId": "5dd7c3195152afcf15541c359e8bd06180f4e5c5",
"title": "The Balanced Scorecard: what is the score? A rhetorical analysis of the Balanced Scorecard"
},
{
"paperId": "3b1bf6525c47f4a0a638b8958b602156edf5c8a6",
"title": "TOTAL QUALITY MANAGEMENT AND THE BALANCED SCORECARD APPROACH: A CRITICAL ANALYSIS OF THEIR POTENTIAL RELATIONSHIPS AND DIRECTIONS FOR RESEARCH"
},
{
"paperId": "cf7a2817d14c6e6b12b69dd02f95c8eb0352248e",
"title": "The DeLone and McLean Model of Information Systems Success: A Ten-Year Update"
},
{
"paperId": "00f786ca1089cd0749445afbab657e203ab4a7db",
"title": "The Balanced Scorecard in Healthcare Organizations: A Performance Measurement and Strategic Planning Methodology"
},
{
"paperId": "7ead315ffe71f95abce19fa77631b8dc951e0a8e",
"title": "Distributed cognition"
},
{
"paperId": "0c041c55fdc92df6ed3af040716766cc217d452b",
"title": "The Balanced Scorecard"
},
{
"paperId": "8aafd83aeb1c99993440b35c634fa375ff76626d",
"title": "Extending and Formalizing the Framework for Information Systems Architecture"
},
{
"paperId": "4d0aa678fc739e54c03509f60e9848f54c6bc32e",
"title": "Methods for the economic evaluation of health care programmes"
},
{
"paperId": null,
"title": "Network and Distributed Systems Security (NDSS) Symposium"
},
{
"paperId": "44ee1bf827396f8a08f54be78e1b868c11de23bc",
"title": "Toward an ontology-driven blockchain design for supply-chain provenance"
},
{
"paperId": null,
"title": "TechCrunch. PokitDok teams with Intel on healthcare blockchain solutionURL:https://techcrunch.com/2017/05/ 10/pokitdok-teams-with-intel-on-healthcare-blockchain-solution"
},
{
"paperId": null,
"title": "A prescription for blockchain and healthcare: reinvent or be reinventedURL"
},
{
"paperId": null,
"title": "Reinventing healthcare on the blockchain -toward a new era in precision medicineURL"
},
{
"paperId": null,
"title": "Blockchain Research Institute."
},
{
"paperId": null,
"title": "Blockchain beyond the hype: what is the strategic business value?URL:https://cybersolace.co.uk/CySol/wp-content/uploads/2018/06/ McKinsey-paper-about-Blockchain-Myths.pdf"
},
{
"paperId": null,
"title": "2018. 35 Amazing real world examples of how blockchain is changing our worldURL:https://www"
},
{
"paperId": "a8327365caeb960a50a0c746fe0b683804ccb6ba",
"title": "The Truth about Blockchain"
},
{
"paperId": null,
"title": "Curisium raises $3.5 million to bring blockchain to value"
},
{
"paperId": null,
"title": "Blockchain and alternative payment modelsURL:https://www.healthit.gov/sites/default/files/ 15-54-kyip_blockchainapms_080816.pdf [accessed 2019-09-14"
},
{
"paperId": null,
"title": "A work in progress: blockchain"
},
{
"paperId": null,
"title": "Improving the traceability of the clinical trial supply chain"
},
{
"paperId": "3ed0db58a7aec7bafc2aa14ca550031b9f7021d5",
"title": "A Case Study for Blockchain in Healthcare : “ MedRec ” prototype for electronic health records and medical research data"
},
{
"paperId": null,
"title": "Blockchain technologies: a whitepaper discussing how the claims process can be improvedURL"
},
{
"paperId": null,
"title": "I&O Teams Can Build Balanced Scorecards for Business Cases on IT Services Transitioning to the Cloud"
},
{
"paperId": "24b600e94693071ebe7b22436becb5a028bce7e2",
"title": "Using Modern Business Practice to Enhance the Learning Process in the Introductory Accounting Course"
},
{
"paperId": "c0b5dca32d9d0035fbf95360913f14771ed8d18e",
"title": "Strategic performance measurement in a healthcare organisation: A multiple criteria approach based on balanced scorecard"
},
{
"paperId": "1bf66dc18d51442ab2623d79238e39a6b8b1a012",
"title": "Assessing Green IT Initiatives Using the Balanced Scorecard"
},
{
"paperId": null,
"title": "Solutions for scorecards and strategy management URL:https://www.gartner.com/en/ documents/1334316/solutions-for-scorecards-and-strategy-management [accessed 2018-12-10"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "4381587cb4b1f5843184a75e3881b0ba16711ee5",
"title": "LIMITATIONS OF BALANCED SCORECARD"
},
{
"paperId": null,
"title": "Developing a balanced scorecard model for evaluation of project management and performanceURL:https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.494604"
},
{
"paperId": "cd5bb7b5e46f78a17cc8c3694c7d5c7a939404b0",
"title": "PROBLEMS WITH THE IMPLEMENTATION OF PERFORMANCE MEASUREMENT SYSTEMS IN THE PUBLIC SECTOR WHERE PERFORMANCE IS LINKED TO PAY: A LITERATURE REVIEW DRAWN FROM THE UK"
},
{
"paperId": "98b9c843dd82eb91ed422f1c55da500fab0af0a2",
"title": "Office of the National Coordinator for Health Information Technology Attention: Governance RFI"
},
{
"paperId": "7df8f8f3e7c3641215baf0b42eb1559f49d4846b",
"title": "Use of the balanced scorecard in health care."
},
{
"paperId": null,
"title": "Via Nova Architectura"
},
{
"paperId": "238da74b00199f1aac5c94518a8f5a250ce3a1a7",
"title": "Developing a balanced scorecard with System Dynamics"
},
{
"paperId": null,
"title": "What is value-based care, what it means for providers?URL:https:/ /revcycleintelligence.com/features/what-is-value-based-care-what-it-means-for-providers"
},
{
"paperId": null,
"title": "Business Administration: Business in a Globalizing Economy"
},
{
"paperId": null,
"title": "Blockchain: the chain of trust and its potential to transform healthcare -our point of viewURL"
},
{
"paperId": null,
"title": "Mount Sinai teamed up with the designers who created projects for Nike and Beyonce to build a futuristic, new clinic ? and it?s reimagining how healthcare is deliveredURL"
}
] | 20,076
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02ddc6dbcf87ba01c1b97341044b9e4f08d3b36d
|
[] | 0.897545
|
Predicting User Behaviour Based on the Level of Interactivity Implemented in Blockchain Technologies in Websites and Used Devices
|
02ddc6dbcf87ba01c1b97341044b9e4f08d3b36d
|
Sustainability
|
[
{
"authorId": "72797076",
"name": "Milica Jevremović"
},
{
"authorId": "134533553",
"name": "Nada Staletić"
},
{
"authorId": "46213878",
"name": "Gheorghe Orzan"
},
{
"authorId": "2125196513",
"name": "Milena P. Ilić"
},
{
"authorId": "84094108",
"name": "Zorica Jelić"
},
{
"authorId": "2154793332",
"name": "C. Bălăceanu"
},
{
"authorId": "2154794257",
"name": "O. Paraschiv"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://mdpi.com/journal/sustainability",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127"
],
"id": "8775599f-4f9a-45f0-900e-7f4de68e6843",
"issn": "2071-1050",
"name": "Sustainability",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127"
}
|
Today’s business development processes force companies to find ways to increase the level of interactivity of their products with consumers. One of the ways that companies communicate interactively with users is communication via websites; another way is using a channel that makes the customer more loyal to the company. The aim of this paper is to point out the differences between the effects that interactive and non-interactive blockchain technologies have on users and their behavior, as well as to determine whether the same degree of interactivity is achieved with users who use the same site via computers or mobile phones. For this purpose, three models by Song, Liu, and Wu were compared, which gives this paper a superior precision and depth of research regarding the above-mentioned problem. Furthermore, the contributions of the paper are reflected in a comprehensive and detailed review of previous research on the topic of interactivity and the importance of using a website, showing the specific effects expected from users after the introduction of interactive website features, as well as indicating a difference in customer perception and behavior after using a different site search device.
|
## sustainability
_Article_
# Predicting User Behaviour Based on the Level of Interactivity Implemented in Blockchain Technologies in Websites and Used Devices
**Milica Jevremović [1], Nada Staletić [2], Gheorghe Orzan [3,]*, Milena P. Ilić [4], Zorica Jelić [4],**
**Cristina Teodora Bălăceanu [5] and Oana Valeria Paraschiv [3]**
1 Information Technology School ITS-Belgrade, Savski Nasip 7, 11000 New Belgrade, Serbia;
milica.jevremovic@its.edu.rs
2 Academy of Technical and Art Applied Studies, School of Electrical and Computer Engineering,
Vojvode Stepe 283, 11000 Belgrade, Serbia; nada.staletic@viser.edu.rs
3 Marketing Department, The Bucharest University of Economic Studies, 010404 Bucharest, Romania;
paraschivoanavaleria@gmail.com
4 Faculty of Contemporary Arts Belgrade, University Business Academy in Novi Sad, 11000 Belgrade, Serbia;
milena.ilic@fsu.edu.rs (M.P.I.); zorica.jelic@fsu.edu.rs (Z.J.)
5 Faculty of Marketing, Dimitrie Cantemir Christian University, Splaiul Unirii No. 176,
040042 Bucharest, Romania; cristina.balaceanu@ucdc.ro
* Correspondence: orzang@ase.ro; Tel.: +40-07-222-18140
**Citation: Jevremović, M.; Staletić, N.;**
Orzan, G.; Ilić, M.P.; Jelić, Z.;
Bălăceanu, C.T.; Paraschiv, O.V.
Predicting User Behaviour Based on
the Level of Interactivity
Implemented in Blockchain
Technologies in Websites and Used
Devices. Sustainability 2022, 14, 2216.
[https://doi.org/10.3390/su14042216](https://doi.org/10.3390/su14042216)
Academic Editor: Stefan Hoffmann
Received: 23 December 2021
Accepted: 8 February 2022
Published: 15 February 2022
**Publisher’s Note: MDPI stays neutral**
with regard to jurisdictional claims in
published maps and institutional affiliations.
**Copyright:** © 2022 by the authors.
Licensee MDPI, Basel, Switzerland.
This article is an open access article
distributed under the terms and
conditions of the Creative Commons
Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: Today’s business development processes force companies to find ways to increase the level**
of interactivity of their products with consumers. One of the ways that companies communicate
interactively with users is communication via websites; another way is using a channel that makes the
customer more loyal to the company. The aim of this paper is to point out the differences between the
effects that interactive and non-interactive blockchain technologies have on users and their behavior,
as well as to determine whether the same degree of interactivity is achieved with users who use the
same site via computers or mobile phones. For this purpose, three models by Song, Liu, and Wu were
compared, which gives this paper a superior precision and depth of research regarding the above-mentioned problem. Furthermore, the contributions of the paper are reflected in a comprehensive
and detailed review of previous research on the topic of interactivity and the importance of using a
website, showing the specific effects expected from users after the introduction of interactive website
features, as well as indicating a difference in customer perception and behavior after using a different
site search device.
**Keywords: mobile marketing; interactivity; website interactivity; customer satisfaction; customer**
behavior; blockchain technologies
**1. Introduction**
Whether or not it is possible for an enterprise to succeed in today’s market depends
on the enterprise itself, and on its ability to accommodate the market’s needs. Business
methods today imply the use of digital marketing tools and daily communication with
consumers. The most common tool for encouraging two-way communication is a company
website. The website is the mirror of the company, and has a significant impact on the
creation of images in the minds of consumers regarding the company [1–3]. What makes
the distinction between companies is fair usage of digital marketing, i.e., websites, and their
adjustment to users in order to achieve greater customer satisfaction. Thus, many papers
have been written on the topic of researching the interactivity that is achieved with site
users. It is because of this vast research that we know that the introduction of interactive
features on a website increases interactivity with the user [4–18]. The emphasis in this
paper is on the use of mobile phones for searching websites, and the differences in the
-----
_Sustainability 2022, 14, 2216_ 2 of 20
effects that are achieved by users depending on whether they receive information via a
mobile phone or a computer. For this reason, the works of authors who have researched
the importance of mobile marketing were analyzed [19–26].
The primary goal of this paper was to compare three models for research on interactivity, developed by Ji Hee Song, Y. P. Liu, and G. Wu [8,11], and Song and Zinkhan [12],
and analyze the mentioned models in order to obtain the effects on consumers after the
applied interactive features of the website on mobile phones and computers. As a result of
this research, we expect to determine the difference in the effects achieved by consumers of
interactive and non-interactive sites, as well as the difference in the outcomes achieved by
users of mobile devices and computers.
Blockchain technology provides a structure consisting of a list of chained blocks in a decentralized, distributed, and public form, used to record transactions distributed
across the network. The records are immutable in the sense that they cannot be changed
retroactively without altering all subsequent blocks.
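The chained-block structure described above can be illustrated with a minimal sketch (illustrative only; function and field names such as `block_hash` and `prev_hash` are ours, not from any production blockchain):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's canonical JSON serialization (sort_keys gives a stable form).
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    # Each new block commits to the hash of its predecessor,
    # so altering any earlier block invalidates all later ones.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    # Recompute every link; any retroactive edit breaks the chain.
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
assert verify(chain)

# A retroactive change to the first block is detected immediately.
chain[0]["transactions"] = ["Alice pays Bob 500"]
assert not verify(chain)
```

This captures only the tamper-evidence property; real blockchains add consensus and distribution on top of this linking scheme.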
**2. Website**
Digital marketing tools are used to manage and reflect a company’s identity, communicate with customers, and increase online presence. The design of the website using
blockchain technologies ensures a distributed, decentralized structure consisting of a list of
chained blocks. The efficiency of the integration of blockchain technologies in the design
of the sites contributes to the increase in their security, and to the transactions carried
out through the payment pages. The blockchain is used to record transactions whose
records are immutable (they cannot be changed retroactively without affecting all subsequent blocks). It is critical to understand the intended outcomes on the web in order to select the appropriate
tools. Each digital marketing tool offers benefits and drawbacks based on the business
type [27] (pp. 28–32). Many authors have studied digital marketing and its technologies [1]
(pp. 17–26) [2] (pp. 32–33) [3] (p. 35). Ryan places a website at the core of any digital
marketing plan [2] (p. 35). It is regarded as a grave omission not to consider the user while
constructing a website [28] (pp. 3–4). This leads to focusing on technology rather than users,
which undermines the website’s success [29] (pp. 573–581). There are various vital parts of
digital marketing, as seen in the preceding categories, but one thing is certain: any online
marketing strategy relies on web presence [29] (pp. 392–393): “You are your website,”
says Charlesworth [29] (pp. 496–504). In their paper, Wolk and Theysohn find that only
quality, interactivity, accessibility, and relevance determine the number of website visits.
According to their second model, credibility, engagement, customization, and navigation
enhance website views per visitor. Thus, only the quality of the product, interactivity, accessibility, and relevancy have a substantial beneficial impact on visitor numbers. Similarly,
website reputation, interaction, personalization, and navigation influence page views per
visitor. They show the impact of the website attributes for potential clients when deciding
how long to stay on a website [30]. Interactivity is the foundation for a customer–vendor
conversation that is sensitive to consumer requirements, although using interactivity to
strengthen online information offerings as a foundation for consumer relationships has
received little attention.
According to the literature, innovation in the integration of blockchain technologies into website design rests on four factors: internal framework, strategies, operations,
and structure [31]. We must also remember that website interactivity allows consumers to research purchases [32].
**3. Interactivity**
Numerous studies have investigated interaction. Authors focus on the method, features, perception, or a combination of these [6]. J. Steuer defines interactivity as the extent to
which users can adjust the form and content of the mediated environment in real-time [33].
This definition emphasizes characteristics. However, according to the author, we can influence presence by influencing the mediated environment. Johnson et al. [10] describe
-----
interactivity as the degree to which a participant perceives communication as mutual,
responsive, and fast; this is typified by the use of nonverbal information. For this study, we
adopted Wu's paradigm, in which perceived interactivity acts as a mediator of
the impact of actual interactivity on website attitudes [34]. The authors Chung and Zhao [35]
found that perceived interaction influences customers’ attitudes towards websites and the
recall of their contents. Their findings show that perceived involvement has a beneficial
effect on customer attitudes and memory. On the other hand, Song and Zinkhan [12] claim
that recent studies have focused on the sense of interaction. Actual and perceived interactivity should be distinguished [5,6,10,12,34,36]. Authors [5] refer to actual and perceived
interactivity as structural and empirical, and even use the phrases objective and subjective
interactivity. Participants’ sense of interactivity in a communication process differs from the
real interactivity of a system. This work focuses on perceived interactivity or the influence
of interactive elements on consumers. Many authors have studied website interactivity
and reported their findings [6–13,16–18,37]. Others have conceptualized attitudes towards
websites [6,12,36,38–41]. Interactivity also brings satisfaction. Satisfaction is linked to
active control over the content, which is a desired psychological state [5]. The authors
Song and Zinkhan used the findings of Fornell et al. to quantify satisfaction [12]. Song
and Zinkhan [12] employed devices to measure overall website quality and loyalty. Some
authors, such as Wu [36], focus on the relationship between perceived engagement and
customer views toward websites. However, some authors, such as Song and Zinkhan [12],
look at the attitude towards websites, as well as contentment, overall website quality,
loyalty intention, and repeat purchase intention. Several authors have identified, but not
empirically proven, the above impacts [8]. Interactivity, vividness, and participation are
important characteristics influencing virtual experience and behaviour [42]. The website’s
interactivity improves brand experience and choice. Two-way communication helps consumers to observe how the brand meets their demands, leading to an outstanding site
usage experience [16]. Many authors associate good site design and usability with happy
experiences [14,43]. The impact of site style, simplicity of use, customization, interaction, engagement, and enjoyment on online customer experience [43] is determined to be
considerable. The authors found that information quality and website credibility affect
the customer experience when searching for information on B2B websites. In addition, a
good information search is linked to satisfaction with the experience. Credibility and
information quality signal a positive impact; conversely, the absence of online customer
service is linked to dissatisfaction [15]. Based on previous research on user behaviour and
future behaviour, the authors Yoon and Youn evaluated the role of mediators (e.g., perceived
utilitarian value and online trust) in the effect of perceived website interactivity on purchase intent. Active control and two-way communication appeared to
be essential features of interactivity in boosting the strong brand experience and connection
quality with the brand [37]. However, a high degree of interaction is required if the user
requires a great degree of control when utilizing the site. Interactivity positively improves
participants’ perceptions, demonstrating that high degrees of information control may
not overwhelm customers [18]. The website’s interactivity boosts users’ sense of the site’s
usefulness and simplicity of use. Interactive user experiences on retail sites boost user
perception, and thus purchase intent. Sellers that want customers to explore their site must
rebuild it with interactive features [17,44–46]. In conclusion, increasing perceived task
difficulty lowers ease of use, but increases enjoyment [47].
**4. Mobile Ads**
Despite the vast quantity of articles on mobile marketing, no universal definition has
been established [19] (pp. 144–151) [20]. Researchers [21] (pp. 153–175) define mobile
marketing as using wireless media to promote products, services, and ideas. It is possible
to personalize web marketing using mobiles [25] (p. 7). We know when the consumer
calls, whom they write to, and how they spend their time, because their phonebook
and calendar are accessible. This technology allows you to track the consumer’s web
-----
activity and app downloads. Mobile phones are the most targeted kind of web marketing,
because companies know the owners’ preferences. The way people use their phones
reveals a lot about their demographic and psychographic features. This gives people a
new method to communicate, stand out from their peers, and stay informed. Notably,
this necessity predates the emergence of mobile phones [22]. Mobile marketing’s key
features include ubiquity, customization, two-way communication, and localization [23].
With digital marketing comes a new set of marketing methods and expertise [48] (pp.
219–221). Mobile marketing is the best option for marketers to reach consumers quickly.
For example, according to Michael and Salter [22] (p. 25), mobile marketing has a higher
response rate than traditional media, is the cheapest means of connecting with consumers,
and needs the least amount of effort to get started. It is also easier to locate the user who is
best suited for specific marketing [24] (p. 104). Because mobile phone numbers are issued
to individuals rather than locations, they are rarely shared. While mobile marketing is
effective, it is not suitable for all businesses: mobile marketing, like any other marketing
effort, requires rigorous planning and development [25]. Unfortunately, mobile marketing
is typically performed haphazardly, with little or no connection to a company’s marketing
communication plan [26].
**5. Materials and Methods**
Based on the literature review, the tool used in our research is a website. For the
needs of the research, an interactive and a non-interactive website were created for
job/practice/training course searches. Websites contained the same job/practice/training
course advertisements. The difference between interactive and non-interactive websites is
reflected in the introduction of interactive features in an interactive website, as stated in
the works of author Wu [34]. The elements integrated into an interactive website are the
following: a possibility to recommend the website to friends; a possibility to apply for a
job/practice/training course online; a website map; e-mail hotlink; online chat room; dropdown search menu; a website search; tags; and a possibility to comment on advertisements.
An interactive website offers the possibility of sharing website content via other social
media such as Facebook, Twitter, LinkedIn, Google+, Pinterest, Reddit, and is integrated
with other digital marketing tools, such as mobile marketing and e-mail marketing. An
interactive website is also integrated with other digital marketing tools, i.e., it makes it
possible for users to view the website contents on the Facebook social network and to
sign up to a mailing list in order to be informed about any news on the website. Upon
signing up to a mailing list, the users receive an automatic e-mail confirming that they have
successfully signed up to the mailing list, and a link via which they should activate their
registration. Upon activating the registration, users are automatically transferred to the
website page on which they can view recommended advertisements, leave a comment, or
contact the website support.
Based on the literature review, we noticed that there is greater activity in mobile device
users, compared to users who receive the same information via computers, which led us to
the following hypotheses:
**Hypothesis 1 (H1).** _The use of a mobile device for site searching increases the degree of the user's_
_interactivity._
_5.1. Pre-Test_
Prior to testing, we performed a pre-test which included 350 students of a chosen
higher educational institution (Appendix A). The objective of the pre-testing was to single
out 240 students with identical or similar interests. All the respondents were in the first year
of studies and were attending lectures in Digital Multimedia I. Respondents completed a
survey consisting of 8 questions. Based on the given answers, we singled out 240 students
who were interested in looking for a job/practice/training course on the website.
-----
_5.2. Main Survey_
The 240 respondents were randomly divided into four groups: 60 respondents who used a non-interactive website via a computer; 60 respondents who used a
non-interactive website via a mobile phone; 60 respondents who used an interactive website
via a computer; and 60 respondents who used an interactive website via a mobile phone.
In the primary survey, the students singled out in the pre-testing stage were divided
into 12 groups of 20 students each. All respondents were given the same instructions, and
we randomly chose the respondents who would visit an interactive website and those who
would visit a non-interactive website, who would use the mobile phone and who would
use a computer, ensuring the same number of all different categories in the group (5 of all
categories in each group). The respondents were given 30 min to search the website.
A week before the main survey began, 350 students took part in the pre-test, which was
modeled on similar interactivity research [6–8]. The pre-test aimed
to determine whether all respondents had experience on the Internet, whether they had a
smartphone, and whether they had experience using the Internet through mobile devices.
It also aimed to identify the areas of students’ interest to create content relevant to them.
The website was created with the content most respondents showed interest to, as in the
research by Liu [8].
Students in their first year of the study participated in the research, studying the
programs of New Computer Technology, Computer Techniques and Electronic Business.
The main goal of this questionnaire was to single out students with the same interests who
would participate in the main study. It was also essential to single out students predisposed
to participate in the primary research in terms of computer knowledge, long-term computer
use, possession of smartphones, long-term use of mobile phones, and frequent use of
the Internet on mobile phones. Out of the 350 respondents who met these conditions,
240 students with the same interests were selected, i.e., those who looked at job offers,
practices or courses on the Internet, and met all other criteria. Other categories of students’
interest in the Internet were significantly lower; thus, the students interested in work,
practice or courses were invited for the primary survey.
The selected 240 students represented participants in the primary survey. All participants had smartphones with which they could visit both sites prepared for this research. In
addition, based on the interests identified in pre-testing, two websites were created, one interactive and
one non-interactive, both on the topic of employment, practice and courses for which students
could apply.
In the survey, we had four different groups of respondents: one group used an interactive site through a computer, another used an interactive website via a mobile device,
the third group used a non-interactive website through a computer, and the fourth group
used a non-interactive site via a mobile device. Due to the limited space in the laboratory,
respondents were divided into 12 groups of 20 participants. Therefore, in every group
of 20 participants, we had 5 participants per category as defined above (5 participants who
used an interactive site via a mobile device, 5 who used an interactive site via
desktop, 5 who used a non-interactive site via a mobile device, and 5 who used a non-interactive site via desktop). This accounted for a total of
60 respondents in each of the four defined categories.
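The balanced allocation described above (12 laboratory sessions of 20 participants, with 5 participants per condition in each session) can be sketched as follows. This is an illustrative reconstruction; the condition labels and function name are not taken from the study.

```python
import random

# The four experimental conditions of the 2x2 design (site type x device).
CONDITIONS = [
    ("interactive", "desktop"),
    ("interactive", "mobile"),
    ("non-interactive", "desktop"),
    ("non-interactive", "mobile"),
]

def allocate(participants, n_groups=12, per_condition_in_group=5, seed=None):
    """Randomly split participants into lab sessions so that every session
    contains the same number of participants per experimental condition."""
    rng = random.Random(seed)
    people = list(participants)
    rng.shuffle(people)
    assert len(people) == n_groups * per_condition_in_group * len(CONDITIONS)
    groups = []
    it = iter(people)
    for _ in range(n_groups):
        group = []
        for cond in CONDITIONS:
            for _ in range(per_condition_in_group):
                # Pair each randomly drawn participant with a condition.
                group.append((next(it), cond))
        rng.shuffle(group)  # mix conditions within the session
        groups.append(group)
    return groups

groups = allocate(range(240), seed=1)
```

Because participants are shuffled before being dealt into sessions, the assignment is random, yet every session is guaranteed to hold exactly 5 participants per condition and every condition totals 60 participants.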
In each research group, clear instructions were given to respondents on how to conduct
the research. These instructions were also written on both types of created websites.
Respondents received papers with a marked website they should visit and a device to see
the designated site. All four groups of respondents had 30 min available to search the
designated site. After a 30-min search, respondents received a survey questionnaire and
unlimited time to fill out a survey questionnaire, which they then left to the person on duty
and left the lab.
The laboratory was equipped with 20 identical computers with the following configuration:
_•_ Operating system: Windows 8.1 Enterprise, Microsoft Corporation 2013;
_•_ Computer configuration:
_◦_ Processor: Intel(R) Core(TM) i3-4160 CPU @ 3.60 GHz;
_◦_ Memory (RAM): 4 GB;
_◦_ System type: 64-bit operating system;
_•_ Computers were connected to an academic network over a 100 Mbps Internet link.
Students were also provided with a wireless local computer network per the IEEE 802.11g
standard, which allowed respondents who viewed the site via mobile phone to search the
site seamlessly.
_5.3. Research Instruments_
A survey questionnaire created for the purpose of measuring the effects on consumers
after using a website was prepared based on the research of Wu [36], Song and Zinkhan [12],
and Liu [8]. Upon a detailed analysis of the works of the aforementioned authors, it was
established that Wu [36] used 15 items to measure the attitude of consumers towards
websites, which were subsequently reduced to nine. Song and Zinkhan argue and prove
that it is sufficient to use three questions to measure the attitude of consumers towards
a website, and these questions have been used in this research [12]. For measuring the
attitude towards a website, Song referred to Coyle and Thorson [40]; for measuring the
satisfaction of users, he referred to Fornell, 1996; for measuring the overall website quality,
he referred to Wolfinbarger and Gilly, 2003; and for measuring the loyalty intention he
referred to Zeithaml, Berry and Parasuraman, 1996 [12]. A particularity of this research
is the combined use of three models for measuring perceived interactivity, by Liu [8],
Wu [11] and Song and Zinkhan [12].
The survey consisted of 35 questions (Appendix B). The first eight questions referred to
the demographic characteristics of the respondents. Control over the
website was examined by 12 questions, while the next nine questions referred to communication. Responsiveness was examined by six questions. The differences between
the respondents who used an interactive website and those who used a non-interactive
website on a different channel (mobile phone or computer) were determined by using
a two-way between-groups analysis of variance (ANOVA). The statistical processing and
analysis were performed in SPSS (Statistical Package for the Social Sciences),
ver. 20.
While processing the data in SPSS, incorrectly completed survey questionnaires were
identified and eliminated from further processing; the number of respondents thus
decreased from 240 to 197. This changed the number of respondents in the related
categories, and a uniformity analysis of the respondent counts was therefore performed.
The number of respondents who used a mobile phone was 98, and the number of
respondents who used a computer was 99. Although the number of respondents was
not absolutely identical in both groups, there was no statistically significant difference
between them (χ[2] = 0.005, p = 0.943). The number of respondents who used an interactive
website was 100, and the number of respondents who used a non-interactive website was
97. Although the number of respondents was not absolutely identical in both groups, there
was no statistically significant difference between them (χ[2] = 0.046, p = 0.831). The obtained
result showed that the groups were uniform when it came to the number of respondents;
thus, the further processing of results could be continued.
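The uniformity check described above corresponds to a chi-square goodness-of-fit test of the observed group sizes against equal expected counts. A minimal sketch (computing only the statistic, not the p-value) reproduces the reported values:

```python
def chi_square_uniform(counts):
    """Chi-square goodness-of-fit statistic for observed group counts
    against the null hypothesis of equal group sizes."""
    expected = sum(counts) / len(counts)
    return sum((o - expected) ** 2 / expected for o in counts)

# Device groups: 98 mobile vs. 99 desktop respondents.
chi_devices = chi_square_uniform([98, 99])   # ~0.005, matching the reported value
# Website groups: 100 interactive vs. 97 non-interactive respondents.
chi_sites = chi_square_uniform([100, 97])    # ~0.046, matching the reported value
```

Both statistics are far below the critical value for 1 degree of freedom, consistent with the reported p-values (0.943 and 0.831) and the conclusion that the groups were uniform in size.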
_5.4. Analysis of Results_
Results of the Two-Factor Analysis of Variance of Different Groups
In the following, the joint influence of the site type and the device type according to
the presented models was determined.
SONG and ZINKHAN Model
According to the chosen SONG and ZINKHAN model, average site and device type values
are calculated and presented in Table 1.
**Table 1. Average site and device type values for the SONG and ZINKHAN model.**
**Devices (Channel)** **Website Type** **M** **SD**
Desktop High interactivity 5.1866 0.61003
Desktop Low interactivity 4.4848 0.62866
Desktop Total 4.8463 0.70968
Mobile High interactivity 5.4051 0.55328
Mobile Low interactivity 4.8794 0.76159
Mobile Total 5.1422 0.71295
Total High interactivity 5.2936 0.59027
Total Low interactivity 4.6842 0.72306
Total Total 4.9935 0.72483
M, arithmetic mean (average value of the variable in the sample); SD, standard deviation (average deviation of
individual values of the variable from the average in the sample).
The influence of site and device type on the SONG and ZINKHAN model value is presented in Table 2.
**Table 2. Influence of site and device type on SONG and ZINKHAN model value.**
**Source** **df** **F** **_p_** **Partial Eta Squared**
Devices (Channel) 1 11.198 0.001 0.055
Website type 1 44.886 0.000 0.189
Devices * Website type 1 0.924 0.338 0.005
R squared = 0.226 (adjusted R squared = 0.214).
Two-factor analysis of variance of different groups—Song and Zinkhan model is
presented in Figure 1.
**Figure 1. Two-factor analysis of variance of different groups—Song and Zinkhan model.**
The influence of site and device type on the Song and Zinkhan model was investigated
by a two-factor analysis of variance of different groups. The influence of the interaction
was not statistically significant (F = 0.92, p = 0.338). The value of the Eta square was very low
(ηp2 = 0.005) and showed that the impact of the interaction was very small or non-existent.
Based on the guidelines proposed by Cohen (1988), the value of the eta square was
estimated as follows: 0.01—small impact; 0.06—moderate impact; 0.14—large impact.
A statistically significant separate influence of the device type was determined
(F = 11.19, p = 0.001), as well as a separate statistically significant influence of the site type
(F = 44.89, p = 0.000). The value of the Eta square for the site type showed a large impact
(ηp2 = 0.189), while the value of the Eta square for the device type showed a moderate
impact (ηp2 = 0.055). The total percentage of explained variance of the dependent variable
was 21% (adjusted R squared = 0.214).
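As a cross-check, partial eta squared can be recovered from each F statistic and its degrees of freedom (here df_effect = 1 and, assuming 197 respondents in a 2 × 2 design, df_error = 197 - 4 = 193). A short sketch reproduces the values reported in Table 2:

```python
def partial_eta_squared(f, df_effect, df_error):
    """Partial eta squared recovered from an F statistic:
    eta_p^2 = (F * df_effect) / (F * df_effect + df_error)."""
    return (f * df_effect) / (f * df_effect + df_error)

DF_ERROR = 197 - 4  # 197 respondents, 4 cells in the 2x2 design

# Song and Zinkhan model (values from Table 2)
eta_device = partial_eta_squared(11.198, 1, DF_ERROR)       # ~0.055
eta_site = partial_eta_squared(44.886, 1, DF_ERROR)         # ~0.189
eta_interaction = partial_eta_squared(0.924, 1, DF_ERROR)   # ~0.005
```

The same identity reproduces the partial eta squared values reported for the LIU and Wu models in Tables 4 and 6.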
LIU Model
According to the chosen LIU model, average site and device type values are calculated
and presented in Table 3.
**Table 3. Average site and device type values for the LIU model.**
**Devices (Channel)** **Website Type** **M** **SD**
Desktop High interactivity 5.0967 0.53621
Desktop Low interactivity 4.2639 0.49182
Desktop Total 4.6929 0.66160
Mobile High interactivity 5.1633 0.63887
Mobile Low interactivity 4.5823 0.66608
Mobile Total 4.8728 0.71188
Total High interactivity 5.1293 0.58671
Total Low interactivity 4.4247 0.60487
Total Total 4.7824 0.69122
M, arithmetic mean (average value of the variable in the sample); SD, standard deviation (average deviation of
individual values of the variable from the average in the sample).
The influence of site and device type on the LIU model value is calculated and presented in
Table 4.
**Table 4. Influence of site and device type on LIU model value.**
**Source** **df** **F** **_p_** **Partial Eta Squared**
Devices (Channel) 1 5.282 0.023 0.027
Website type 1 71.250 0.000 0.270
Devices * Website type 1 2.262 0.134 0.012
R squared = 0.288 (adjusted R squared = 0.277).
Figure 2 presents the two-factor analysis of variance of different groups based on the
LIU model.
**Figure 2. Two-factor analysis of variance of different groups—LIU Model.**
The influence of site and device type on the LIU model was investigated by a two-factor
analysis of variance of different groups. The influence of the interaction was
not statistically significant (F = 2.26, p = 0.134). The value of the Eta square was very low
(ηp2 = 0.012) and showed that the impact of the interaction was very small or non-existent.
Based on the guidelines proposed by Cohen (1988), the value of the eta square was
estimated as follows: 0.01—small impact; 0.06—moderate impact; 0.14—large impact.
A statistically significant separate influence of the device type was noted (F = 5.28,
_p = 0.023), as well as a separate statistically significant influence of the site type (F = 71.25,_
_p = 0.000). The value of the Eta square for the site type showed a large impact (ηp2 = 0.270),_
while the value of the Eta square for the device type showed a small impact (ηp2 = 0.027).
The total percentage of explained variance of the dependent variable was 28% (adjusted R
squared = 0.277).
**Wu Model**
Table 5 presents average site and device type values for the Wu model.
**Table 5. Average site and device type values for the Wu model.**
**Devices (Channel)** **Website Type** **M** **SD**
Desktop High interactivity 4.9564 0.64677
Desktop Low interactivity 3.9699 0.74685
Desktop Total 4.4781 0.85235
Mobile High interactivity 5.0181 0.54213
Mobile Low interactivity 4.4921 0.93128
Mobile Total 4.7551 0.80281
Total High interactivity 4.9867 0.59559
Total Low interactivity 4.2337 0.88067
Total Total 4.6159 0.83755
M, arithmetic mean (average value of the variable in the sample); SD, standard deviation (average deviation of
individual values of the variable from the average in the sample).
The influence of site and device type on the Wu model value is calculated and presented in
Table 6. Figure 3 presents the two-factor analysis of variance of different groups
based on the same model.
**Table 6. Influence of site and device type on Wu model value.**
**Source** **df** **F** **_p_** **Partial Eta Squared**
Devices (Channel) 1 7.871 0.006 0.039
Website type 1 52.827 0.000 0.215
Devices * Website type 1 4.895 0.028 0.025
R squared = 0.252 (adjusted R squared = 0.240).
**Figure 3. Two-factor analysis of variance of different groups—Wu Model.**
The influence of site and device type on the Wu model was investigated by a two-factor
analysis of variance of different groups. The results showed that the influence of the
interaction was statistically significant (F = 4.89, p = 0.028), although the value of the Eta
square (ηp2 = 0.025) indicates that the effect of the interaction was small.
Based on the guidelines proposed by Cohen (1988), the value of the eta square was
estimated as follows: 0.01—small impact; 0.06—moderate impact; 0.14—large impact.
A statistically significant separate influence of the device type was determined
(F = 7.87, p = 0.006), as well as a separate statistically significant influence of the site type
(F = 52.82, p = 0.000). The value of the Eta square for the site type showed a large impact
(ηp2 = 0.215), while the value of the Eta square for the device type showed a small impact
(ηp2 = 0.039). The total percentage of explained variance of the dependent variable was
24% (adjusted R squared = 0.240).
**6. Discussion**
This section analyses the responses of the users of the interactive site
across the two channels used: computers and mobile devices. The influence of the site
and device type on the models presented in the paper was investigated by a two-factor
analysis of variance of different groups. The influence of the interaction between site
type and device type in the Song and Zinkhan model was not statistically significant,
showing that the impact of the interaction was very small or non-existent. A statistically
significant separate influence of the device type (moderate influence), as well as a separate
statistically significant influence of the site type (large influence), was determined.
The LIU model showed that the influence of the interaction between site type and
device type was not statistically significant, indicating that the impact of the interaction
was very small or non-existent. A statistically significant separate influence of the device
type (small influence), as well as a separate statistically significant influence of the site
type (large influence), was determined. On the other hand, the Wu model showed that the
influence of the interaction between site type and device type was statistically significant,
although the effect size of the interaction was small. A statistically significant separate
influence of the device type (small influence), as well as a separate statistically significant
influence of the site type (large influence), was determined. Furthermore, in all three models,
a statistically significant separate influence of the device type (small or moderate)
was observed, which supports H1.
The interactions between the site type and the user device in the Song and Zinkhan
model and the LIU model were not statistically significant, while in the Wu model a
statistically significant interaction was determined. In all three presented
models, there was a statistically significant separate influence of the device type (moderate
or small influence) as well as a separate statistically significant influence of the site type
(large influence). Perhaps the moderate or small influence of the device type can be
attributed to the subjective perception of users that everything viewed
via a mobile phone is considered interactive, although there is no objective evidence for that.
**7. Conclusions**
A large number of authors explored interactivity and the impact that interactivity
has on users when choosing products/services [6–13,16–18,36–41]. A significant
impact on the end actions of users was demonstrated when the level of interactivity increased, by
introducing interactive features on the website used in this research [44].
The importance of using mobile phones to make a bigger impression on consumers
has also been shown [22–24,48].
The survey was conducted among students of chosen higher education institutions. It
is useful to know their habits and draw conclusions regarding the learning process, so
that positive impacts can be made on learning outcomes. The results of this research study
can therefore be used for sustaining processes in higher education. In
the higher education sector, mobile devices and other user devices play significant roles in
the learning process, accompanied by other technologies (artificial intelligence, blockchain,
machine learning, augmented reality) [49–52], especially in times of crisis, such as during
the COVID-19 pandemic.
The contribution of this work is that, in addition to interactivity as one of
the important characteristics of today's businesses, we also address the importance
of the device used (mobile/computer) when searching for requested products/services. One
group of authors explored the importance of the interactive features of a site for the consumer,
while another group addressed the importance of consumers' use of mobile phones.
In this study, a two-factor analysis of variance of different groups was completed in
order to determine the influence of the site type and the device type on the consumer.
For this reason, three models were used to assess the impact of interactive and non-interactive website characteristics, as well as of the device used, on the user.
Results obtained by the survey showed, in all three presented models, a statistically
significant separate influence of the device type, as well as of the site type.
In the first two models (Song and Zinkhan, and LIU), the influence of the
interaction between site type and device type was not statistically significant, showing that
the impact of the interaction was very small or non-existent, while in the third model (Wu) the
influence of the interaction between site type and device type was statistically significant,
although the effect size of the interaction was small.
A statistically significant separate influence of the device type (moderate influence—
Song and Zinkhan; small influence—LIU; small influence—Wu) as well as a separate
statistically significant influence of the site type (large influence—Song and Zinkhan; large
influence—LIU; large influence—Wu) was determined.
The scale used in the works of Liu, Song and Zinkhan, and Wu [8,12,36],
as already demonstrated in their work, has been confirmed in this research, and can
find application in both marketing practice and scientific research. Its application can be
expanded by research into the degree to which students understand prepared materials.
The study, however, contained several limitations. To ensure the validity of the results,
the research was conducted under laboratory conditions. The respondents were not in their
natural environment, in which it would be more pleasant for them to visit the website.
Respondents also had limited time to visit the website, which could affect their speed
and reasoning. Furthermore, the participants in the research were first-year students,
which covered only one age group of respondents.
This paper established that users relied on their ability to search via mobile phone in
order to obtain the necessary information or perform a desired action, perceiving the
tool used as interactive regardless of whether it had built-in interactive
features or not. Further research is proposed to investigate the
reasons that lead to this incorrect subjective assessment by users.
The research involving the two groups of students, Serbian and Romanian,
aims to validate the quality of the information obtained through specialized sites. As
online tools are used increasingly often in the educational process,
the present research highlights the role of information in the learning process and its
customization in order to increase specific skills.
Currently, researchers, including research teams in Serbia and Romania, are examining
how the use of marketing tools that measure the impact of gadgets on the supply of
information is correctly sized to the demand for specific skills and competencies absorbed
by the labor market. Therefore, future recommendations for the education sector, as well
as for sectors of the economy, shall be given. For now, based on the results of the current
study, the authors can recommend the following: in order to remain competitive, businesses must
figure out how to make their products more interactive for their customers. Companies
can communicate with customers in two ways: through websites or through a channel that
strengthens the customer's bond with the business. When it comes to blockchain technology,
interactive and non-interactive methods have different effects on users' behavior, which
has to be acknowledged.
**Author Contributions: Conceptualization, M.J., N.S. and M.P.I.; methodology, Z.J., G.O., C.T.B. and**
O.V.P.; software, Z.J., G.O., C.T.B. and O.V.P.; validation, Z.J., G.O., C.T.B. and O.V.P.; formal analysis,
M.J., N.S. and M.P.I.; investigation, M.J., N.S. and M.P.I.; resources, M.J., N.S. and M.P.I.; data curation,
Z.J., G.O., C.T.B. and O.V.P.; writing—original draft preparation, Z.J., G.O., C.T.B. and O.V.P.; writing—
review and editing, M.J., N.S. and M.P.I.; visualization, M.J., N.S. and M.P.I.; supervision, Z.J., G.O.,
C.T.B. and O.V.P.; project administration, Z.J., G.O., C.T.B. and O.V.P.; funding acquisition, M.J., N.S.
and M.P.I. All authors have read and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: The study was conducted according to the guidelines of the**
Declaration of Helsinki and approved by the Institutional Review Board (or Ethics Committee) of
Faculty of Contemporary Arts Belgrade Svetozara Mileti´ca 12, Belgrade, Republic of Serbia, protocol
code 8-1/22 and date of approval 13 January 2022.).
**Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**Appendix A**
Survey questionnaire for Pre-test
1. Name and surname of the student _______________________________
2. How old are you? (circle the number in front of the offered answer)
(1) <20
(2) 21–25
(3) 26–30
(4) 31–40
(5) >40
3. What gender are you? (circle the number in front of the offered answer)
(1) Male
(2) Female
4. How many years have you been using the Internet (circle the number in front of the
offered answer)
(1) <2 years
(2) 2–4 years
(3) 5–6 years
(4) >6 years
5. How much time do you spend online per week? (circle the number in front of the
offered answer)
(1) <5 hours
(2) 5–20 hours
(3) 21–40 hours
(4) >40 hours
6. Do you own a mobile phone with an operating system?
(1) Yes
(2) No
7. How many years have you been using your mobile phone? (circle the number in
front of the offered answer)
(1) Less than a year
(2) 1 or 2 years
(3) 3 or 4 years
(4) More than 5 years
8. How much time do you use the internet on your mobile per week? (circle the
number in front of the offered answer)
(1) <1 hour
(2) from 1 to 3 hours
(3) from 4 to 5 hours
(4) >5 hours
**Appendix B**
Survey questionnaire for Main test
Dear students,
This anonymous survey questionnaire was designed to investigate the degree of
interactivity in digital marketing strategies. The answers given will be used for scientific
purposes and will not be misused in any way.
Please answer each question with one answer.
I was informed regarding the objectives of this study and:
(1) I agree to participate in this study.
(2) I do not agree to participate in this study.
1. How old are you? (circle the number in front of the offered answer)
(1) <20
(2) 21–25
(3) 26–30
(4) 31–40
(5) >40
2. What gender are you? (circle the number in front of the offered answer)
(1) Male
(2) Female
3. How many years have you been using the Internet? (circle the number in front of
the offered answer)
(1) <2 years
(2) 2–4 years
(3) 5–6 years
(4) >6 years
4. How much time do you spend online per week? (circle the number in front of the
offered answer)
(1) <5 hours
(2) 5–20 hours
(3) 21–40 hours
(4) >40 hours
5. How many years have you been using your mobile phone? (circle the number in
front of the offered answer)
(1) Less than a year
(2) 1 or 2 years
(3) 3 or 4 years
(4) More than 5 years
6. How much time do you use the internet on your mobile per week? (circle the
number in front of the offered answer)
(1) <1 hour
(2) from 1 to 3 hours
(3) from 4 to 5 hours
(4) >5 hours
7. Do you have a social media profile?
(1) No
(2) Yes (you can use more than one answer)
(a) Facebook
(b) Twitter
(c) Google +
(d) Pinterest
(e) LinkedIn
(f) Other ______________________
8. How much of the total time you spend online is spent on social media? (circle the
number in front of the offered answer)
(1) <10%
(2) from 11 to 30%
(3) from 31 to 50%
(4) More than 50%
Answer the following questions by marking one square with the letter "X" below the
desired offered answer.
9. I felt that I had a lot of control over my visiting experiences at this website.
Strongly
disagree
Disagree
Somewhat
disagree
Neither
disagree or
agree
Somewhat
agree
10. While I was on the site, I was always aware where I was
Strongly
disagree
Disagree
Somewhat
disagree
Neither
disagree or
agree
Somewhat
agree
11. While I was on the site, I always knew where I was going
Strongly
disagree
Disagree
Somewhat
disagree
Neither
disagree or
agree
Somewhat
agree
12. I was in control of my navigation through this website
Strongly
disagree
Disagree
Somewhat
disagree
Neither
disagree or
agree
Somewhat
agree
Agree
Agree
Agree
Agree
13. I had some control over the content of this Web site that I wanted to see
Strongly
disagree
Disagree
Somewhat
disagree
Neither
disagree or
agree
Somewhat
agree
Strongly
disagree
Strongly
disagree
Strongly
disagree
Strongly
disagree
Strongly
disagree
Strongly
disagree
14. While I was on the site, I could choose freely what I wanted to see
Agree
Agree
Strongly
disagree
Disagree
Somewhat
disagree
Neither
disagree or
agree
Somewhat
agree
15. While surfing the site, my actions decided the kind of experiences I got
-----
_Sustainability 2022, 14, 2216_ 16 of 20
16. While I was on the site, I was always able to go where I thought I was going.
17. I was delighted to be able to choose which link and when to click.
18. I was in total control over the pace of my visit to this Web site.
19. While surfing the site, I had absolutely no control over what I could do on the site.
20. The Web site is not manageable.
21. This Web site facilitates two-way communication.
22. The Web site gives me the opportunity to talk back.
23. The Web site facilitates concurrent communication.
24. The Web site enables conversation.
25. Site created the feeling that it wants to listen to its users.
26. The website gives visitors the opportunity to talk back.
27. It is difficult to offer feedback to the website.
28. The website does not at all encourage visitors to talk back.
29. The Web site processed my input very quickly.
30. Getting information from the Web site is very fast.
31. I was able to obtain the information I want without any delay.
32. When I clicked on the links, I felt I was getting instantaneous information.
33. The Web site was very slow in responding to my request.
34. The Web site answers my question immediately.
35. The site had the ability to respond to my specific questions quickly and efficiently.
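By their wording, items 13–35 appear to group into three perceived-interactivity dimensions (active control, items 13–20; two-way communication, items 21–28; synchronicity, items 29–35), with several negatively worded items (19, 20, 27, 28, and 33) that are conventionally reverse-coded before averaging. The groupings and the reverse-coded set below are inferred from the item wording, not stated in the appendix; a minimal scoring sketch under those assumptions:

```python
# Sketch: scoring the 7-point Likert items above. The subscale
# membership and reverse-coded items are inferred assumptions.

SCALE_MAX = 7  # 1 = Strongly disagree ... 7 = Strongly agree

SUBSCALES = {
    "active_control": range(13, 21),         # items 13-20
    "two_way_communication": range(21, 29),  # items 21-28
    "synchronicity": range(29, 36),          # items 29-35
}
REVERSE_CODED = {19, 20, 27, 28, 33}  # negatively worded items

def score(responses: dict) -> dict:
    """Return the mean score per subscale, reverse-coding negative items."""
    out = {}
    for name, items in SUBSCALES.items():
        vals = []
        for item in items:
            r = responses[item]
            if not 1 <= r <= SCALE_MAX:
                raise ValueError(f"item {item}: response {r} outside 1..{SCALE_MAX}")
            # Reverse-code so that higher always means "more interactive".
            vals.append(SCALE_MAX + 1 - r if item in REVERSE_CODED else r)
        out[name] = sum(vals) / len(vals)
    return out

if __name__ == "__main__":
    # A respondent who answers "Agree" (6) to every item:
    print(score({i: 6 for i in range(13, 36)}))
```

For such a uniform respondent, the two reverse-coded items in each of the first two subscales pull those means down to 5.0.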
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su14042216?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su14042216, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2071-1050/14/4/2216/pdf?version=1645152434"
}
| 2,022
|
[
"Review"
] | true
| 2022-02-15T00:00:00
|
[
{
"paperId": "f09abc126a51028412b15b138c6f264c895b4317",
"title": "Factors Affecting the Efficiency of Teaching Process in Higher Education in the Republic of Serbia during COVID-19"
},
{
"paperId": "ebbee11e7fff65d8eda342649be5be683a2df89b",
"title": "Blockchain Technology Enhances Sustainable Higher Education"
},
{
"paperId": "f1757497bfea46c5075d2e1185b88f538b49b094",
"title": "Extended Reality in Higher Education, a Responsible Innovation Approach for Generation Y and Generation Z"
},
{
"paperId": "8adc96980d6872880c8d3ace746049027c5125e5",
"title": "Exploring Opportunities and Challenges of Artificial Intelligence and Machine Learning in Higher Education Institutions"
},
{
"paperId": "6839e1eed27ea2d292545c4ca253ee152a02a45b",
"title": "Eco-innovation Capability and Sustainability Driven Innovation Practices in Romanian SMEs"
},
{
"paperId": "e4f9756b300fcf5969a86e12ee34ca5a8c445dec",
"title": "An experimental based investigation into the effects of website interactivity on customer behavior in on-line purchase context"
},
{
"paperId": "f7fef5738f68835a0f714e35bef698ff602dc642",
"title": "Website interactivity may compensate for consumers’ reduced control in E-Commerce"
},
{
"paperId": "720a8bc1c8c078faa20280cdf610c1580ac8515d",
"title": "Website interactivity and brand development of online travel agencies in China: The moderating role of age"
},
{
"paperId": "ab26151e07bd6865648d6bd434272651a2d860fb",
"title": "Investigating the online customer experience – a B2B perspective"
},
{
"paperId": "3796b8a1622d70d24590590a4b8fb8a22860bea8",
"title": "Comparison of Perceived Interactivity Measures of Actual Websites Interactivity"
},
{
"paperId": "4f11d92c029083bc6fca7b3c9b0d398538ccdb34",
"title": "Brand Experience on the Website: Its Mediating Role Between Perceived Interactivity and Relationship Quality"
},
{
"paperId": "196864b8c18faeff8f3661750d6dcc2dfc10c917",
"title": "Re-examining online customer experience to include purchase frequency and perceived risk"
},
{
"paperId": "23e167ee3b8c2b1141ee006cefbde068ca098c62",
"title": "Toward a conceptualization of the online shopping experience"
},
{
"paperId": "73b553b4001e588f25dc5a44f25ba380202a6b02",
"title": "Energizing business transactions in virtual worlds: an empirical study of consumers’ purchasing behaviors"
},
{
"paperId": "84e2ccbdd347924651a3d91587ca9c1e054dd166",
"title": "Modelling real-time online information needs: A new research approach for complex consumer behaviour"
},
{
"paperId": "df32ddc6f12973d7a9cba27b1225589e3c8dbb22",
"title": "The impact of complexity and perceived difficulty on consumer revisit intentions"
},
{
"paperId": "bc61a11ff3a13c9676f316363880fed1242c0215",
"title": "Mobile Marketing: Finding Your Customers No Matter Where They Are"
},
{
"paperId": "152cf08f2e46e295b21c1199086b0e4fbfcd6a5a",
"title": "Understanding Digital Marketing: Marketing Strategies for Engaging the Digital Generation"
},
{
"paperId": "238997c44f6dac914ce370ee8c22d92c50fe41af",
"title": "Determinants of Perceived Web Site Interactivity"
},
{
"paperId": "35531c3e4f770936fc4f11148894369aae720b8b",
"title": "Factors influencing website traffic in the paid content market"
},
{
"paperId": "f405884be77d57e806d3b7bf1154b00b66381e9b",
"title": "Marketing to the Social Web: How Digital Customer Communities Build Your Business"
},
{
"paperId": "2dc7875834625094c8e182f60069cc5f345b9ff6",
"title": "Interactivity and its Facets Revisited: Theory and Empirical Test"
},
{
"paperId": "afe77d35bd919e7bf7f10ab8ed2ff011d5f579bb",
"title": "Effects of Perceived Interactivity on Web Site Preference and Memory: Role of Personal Motivation"
},
{
"paperId": "18e869438fc6e03442d06bcacdb7ce179233ba76",
"title": "Conceptualizing and Measuring the Perceived Interactivity of Websites"
},
{
"paperId": "b6315c50039d96aeb80dcf96aa71bcbc117d6532",
"title": "Factors influencing consumers' willingness to accept mobile advertising: a conceptual model"
},
{
"paperId": "ae072a26ac315fa8eb893d09720a6fc6e199bac3",
"title": "Diffusion and success factors of mobile marketing"
},
{
"paperId": "bc49783106d60a1de0bac9ce081955dd8a1b192e",
"title": "The Mediating Role of Perceived Interactivity in the Effect of Actual Interactivity on Attitude Toward the Website"
},
{
"paperId": "a0b68e7db857892275efe6fb020d25085ba8a634",
"title": "GIST: A Model for Design and Management of Content and Interactivity of Customer-Centric Web Sites"
},
{
"paperId": "30aff46cc28227f7e06aeff4adc886ed10105bfc",
"title": "Developing a scale to measure the interactivity of websites"
},
{
"paperId": "52b4c83d473729be1d6d987091dfc6837ce81a57",
"title": "What is Interactivity and is it Always Such a Good Thing? Implications of Definition, Person, and Situation for the Influence of Interactivity on Advertising Effectiveness"
},
{
"paperId": "40005d79caa9b3638f056ab4e0405520212faf1a",
"title": "Measures of Perceived Interactivity: An Exploration of the Role of Direction of Communication, User Control, and Time in Shaping Perceptions of Interactivity"
},
{
"paperId": "f1aa5b31b41f0f51c20625af6c20a286b9743678",
"title": "A four-part model of cyber-interactivity"
},
{
"paperId": "16e03f26510e25a41ef2fa9018abf640d64049b9",
"title": "The Effects of Progressive Levels of Interactivity and Vividness in Web Marketing Sites"
},
{
"paperId": "8d18dfa20fd2735aaf4962175c6a79f95816a6c4",
"title": "Defining Interactivity"
},
{
"paperId": "5cbfdd0224256da3639375ecdb7be5187893af45",
"title": "Attitude toward the Site"
},
{
"paperId": "5ac60be482fe521fdbe601c432e58068287f7036",
"title": "Defining virtual reality: dimensions determining telepresence"
},
{
"paperId": "9fe7e75b940051380b08c59ff85c1006dc3c8410",
"title": "Comparative analysis of the influence on consumers via mobile phones and computers"
},
{
"paperId": "b0eb47354eb442c0dccbaedc84297ffc886ec65c",
"title": "Effects of Interactivity on Website Involvement and Purchase Intention"
},
{
"paperId": "36907abc56ad2d9dd0e829ec67eae03631e11fae",
"title": "Online customers, digital marketing: The CMO-CIO connection"
},
{
"paperId": "102ea1973e5740441dbd311a0cb1006f3af76c84",
"title": "The Ultimate Web Marketing Guide"
},
{
"paperId": "98125f94aba73d506cf45659ea513180f9350013",
"title": "Mobile Marketing: From Marketing Strategy to Mobile Marketing Campaign Implementation"
},
{
"paperId": "d7ee39dd5ca39a417dec623f5567a72c655b9106",
"title": "The Design of Sites - Patterns for Creating Winning Web Sites (2. ed.)"
},
{
"paperId": "4fa049028a471053f567ff4c86e87d429549e8fd",
"title": "SIMILARITY ANALYSIS OF THREE ATTITUDE-TOWARD-THE-WEBSITE SCALES"
},
{
"paperId": "8fabd84dc583273c2e2e9623ab59e0214d01987b",
"title": "Web Commercials and Advertising Hierarchy-of-Effects"
},
{
"paperId": null,
"title": "Digital Marketing, 2nd ed.; Taylor & Francis Group: Abingdon, UK; Routledge"
},
{
"paperId": null,
"title": "Perceived Interactivity and Attitude toward Website"
},
{
"paperId": "b96bed702f63e2fb2817f82683c8671481ce8868",
"title": "International Journal of Information Management Mobile Marketing Research: The-state-of-the-art"
},
{
"paperId": null,
"title": "Study on the actual and perceptual interactivity of the website"
},
{
"paperId": null,
"title": "Get Up to Speed with Online Marketing"
},
{
"paperId": null,
"title": "Information Communication Technologies and City Marketing: Digital Opportunities for Cities around the World Information Science Reference"
},
{
"paperId": null,
"title": "Mobile marketing: Implications for marketing strategies"
}
] | 17,274
|
en
|
[
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02df59c12a20e5d67fdeff7d64710336635ffe8b
|
[] | 0.866675
|
The Big Data, Artificial Intelligence, and Blockchain in True Cost Accounting for Energy Transition in Europe
|
02df59c12a20e5d67fdeff7d64710336635ffe8b
|
Energies
|
[
{
"authorId": "88588811",
"name": "J. Gusc"
},
{
"authorId": "2152457800",
"name": "Peter Bosma"
},
{
"authorId": "117575051",
"name": "S. Jarka"
},
{
"authorId": "1403925462",
"name": "A. Biernat-Jarka"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-155563",
"https://www.mdpi.com/journal/energies",
"http://www.mdpi.com/journal/energies"
],
"id": "1cd505d9-195d-4f99-b91c-169e872644d4",
"issn": "1996-1073",
"name": "Energies",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-155563"
}
|
The current energy prices do not include the environmental, social, and economic short and long-term external effects. There is a gap in the literature on the decision-making model for the energy transition. True Cost Accounting (TCA) is an accounting management model supporting the decision-making process. This study investigates the challenges and explores how big data, AI, or blockchain could ease the TCA calculation and indirectly contribute to the transition towards more sustainable energy production. The research question addressed is: How can IT help TCA applications in the energy sector in Europe? The study uses qualitative interpretive methodology and is performed in the Netherlands, Germany, and Poland. The findings indicate the technical feasibilities of a big data infrastructure to cope with TCA challenges. The study contributes to the literature by identifying the challenges in TCA application for energy production, showing the readiness potential for big data, AI, and blockchain to tackle them, revealing the need for cooperation between accounting and technical disciplines to enable the energy transition.
|
# energies
_Article_
## The Big Data, Artificial Intelligence, and Blockchain in True Cost Accounting for Energy Transition in Europe
**Joanna Gusc** **[1,]*** **, Peter Bosma** **[2], Sławomir Jarka** **[3]** **and Agnieszka Biernat-Jarka** **[4]**
1 Faculty of Economics and Business, University of Groningen, P.O. Box 800,
9700 AV Groningen, The Netherlands
2 Deloitte, Groote Voort 291a, 8041 BL Zwolle, The Netherlands; pbosma@deloitte.nl
3 Institute of Management, Warsaw University of Life Sciences—SGGW, Nowoursynowska 166,
02-787 Warsaw, Poland; slawomir_jarka@sggw.edu.pl
4 Institute of Economics and Finance, Warsaw University of Life Sciences—SGGW, Nowoursynowska 166,
02-787 Warsaw, Poland; agnieszka_biernat_jarka@sggw.edu.pl
***** Correspondence: j.s.gusc@rug.nl
**Citation:** Gusc, J.; Bosma, P.; Jarka, S.; Biernat-Jarka, A. The Big Data, Artificial Intelligence, and Blockchain in True Cost Accounting for Energy Transition in Europe. _Energies_ **2022**, _15_, 1089. [https://doi.org/10.3390/en15031089](https://doi.org/10.3390/en15031089)
Academic Editors: Ignacio Mauleón
and Peter V. Schaeffer
Received: 1 December 2021
Accepted: 19 January 2022
Published: 1 February 2022
**Publisher’s Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: The current energy prices do not include the environmental, social, and economic short**
and long-term external effects. There is a gap in the literature on the decision-making model for the
energy transition. True Cost Accounting (TCA) is an accounting management model supporting
the decision-making process. This study investigates the challenges and explores how big data, AI,
or blockchain could ease the TCA calculation and indirectly contribute to the transition towards
more sustainable energy production. The research question addressed is: How can IT help TCA
applications in the energy sector in Europe? The study uses qualitative interpretive methodology
and is performed in the Netherlands, Germany, and Poland. The findings indicate the technical
feasibilities of a big data infrastructure to cope with TCA challenges. The study contributes to the
literature by identifying the challenges in TCA application for energy production, showing the
readiness potential for big data, AI, and blockchain to tackle them, revealing the need for cooperation
between accounting and technical disciplines to enable the energy transition.
**Keywords: True Cost Accounting; big data; sustainability; blockchain; AI; energy production**
**1. Introduction**
The energy markets face challenges in the transformation towards sustainable alternatives, with some European countries such as Sweden and the Netherlands showing stronger
readiness than others, e.g., Poland and Hungary [1–3]. The technical and social aspects
of energy production in transitioning towards renewable alternatives seem extensively
covered in literature [4–7]. From the economic perspective, there are business models
for classification [8] and accounting frameworks introduced to track the energy efficiency
trends [9]. There is a gap, however, in the literature on the decision-making model for
the energy transition. Specifically, studies are scarce on how to enable decision-makers
throughout the energy production chain (from energy sources and production entities to
energy (pro)consumers) to make better decisions, i.e., choose more sustainable alternatives.
This paper addresses this gap by analysing the True Cost Accounting model for energy cost
estimation based on a broad scope of information covering all aspects of the energy production chain, both internal and external. The analysis goes beyond a single discipline and
combines technical and accounting literature to critically assess the TCA model for energy
cost estimation. Further, it explores the potential of an innovative idea of strengthening the
TCA model with big data, Artificial Intelligence, and blockchain. In doing so, it contributes
to developing a new body of literature on big data use in the accounting field. The study
investigates the transition challenges facing the energy sector and explores how the use
of big data, AI, or blockchain could ease the TCA calculation and indirectly support the
_Energies 2022, 15, 1089_ 2 of 24
move towards more sustainable operations. The primary research question guiding this
study is how IT can help TCA applications in the energy sector in Europe. In answering
this question, we investigate the current challenges of TCA and the current use of big data
in management accounting.
Big data, AI, and blockchain, as elements of Industry 4.0, show different levels of
development across countries in the European Union [10,11]. The current study applies a
multidisciplinary and multinational approach to collect opinions from a diverse group of
relevant stakeholders—IT specialists, sustainability and energy experts, and accountants—
in the European energy market, with particular focus on the Netherlands, Germany, and
Poland, as three countries with contrasting energy markets, levels of Industry 4.0 advancement, and development of the accounting discipline.
_1.1. Literature Review_
True Cost Accounting Framework
TCA is a management accounting concept that estimates a true cost [12,13]. TCA is
a holistic approach accounting for current and future, internal and external impacts, by
discounting it in a single price [12,14,15]. TCA provides insight into the complex economic,
social, and ecological processes through which sustainability should be attained [16]. As a
result of the TCA application, the existing prices of products and services can be adjusted
to include the internal and external impacts throughout the whole lifecycle of the products
or services [17]. Consequently, sustainable decision making may be stimulated by putting
a price on otherwise seemingly free impact costs to society [13]. The stimulation can
enable the energy markets playing an important role in tackling the climate change [10,18],
making the externalities of energy production visible which are hardly included in the cost
estimations [11,19].
The TCA framework consists of five steps as shown in Figure 1 [20]. The first four steps
are essential for calculating the cost; the fifth step considers management decisions made
after the estimation and will be omitted from the current study.
1. Analyse company situation and map stakeholders engaged. Identify a cost object by
analysing the company situation [20]. A cost object refers to a process, a waste stream,
an industry, or an entity. Based on the cost object, a True Cost price calculation will
be performed.
2. Define the cost object to identify and outline the scope of the impacts: here, all the
possible externalities (side-effects/by-products or unintended production results)
should be identified. It is essential to set the limit on how far to go. Externalities can
be endless, so a well-defined scope is required.
3. Measure all impacts within the scope of the cost object [20]. Life cycle assessment
(LCA) analyses are helpful since they specify the full usages of materials and the
waste streams created.
4. Monetise all the significant impacts into a monetary unit [20]. This helps overcome
comparison and integration issues for social and environmental impacts [21,22].
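The four calculation steps can be read as a small pipeline: choose a cost object (step 1), fix the scope of externalities (step 2), measure each in-scope impact (step 3), and monetise the measurements into one price (step 4). A minimal sketch of that pipeline; the impact categories, quantities, and monetisation factors below are illustrative placeholders, not figures from this study:

```python
from dataclasses import dataclass

@dataclass
class Impact:
    """One in-scope externality of the cost object (steps 2-4)."""
    name: str        # e.g. "CO2 emissions" (hypothetical category)
    quantity: float  # measured amount per unit of the cost object (step 3)
    unit: str        # e.g. "kg/MWh"
    price: float     # monetisation factor, EUR per unit (step 4)

def true_cost(market_price: float, impacts: list) -> float:
    """True cost = market price plus all monetised external impacts."""
    return market_price + sum(i.quantity * i.price for i in impacts)

# Hypothetical cost object (step 1): 1 MWh of coal-fired electricity.
impacts = [
    Impact("CO2 emissions", quantity=900.0, unit="kg/MWh", price=0.05),
    Impact("air pollution (health)", quantity=1.0, unit="MWh", price=12.0),
]
print(true_cost(60.0, impacts))  # 60 + 45 + 12 = 117.0
```

Steps 1 and 2 correspond to deciding what belongs in the `impacts` list; widening the scope simply appends more `Impact` entries, which is also where the scoping challenge of step 2 becomes visible.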
Literature on TCA reveals several challenges in its application. For example, the
measurement and monetisation methods are incomplete, and TCA requires development to
provide complete and comprehensive coverage of all the identified impact categories [17].
Furthermore, TCA is complex and should therefore provide useful information efficiently
in order to improve its applicability in practice [17]. This effectiveness–efficiency trade-off
is important since the costs of the analysis should not outweigh the benefits of more
useful accounting information. When analysing the challenges of TCA, three categories are
identified: complexity, accuracy, and timeliness.
**Figure 1.** True Cost Accounting framework. Source: [20].
_1.2. TCA Challenges_
1.2.1. TCA Complexity
1. Society, the environment, and the economy are interrelated elements interacting with
each other. TCA deals with the different scales and domains of social, environmental,
and economic impacts, and those impacts are interrelated. Measurements that are not integrated into one single, comparable unit [23] have consequences for interpreting
the result.
2. Across industries and throughout the life cycle of a product, different metrics are
used for measurement and monetisation [17]. There is no consensus on measurement
and monetisation, and this lack of standardisation makes it difficult to measure the
product’s impacts uniformly [23]. Especially with regard to monetisation, many
different valuation methods exist [24,25].
3. TCA uses data from multiple disciplines, such as bioscience, biology, psychology,
economy, and accounting, to understand the interaction among organisations, society,
and the natural environment [26]. Each new practice for measurement and monetisation creates a new focus for negotiation, contestation, and political struggle over
values [27].
1.2.2. TCA Accuracy
1. Some impacts deal with emotions and subjectivity, for example, landscape or stress,
and are difficult to quantify and assign value [28].
Monetisation uses different valuation methods: direct behavioural valuation and indirect valuation [17]. The first technique measures the monetary value directly from the preferences or behaviour of the stakeholder, using available market prices and observed actual behaviour [17]. The accuracy challenge occurs in all situations where differences appear between what stakeholders say ‘they would do’ and what ‘they actually do’ [17]. The indirect techniques estimate either the cost of avoidance and restoration or the damage costs [17]. The avoidance and restoration approaches use real market prices for existing technological solutions to avoid, restore, or control pollution or damage. The damage costs approach estimates the damage caused by a pollutant using scientific, statistical, and behavioural valuation methods [17]. All the approaches mentioned above share shortcomings in the
availability of the data and the reliability of the price estimates, causing inaccuracies in
this step.
2. The true cost of an impact depends on its context and the interlinkages of variables. Taking water usage on its own, for instance, is an incomplete measure of the true cost of water usage (water use in areas with plentiful rainfall is less stressful than water used for milk production and cattle grazing) [23].
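As a toy illustration of the monetisation step and the damage-cost approach described above, the calculation reduces to multiplying measured impact quantities by assumed damage prices; every figure below is a hypothetical placeholder, not an estimate from this study.

```python
# Hypothetical sketch of damage-cost monetisation: each measured impact
# quantity is multiplied by an assumed damage price per unit.
damage_prices = {          # EUR per unit of impact (assumed values)
    "co2_tonnes": 80.0,    # EUR per tonne of CO2
    "nox_kg": 12.0,        # EUR per kg of NOx
}

measured_impacts = {       # quantities from the measurement step (assumed)
    "co2_tonnes": 2.5,
    "nox_kg": 10.0,
}

def monetise(impacts, prices):
    """Return the monetised cost per impact and the total external cost."""
    costs = {name: qty * prices[name] for name, qty in impacts.items()}
    return costs, sum(costs.values())

costs, total = monetise(measured_impacts, damage_prices)
# total = 2.5 * 80 + 10 * 12 = 320.0 EUR
```

In a real TCA exercise the prices would come from the valuation literature rather than being fixed constants, which is exactly where the accuracy challenges above arise.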
1.2.3. TCA Timeliness
1. Long-term cost estimation is characterized by different time lags and inertia, which mask important cause–effect relations when captured at one point in time [26]. For example, one ton of extra CO2 emission now will lead to more expenditures for tackling climate change in the future. However, it is difficult to determine now how strongly the climate will warm up in the upcoming years and what those expenditures will be in the future. Many variables determine the true cost of an impact [29], and these become fully visible only in the long run.
2. The time lags in measuring and monetising an impact are uncertain [30]. It takes some time to gain insight into those processes or for the information to reach managers [31]. By the time the accounting impact information reaches the user, it may already have become outdated [31].
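The discounting that underlies the CO2 example above can be sketched in a few lines; the 3% discount rate and the damage figure are illustrative assumptions only.

```python
# Sketch: present value of a future climate damage under a constant
# discount rate. The rate and damage figure are assumptions.
def present_value(future_damage, rate, years):
    """Discount a damage occurring `years` from now back to today."""
    return future_damage / (1 + rate) ** years

pv = present_value(100.0, 0.03, 50)
# EUR 100 of damage in 50 years is worth roughly EUR 22.8 today at 3%.
```

The choice of rate dominates the result, which is one reason long-run true costs remain uncertain.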
1.2.4. IT and TCA
The current study proposes to address the challenges in TCA application, using IT as
the primary data source for account management [32]. Generally, IT systems can collect,
organize, process, and distribute large amounts of data [33], allowing accountants to
interpret data from many sources [34]. IT systems can be defined as a set of interrelated
components, such as software, hardware, people, procedures, and data that collect, process,
store, and distribute information to support decision making and organisational control [35].
IT systems have shifted from traditional data processing to more progressive and automated data capture; consequently, a greater variety of unstructured data sources, such as big data, can be exploited [36]. Accounting methods are integrating with this new reality of big
data [37]. AI is an outcome of a successful application of big data that can help understand
the past and predict the future based on a large amount of data [38]. It prevents information
overload, predicts future events, and analyses voice-based data and images and other data
sources that are currently not being used in accounting [38]. In addition, blockchain may
be useful in accounting. Blockchain is described as a series of blocks used to establish and
record the ownership of assets, in which an arbiter is not required [39,40]. This enables the
direct exchange of accurate financial information and improves the efficiency and reliability
of transactions [41] and the integrity of transaction history.
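The ‘series of blocks’ idea described above can be illustrated with a minimal hash chain; this is a bare sketch of the tamper-evidence principle, not a real ledger or any specific platform.

```python
import hashlib
import json

# Minimal hash chain: each block commits to its predecessor's hash, so a
# retroactive change to any record invalidates every later block.
def make_block(prev_hash, record):
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    return {"prev": prev_hash, "record": record,
            "hash": hashlib.sha256(body.encode()).hexdigest()}

def chain_is_valid(chain):
    """Check that every block references the hash of the block before it."""
    return all(later["prev"] == earlier["hash"]
               for earlier, later in zip(chain, chain[1:]))

genesis = make_block("0" * 64, {"asset": "1 MWh", "owner": "plant A"})
block2 = make_block(genesis["hash"], {"asset": "1 MWh", "owner": "buyer B"})
```

A production ledger adds distributed validation of identical copies on top of this linkage, which is what removes the need for an arbiter.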
Table 1 shows a literature overview of big data applications in accounting; across these trials, data mining applications are prominent within management accounting [42].
1.2.5. TCA Big Data in Coping with Complexity
Big data and AI enhance the processes of data collection, identification of cause and
effect relations, integration of data, translation of raw data into meaningful information,
and the representation of the data on a manageable and accessible scale more efficiently [65].
Automating the processes of identifying cost drivers, forecasting future costs, measuring
impacts, and evaluating impact in a monetary unit may increase efficiency. Descriptive
and predictive data mining helps identify cause–effect relations in the database, allocating
impact costs to certain activities and estimating future costs. Moreover, to reduce the
complexity, it is important to reduce the scope of TCA. Within big data analytics, it is
important to determine the goal of the analysis [66]. A clear question enables the designer of
the big data tool to exclude all but the relevant data. Therefore, big data and AI may reduce
TCA’s complexity and consequently enhance the TCA’s potential application. Blockchain
may also reduce the complexity of TCA since it supports the automated exchange of
relevant data by all involved parties accurately and efficiently [67]. Moreover, blockchain
uses predefined protocols for a uniform sharing of information, and this standardisation of
data sharing may further reduce the complexity of TCA.
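The predictive data-mining step mentioned above — relating an activity driver to an impact cost and forecasting from it — can be reduced to a least-squares sketch; the driver and cost figures are invented for illustration.

```python
# Toy sketch of predictive data mining for cost estimation: fit a
# least-squares line relating an activity driver to an observed impact
# cost, then forecast the cost at a new activity level. Data are invented.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx  # slope and intercept

output_mwh = [10, 20, 30, 40]        # activity driver (illustrative)
impact_cost = [105, 205, 305, 405]   # observed impact cost in EUR

slope, intercept = fit_line(output_mwh, impact_cost)
forecast_50 = slope * 50 + intercept  # predicted cost at 50 MWh
```

In practice such a model would be fitted on large sensor-fed datasets and would include many drivers, but the allocation-and-forecast logic is the same.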
**Table 1. Data mining applications within accounting literature.**

| Application of Data Mining Studies in Management Accounting | Brief Description of the Research |
| --- | --- |
| Esmat et al. (2018) | Data mining was used to predict customer demand |
| Wald et al. (2013) | Data mining was used to allocate costs to activities more efficiently |
| Hämäläinen and Inkinen (2017) | Data mining was used to reduce emission costs |
| Chou et al. (2011) | Data mining was used for estimating equipment manufacturing costs |
| Chou and Tsai (2012) | Data mining was used to improve the accuracy of equipment inspection and repair in cost management |
| Dessureault and Benito (2012) | Data mining was used for tracing equipment replacement costs |
| Kostakis et al. (2008); Liu et al. (2012) | Data mining was used in defining drivers in activity-based costs and improving production routing |
| Yu et al. (2006); Shi and Li (2008); Miglaccio et al. (2011); Vouk et al. (2011) | Data mining was used to construct cost management, create neural network systems for a faster and more accurate estimation of the total unit cost of construction, and for operation and maintenance |
| Chang et al. (2012) | Data mining was used to forecast product unit cost |
| Yeh and Deng (2012) | Data mining was used to estimate product life cycle cost |
| Deng and Yeh (2010); Deng and Yeh (2011) | Data mining was used to estimate project design and product manufacturing costs |
| Petroutsatou (2012); Kaluzny et al. (2011) | Data mining was used to develop a project-level cost–control system |
| Chen and He (2012) | Data mining was used to develop a project-level cost–estimate system |
| Yu (2011) | Data mining was used to develop ABC classification techniques |
| Xing et al. (2015) | Data mining was used to evaluate and predict educational performance |
| Zhou et al. (2015) | Data mining was used to predict financial distress |

Source: [43–64].
**Proposition 1. Big Data, AI, and blockchain reduce the complexity of TCA practices.**
1.2.6. TCA Big Data in Coping with Accuracy
Big data, AI, and blockchain may improve the accuracy of TCA, particularly its
measurement and monetisation steps. Here, data mining may be useful. Data mining,
defined as the process of identifying valid, potentially novel, and understandable patterns
in data [68], allows for the identification of causal relations and better forecasting of
future costs. Data mining is the most important current paradigm of advanced intelligent
business analytics and decision-supporting tools [42]. In data mining, specific algorithms
are used to extract patterns from data with three different goals: description, prediction, and
prescription [42]. Descriptive data mining refers to understanding and interpretation of the
data. Predictive data mining analyzes the past to predict the future by detecting patterns
of behaviour and extrapolating future actions based on those patterns [42]. Prescriptive
data mining refers to achieving the best possible outcome. So far, within management
accounting, the prediction function has been used the most often since estimation is the
most common task in managerial accounting application of data mining [42]. AI uses data
mining tools to build logic behind the data to forecast future outcomes and identify patterns
for allocating impacts to activities [45]. In order to arrive at the true cost estimations, the
interplay between discounting, uncertainty, damages, and risk aversion is important to
consider [29]. Those four elements can be integrated into a formula, and consequently, the
true cost can be estimated. Accounting may help determine the need and formula to extract
value from the data [69]. Insight should be provided into what data are needed and which relevant variables capture the problem, based on which an analytic model can be built [42].
Consequently, analytics tools can translate the raw data into valuable decision-making
knowledge [70]. Blockchain is a distributed digital ledger used to record and share information through the peer-to-peer network [71]. Identical copies of the ledger are validated
collectively by all network members [72]. This technology implies that, due to the decentralisation feature of blockchain, it is impossible to alter information in a block at a single
location. This results in efficient, secure, transparent, and accurate processing [72]. Thus,
blockchain in TCA may enable linking measurement data from the production line to the
monetisation for environmental, social, and economic impacts accurately and efficiently.
Consequently, it allows the sharing of TCA measurement data between all the involved
parties within the value chain. Together, blockchain, big data, and AI may help identify the
cause–effect relations within the data, support forecasting of future costs, and accurately
share the measurement data.
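One way to combine the four elements named above — damages, uncertainty, discounting, and risk aversion — into a single true-cost figure is a simple simulation; every parameter here is an illustrative assumption, not a value from this study.

```python
import random

# Sketch: Monte Carlo true-cost estimate combining an uncertain damage
# range, discounting over time, and a risk-aversion mark-up. All
# parameter values are illustrative assumptions.
def true_cost_estimate(damage_low, damage_high, rate, years,
                       risk_aversion=1.2, draws=10_000, seed=42):
    rng = random.Random(seed)
    discounted = [rng.uniform(damage_low, damage_high) / (1 + rate) ** years
                  for _ in range(draws)]
    expected = sum(discounted) / draws
    return expected * risk_aversion  # risk-averse mark-up on expectation

cost = true_cost_estimate(80.0, 120.0, rate=0.03, years=50)
```

More refined versions would draw the discount rate and time horizon from distributions as well, rather than fixing them.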
**Proposition 2. Big data, AI, and blockchain result in more accurate TCA applications.**
1.2.7. TCA Big Data in Coping with Timeliness
Digitalisation allows accounting information to be produced, distributed, and interpreted in real time [73]. Different databases connected to each other provide, via automated sensing, real-time insight into the TCA. The measurement of the impacts identified in the
lifecycle of a product, or a service, can be linked directly to the monetisation of the external
and internal costs resulting in a real-time true cost price. The environmental, social, and
economic external data can be integrated with the internal database of production and
automatically updated [45]. The analytical tools will identify relations and correlations and
allocate impact costs to production processes. Big data enables open-source information
sharing so that all involved parties within the life cycle provide and use the required
real-time data to perform the TCA analysis. Blockchain allows for automated exchange and
verification of information; measurement data between parties in the whole value chain can be directly shared [67]. For TCA, that is advantageous since, in order for TCA to work, the required data should come from measurement for which consensus is required by the involved stakeholders. Blockchain, furthermore, allows the secured exchange of that data between all the parties without the approval of an arbiter [67]. This improves the real-time accounting of information and thus the real-time awareness regarding sustainability.

**Proposition 3.** Big data, AI, and blockchain application results in timelier TCA information.
Figure 2 presents the conceptual model for the study and summarises the propositions.

**Figure 2.** Conceptual model. Source: own study.
**2. Materials and Methods**
This research aims to contribute to the literature on sustainability accounting by providing insights into improving the TCA methodology with the application of big data, AI, and blockchain. The task requires an exploratory research design and interpretive research
to explore the reasons and dynamics behind the complex, interrelated processes [74]. The
concept of sustainability accounting is complex and draws together many academic disciplines. Therefore, the potential application of IT technologies and their influence on the
TCA application could only be explored within their social context [74]. Using a qualitative
approach to understand the processes behind the TCA method can provide meaningful
insight into how to improve its methodology [75]. Inductive reasoning was used, as there
was no theory at the start of the research, and any theories that were developed are a result
of this research [76].
_2.1. Constructs_
The selection of impacts used in the True Cost Accounting exercise in the current study is shown in Table A1 in Appendix A. In preparation for this study, the TCA application and estimate of the true price of energy production showed the high complexity of the exercise and its low accuracy and timeliness. The complexity, accuracy, and timeliness
were the core concepts guiding the current study. The accuracy referred to the degree to
which relevant estimates were reliable, the degree to which cause and effect chains between
activities and impacts could be identified, the degree to which subjectivity and uncertainty
could be reduced in estimating costs, and the degree to which the measurements provided
detailed and reliable data. Complexity was operationalized as the degree to which different
metrics were required to measure environmental, social, and economic impacts, the degree
to which the TCA analysis was costly and time consuming, the degree to which different
academic disciplines were needed in the analysis and the degree to which they diverged,
the degree to which different monetisation methods were required and the degree to which
different dimensions and attributes of data sources could be brought together into one
scale. Timeliness relates to the degree of accounting data processing in real time, the degree
to which the data were available and to which measurement from the production could be
directly linked to the monetisation assessments.
In order to discuss the application of big data, AI, and blockchain, the different types of energy production costs were discussed with each respondent to discern for which types of costs IT allows arriving at a more accurate, timelier, and less complex TCA estimation. The scoping was limited to the material impacts, meaning that the plant and system costs have been identified as internal costs. Greenhouse gas emission costs, air pollution costs, landscape and noise impacts, loss of biodiversity, and upstream costs of material and construction have been identified as the external costs for the energy market [25].
The true cost estimation trial for wind and coal energy in the Netherlands conducted prior to this research showed that the construct is defined only fractionally: due to shortcomings in data availability and processing ability, only selected impacts are included in the energy cost. In an attempt to identify a complete scale of material impacts, several were identified and monetised, as shown in Table 2.
_2.2. Data Collection and Respondents_
The data were collected in a cross-sectional manner and consisted of interviewing
the experts on how impacts of energy production can be measurable and translated into
meaningful data. The current study used an earlier developed stakeholder map for the
Dutch energy market of Bosma [25]. The respondents were selected based on their expertise
in big data, analytical software, and accounting tools to provide insights on how big data
applications might help TCA processes. Similarly, Galliers and Huang [77] used experts
to provide alternative narratives to the dominant paradigm. The expert panel provides a
forum where leading experts in a given field can share their experiences and insights [78].
**Table 2. True Cost Accounting estimate for wind and coal energy.**

| Cost Price of Energy Generation in EUR/kWh | Onshore Wind | Offshore Wind | Hard Coal | Coal with CCS after Combustion |
| --- | --- | --- | --- | --- |
| Installation costs | 4.4 | 7.6 | 1.5 | 7.0 |
| O&M costs | 1.0 | 2.0 | 0.8 | 1.0 |
| Fuel costs | 0.0 | 0.0 | 2.0 | 2.0 |
| Sum of plant-level costs (a) | 5.4 | 9.6 | 4.3 | 10.0 |
| Grid costs | 1.0 | 1.0 | 0.5 | 0.5 |
| Balancing costs | 0.3 | 0.3 | 0.0 | 0.0 |
| Profile costs | 1.5 | 1.5 | 0.0 | 0.0 |
| Sum of system costs (b) | 2.8 | 2.8 | 0.5 | 0.5 |
| GHG emissions costs | 0.1 | 0.09 | 7.11 | 2.34 |
| Air pollution costs | 0.07 | 0.07 | 1.37 | 1.47 |
| Landscape and noise impacts | 0.9 | 0.08 | <0.1 | <0.1 |
| Loss on biodiversity | Data not available | Data not available | 0.2 | 0.3 |
| Employment benefits | (<0.01) | (<0.01) | (<0.01) | (<0.01) |
| Upstream costs of materials and construction | 0.45 | 0.45 | 1.9 | 1.9 |
| Cost of nonrecyclable materials | 0.0000015 | 0.0000015 | <0.0000015 | <0.0000015 |
| Sum of all quantifiable external costs (c) | 1.53 | 0.7 | 10.6 | 5.6 |
| Sum of all quantifiable costs (a+b+c) | 9.73 | 13.1 | 15.4 | 16.1 |

| Year | 2019 S1 | 2019 S2 | 2020 S1 | 2020 S2 |
| --- | --- | --- | --- | --- |
| Energy market prices in the Netherlands, EUR/kWh (Statista, 2021) | 20.52 | 20.55 | 14.27 | 13.61 |
| Market prices energy in Germany (Statista, 2021) | 30.88 | 28.78 | 30.43 | 30.06 |
| Market prices energy in Poland (Statista, 2021) | 13.43 | 13.76 | 14.75 | 15.71 |

Source: own calculations.
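The bottom rows of Table 2 follow arithmetically from the three subtotals (total = a + b + c), which can be checked directly:

```python
# Reproducing the bottom line of Table 2: total cost = plant-level (a)
# + system (b) + quantifiable external (c) subtotals, in EUR/kWh.
subtotals = {  # (a, b, c) as reported in Table 2
    "Onshore wind":  (5.4, 2.8, 1.53),
    "Offshore wind": (9.6, 2.8, 0.7),
    "Hard coal":     (4.3, 0.5, 10.6),
    "Coal with CCS": (10.0, 0.5, 5.6),
}

totals = {tech: round(a + b + c, 2) for tech, (a, b, c) in subtotals.items()}
# totals matches the reported row: 9.73, 13.1, 15.4, 16.1
```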
The same (Dutch) proxy of the stakeholders was used for the Polish and German energy markets due to time constraints and since the system complexity of energy generation was treated as similar across EU countries.
The more variety exists in the data, the more patterns, relationships, and knowledge
can be extracted [79]. The Netherlands, Poland, and Germany energy markets were selected
for the study. Poland and the Netherlands are among the least sustainable European energy
markets [80] but show contrasting trends in industry 4.0 developments; the Netherlands
is one of the most advanced, Poland the least [10,11]. Germany, in contrast, is currently
reducing the amount of CO2 emissions significantly and is on the way to becoming the pioneer in renewable energy [81]. In total, 16 respondents were interviewed (see Appendix B,
Table A2) with a total interview time of almost 22 h. The interviews were conducted via
Google Meet due to COVID-19 restrictions on location in the summer of 2021. Before the
interview, a document containing the stakeholders’ analysis, an overview of the types of
energy production costs, an infographic presenting the environmental and societal impacts of energy production, and the true cost calculation from the Dutch energy market preparation study were shared with the respondents [25]. Consequently, these documents
were discussed with the experts to introduce them to the concept of TCA. The interview
guide was used as a baseline for the interview questions (see Appendix C). The interviews
were recorded to improve the data analysis process, and the transcripts were sent to the
respondents for verification purposes.
_2.3. Data Analysis Method_
In preparation for this study, the true cost estimation outcomes (Table 2) were discussed with the representatives of coal (RWE) and wind energy-producing companies. The
current study used an interpretative and thematic data analysis approach. The interviews
were divided into three themes: accuracy, timeliness, and complexity. Consequently, the
interview transcripts were coded according to the three themes. Quotes from the interviews
are placed in tables in the results section (and also appear in the narrative itself). The
narratives were created following Gray [82]. Gray states that narratives are needed to
provide alternative insights and move the boundaries of TCA [82]. Narratives are used
to enrich the current literature on TCA and provide insights into overcoming the current
challenges. Based on the quotes from the respondents, the researcher attempted to assess the degree to which IT can make the TCA methodology more accurate, timelier, and less complex, making use of coding software but leaving much space for diverse opinions and trying to grasp the richness of the information.
**3. Results**
In general, in Europe, the energy prices do not cover the external influences of energy
production [83]. The estimations made during the preparation for this research were new
to most of the respondents and were received with much interest. Presenting Table 2 to
the respondents certainly contributed to broadening awareness of the externalities issue
and revealed the lack of applicable and common methodologies. According to the wind
farm owners we interviewed in Poland, there are no reliable procedures for accounting for this influence. Further, they mentioned that the cost of avoiding negative impacts should be accounted for at the investment planning stage. Owners are aware of potential external influences of production. One owner shared that, during servicing of the wind farm, the technicians found a bird’s nest with eggs high in the gondola of the power plant. The owner took this as a small piece of evidence that energy production from this source does not pose a radical threat to birds. A wind power plant is also a wintering place for ladybugs and other insects; the wind farm became part of the natural environment. The coal energy plant controller in the Netherlands mentioned a similar situation: including the external effects during the investment phase is essential, as it is easier to make a change then than once energy production is already taking place. However, the obstacle mentioned was the lack of procedures and techniques to make these effects visible and account for them.
Further results are presented according to the constructs described in the literature review. During the first interviews, a new aspect emerged that challenged the respondents, namely TCA implementation. It was added in the following interviews and is reported in the results, as it kept coming back.

Overall, the level of awareness about TCA was more advanced in the Netherlands than in Poland; in the latter country, the interviewer faced difficulties in bringing the concept of TCA into the discussion. Moreover, in Germany and the Netherlands, relative openness and transparency were experienced, while these were present to a lesser extent in Poland.
_3.1. Complexity_
The results on complexity could be divided into five areas: metrics, cause-and-effect relationships, diversity of experts needed to collaborate, number of indicators, and resource consumption. Table 3 shows the challenges and solutions developed from the results.
To summarise, big data and AI allow for the automation of data collection and management in TCA, resulting in a decrease in the complexity of TCA processes. The tools are becoming cheaper and are available for identifying patterns, forecasting costs, and allocating costs to drivers. This shows support for Proposition 1.
_3.2. Accuracy_
The accuracy of TCA estimations is a challenge in five areas: quantification and monetisation, fluctuation, objectivity, data availability, and ethics. All respondents mentioned the importance of having a good base as input for interpretation: ( . . . ) We first have to make sure that the basis is good before we let big data and artificial intelligence loose on it. ( . . . ) R10. Table 4 provides an overview of the most important findings on accuracy deficiencies and potential solutions.
**Table 3. TCA complexity and solutions.**

| Result TCA | Challenge | Solution | Result IT |
| --- | --- | --- | --- |
| ( . . . ) thousands of indicators that all interrelate ( . . . ) R10 | Large number of interrelated indicators | Technology is available. Data can be stored in data centres; AI used to detect patterns, blockchain secures | ( . . . ) having large amounts of data is crucial for the evaluation of the whole situation ( . . . ) R3; ( . . . ) The technologies are already there. ( . . . ) R4, R11 |
| ( . . . ) we compared 30 to 40 different metrics ( . . . ) R2 | Common standard | Patterns detected by AI can serve as standard development | ( . . . ) we have a lot of artificial intelligence that can detect patterns very well, and we can visualize data very nicely ( . . . ) R4, R11 |
| ( . . . ) It is hard to consider the whole chain in the life cycle since something can have almost no impact in the direct environment, but a huge impact elsewhere ( . . . ) R4 | Cooperation throughout the life cycle/supply chain | Sharing data would potentially ease cooperation. Blockchain would | No direct support in the data found; data sharing is an issue. |
| ( . . . ) You have to be an expert in all areas. Everything comes together in such a study ( . . . ) R7; ( . . . ) In order to comprehend something like biodiversity loss, it is difficult to see how a population develops, and that is cost-intensive ( . . . ) R3; ( . . . ) These all are sub-topics that are all in-depth and time-consuming ( . . . ) R5 | Manual data collection is costly due to human resource and time consumption | Sensors connected to a blockchain system | ( . . . ) sensing is becoming cheaper and cheaper ( . . . ) R2; ( . . . ) Automated cost systems process a large amount in a short time. ( . . . ) R3 |

Source: own study.
**Table 4. TCA accuracy and solutions.**

| Result TCA | Challenge | Solution | Result IT |
| --- | --- | --- | --- |
| ( . . . ) In many cases, there are impacts that cannot be expressed in CO2 equivalents. ( . . . ) life expectancy, child mortality and human development index are typically things that are not really monetary ( . . . ) R7; ( . . . ) Impacts can occur in 10 years or 100 years, so there is always an uncertainty range here. ( . . . ) R5 | Uncertain estimations | AI modelling | ( . . . ) Technically, you can model each little step of it, and I think you can come up with pretty precise models ( . . . ) R2 |
| ( . . . ) This gives a lot of data problems since data is often not available ( . . . ) R6 | Data unavailable | Data mining | ( . . . ) I believe this information is not available in real time. I use this information ex post. ( . . . ) R16 |
| ( . . . ) It is difficult to predict future climate change policies and whether or not countries will stick to the climate agreements. A value, therefore, is never definite, and it is constantly subject to changes ( . . . ) R5 | Fluctuating values | Identifying relationships through AI modelling | ( . . . ) If you caught those parts in a well-defined causal relation with triggers and conditions, then a computer is able to forecast ( . . . ) R4 |
| ( . . . ) If data is collected manually, they have a low credibility ( . . . ) R11–13; ( . . . ) Everything is built on assumptions and proxies ( . . . ) R5; ( . . . ) Currently, there is a great deal of subjectivity in assessing externalities, biodiversity, etc. R16 | Objectivity inherent in the subjective character | Blockchain | ( . . . ) Blockchain is perfect for getting verifiable data. Given ten different categories of costs, you also have ten different protocols and foundations that verify those numbers. ( . . . ) R9; ( . . . ) If everyone uses the same protocol, data can be exchanged uniformly and verified ( . . . ) R9 |
| ( . . . ) I haven’t seen those social values on your list yet. But if you leave it out, you take the heart out of the system. So, my advice is put them in ( . . . ) R10 | Ethical quantification of social impacts | Data streams to develop definitions | ( . . . ) data streams and the democratisation of data, i.e., making this data available allows socially to simplify and show the effects of an action: that something good or bad ( . . . ) R11–13 |

Source: own study.
Lastly, ethical considerations are important as well. Social values, such as equality, the right to live a worthy life, and freedom, are currently included in the TCA estimation as descriptive elements. The IT application would allow for pattern recognition and quantification at a later stage.

To summarise, the IT technologies make the objective identification of patterns and the forecasting of future costs technically possible. Further, blockchain allows for exchanging verifiable and clean data, which improves TCA accuracy. The results show support for Proposition 2.
_3.3. Timeliness_
The availability of real-time data in TCA is essential to be able to communicate the
holistic aspect of sustainability. If some data is available later, then the estimation of true
cost isfragmented. Currently, due to the manual data collection at each step of TCA, a
time-lag is created by the process itself. Table 5 presents the solutions to the challenges for
the timeliness aspect.
**Table 5. TCA timeliness and solutions.**

| Result TCA | Challenge | Solution | Result IT |
| --- | --- | --- | --- |
| ( . . . ) data from 2014 and here is a study from 2016 and together you arrive at this number ( . . . ) R9 | Time lag in TCA process | IoT sensors and data mining models including immediate processing | ( . . . ) The IoT devices that we have, and sensing that we have, absolutely allow to get real-time measurements ( . . . ) R2; ( . . . ) The input data can be measured in real time via sensors and IoT devices. I do not believe that the human can use it directly. So, you need an immediate processing ( . . . ) R2 |
| ( . . . ) It does depend on what is being measured. For example, CO2 emissions and nitrogen are already being measured in real time. ( . . . ) R5 | Time lag in data availability | IoT sensors and data mining models including immediate processing | ( . . . ) I believe that aggregate data influences long-term decisions, i.e., investments. Real data is needed, e.g., when the level of pollution is close to the maximum, harmful to people, then we should be able to make decisions and take action fast, to change the source. ( . . . ) R16; ( . . . ) I wonder how much the data collected here and now delivers to us versus the data aggregated after a quarter or half a year or a year. I believe that aggregate data influences long-term decisions, i.e., investments. ( . . . ) R16 |
| ( . . . ) Here, the analysis in the real state makes sense, certain things at the level of companies can be arranged and optimized in this way ( . . . ) R16 | Data in different metrics appear in different timeframes | Standardisation of data models | ( . . . ) You can report on it, in a calculation model, in every time frame window or even live, provided that you have standardized it. That is really important here ( . . . ) R4, R11 |

Source: own study.
Here, the received solutions show a mixed picture. The costs of providing real-time insight may not outweigh the benefits of real-time information; therefore, which data should be available in real time should be explored further.

( . . . ) The adding of all new details may not be necessary. It may be better to update the whole analysis once in a while instead of in real time. The cost and benefit consideration is important here ( . . . ) R7.
To summarise, the tools and technologies currently available allow for improving the timeliness of TCA information to some extent, showing partial support for Proposition 3. Clearly, not all information needs to be available in real time at all costs; some delays can potentially strengthen the results.
_Energies 2022, 15, 1089_ 12 of 24
_3.4. Implementation_
The implementation of the TCA technique in general, and specifically with the support of IT in combination with big data, AI, or blockchain, kept running into obstacles. Currently, the human aspect of collaboration between parties to arrive at a reliable and comprehensive true cost estimation seems to be the biggest challenge. Institutions seem to work independently of each other, lacking collaboration and developing too many methods that are not accepted by the industry. The results suggest that adopting an open blockchain would eliminate the need for such collaboration, thereby solving this challenge instantly.
Ownership of data is an issue in the implementation. Companies are hesitant to share
sensitive information. Blockchain and automation may deal with these issues around data
ownership and other parties looking into the sensitive data.
( . . . ) Companies are probably only willing to share their data, preferably by AI in an
automated manner ( . . . ) R9.
( . . . ) The first attempts have been taken to make an open protocol to enable uniform
and congruent sharing of data ( . . . ) R9.
The main challenge concerning the application of blockchain technology in TCA is
gaining mutual consensus on working in one platform.
( . . . ) The whole circular chain of events in the lifecycle of energy production should be
united in the blockchain. That means that you will need to combine different blockchains
since you can never have just one blockchain. So that may become complex exercise
( . . . ) R4.
**4. Discussion**
The early stage of adopting True Cost Accounting to include the externalities is due to a lack of awareness of what they are and what they constitute. We found this repeatedly in the Netherlands, Germany, and Poland. In all three countries, the open discussion we initiated about the challenges of estimating the true cost of energy production, including the externalities on economic, social, and environmental dimensions, was received with genuine interest. Participants engaged in the TCA exercise agreed on the importance and the value of this approach in decision-making on the transition to sustainable energy production. When presented with opportunities for improving the TCA estimation with the aid of IT, specifically big data, AI, and blockchain, many opportunities emerged, most of them supporting the Propositions developed in the literature review.
_4.1. Complexity_
The results support Proposition 1: big data technology enables searching for patterns and cost drivers to predict and allocate costs to activities more efficiently, also by developing standards. Big data applications allow dealing with TCA's information-overload and time-consumption challenges. Individuals cannot comprehend that complexity, and therefore automation of TCA using big data technology
seems highly promising. Currently available IT technologies are advanced enough to
deal with massive amounts of data sets to find patterns. The combination of TCA and
big data is, therefore, value adding. More variables can be included in the analysis, and
consequently, the system can be analyzed as a whole instead of as isolated elements of the
system. Literature on management accounting already acknowledges the potential of big
data for accounting [84] in general. The current study adds to the literature showing the
potential of big data for such advanced management accounting as TCA, which requires
combining financial and nonfinancial data from interdisciplinary resources.
Although big data implementation in TCA has not yet started, the application of
big data and AI may accelerate the TCA development by reducing or even eliminating
TCA complexity.
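The "pattern search" step that Proposition 1 refers to can be illustrated with a minimal sketch. This is an assumption for illustration only, not the study's method: all names and figures below are hypothetical. Ranking candidate cost drivers by their correlation with observed external costs is one of the simplest pattern-finding steps a big data pipeline could automate before allocating costs to activities.

```python
# Sketch (illustrative, not the study's method): rank candidate TCA cost
# drivers by the strength of their correlation with observed external costs.

from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def rank_cost_drivers(drivers, external_costs):
    """Return (driver, correlation) pairs sorted by |correlation|, strongest first."""
    scored = {name: pearson(series, external_costs) for name, series in drivers.items()}
    return sorted(scored.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Hypothetical monthly observations for one plant.
drivers = {
    "tonnes_coal_burned": [80, 95, 90, 110, 120, 100],
    "wind_share_pct":     [20, 18, 22, 15, 12, 19],
    "maintenance_hours":  [40, 35, 50, 45, 42, 38],
}
external_costs_keur = [410, 480, 455, 560, 610, 505]  # e.g., CO2 + air pollution

ranking = rank_cost_drivers(drivers, external_costs_keur)
```

On real data, each candidate driver would be a column in a much larger dataset, and a proper analysis would go beyond pairwise correlation, but the principle of letting the data surface the dominant cost drivers is the same.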
_4.2. Accuracy_
The results show support for Proposition 2. This indicates that the application of
IT reduces the negative challenges of TCA concerning the accuracy of measurement
and monetisation.
Therefore, installing a big data environment and subsequently modelling the data statistically enables precise quantification and valuation, improving cost allocation and reducing uncertainty in predicting future costs. Technically, everything is possible.
The problem is that all involved parties should cooperate to help install the data environment. This cooperation is weak or absent at the moment. A government may step in here to steer the industry or mandate information measuring and sharing using blockchains. It may, however, have no interest in doing so, or fear the change, given that applying blockchain would allow for the perfect exchange and continuous verification and sharing of TCA data. The application of blockchain technology would enable sharing of data without manipulation.
Should this happen, the uncertainties within the parameters would permanently cease to exist. It remains impossible to predict precisely what will happen in the future, and establishing a complete story of causality in the system is challenging. In the meantime, TCA may use standard risk management accounting techniques; e.g., Groot and Selto discuss risk in decision making [85]. Some types of costs in energy production are not deterministic but rather stochastic due to unpredictable future conditions. A distribution function can help predict the uncertainty, since it allows one to define the mean value and the standard deviation, especially in cases where sufficient data about the past is available [85]. Consequently, this provides an interesting range to work with in TCA. Automation of the TCA practices and big datasets provide sufficient data and enable dealing with subjectivity, human intervention, and the variety in scales and units.
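The distribution-function idea can be sketched in a few lines. The figures are hypothetical, not the study's data: a stochastic cost parameter is summarised by its mean, sample standard deviation, and a working range instead of a single point estimate.

```python
# Sketch: treating a non-deterministic TCA cost parameter (e.g., the damage
# cost of one tonne of CO2, in EUR) as a random variable. With sufficient past
# data, report a mean, a standard deviation, and a mean +/- k*sigma range.
# All figures below are illustrative.

from statistics import mean, stdev

def cost_range(observations, k=2):
    """Mean, sample standard deviation, and a mean +/- k*sigma working range."""
    m = mean(observations)
    s = stdev(observations)
    return m, s, (m - k * s, m + k * s)

past_co2_damage_eur = [58, 61, 66, 70, 63, 75, 68, 72, 64, 69]
m, s, (lo, hi) = cost_range(past_co2_damage_eur)
```

The resulting interval is the "interesting range" mentioned above: a decision-maker can plan against the upper bound rather than against a single, falsely precise number.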
TCA requires a dynamic process of measurement and monetisation and is not fixed or standardized. This contradicts the current literature, which emphasizes that standardisation of sustainability accounting is required [86]. It may be wise to be careful in standardizing all TCA processes, or to define built-in evaluation mechanisms, to prevent metrics from being unable to fully grasp the total impact of products or services.
_4.3. Timeliness_
Although standardisation is important to cope with the complexity described earlier, it makes the TCA process too static. It must be applied with caution so as not to jeopardize the machine-learning effect of big data. For big data applications in TCA to function, achieving a degree of timeliness is crucial. A complex analysis that requires a lot of computing power may take weeks to produce output. This is extremely costly and may not outweigh the benefits of real-time TCA information. This tradeoff should be considered when implementing IT technologies in TCA. Then, TCA and big data may work together to provide more useful information. TCA may look into the management accounting
literature. The expected value of additional information can be calculated based on different
conditions and probabilities [87]. Not all extra details in decision making are essential. It
is important to calculate the expected value of relevant decision-making information to
determine its maximum price [87]. The cost of establishing the whole data environment
that provides the required TCA input should be subtracted here to determine whether
combining TCA and big data for timelier information is beneficial. The costs of installing
the data environment can be determined accurately and consequently, and the expected
value of additional information can be calculated. Bayes’ Theorem, based on posterior
probabilities and conditional probabilities, is helpful to arrive at the expected value and
determine whether additional information is beneficial [88].
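As a hedged illustration of the expected-value-of-information reasoning referenced above, the following sketch applies the standard textbook expected value of perfect information (EVPI) to a stylised TCA decision. The payoffs, probabilities, and action names are hypothetical assumptions, not figures from the study.

```python
# Sketch (textbook EVPI, illustrative numbers): the maximum price worth paying
# for additional information before deciding whether to invest in a real-time
# TCA data environment.

def expected_value(payoffs, probs):
    return sum(p * v for p, v in zip(probs, payoffs))

def evpi(payoff_table, probs):
    """EVPI = E[best payoff per state] - best E[payoff] under the prior."""
    best_under_prior = max(expected_value(pay, probs) for pay in payoff_table.values())
    best_per_state = [max(pay[i] for pay in payoff_table.values())
                      for i in range(len(probs))]
    return expected_value(best_per_state, probs) - best_under_prior

# Two states of the world: external costs turn out (low, high). Payoffs in kEUR.
probs = [0.6, 0.4]
payoffs = {
    "invest_in_real_time_data": [-50, 200],  # costly, pays off if costs are high
    "keep_periodic_reporting":  [30, -80],
}
ceiling = evpi(payoffs, probs)  # upper bound on what the information is worth
```

Bayes' theorem enters when the information is imperfect: the prior probabilities are replaced by posteriors conditional on the signal, and the same calculation yields the (smaller) expected value of sample information, to be compared against the cost of installing the data environment.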
_4.4. Implementation_
During the research, implementation struggles surfaced quickly. Organisations in the energy market seem to await governmental institutions to mandate the establishment of the data environment. Similar to Seele, it seems that capturing the concept of sustainability
in an algorithm needs a unified definition and, therefore, involvement of the stakeholders
and the legal authorities to make the required data operational [89].
Confidentiality and sharing are pressing issues. Currently, companies most likely already hold a lot of data that they keep to themselves. Therefore, establishing an industrial protocol per type of cost is important to enable all parties to collectively provide and exchange their TCA input data in a uniform and transparent manner. Blockchain allows data to be used in the calculation without other parties diving into the data to extract sensitive details; it secures ownership of data. The protocol should not come from the companies themselves but rather from an independent foundation that owns and audits it; a protocol for every type of cost should secure the data sharing.
Blockchain is a revolutionary new technology; its application will be expanded and reconsidered, and the difficulties that arise over time should be addressed with the help and guidance of a third party to prevent misuse [90]. Given a well-functioning data environment that gathers, processes, and shares TCA input data, analytical tools can perform predictive and descriptive analysis.
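One minimal way such a protocol could keep raw figures private while still preventing manipulation is a salted hash commitment, where a ledger stores only the digest. This is a sketch under assumptions, not the study's design; the record fields and names are invented for illustration.

```python
# Sketch (assumption, not the study's protocol): a salted hash commitment lets
# a company prove later that its shared TCA input was not manipulated, without
# exposing the raw figures up front. A blockchain would store only the digest.

import hashlib
import json
import secrets

def commit(record, salt=None):
    """Return (digest, salt). Publish the digest; keep record and salt private."""
    salt = salt or secrets.token_hex(16)
    payload = json.dumps(record, sort_keys=True) + salt
    return hashlib.sha256(payload.encode()).hexdigest(), salt

def verify(record, salt, digest):
    """Anyone holding the published digest can check a later disclosure."""
    return commit(record, salt)[0] == digest

tca_input = {"plant": "A", "co2_tonnes": 1250, "period": "2021-Q2"}
digest, salt = commit(tca_input)            # digest goes on the ledger

assert verify(tca_input, salt, digest)      # honest disclosure passes
tampered = {**tca_input, "co2_tonnes": 900}
assert not verify(tampered, salt, digest)   # manipulation is detected
```

Commitments only guarantee integrity of what is later disclosed; computing on data that is never disclosed would need heavier machinery (e.g., zero-knowledge proofs), which is beyond this sketch.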
It is recommended that the academic and business worlds work together more intensively to deal with the current TCA and IT challenges.
All the implementation barriers should be studied more extensively, and it might be important to link them to the wider literature on barriers to sustainability practices, e.g., the barriers to the circular economy studied by Galvão et al. Using bibliometric research, they identified barriers in six groups: technological; policy and regulatory; financial and economic; managerial; performance indicators; and customers and social [91]. These themes can serve as an umbrella for the implementation barriers identified in TCA: the lack of collaboration and standardisation relates to the policy, regulatory, and managerial barriers; the financing hurdle relates to the financial and economic barrier; and the lack of advanced technologies is a technological barrier. This understanding of implementation barriers from the broader literature helps study TCA implementation in a broader context.
_4.5. Future Research_
Besides the recommendation to focus on the literature on the implementation barriers,
it is important to dive into establishing protocols for all different types of energy production
costs. It is helpful to attempt to collaborate with practitioners to establish a protocol on how
to share the relevant TCA input data and in which format. Furthermore, it is important
to dive further into the social impact assessment and what role big data could play here.
Ethical considerations concerning human rights should underpin how society, companies, and the environment relate to each other. Much research has already been done on quantifying social values [92,93].
TCA literature should go even further by attaching monetary values to social impacts
since that would lead to a better weighting and comparison in all three dimensions and
between organisations. Lastly, it might be helpful in the future to enable the experts within the panel to interact with each other. This would create a different interview dynamic, where disciplines come together to search for answers.
_4.6. Strengths and Limitations_
This research approached a whole new field of research by applying big data, AI, and
blockchain technologies into True Cost Accounting combining academic and practitioners’
disciplines. Due to its experimental nature, it was important to interview experts from many
different relevant research fields. This research was multidisciplinary and internationally
oriented since local and top experts participated from the Dutch, Polish, and German
energy markets. However, more research is needed. Given the exploratory nature of this
study, this study was mainly about providing new insights to TCA literature, i.e., the
potential for big data and blockchain applications to cope with complexity, timeliness,
and accuracy.
A limitation might be that, among the respondents, one participant voiced a contrasting opinion, mentioning that TCA should include ethical considerations before "letting big data loose on it", while the other respondents showed enthusiasm about big data's potential for TCA. This might bias the results, and future research should engage more critical experts.
**5. Conclusions**
The study categorized the TCA challenges into complexity, accuracy, timeliness, and
a fourth group of challenges emerged under “implementation”. The study reviewed
the current use of big data, AI, and blockchain in accounting literature in answering the
research question: What is the current use of big data in management accounting?
The study explored an innovative idea of adopting IT to cope with the TCA challenges.
It used an innovative, multidisciplinary, and multinational approach to collect opinions
from a diverse group of relevant stakeholders, IT specialists, sustainability and energy
experts, and accountants in the European energy market; specifically the Netherlands,
Germany, and Poland. It showed ready-to-use technical feasibility of big data infrastructure
that measures the TCA impacts, analyses the data, identifies patterns, allocates costs to cost
objects, and reduces negative challenges. Simultaneously, it identified barriers concerning financing and the standardisation of TCA practices as issues to be solved before real adoption can start. Although blockchain technology enables creating protocols for
all types of energy production costs and assures secure, accurate data sharing between
all involved parties, the essential implementation throughout the whole chain, including
policy levels, was perceived as most challenging. The study contributes to the literature by categorizing the challenges in TCA application for energy production and presenting the readiness potential of big data, AI, and blockchain to tackle those TCA challenges. Furthermore, it reveals the need for cooperation between accounting and technical disciplines to enable the energy transition. Future research should further explore the implementation barriers, especially the cooperation aspects, and establish protocols for blockchain applications to ease the big data TCA application.
**Author Contributions: Conceptualization, J.G. and P.B.; data curation, J.G., P.B., A.B.-J. and S.J.;**
formal analysis, J.G. and P.B.; methodology, J.G., P.B. and S.J.; resources, J.G., P.B., A.B.-J. and S.J.;
visualization, J.G. and P.B; writing—original draft, J.G. and P.B., writing—review and editing, J.G., S.J.
and A.B.-J.; supervision, J.G., S.J. and A.B.-J.; funding acquisition, J.G., P.B. All authors have read and
agreed to the published version of the manuscript.
**Funding: This research received no external funding. We greatly appreciate the funding of this**
publication by Groningen Digital Business Center, SOM Graduate School of Faculty of Economics
and Business University of Groningen and Deloitte.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: The data presented in this study are available on request from the**
corresponding author.
**Acknowledgments: We would like to acknowledge the respondents in the current research for their**
contribution. We are indebted to the four reviewers of the earlier version of the article for their
feedback and constructive comments. We greatly appreciate the work of the editors of the final
version of this paper.
**Conflicts of Interest: The authors declare no conflict of interest.**
**Appendix A**
**Table A1. Description of the true costs in the cost price of energy generation.**
| Types of Costs | Description of the Cost |
|---|---|
| Installation costs | Capital costs encompass all investment, refurbishment, assembly, decomposing, and financing costs in an LCOE measure (Samadi, 2017) |
| Fuel costs | The price of the fuel used for the energy in the LCOE measure |
| Non-fuel operation and maintenance costs | Non-fuel operations encompass all fixed costs such as wages, insurance, equipment, and maintenance costs, and variable costs at the power plant, via an LCOE measure (Samadi, 2017) |
| Grid costs | The extra costs in the transmission and distribution system when power generation from a new plant is integrated into that system (Holttinen et al., 2011) |
| Balancing costs | The central system operator of the grid, who ensures stable operation of the energy supply and demand, manages the electrical systems to compensate for unplanned short-term fluctuations in electricity supply and demand by contracting sufficient reserves ahead of time (Samadi, 2017). This holding of reserves to deal with added flexibility to the grid is regarded as balancing costs (Mattman et al., 2016) |
| Profile costs | Additional specific capital and operational costs that energy production from a new plant may cause in the residual electricity system. The extra costs due to overproduction of renewable energy generation systems are considered profile costs (Samadi, 2017) |
| GHG emission costs | GHG emissions contribute to global warming and thus lead to damages for society in tackling climate change. The carbon cost for society is used here, reflecting the GHG emission in the energy generation process |
| Air pollution | The extraction, transportation, and conversion of fossil fuels release several forms of pollutants into the environment, such as SO2, NOx, NMVOC, NH3, fine particles, Cd, As, Ni, Pb, Hg, Cr, formaldehyde, and dioxin (Samadi, 2017). They affect air, water, and soil quality, which in turn affects the health of humans, crops, building materials, and the natural environment |
| Landscape and noise impacts | The welfare of people is affected by the visual appearance of the power plant, landscape changes, or the noise the power plant generates (Samadi, 2017). The valuation of properties may be negatively impacted after changes in the use of the land |
| Impacts on biodiversity | Impacts on ecosystems can take the form of damage to land, plant life, or animals. When the damage threatens the ability of a plant or animal species to survive, biodiversity may be reduced (Epstein et al., 2011) |
| Employment benefits | Employment creates economic and social benefits for employees, and the government bears less cost of unemployment |
| Upstream costs | Upstream costs result from the extraction of natural resources (Greenstone & Looney, 2012). Here, upstream activities for operating the power plant have been considered. Much energy is needed, and GHG is emitted, for the extraction of the resources and the production of the materials required for the power plants (Jensen, 2019). During the transport of the resources and the construction of the power plants, energy use and CO2 emission are inevitable |
| Downstream costs | The costs of the nonrecyclable components of the power plant, since the nonrecyclable waste streams may affect future generations (Shokrieh & Rafiee, 2020; Jensen, 2019) |
Source: [16,24,94–98]
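The first three cost types above are the classic ingredients of the LCOE measure. As a hedged illustration, the sketch below applies the standard textbook LCOE formula (discounted lifetime costs divided by discounted lifetime energy output) to invented figures; none of the numbers come from the study.

```python
# Sketch of the standard (textbook) LCOE calculation behind the installation,
# fuel, and operation/maintenance cost rows. All figures are illustrative.

def lcoe(capex, annual_opex, annual_fuel, annual_mwh, years, rate):
    """Levelised cost of energy in EUR/MWh."""
    disc_costs = capex        # capital outlay at year 0
    disc_energy = 0.0
    for t in range(1, years + 1):
        df = (1 + rate) ** -t                      # discount factor for year t
        disc_costs += (annual_opex + annual_fuel) * df
        disc_energy += annual_mwh * df
    return disc_costs / disc_energy

# Hypothetical small wind farm: 10 MEUR capex, no fuel cost, 20-year life,
# 250 kEUR/yr O&M, 26,000 MWh/yr output, 5% discount rate.
wind = lcoe(capex=10e6, annual_opex=250e3, annual_fuel=0.0,
            annual_mwh=26_000, years=20, rate=0.05)
```

True cost accounting then extends this base LCOE by adding the externality rows of the table (GHG, air pollution, biodiversity, and so on), each monetised per MWh.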
**Appendix B**
**Table A2. The list of interviewees participating in the research.**
| | Name | Respondent Field of Expertise | Date and Duration of the Interview |
|---|---|---|---|
| R1 | Florin Schürkens | Master student at University of Groningen who researched the German energy market | 04 September 2021: 45 min |
| R2 | Marco Aiello | Expert in application of big data and artificial intelligence, University of Stuttgart | 12 April 2021: 45 min |
| R3 | Jeroen Kuper | Expert on the application of IT in accounting and control, in the Netherlands | 13 April 2021: 1.5 h |
| R4 | Gideon Laugs | Expert in system integration in the energy market, Energy academy Groningen | 14 April 2021: 1 h 45 min |
| R5 | Victor Ipekoglu | Master student at University of Groningen who researched the German energy market | 17 April 2021: 45 min |
| R6 | Ruben Bour | TCA expert, Deloitte Netherlands | 28 April 2021: 35 min |
| R7 | Harmen-Sytze de Boer | Expert in modelling of climate change at Planbureau voor de Leefomgeving (PBL) in the Netherlands | 29 April 2021: 1 h 5 min |
| R8 | Dick de Waard | Prof of Accountancy, University of Groningen, Netherlands | 11 May 2021: 45 min |
| R9 | Anonymous | Expert on blockchain application in the Dutch energy market | 12 May 2021: 30 min |
| R10 | Elly Reinierse | Expert on evaluation of social impacts of mining activities around the globe, The Hague | 13 May 2021: 1 h 30 min |
| R11 | Maciej Maciejowski | Expert, implementer in IT and big data, PlanBe Poland | 13 May 2021: R11, R12, and R13 were interviewed together in an expert discussion session of 1 h 30 min in total |
| R12 | Agnieszka Maciejowska | Expert, implementer in IT marketing, PlanBe Poland | (see R11) |
| R13 | Justyna Wojcik | Expert in carbon footprint and sustainability, PlanBe Poland | (see R11) |
| R14 | Anonymous | Wind turbine owners from northern Poland | 15 June 2021: 5 h |
| R15 | Anonymous | A manager from a company dealing with photovoltaic installation in the southern part of the Masovian Voivodeship | 25 June 2021: 2 h 15 min |
| R16 | Anonymous | The energy industry CEO of a large company dealing in energy production, a manager in the energy industry with 25 years of experience | 12 July 2021: 2 h 30 min |

Source: own study.

**Appendix C**
The cost price calculation from Table 2 in the text was central to discussing costs and exploring how to arrive at better cost price calculations. Tables 2 and A1, exhibited in the text, were sent to the respondents in advance together with the interview guide. The infographic served as an icebreaker and a brief explanation of the TCA concept to the energy and IT experts.
**Figure A1. Infographic overview of externalities in energy production.** The infographic shows energy production with coal and wind, surrounded by its externalities: greenhouse gas emission, loss of biodiversity, air pollution, upstream impact of materials, overproduction costs, grid extensions, balancing costs, subsidies, landscape impacts, taxation, and noise impacts.
Thank you for making time for me. I really appreciate it. I want to briefly introduce you to my research topic. Last year I made a true cost price calculation of energy to see how sustainable energy generation really is. I wanted to include all greenhouse gas emission impacts, air pollution impacts, and landscape impacts to provide a full overview in order to make the comparison between wind and coal energy generation. However, last year I found out that the measurement and valuation of those impacts is challenging and requires expertise from many disciplines, not just accounting, which is my field. In the energy sector, many impacts on stakeholders can be identified. An overview of all the impacts has been shared with you via e-mail. The current research aims to explore how big data, AI, and maybe blockchain can strengthen the true cost estimations we conducted previously. In the infographic, you see an overview of the impacts of energy generation: the impacts on the air, nature, the mining areas, the land, society, and financing. With that in mind, I wanted to ask you some questions. So let's start.
1. Complexity
To what degree do you think that energy prices do cover external impacts of energy
production?
- If not, why do you think that is the case or what is the bottleneck?
- Where do you think the complexity comes from?
- How do you think current energy prices are determined? What influence does
the market, regulation and subsidies have?
What do you know about the impacts of energy generation on:
a. Biodiversity
b. GHG emission
c. Air pollution
d. Landscape and noise impacts.
e. Upstream impacts of all materials used in the process of energy generation
f. System impacts
g. Subsidies and taxation
- Consequently, what do you know of the measurement/quantification of those
impacts (a–g)
- If the respondent does not know anything on the measurement of the impacts,
ask: where would you start in trying to measure the impacts?
- To what degree do you think that is difficult? Do you experience complexity in the sense that there are different metrics and units?
- What would be the ideal situation to measure those impacts? (e.g., what variables
do you need?)
- If you had to value the impacts, where would you start? (e.g., Do you use market
values? Do you look at the cost of avoidance? Do you look at the costs needed
to restore the damage? Do you look at all the different outputs in the lifecycle
assessment and try to attach a value to it?)
What do you know of big data? In what fields?
- TCA requires input from experts of many disciplines, and large numbers of
upstream and downstream processes need to be tracked. How can big data help
in reducing the complexity?
- When applying big data to measure the impacts of energy production, we need a lot of data points in order to be able to determine what processes in energy production lead to what impacts and what costs. Where would you start?
- What information do you need? (e.g., data on actual costs, quantities of elements,
conversion of costs, time periods, quality, technical parameters, etc.)
- Where to find that data or what institutions are available in your country that
measure most of the information.
- Big data is often unstructured. How to make different units of measurement
comparable? What techniques are there available to integrate all dimensions into
one single monetary unit?
- Big data can be used to find correlations or forecast costs. How can big data make
estimations of the true cost, for example of 1 ton of CO2 emission, better?
- How would you determine the causality between certain activities and impacts
(e.g., How do you assign air pollution due to energy production for example to
health? What variables and what correlations do you need?)
- How can big data help in valuing the impact of energy production on climate
change, air pollution, biodiversity loss, landscape and noise impacts, subsidies,
upstream impacts, system impacts?
- How to make sense of those different units of measurement? How can big data
help and what techniques are available to compare or integrate the different units
(e.g., use of ratio scales in performance measurement?)
Are you familiar with big data and Artificial intelligence?
- What do you know of AI?
- In what fields and circumstances?
- What role can AI play in reducing the complexity of TCA we just discussed?
2. Accuracy
To what degree do you think that subjectivity exists in the valuation of the externalities?
- How do you think that is possible?
- Where does this subjectivity come from?
- In order to assign impacts to energy generation there should be insight in what
emission lead to what climate costs and what air pollution lead to what health
costs. So there should be an identification of cause and effect relations. How
would you identify such cause and effect relations? What processes lead to what
impacts and to what costs?
- When you look for example at biodiversity, biodiversity is vital for us as human
and all the things we grow, it shows that it is difficult to assign a value to the
biodiversity services. Can big data or AI play a role in reducing the difficulty?
- What implication can big data have on the cost estimation and its subjectivity?
How would the impact of big data on that estimation work?
- How can big data and AI contribute? (e.g., focus on prediction of costs? Identification of patterns and cause- and effect chains? Classification of costs?)
- How can big data provide insight in those cause and effect relationships between
for example GHG emission costs and climate change, air pollution and health
costs/ loss on crops, placement of a power plant and the noise and landscape
impacts? Power plant interferences on biodiversity?
Are you familiar with blockchain?
- What do you know of blockchain?
- How can blockchain be useful to make sure that the data is accurate?
3. Timeliness
Do you think it is possible to have real time insight in the impacts of energy production?
- What about the availability of all the data measurement points as discussed earlier?
- To what degree is data on biodiversity, GHG emission, air pollution, landscape and noise impacts and subsidies and system impacts available in a real
time manner?
- What needs to happen in order to have real time insight in those impacts? (e.g.,
does it require a whole paradigm shift in measurement?)
- To what degree is it the same for all types of impacts of energy production? (e.g.,
is there a differences between the loss on biodiversity, air pollution costs, GHG
emission costs, Landscape and noise impacts and subsidies?)
How can big data, AI, or blockchain help in providing real-time measurements?
- How can those real-time measurements be linked to real-time valuation techniques to obtain a real-time true cost price calculation?
- Can it be linked to an external database that contains the valuation of a unit of output from the production?
- If you see this model of calculating a true cost price with the help of big data and
other technological tools evolving, where might we stand in about 10 years?
Those are all the questions I have for you today. I really want to thank you for your time. I think it was really interesting and helpful to get insight into your ideas about how to measure the sustainable performance of energy production. I can definitely move
forward with this. Do you have any questions remaining? Or do you want to come back
on anything? I will type out the transcript of this interview and I will send it to you so that
you are able to determine whether you agree with it.
},
{
"paperId": "06117c5d52a397aeef9c2a1da2943f37d76b0719",
"title": "The Basics Of Social Research"
},
{
"paperId": "7dbcef3408b36ebeb7ff104930051ce5e900c662",
"title": "Social and environmental accounting: A practical demonstration of ethical concern?"
},
{
"paperId": "32f786bdb0e0c929b5de4e4d8d2d641effbe5285",
"title": "Advanced Management Accounting"
},
{
"paperId": null,
"title": "True Cost Accounting in Wind Energy and Coal-Fired Energy Generation in the Dutch Energy Market; University of Groningen: Groningen, The Netherlands, 2020"
},
{
"paperId": "9e8b4f2e73c86a041d690ccd2318f023f3cfa484",
"title": "Fatigue life prediction of wind turbine rotor blades"
},
{
"paperId": "15f50616676e8c30f100e1628f91fa1b6d15fcf5",
"title": "Energy and Climate Policy—An Evaluation of Global Climate Change Expenditure 2011–2018"
},
{
"paperId": "cdaa315b0e125fce7aa5e22d08cec2570fb011ce",
"title": "Modernization and Accountability in the Social Economy Sector"
},
{
"paperId": "437f9ccd3d37bda5fdef7b10ee3a887d680226f6",
"title": "Social Accounting in the Social Economy"
},
{
"paperId": "40f23253f76eac8e64265e2ee9302c0388affec0",
"title": "Circular Economy: Overview of Barriers"
},
{
"paperId": "e3d585647c92793c2adf6a4f8d9208e9fbebc6a2",
"title": "Towards Reporting for Sustainable Development"
},
{
"paperId": "f5609bec121f26cd2b05e2213f0297c8e79f48d5",
"title": "Toward Sustainability : Using Big Data to Explore Decisive Attributes of Supply Chain Risks and Uncertainties"
},
{
"paperId": null,
"title": "How Green is My Energy Big Factor in Fuel Choice"
},
{
"paperId": null,
"title": "The Hidden Potential Overhead Cost Reduction: A Study in European Countries. Cost Manag"
},
{
"paperId": "d92aa71424636381a1025fbfa6061285d1d347b6",
"title": "Activity-based standard cost variance analysis"
},
{
"paperId": null,
"title": "Social Science Research: Principles, Methods, and Practices, 2nd ed.; University of South Florida"
},
{
"paperId": null,
"title": "Four strategies to capture and create value from big data"
},
{
"paperId": "d8713ca787f47d7262266cd737a47bd6645c51a7",
"title": "Structuring sustainability science"
},
{
"paperId": "322f20152d3a40be0dd592d8ee69a70ce150d153",
"title": "Regression-based prediction methods for adjusting construction cost estimates by project location"
},
{
"paperId": "b0589ef5facc6a924e0f7400d3f3fdbd24145975",
"title": "Is accounting for sustainability actually accounting for sustainability…and how would we know? An exploration of narratives of organisations and the planet"
},
{
"paperId": "b90c71737e063b9580ff57d136215c4101194618",
"title": "DATA MINING TECHNIQUES"
},
{
"paperId": "9a6418e6ca1e6ad90a2cc74e37a4a6045a9c173d",
"title": "Annals of the New York Academy of Sciences Full Cost Accounting for the Life Cycle of Coal Full Cost Accounting for the Life Cycle of Coal in \" Ecological Economics Reviews. \""
},
{
"paperId": null,
"title": "Renewable Energy Statistics . 2020"
},
{
"paperId": null,
"title": "The Problem of Lagging Data for Development — and What to Do about It . 2020"
}
] | 24,563
|
en
|
[
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02e2ef25a214b0c38216bb8d59a91c4cee3b488a
|
[] | 0.851443
|
Design of Decentralized Hybrid Microgrid Integrating Multiple Renewable Energy Sources with Power Quality Improvement
|
02e2ef25a214b0c38216bb8d59a91c4cee3b488a
|
Sustainability
|
[
{
"authorId": "30897369",
"name": "J. Jayaram"
},
{
"authorId": "145339647",
"name": "M. Srinivasan"
},
{
"authorId": "101717808",
"name": "N. Prabaharan"
},
{
"authorId": "1750061",
"name": "T. Senjyu"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://mdpi.com/journal/sustainability",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127"
],
"id": "8775599f-4f9a-45f0-900e-7f4de68e6843",
"issn": "2071-1050",
"name": "Sustainability",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127"
}
|
Due to the energy crisis and exhaustion in the amount of fossil fuels left, there is an urge to increase the penetration of renewables in the grid. This paper deals with the design and control of a hybrid microgrid (HMG) in the presence of variable renewable energy sources. The DC sub-grid consists of a permanent magnet synchronous generator (PMSG) wind turbine, solar PV array with a perturb-and-observe (P&O) MPPT algorithm, boost converter, and battery energy storage system (BESS) with DC loads. The AC sub-grid consists of a PMSG wind turbine and a fuel cell with an inverter circuit synchronized to the grid to meet its load demand. A bidirectional interlinking converter (IC) connects the AC sub-grid and DC sub-grid, which facilitates power exchange between them. The decentralized control of converters allows all the renewables to operate in coordination independently without communication between them. The proposed control algorithm of the IC enables it to act as an active power filter in addition to the power exchange operation. The active power filtering feature of the IC helps to retain the power quality of the microgrid as per IEEE 519 standards by providing reactive power support and reducing the harmonic levels to less than 5%. The HMG with the proposed algorithm can operate in both grid-connected and islanded modes. While operating in grid-connected mode, power exchange between DC and AC sub-grids takes place and all the load demands are met. If it is in islanded mode, a diesel generator supports the AC sub-grid to meet the critical load demands and the BESS supports the DC microgrid. The proposed model is designed and simulated using MATLAB-SIMULINK and its results are analyzed. The efficacy of the proposed control is highlighted by comparing it with the existing controls and testing the HMG for load variations.
|
## sustainability
_Article_
# Design of Decentralized Hybrid Microgrid Integrating Multiple Renewable Energy Sources with Power Quality Improvement
**Jayachandran Jayaram** [1], **Malathi Srinivasan** [1,]*, **Natarajan Prabaharan** [1,]* and **Tomonobu Senjyu** [2,]*
1 School of Electrical & Electronics Engineering, SASTRA Deemed University, Thanjavur 613 401, India; jj_chandru@eee.sastra.edu
2 Department of Electrical and Electronics Engineering, University of the Ryukyus, Okinawa 903-0213, Japan
* Correspondence: jj_mals@eee.sastra.edu (M.S.); prabaharan.nataraj@gmail.com (N.P.); b985542@tec.u-ryukyu.ac.jp (T.S.); Tel.: +91-9600441614 (M.S.); +91-9750785975 (N.P.)
**Citation:** Jayaram, J.; Srinivasan, M.; Prabaharan, N.; Senjyu, T. Design of Decentralized Hybrid Microgrid Integrating Multiple Renewable Energy Sources with Power Quality Improvement. _Sustainability_ **2022**, _14_, 7777. [https://doi.org/10.3390/su14137777](https://doi.org/10.3390/su14137777)
Academic Editors: José Luis Domínguez-García and George Kyriakarakos
Received: 24 February 2022; Accepted: 21 June 2022; Published: 25 June 2022
**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license ([https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)).
**Abstract: Due to the energy crisis and exhaustion in the amount of fossil fuels left, there is an urge**
to increase the penetration of renewables in the grid. This paper deals with the design and control
of a hybrid microgrid (HMG) in the presence of variable renewable energy sources. The DC sub-grid consists of a permanent magnet synchronous generator (PMSG) wind turbine, solar PV array
with a perturb-and-observe (P&O) MPPT algorithm, boost converter, and battery energy storage
system (BESS) with DC loads. The AC sub-grid consists of a PMSG wind turbine and a fuel cell with
an inverter circuit synchronized to the grid to meet its load demand. A bidirectional interlinking
converter (IC) connects the AC sub-grid and DC sub-grid, which facilitates power exchange between
them. The decentralized control of converters allows all the renewables to operate in coordination
independently without communication between them. The proposed control algorithm of the IC
enables it to act as an active power filter in addition to the power exchange operation. The active
power filtering feature of the IC helps to retain the power quality of the microgrid as per IEEE
519 standards by providing reactive power support and reducing the harmonic levels to less than
5%. The HMG with the proposed algorithm can operate in both grid-connected and islanded modes.
While operating in grid-connected mode, power exchange between DC and AC sub-grids takes place
and all the load demands are met. If it is in islanded mode, a diesel generator supports the AC
sub-grid to meet the critical load demands and the BESS supports the DC microgrid. The proposed
model is designed and simulated using MATLAB-SIMULINK and its results are analyzed. The
efficacy of the proposed control is highlighted by comparing it with the existing controls and testing
the HMG for load variations.
**Keywords: decentralized control; hybrid microgrid; renewable energy; power quality; grid-connected;**
islanded; BESS; diesel generator
**1. Introduction**
In the world of an increasing energy crisis, to provide community resiliency, reliability,
and stability; lower the cost of energy; and promote clean energy for a safe environment,
it is essential to find a limitless source of energy. The exhaustive utilization of fossil
fuels has led to global warming and many environmental issues [1,2]. Thus, research is
carried out on the effective utilization of renewable resources for generating clean and
environmentally friendly energy. In addition, many remote locations have intermittent
supply from the grid. Yet many of those areas are abundant in renewable energy sources
(RES) like wind, solar, biomass, and hydro [3]. Thus, it is better to integrate renewable
sources as distributed generators (DG) in those places to reduce dependency on the grid
bypassing the transmission systems. The recent advancement in power electronics has
resulted in various types and scales of DC and AC loads connected to the power system [4].
All these scenarios encouraged the research on microgrids (MG), for interconnection to the
_Sustainability 2022, 14, 7777_ 2 of 28
grid and to meet the local energy demands [5]. As the grid is an intermittent source, it is
important for a microgrid to seamlessly switch between islanded mode and grid-connected
mode [6,7]. Proper controllers help the DG units to operate efficiently in both islanded and
grid-connected modes. When integrating various DG units in a system, all the DG units
must operate synchronously to maintain the stability of the system.
Various control algorithms are available for the coherent operation of DG units. They
are broadly classified as centralized and decentralized. In a centralized control scheme, all
the DG units in a system are controlled by a microgrid centralized controller (MGCC) [8].
The MGCC acts as a secondary controller that commands the individual primary controller
of each DG. In this method, a communication link should be established between the
MGCC and each controller. This method of control suffers from single-point failure issues
and does not support plug-and-play technology. In a decentralized control scheme, all the
DG units in the system operate independently without a secondary master controller. In the
decentralized control method peer-to-peer interaction takes place and controllers operate
effectively with local measurements themselves. This mode of the controller does not suffer
from a single-point-of-failure issue and also supports plug-and-play technology [9]. In DC
MG the voltage of the DC bus is the key indicator, and the controllers of DC MG DG units
are designed to maintain the DC bus voltage at a reference point [10]. Similarly, in AC MG,
the controllers are designed to operate in synchronization with the voltage and frequency
reference signals [11–13].
As the conventional power system adopts AC due to its transmission and distribution
advantages, implementing AC microgrids with AC DG units is very easy. However, many
DG units, such as solar, fuel-cell, and energy storage devices, are DC in nature. Moreover,
increasing DC loads led to the development of the DC microgrid. Since both AC and
DC MGs have their advantages, a hybrid microgrid (HMG) combines the advantages of
both [14]. The HMG constitutes three main elements: (i) DCMG, (ii) ACMG, and (iii) an
interlinking converter (IC) [15]. Interconnecting AC and DC MG through a power electronic
converter results in an HMG. In an HMG, a bidirectional AC-DC IC interconnects DC and
AC sub-grids [12,16]. This IC supports power exchange between the AC and DC sub-grids, allowing us to integrate various types of DG units and loads with an energy storage
facility. Different structures and control strategies of ICs are developed by researchers
to improve the performance and power rating [15,17]. Since renewables are uncertain,
they are combined with other conventional energy sources and/or energy storage systems.
Generally, conventional diesel generators are used in AC sub-grids as backup during
the islanded condition and low renewable generation. The voltage and frequency of the
microgrid in the standalone mode can be maintained within the prescribed safe zone limit
at the lowest cost by adopting a suitable voltage frequency management technique [18]. In
DC sub-grids, battery energy storage systems (BESS) are installed to store the power during
excess generation and utilize it later [19]. Loads of various types, such as linear, non-linear,
balanced, and unbalanced, are connected to the system [20].
The recent advancements in power electronics resulted in the increased usage of
converters in the MG system. Thus, the microgrid suffers from serious power-quality
issues, for instance, low power factor, harmonics, voltage unbalance (sag–swell), etc. [21].
According to the IEEE-519 standards [22], the total harmonic distortion (THD) should
be sustained at less than 5% and the voltage unbalance factor within 2. Custom power
devices like active power filters (APFs) [23], dynamic voltage restorers, unified power flow
controllers, STATCOMs [24], and series compensators play a crucial task in maintaining
the power quality of the system [25]. An appropriate control algorithm for PE converters
will reduce the harmonics injection, but additional compensation devices are required for
mitigating harmonics due to non-linear loads [26]. APF is a widely used custom power
device in the distribution system for mitigating harmonics. Various topologies and control
strategies are available for this device, but this additional device increases the cost of
the overall system [27,28]. New concepts suggest that instead of installing additional
power-conditioning equipment, a modified control algorithm of inverter-based DG units
allows it to provide power-quality services in addition to its fundamental power transfer.
One such concept in a hybrid microgrid system is virtual APF. By modifying the control
algorithm of IC between the AC and DC sub-grid, it can act as a shunt APF along with
its power-exchange operation [10,29]. The IC performs its basic operation of fundamental
power exchange between DC and AC sub-grids, along with that it also virtually acts as an
APF by providing compensation for harmonics and maintaining the system at the unity
power factor by providing reactive power support.
There is an increased interest in the usage of renewable energy sources, particularly
solar and wind, as they render electricity free of pollution. There are several research
studies that analyze the problems related to the integration of wind and solar into the
grid. Economic analysis and the impact of the integration of renewable energy sources
on the existing and future smart power system for subtropical climates can be studied
through the software Hybrid Optimization for Electric Renewable (HOMER) [30]. The
integration of RES into the HMG involves several power electronic converters in the
system. Because of this, the power quality of the grid is degraded. The power quality of
the microgrid can be improved by installing optimized STATCOM and energy-storage
elements [31]. The interfacing of the solar photovoltaic array in the AC microgrid also
introduces power-quality issues in the grid. With the design of a suitable control strategy for
the interfacing PV inverter, the power quality of the grid can be improved under non-linear
load conditions [32]. The partially shaded solar photovoltaic cells have multiple peaks in
their power-to-voltage characteristics. Thus, an improved optimization technique is much
needed to extract the global peak instead of the local peak. To capture the global peak at
enhanced explorations, the optimization algorithm requires a greater number of search
agents at the initial stage and a smaller number of search agents at the final stage. Most of
the conventional optimization algorithms do not fulfill the above requirements. A musical
chair algorithm is proposed in [33] for the MPPT of PV systems where the convergence
time, failure rate, and steady-state oscillations are lower compared to other conventional
optimization techniques.
In this paper, a solar PV with an MPPT, a PMSG wind turbine, and a BESS constitute
the DC sub-grid, and the integration of a PMSG wind turbine, fuel cell, and diesel generator
establishes the AC sub-grid. Decentralized control is proposed for the integration and
efficient coordination of various DG units installed in the system. An interlinking converter
connected between the DC and AC microgrids supports the exchange of power among the
sub-grids. With the proposed control algorithm, the IC acts as virtual APF to mitigate the
power-quality issues and offer reactive power support for AC loads. The control technique
also monitors the seamless switching between grid-connected and islanded mode with an
uninterrupted power supply during the standalone mode. The efficacy of the proposed
control is highlighted by comparing it with the existing controls in Table 1. The main
contributions of this paper are summarized as:
**Table 1. Performance comparison of control techniques.**
**Control Strategies**
**Conditions** **Proposed** **Ref. [34]** **Ref. [35]** **Ref. [36]** **Ref. [37]** **Ref. [38]**
Support of DC voltage yes No yes yes No yes
Support of AC voltage yes yes yes yes yes No
Frequency deviation yes No No No No No
Continuous operation of
No yes yes yes yes yes
voltage sources
Implementation to parallel
yes No No yes yes yes
interlinking converter
Seamless operation
yes No No No No No
between grids
Power-quality
yes No yes No No No
Improvement
_•_ Designing a control technique for an interlinking converter for efficient power-sharing among the AC and DC microgrid and power-quality improvement. The proposed control effectively coordinates the power exchange between the AC and DC hybrid microgrid.
_•_ Integration and efficient utilization of renewable energy sources by the superior operation friendliness of the AC and DC microgrids.
_•_ The proposed control supports the bidirectional power flow between DC and AC microgrids without much deviation in the frequency and a seamless transition between grid-connected and islanded mode with minimal dependence on additional sources.
The microgrid model with the above-stated features is designed and simulated using
the MATLAB/SIMULINK environment, and the results are analyzed. This paper has the
following sections. In Section 1, the introduction to the topic and the literature survey are
discussed. In Section 2, the configuration of the microgrid and its design are explained.
In Section 3, the control scheme of the various DGs used is elaborated on. In Section 4,
the performance of the system under various load conditions is analyzed based on the
simulation results, and Section 5 briefs the conclusion and future work.
**2. Microgrid Configuration**
The schematic diagram of the proposed HMG model is shown in Figure 1. The DC
sub-grid consists of a solar PV with converter, a PMSG wind turbine with converter, a BESS
with converter, and DC loads that are connected to a prevailing DC bus. The AC sub-grid
consists of a fuel cell with a converter, a PMSG wind turbine with converter, AC loads,
and a diesel generator, which are connected to the point of common coupling (PCC) of a
three-phase AC bus. The AC sub-grid is also connected to the three-phase utility power
grid through a static transfer switch (STS). Both the DC and AC sub-grids are connected
through an IC. The DC bus of the DC sub-grid and the DC link of the IC are connected,
and the AC side of the IC is connected to the AC bus PCC through a coupling inductor. A
three-phase ripple filter is connected to the PCC for filtering current and voltage ripple.
**Figure 1. Schematic diagram of the hybrid microgrid.**
_2.1. Load_
On the DC side, a variable load that varies between 10 kW and 25 kW at 700 V is
connected to the common DC bus. Three single-phase non-linear loads varying between
74 kW +12 kVAr and 50 kW + 2.5 kVAr at 415 V, 50 Hz, are connected at the PCC of the
AC sub-grid.
A 10 kW load on the DC side and 50 kW + 2.5 kVAr on the AC side are considered
critical loads. The DC sub-grid reference voltage is set to 700 V and the AC sub-grid
reference frequency and voltage are set to 50 Hz and 415 V, respectively.
_2.2. PV Array Design_
The PV array is designed for a rated power of 7 kW. The technical specifications of
the PV module are given in Table 2. Based on the technical specifications, the number of
strings and the number of modules are calculated [19].
_NS = Vdc / Voc = 350 / 64.2 = 5.45 ∼ 6_ (1)

_NP = (Pmp / Vdc) / Imp = (7000 / 350) / 5.58 = 3.58 ∼ 4_ (2)
where Voc, Imp, and Pmp represent the open-circuit voltage, maximum current, and maximum power of the PV module, respectively. Thus, six modules are connected in a series to form a string. Four strings are connected in parallel to obtain a power of 7 kW with a maximum voltage of 328.2 V (54.7 × 6 = 328.2).
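As a quick numerical check of Eqs. (1) and (2), the sizing can be reproduced in a few lines; the helper below is an illustrative sketch using the paper's values, not code from the paper.

```python
import math

def pv_array_sizing(v_dc, p_mp, v_oc, i_mp):
    """Series modules per string (Eq. (1)) and parallel strings (Eq. (2)),
    rounded up to whole modules/strings as in the paper."""
    n_s = math.ceil(v_dc / v_oc)           # 350 / 64.2 = 5.45 -> 6
    n_p = math.ceil((p_mp / v_dc) / i_mp)  # (7000 / 350) / 5.58 = 3.58 -> 4
    return n_s, n_p

n_s, n_p = pv_array_sizing(v_dc=350, p_mp=7000, v_oc=64.2, i_mp=5.58)
v_string_max = 54.7 * n_s  # string voltage at the maximum power point, 328.2 V
```

With the Table 2 module data this yields 6 series modules and 4 parallel strings, matching the 7 kW array.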
**Table 2. Technical specifications of the solar PV array.**

| Parameter | Value |
|---|---|
| Model | SunPower SPR-305WHT |
| Number of cells (Nc) | 96 |
| Open-circuit voltage (Voc) | 64.2 V |
| Short-circuit current (Isc) | 5.96 A |
| Voltage at maximum power point (VMP) | 54.7 V |
| Current at maximum power point (IMP) | 5.58 A |
| No. of series modules per string (NS) | 6 |
| No. of parallel strings (NP) | 4 |
| Maximum power extractable (Po) | 7 kW |
_2.3. Boost Converter Design_
To extract maximum power from the PV array, an MPPT based boost converter is
incorporated. A perturb-and-observe (P&O) algorithm-based boost converter is utilized
to obtain the maximum power and reference voltage. The explanation for P&O is given
in Section 3.1. The inductance of the boost converter is designed based on the current
ripple, output DC voltage, and switching frequency [19]. Generally, 10–20% of the current
is considered a ripple.
_Lboost^MPPT = (Vout − Vin)(Vin/Vout) / (∆IPV × f)_ (3)

_Lboost^MPPT = (700 − 328.2)(328.2/700) / (4.46 × 10,000) = 3.9 mH_
where Vin represents the PV output voltage at maximum power condition; “f ” represents
the switching frequency of the boost converter, which is considered 10 kHz; and ∆IPV
represents the ripple current.
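The 3.9 mH result of Eq. (3) can be verified directly; the one-line helper below is an illustrative check with the paper's values.

```python
def mppt_boost_inductor(v_in, v_out, ripple_a, f_sw):
    """Boost inductor from Eq. (3): L = (Vout - Vin)(Vin/Vout) / (dI * f)."""
    return (v_out - v_in) * (v_in / v_out) / (ripple_a * f_sw)

# Vin = 328.2 V (PV at MPP), Vout = 700 V, ripple 4.46 A, f = 10 kHz
l_boost = mppt_boost_inductor(v_in=328.2, v_out=700.0, ripple_a=4.46, f_sw=10_000)
print(f"{l_boost * 1e3:.1f} mH")  # 3.9 mH
```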
_2.4. PMSG Wind Turbine_
The power from wind is harnessed by converting it into torque. The kinetic energy of
wind drives the blades of the wind turbine, which produces torque. This torque is used to
drive the rotor shaft of the generator to produce electric power. Wind turbines use various
types of generators. PMSG is the most commonly used generator. In the PMSG machine,
the rotor is made up of a permanent magnet that excites the field. The stator produces
three-phase AC, which is converted to DC by a diode bridge rectifier. The DC is converted
back to AC and synchronized to the grid by utilizing a grid-side inverter. The mechanical
power extractable from the wind is given by [39]:
_Pm = 0.5 ρ A Cp(λ, β) Vw³_ (4)
where Pm is the mechanical power extractable from the wind, ρ is the air density, A is the
rotor-swept area, Vw is the speed of the wind, and Cp(λ,β) is the coefficient of power, a
function of λ,β (tip-speed ratio, pitch angle). The wind turbine is designed for 12 kW at a
nominal wind speed of 12 m/s.
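Eq. (4) can be evaluated directly. In the sketch below, ρ is standard sea-level air density, while the swept area and power coefficient are assumed values (the paper does not state them) chosen only to illustrate roughly 12 kW at the 12 m/s nominal speed.

```python
def wind_mech_power(rho, area, cp, v_wind):
    """Mechanical power from wind, Eq. (4): Pm = 0.5 * rho * A * Cp * Vw^3."""
    return 0.5 * rho * area * cp * v_wind ** 3

# rho = 1.225 kg/m^3 (sea level); area and cp are illustrative assumptions.
p_m = wind_mech_power(rho=1.225, area=28.0, cp=0.40, v_wind=12.0)
```

Note the cubic dependence on wind speed: halving the wind speed cuts the mechanical power by a factor of eight.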
_2.5. Fuel Cell_
A fuel cell is an electrochemical cell that converts chemical energy into electrical energy.
It utilizes H2 and O2 as fuel. The reaction of hydrogen and oxygen between the anode and
cathode produces electric power along with heat and water. The fuel-cell output voltage is
given by [40]:
_VFC = Efc − ηact − ηohm − ηcon_ (5)
where VFC is the output voltage of the fuel cell, Efc is the internal voltage of the fuel cell,
_ηact is the fuel cell voltage drop due to activation, ηohm is the voltage drop due to ohmic_
polarization, and ηcon is a voltage drop due to concentration polarization. The power
produced by the fuel cell is given by:
_PFC = N0VFC_ _IFC_ (6)
Here, PFC is the power produced by a stack of fuel cells, N0 is the number of cells in
the stack, and IFC is the stack current. A fuel cell of 30 kW at 350 V is used in this work.
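Eqs. (5) and (6) compose as follows; the operating point below is hypothetical (500 cells at 0.7 V/cell gives the 350 V stack, and about 85.7 A gives the rated 30 kW), since the paper states only the rating.

```python
def fuel_cell_output(e_fc, eta_act, eta_ohm, eta_con, n_cells, i_fc):
    """Eqs. (5)-(6): cell voltage after the three loss terms, then stack power."""
    v_fc = e_fc - eta_act - eta_ohm - eta_con   # Eq. (5), per cell
    p_fc = n_cells * v_fc * i_fc                # Eq. (6), whole stack
    return v_fc, p_fc

# All loss terms below are hypothetical illustrative values.
v_fc, p_fc = fuel_cell_output(e_fc=1.0, eta_act=0.20, eta_ohm=0.05,
                              eta_con=0.05, n_cells=500, i_fc=85.7)
```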
_2.6. Inverter_
To integrate the DC output of the wind turbine and fuel cell, a grid-side inverter is
used. The DC link voltage of the inverter is given by [19]:
_VDC = 2√2 VLL / (√3 m) = (2√2 × 415) / (√3 × 1) = 678 ∼ 700 V_ (7)
where VLL is the RMS line voltage and “m” is the modulation index.
_CDC = (PDC / VDC) / (2 ω VDC−Ripple)_ (8)
where CDC is the DC link capacitor. For 20% of the voltage ripple, the DC link capacitance
for the fuel cell and wind turbine generator is considered 5000 µF and 2000 µF, respectively.
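Eqs. (7) and (8) can be sketched as below; this is an illustrative check, and the power and ripple-fraction arguments of the capacitance helper are example values rather than the paper's exact design inputs.

```python
import math

def dc_link_voltage(v_ll, m):
    """Eq. (7): VDC = 2*sqrt(2)*VLL / (sqrt(3)*m)."""
    return 2 * math.sqrt(2) * v_ll / (math.sqrt(3) * m)

def dc_link_capacitance(p_dc, v_dc, f_grid, ripple_fraction):
    """Eq. (8): CDC = (PDC/VDC) / (2*omega*VDC_ripple)."""
    omega = 2 * math.pi * f_grid
    return (p_dc / v_dc) / (2 * omega * ripple_fraction * v_dc)

v_dc = dc_link_voltage(v_ll=415, m=1)            # ~678 V, rounded up to 700 V
c_dc = dc_link_capacitance(30_000, 700, 50, 0.2)  # example 30 kW, 20% ripple
```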
_2.7. Buck—Boost Converter_
The terminal voltage of the battery is less than the DC bus voltage; thus,
a buck–boost converter is used for the step-up and step-down of the voltages. In addition, for proper charging and discharging of the battery, a suitable controller is to be
embedded with it. During charging of the battery, the converter is used to buck the DC
bus voltage to the battery terminal voltage. For the discharging operation, the converter
boosts the terminal voltage of the battery to match the DC bus voltage. The charging and
discharging operations are based on the DC bus voltage level and SoC of the battery. The
inductance of the buck–boost converter is designed based on the current ripple, output
DC voltage, and switching frequency. Generally, 10–20% of the current is considered a
ripple. For buck mode, the inductor value is selected based on Equation (9), and for boost
mode, its value is calculated as per Equation (10). The largest of these inductance values is
selected for the design.
_Lbuck > (Vin − Vout)(Vout) / (∆IPV × f × Vin,max × Iout)_ (9)

_Lboost > (Vout − Vin)(Vin,min) / (∆IPV × f × Iout × Vout²)_ (10)
The inductor value is set as 3 mH.
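The mode-selection logic described for the buck-boost charge controller (buck to charge when the bus has surplus, boost to discharge when it sags, gated by battery SoC) can be sketched as follows; the voltage band and SoC limits are illustrative assumptions, not values from the paper.

```python
def bess_converter_mode(v_bus, soc, v_ref=700.0, band=5.0,
                        soc_min=20.0, soc_max=90.0):
    """Select buck (charge), boost (discharge), or idle for the battery
    converter from the DC bus voltage and battery SoC (Sections 2.7-2.8)."""
    if v_bus > v_ref + band and soc < soc_max:
        return "buck"    # surplus on the DC bus: step down and charge
    if v_bus < v_ref - band and soc > soc_min:
        return "boost"   # deficit on the DC bus: step up and discharge
    return "idle"        # bus within band, or SoC limit reached
```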
_2.8. BESS_
The microgrid is designed to store the excess power generation and serve a DC critical
load of 10 kW for up to 6 h without any generating source [19]:
_Ah = (Pg − Pl) t / Vb = (17,000 − 10,000) × 6 / 350 = 120_ (11)
where Pg is the maximum generation from DC DG units, Pl is the critical load, and Vb
is the battery terminal voltage. A buck–boost charge controller is used for charging and
discharging the battery, which is connected to a 700 V DC bus.
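The ampere-hour sizing of Eq. (11) is a single arithmetic step; the helper below is an illustrative check with the paper's numbers.

```python
def battery_capacity_ah(p_gen, p_load, hours, v_batt):
    """Eq. (11): Ah = (Pg - Pl) * t / Vb."""
    return (p_gen - p_load) * hours / v_batt

# 17 kW peak DC generation, 10 kW critical load, 6 h autonomy, 350 V battery
ah = battery_capacity_ah(p_gen=17_000, p_load=10_000, hours=6, v_batt=350)
print(ah)  # 120.0
```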
_2.9. Diesel Generator_
A diesel generator of 50 kVA at 415 V, 50 Hz, is designed for serving critical AC loads
during the islanded condition. During the islanded condition, the diesel generator acts as
the reference signal for other AC DG units.
_2.10. Interlinking Converter_
An IC is connected for integration and power transfer between DC and AC sub-grids.
The IC’s kVA rating is given by:
_S = 3 × Vph × Iph × 1.25 × 10⁻³_ (12)
From Equations (7) and (8), the DC link voltage is 700 V and the capacitance is 15,000 µF.
The IC is designed with a power rating of 95 kVA.
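Eq. (12) gives the IC rating in kVA with a 25% margin. In the sketch below the per-phase current is an assumed value chosen only to illustrate the ~95 kVA design point; the paper does not list it.

```python
def ic_rating_kva(v_ph, i_ph, margin=1.25):
    """Eq. (12): S = 3 * Vph * Iph * margin * 1e-3 (kVA)."""
    return 3 * v_ph * i_ph * margin * 1e-3

# Phase voltage of the 415 V line-to-line system; i_ph is an assumption.
s_kva = ic_rating_kva(v_ph=415 / 3 ** 0.5, i_ph=105.7)
```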
_2.11. Utility Grid_
A balanced three-phase four-wire utility grid of 415 V, 50 Hz, is connected to the AC
sub-grid at PCC through an STS. The utility grid is modeled as a three-phase programmable
voltage source with a series RL branch. The three–phase programmable voltage source
generates a three-phase sinusoidal voltage with time-varying parameters. The series RL
branch is connected in series with the source to account for source impedance. The values
of R and L for source impedance are chosen as 1 Ω and 6 mH, respectively.
During the grid-connected mode of operation, the grid frequency, grid voltage, and
phase angle are used as reference signals for AC DG units. AC DG units operate in
synchronization with the grid to maintain system stability.
**3. Control Algorithm**
Multiple control schemes are used in the proposed microgrid. All the DG units utilize
the decentralized controllers, which are discussed in the subsequent subsections.
_Sustainability 2022, 14, 7777_ 8 of 28
_3.1. Solar PV Control_

Maximum power from the PV array can be extracted by incorporating the perturb-and-observe (P&O)-based MPPT algorithm, which is shown in Figure 2. This MPPT method is simple and efficient. In this method, the boost converter duty cycle is continuously varied by the MPPT controller for extracting the maximum power. The duty cycle is perturbed, and the change in PPV and VPV is observed as per the flowchart given in Figure 2. Figure 3 shows the control logic of the P&O MPPT-based boost converter. Based on the observed changes, the duty cycle is increased or decreased. The duty cycle is passed to a PWM generator, which generates the switching pulses for the boost converter. This process is repeated to achieve maximum output power.
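The P&O flowchart of Figure 2 condenses to a few lines. The duty-cycle step size, the clamping limits, and the sign convention (a higher duty ratio lowering the PV operating voltage on a boost converter) are assumptions made for this sketch, not values given in the paper:

```python
def perturb_and_observe(p_pv, v_pv, p_prev, v_prev, duty, step=0.005):
    """One P&O iteration: return the updated boost-converter duty cycle."""
    dp = p_pv - p_prev
    dv = v_pv - v_prev
    if dp != 0:
        if (dp > 0) == (dv > 0):
            duty -= step   # power rose with voltage: keep raising the voltage
        else:
            duty += step   # power fell with voltage: lower the voltage
    return min(max(duty, 0.0), 1.0)   # clamp to a valid duty cycle

# Power and voltage both increased since the last sample -> decrease duty.
print(perturb_and_observe(6600.0, 320.0, 6500.0, 315.0, 0.5))  # -> 0.495
```

In operation the controller calls this once per sampling period, passing the new duty cycle to the PWM generator and storing the present power and voltage for the next comparison.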
**Figure 2. MPPT perturb-and-observe algorithm.**
**Figure 3. MPPT controller.**
_3.2. Inverter Control_

Two separate inverters are used for integrating the wind turbine generator and a fuel cell into the AC sub-grid. Since real power injection is the prime objective of these two DG units, the inverter outputs are synchronized with the reference signal and operated in a current-controlled mode. During grid-connected mode, the frequency and voltage of the grid are used as reference signals. For stable output, the DC link of the inverter should be stabilized. So, the DC-link voltage of the inverter and the reference voltage are compared, and the error signal is passed to a PI controller. The PI controller's output corresponds to the power loss in the DC-link capacitor to maintain its voltage stability. The difference between the generated power and the power loss across the DC link gives the required amount of power to be injected into the system. The reference signal is the grid voltage and grid frequency for grid-connected mode, and the diesel-generator voltage and its frequency for islanded mode. The three-phase reference voltage is passed to a PLL block to obtain the angle θ. Then, the voltage signal is transformed from the abc plane to the dq0 plane using Park's transformation. The reference d-axis current signal is obtained using the equation below:

i_d = \frac{2}{3} \cdot \frac{P_g V_d + Q_g V_q}{V_d^2 + V_q^2} \quad (13)

Since the active power is injected, the reference q-axis current signal is 0. The reference current signal is transferred back to the abc plane from the dq0 plane with angle θ using the inverse Park's transformation. The generated reference current signal and the measured inverter output current signal are sent to a hysteresis current controller to obtain the gate pulses for the inverter, as shown in Figure 4. The hysteresis controller confines the current ripples and maintains a sinusoidal inverter output current.
**Figure 4. Inverter control logic.**
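The reference-current chain can be sketched as follows: Equation (13) for the d-axis reference, a zero q-axis reference (active power only), and the inverse Park transformation back to the abc plane. The PLL, the DC-link PI loop, and the hysteresis comparator are omitted, and the numeric values below are illustrative, not taken from the paper:

```python
import math

def d_axis_reference(p_g, q_g, v_d, v_q):
    """Equation (13): id* = (2/3) * (Pg*Vd + Qg*Vq) / (Vd^2 + Vq^2)."""
    return (2.0 / 3.0) * (p_g * v_d + q_g * v_q) / (v_d ** 2 + v_q ** 2)

def inverse_park(i_d, i_q, theta):
    """dq -> abc reference currents; theta comes from the PLL."""
    return tuple(
        i_d * math.cos(theta - k * 2.0 * math.pi / 3.0)
        - i_q * math.sin(theta - k * 2.0 * math.pi / 3.0)
        for k in range(3)
    )

i_d_ref = d_axis_reference(p_g=10_000.0, q_g=0.0, v_d=339.0, v_q=0.0)
ia, ib, ic = inverse_park(i_d_ref, 0.0, theta=0.0)
print(round(i_d_ref, 2))  # -> 19.67 (A) for a 10 kW injection
```

The three returned currents would then be compared against the measured inverter output currents inside the hysteresis band to produce the gate pulses.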
_3.3. BESS Control_
A buck–boost converter is used for battery management. The terminal voltage of the
battery is 350 V, whereas the DC bus voltage is 700 V. Hence, the voltage should be bucked
from 700 V to 350 V to charge the battery, which is done by using a buck converter. During
discharge, the voltage should be boosted from 350 V to 700 V. The discharging and charging
of the battery are decided by the voltage of the DC bus. When the DC bus voltage is 700 V
or above, the buck converter is switched on and the battery charges; else, the boost converter
is switched on and the battery discharges.
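The bus-voltage rule described above reduces to a simple threshold. The sketch below is ours; a real controller would add switching hysteresis and state-of-charge limits on top of it:

```python
# At or above the 700 V nominal bus voltage the converter bucks 700 V down
# to the 350 V battery and charges it; below nominal, it boosts 350 V up
# to 700 V and discharges the battery onto the bus.

DC_BUS_NOMINAL = 700.0  # V

def battery_converter_mode(v_dc_bus):
    """Return 'buck' (charging) or 'boost' (discharging)."""
    return "buck" if v_dc_bus >= DC_BUS_NOMINAL else "boost"

print(battery_converter_mode(702.0))  # -> buck  (surplus: charge battery)
print(battery_converter_mode(690.0))  # -> boost (deficit: discharge battery)
```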
_3.4. IC Control_
In this work, the IC is designed for power exchange purposes as well as active filtering.
Thus, an instantaneous reactive power theory (IRPT)-based control algorithm that is suitable
for both IC and APF is implemented. In this theory, the instantaneous reactive power is
calculated based on the terminal voltage and load currents of three phases. Using Clarke’s
transformation, the three-phase current and voltages are transformed to the α-β plane.
Before the transformation, those signals are passed through a first-order Butterworth filter
to remove ripples [20,41].
Clarke’s transformation is carried out as follows:
(14)
�[]Va
Vb
_Vc_
2
3
�Vα
_Vβ_
�
=
�
�
1 _−_ 2[1] _−_ 2[1]
_√_ _√_
0 23 _−_ 23
Since the source is a balanced three-phase four-wire system, the zero-sequence component Vo is eliminated in Equation (14).
\begin{bmatrix} I_\alpha \\ I_\beta \end{bmatrix} = \sqrt{\frac{2}{3}} \begin{bmatrix} 1 & -\frac{1}{2} & -\frac{1}{2} \\ 0 & \frac{\sqrt{3}}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} I_a \\ I_b \\ I_c \end{bmatrix} \quad (15)
After transforming the signals to α-β coordinates using Equations (14) and (15), the
instantaneous active and reactive powers of the loads are calculated using Equation (16).
\begin{bmatrix} p_L \\ q_L \end{bmatrix} = \begin{bmatrix} V_\alpha & V_\beta \\ V_\beta & -V_\alpha \end{bmatrix} \begin{bmatrix} I_{L\alpha} \\ I_{L\beta} \end{bmatrix} \quad (16)
The two components V_α I_{Lα} and V_β I_{Lβ} constitute the instantaneous real power (p) of the load, and the two components V_α I_{Lβ} and V_β I_{Lα} constitute the instantaneous imaginary power (q). The real power (p) and imaginary power (q) consist of both DC and AC values and can be represented as follows:

p = \bar{p} + \tilde{p}, \qquad q = \bar{q} + \tilde{q}

The components \tilde{p}, \bar{q}, and \tilde{q} are to be supplied by the DSTATCOM into the source for the mitigation of reactive and harmonic power. It can be affirmed that the proposed controller compensates for the reactive power and improves the power quality for any reactive power consideration of the load. From the instantaneous power, the AC and DC components are separated using low-pass filters. To sustain the voltage of the DC link at its reference value, the instantaneous active power at the DC capacitor is measured as p_{Loss} using the PI controller.
p^* = p_l + p_{Loss} and q^* = q_l are computed and transformed back to the abc plane using the inverse Clarke's transformation, as in Equation (17):

\begin{bmatrix} i_a^* \\ i_b^* \\ i_c^* \end{bmatrix} = \sqrt{\frac{2}{3}} \begin{bmatrix} 1 & 0 \\ -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ -\frac{1}{2} & -\frac{\sqrt{3}}{2} \end{bmatrix} \begin{bmatrix} V_\alpha & V_\beta \\ V_\beta & -V_\alpha \end{bmatrix}^{-1} \begin{bmatrix} p^* \\ q^* \end{bmatrix} \quad (17)

As shown in Figure 5, the currents i_a^*, i_b^*, and i_c^* are used as reference signals. In a hysteresis current-controller block, the three current signals along with the currents measured at the output of the SAPF are compared to generate appropriate gating pulses for the converter. The IC is connected to the AC system through a coupling inductor.
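The chain from Equations (14)–(17) can be sketched as below, using the power-invariant (√(2/3)) form of the transformation. The low-pass filtering that separates the DC component and the p_Loss PI loop are omitted, so this shows only the algebra of the transformations; the function names are ours:

```python
import math

K = math.sqrt(2.0 / 3.0)   # power-invariant Clarke scaling

def clarke(a, b, c):
    """Equations (14)/(15): abc -> alpha-beta (zero sequence dropped)."""
    alpha = K * (a - 0.5 * b - 0.5 * c)
    beta = K * (math.sqrt(3.0) / 2.0) * (b - c)
    return alpha, beta

def instantaneous_pq(v_abc, i_abc):
    """Equation (16): instantaneous real and imaginary powers."""
    va, vb = clarke(*v_abc)
    ia, ib = clarke(*i_abc)
    return va * ia + vb * ib, vb * ia - va * ib

def reference_currents(v_abc, p_ref, q_ref):
    """Equation (17): compensating powers -> abc reference currents."""
    va, vb = clarke(*v_abc)
    d = va * va + vb * vb
    i_alpha = (va * p_ref + vb * q_ref) / d
    i_beta = (vb * p_ref - va * q_ref) / d
    # inverse Clarke
    ia = K * i_alpha
    ib = K * (-0.5 * i_alpha + (math.sqrt(3.0) / 2.0) * i_beta)
    ic = K * (-0.5 * i_alpha - (math.sqrt(3.0) / 2.0) * i_beta)
    return ia, ib, ic
```

For a balanced, in-phase voltage/current set this yields q = 0 and p equal to the three-phase instantaneous power, as expected of the power-invariant form.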
**Figure 5. IC controller.**
_3.5. Control in Multi-Microgrid Approach_

There is an increased need for the integration of multiple microgrids for enhanced stability and improved energy management. The proposed decentralized control can be adopted in the multi-microgrid approach, as depicted in Figure 6. An interlinking converter must be placed between the microgrids. Multiple autonomous systems could be coordinated by the decentralized control [42].

**Figure 6. Decentralized control of a multi-microgrid.**
**4. Simulation Results and Analysis**
This section deals with the simulation results of the proposed system. The designed
system is modeled using the MATLAB/SIMULINK environment and the simulation results
of various scenarios are examined. The simulation parameters for the proposed system
are given in the Table A1: Appendix A. To show the effectiveness of the proposed control
algorithm, the system is tested under different conditions, such as grid-connected and
islanded modes, power transfer between the DC sub-grid and AC sub-grid through the
IC, battery charging and discharging, and active filtering of the IC. The description of the
modes of operation and the time period corresponding to the mode are tabulated in Table 3.
**Table 3. Modes of operation of the HMG.**

| Sl. No. | Mode | Time Interval in Seconds | Description |
|---|---|---|---|
| 1 | Mode 1: grid-connected mode | 0 to 1 s and 2.5 s to 4 s | AC DG units are synchronized with grid voltage and frequency (three-phase four-wire balanced system—415 V, 50 Hz). |
| 2 | Mode 2: islanded mode | 1 s to 2.5 s | The system is isolated from the utility grid. The diesel generator acts as the voltage and frequency reference in the AC sub-grid. Non-critical loads of the DC and AC sub-grids are turned off. |
| 3 | Mode 3: battery-charging mode | 4 s to 5 s | The load in the DC sub-grid is less than the DG units' generation and the battery gets charged. |
| 4 | Mode 4: DC-to-AC power-flow mode | 5 s to 6 s | Power transfer takes place from the DC to the AC sub-grid. The battery is presumed to be fully charged. |
_4.1. Mode 1: Grid-Connected Mode_

In this mode, AC DG units are synchronized with the grid voltage and frequency (415 Vrms and 50 Hz). The DC sub-grid consists of 25 kW loads. In the DC sub-grid, the PV array generates a power of 6.6 kW, the PMSG wind turbine generates a power of 10 kW, the battery supplies a power of 4.6 kW, and the remaining 3.8 kW of power for the DC load is supplied by the AC sub-grid through the IC. This is observed between 0 s < t < 1 s in the simulation results, as shown in Figure 7.
**Figure 7. DC sub-grid voltage and power at rated load in the HMG.**
The AC sub-grid consists of non-linear loads of 72.5 kW active power and 12 kVAr reactive power, and the IC transfers active power of 3.8 kW to the DC sub-grid. In the AC sub-grid, the PMSG WT generates an active power of 10 kW and the fuel cell generates an active power of 26 kW. The remaining active power requirement of 40.3 kW is absorbed from the grid, as shown in Figure 8 for the time period 0 s < t < 1 s. Since the IC is also designed to act as a virtual APF, the IC injects reactive power to maintain the power factor and eliminate harmonics. Thus, a reactive power of 12 kVAr is injected into the AC sub-grid by the IC. Of this, 11 kVAr is used to serve the non-linear loads, and the excess 1 kVAr of reactive power is injected back into the grid, which is shown in Figure 9 in the time period 0 s < t < 1 s. The efficient power-sharing among the DC and AC microgrids by the proposed control is inferred from the power-sharing details projected in Table 4 for the rated load connected to the sub-grids. In the DC sub-grid, the power from the PV and wind turbine is utilized effectively and the excess power demand is supported by the battery and the AC grid through the IC. The non-linear loads in the AC sub-grid are supported by the power of the wind turbine and fuel cell. The excess demand in the AC grid is supplied by the power from the grid. The IC converter takes care of the reactive power demand and injects reactive power into the grid for power-quality improvement.

**Figure 8. AC sub-grid Vrms, frequency, and active power at rated load in the HMG.**
**Figure 9. AC sub-grid reactive power at rated load in the HMG.**
**Table 4. Power sharing of the HMG for various modes of operation at rated load.**

| Mode | Details of Load Connected | PPV* (kW) | PWT-DC* (kW) | PB* (kW) | PIC* (kW) | PWT-AC* (kW) | PFC* (kW) | PG* (kW) | PDG* (kW) |
|---|---|---|---|---|---|---|---|---|---|
| Mode 1 | DC sub-grid load—25 kW | 6.6 | 10 | 4.6 | 3.8 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | −3.8 | 10 | 26 | 40.3 | - |
| | Non-linear loads in AC sub-grid—Q = 11 kVAr | - | - | - | 12 | - | - | −1 | - |
| Mode 2 | DC sub-grid critical load—20 kW | 6.6 | 10 | 3.4 | - | - | - | - | - |
| | Non-linear critical loads in AC sub-grid—P = 50 kW | - | - | - | - | 10 | 26 | - | 14 |
| | Non-linear critical loads in AC sub-grid—Q = 2.5 kVAr | - | - | - | - | - | - | - | 2.5 |
| Mode 3 | DC sub-grid load—12.5 kW | 6.6 | 10 | −7 | 2.9 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | −2.9 | 10 | 26 | 39.4 | - |
| Mode 4 | DC sub-grid load—12.5 kW | 6.6 | 10 | - | −4.1 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | 4.1 | 10 | 26 | 32.4 | - |

PPV*—photovoltaic, PWT-DC*—wind turbine in DC grid, PB*—battery, PIC*—interlinking converter, PWT-AC*—wind turbine in AC grid, PFC*—fuel cell, PG*—grid, PDG*—diesel generator.

_4.2. Mode 2: Islanded Mode_

During this mode of operation, the system is isolated from the utility grid by the opening of the STS at t = 1 s. In this mode, a diesel generator is connected at the PCC of the AC sub-grid, which acts as the voltage and frequency reference for other DG units in the AC sub-grid. During this mode of operation, the IC is disconnected by opening the STS and
non-critical loads are turned off in both the DC and AC sub-grids. In the DC sub-grid, the
loads are reduced to 20 kW, and the PMSG WT and PV arrays generate a power of 10 kW
and 6.6 kW, respectively. The remaining 3.4 kW is supplied by the battery, which is shown
in Figure 7 in the time period of 1 s < t < 2.5 s.
In the AC sub-grid, the loads are reduced to 50 kW and 2.5 kVAr. The PMSG WT and
fuel cell generate active power of 10 kW and 26 kW, respectively, and the remaining 14 kW
active power demand is met by a diesel generator. Since the IC is disconnected, the reactive
power of 2.5 kVAr for the load demand is met by the diesel generator, which is shown in
Figures 8 and 9 in the time period of 1 s < t < 2.5 s. The details of power-sharing among AC
and DC microgrids are tabulated in Table 4. The seamless transfer from grid-connected
mode to islanded mode is visualized at 2.5 s in Figures 7–9. The renewable sources are
effectively utilized for meeting the power demand and the diesel generator is used to meet
only the excess power demand during the standalone mode.
At t = 2.5 s, the STS across the grid is closed and the grid is connected to the system.
At once, the AC DG unit references are changed to grid voltage and frequency, and the
system transfers from islanded mode to grid-connected mode seamlessly. During the time
interval 2.5 s < t < 4 s, the system operates the same as in mode 1.
_4.3. Mode 3: Battery-Charging Mode_
In this mode, the load in the DC sub-grid reduces to 12.5 kW at t = 4 s. The total power
generation in the DC sub-grid is 16.6 kW. As the DC demand is lower than the DC DG
unit generation, the battery starts charging by consuming a power of 7 kW. The remaining
2.9 kW power for battery charging is obtained from the AC sub-grid through the IC, which
is shown in Figure 7 in the time interval of 4 s < t < 5 s.
During this mode in the AC sub-grid, the load and DG unit power generation is the
same as in mode 1 except that the power obtained from the grid is reduced to 39.4 kW
as the power exchange to the DC sub-grid is reduced to 2.9 kW. The reactive power flow
remains the same as in mode 1, which is shown in Figures 8 and 9 in the time interval of
4 s < t < 5 s.
_4.4. Mode 4: DC-to-AC Power Flow_
During this mode of operation, the power transfer from the DC sub-grid to the AC
sub-grid is realized. At t = 5 s, the battery is presumed to be charged fully. As the load in
DC sub-grid is 12.5 kW and the generation of power is 16.6 kW, the excess power of 4.1 kW
is transferred to the AC sub-grid through the IC, which is shown in Figure 7 in the time
interval of 5 s < t < 6 s.
In the AC sub-grid, the loading and generation of DG units remain the same as in
mode 1. The excess power of 4.1 kW from the DC is injected into the AC, and the power
consumed from the grid is reduced to 32.4 kW from 39.4 kW in mode 3. The reactive power
flow remains the same as in mode 1, which is shown in Figures 8 and 9 in the time interval
of 5 s < t < 6 s.
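The mode-by-mode figures quoted in Sections 4.1–4.4 can be sanity-checked with a short script. The numbers below are copied from Table 4; the sign convention (positive values supply the load, negative values absorb power) is our bookkeeping choice for the check:

```python
# (mode, dc_load_kw, dc_sources, ac_load_kw, ac_sources)
modes = [
    ("Mode 1", 25.0, {"PV": 6.6, "WT-DC": 10, "Battery": 4.6, "IC": 3.8},
     72.5, {"WT-AC": 10, "FC": 26, "Grid": 40.3, "IC": -3.8}),
    ("Mode 2", 20.0, {"PV": 6.6, "WT-DC": 10, "Battery": 3.4},
     50.0, {"WT-AC": 10, "FC": 26, "Diesel": 14}),
    ("Mode 3", 12.5, {"PV": 6.6, "WT-DC": 10, "Battery": -7, "IC": 2.9},
     72.5, {"WT-AC": 10, "FC": 26, "Grid": 39.4, "IC": -2.9}),
    ("Mode 4", 12.5, {"PV": 6.6, "WT-DC": 10, "IC": -4.1},
     72.5, {"WT-AC": 10, "FC": 26, "Grid": 32.4, "IC": 4.1}),
]

for name, dc_load, dc_src, ac_load, ac_src in modes:
    # Generation plus imports must equal the connected load in each sub-grid.
    assert abs(sum(dc_src.values()) - dc_load) < 1e-9, name
    assert abs(sum(ac_src.values()) - ac_load) < 1e-9, name
    print(name, "balanced")
```

All four modes balance, confirming that the reported power-sharing values are internally consistent.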
_4.5. Virtual APF_
The three-phase AC grid and load voltage, grid, and load current are shown in
Figure 10. From Figure 10, it is observed that at t = 1 s, the grid is disconnected. Thus,
the grid current and voltage become zero at that point. The load voltage and current
remain sinusoidal. The voltage is 415 Vrms and the current amplitude varies in the grid and
islanded mode due to the change in load.
**Figure 10. AC sub-grid grid voltage, load voltage, grid current, and load current.**
From Figure 10, it is also observed that during the grid-connected mode, the AC load current is distorted due to the harmonics of the non-linear load, but the grid current remains sinusoidal with harmonics at less than 5% due to the compensation by the IC acting as a virtual APF.
virtual APF. Table 5 shows the %THD in the AC sub-grid voltage, grid current, and load
current when the IC is used as an APF and power-exchange converter. From the tabulation
in Table 5, it is evident that the IC performs as an APF to maintain the THD of the grid
current within 5% even when the load-current THD is higher due to the non-linear loads of
the AC sub-grid. Figures 11–13 show the %THD of the Rph load current, Rph grid current,
and Rph voltage, respectively. Figure 14 shows the performance of the IC in maintaining
the THD of the grid current when it is operated as an APF and power-exchange converter.
The superior performance of the IC as an APF in maintaining the THD below 5% is evident
from the chart in Figure 14.
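The %THD figures compared here follow the usual definition: the rms of the harmonic content divided by the rms of the fundamental. The sketch below illustrates the computation; the harmonic amplitudes are made-up numbers chosen to resemble a non-linear load current, not data from the paper:

```python
import math

def thd_percent(fundamental_rms, harmonic_rms):
    """%THD = 100 * sqrt(sum of squared harmonic rms) / fundamental rms."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# A current rich in 5th/7th/11th harmonics, typical of rectifier loads.
load_thd = thd_percent(100.0, [12.0, 7.0, 3.0])
print(round(load_thd, 2))  # -> 14.21
```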
**Table 5. Performance comparison of the IC as APF and power-exchange converter.**

| Phase | IC as APF: AC Sub-Grid Voltage %THD | IC as APF: Grid Current %THD | IC as APF: Load Current %THD | IC for Power Exchange: AC Sub-Grid Voltage %THD | IC for Power Exchange: Grid Current %THD | IC for Power Exchange: Load Current %THD |
|---|---|---|---|---|---|---|
| Rph | 0.07 | 4.33 | 14.29 | 7.50 | 15.4 | 14.29 |
| Yph | 0.07 | 4.64 | 15.77 | 8.30 | 16.2 | 15.77 |
| Bph | 0.07 | 4.24 | 13.29 | 8.50 | 15.9 | 13.29 |
**Figure 11. THD analysis of Rph load current.**

**Figure 12. THD analysis of Rph grid current.**
**Figure 13. THD analysis of Rph voltage.**

**Figure 14. Comparison of %THD of grid current.**
_4.6. Performance of HMG with a Reduction in Load_

The load on the DC microgrid is reduced by 10% and the performance of the HMG is analyzed for power-sharing, seamless transition, and power-quality improvement. The details of the load connected in each mode and the power shared by each renewable energy source and energy-storage element are tabulated in Table 6. Figures 15–18 show the power in the DC sub-grid, the real power in the AC sub-grid, the reactive power in the AC sub-grid, and the grid voltage, frequency, and power flow of the battery, respectively. The details presented in Table 6 and the traces in Figures 15–18 show the efficient power-sharing of the HMG from RES. In the DC MG, the excess power to be supplied to the load is shared by the battery and the AC sub-grid through the IC. Similarly, in the AC MG, the load is supplied from the wind turbine and fuel cell, and the additional power requirement is compensated from the grid. Even during load reduction, the IC manages the reactive power demand and improves the power quality of the grid, and the decentralized control maintains the VPCC and frequency of the AC sub-grid.
**Table 6. Power sharing of the HMG for various modes of operation at 10% reduction in rated load.**

| Mode | Details of Load Connected | PPV* (kW) | PWT-DC* (kW) | PB* (kW) | PIC* (kW) | PWT-AC* (kW) | PFC* (kW) | PG* (kW) | PDG* (kW) |
|---|---|---|---|---|---|---|---|---|---|
| Mode 1 | DC sub-grid load—22.5 kW | 6.6 | 10 | 2.5 | 3.4 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | −3.4 | 10 | 26 | 40 | - |
| | Non-linear loads in AC sub-grid—Q = 11 kVAr | - | - | - | 12 | - | - | −1 | - |
| Mode 2 | DC sub-grid critical load—18 kW | 6.6 | 10 | 1.4 | - | - | - | - | - |
| | Non-linear critical loads in AC sub-grid—P = 50 kW | - | - | - | - | 10 | 26 | - | 14 |
| | Non-linear critical loads in AC sub-grid—Q = 2.5 kVAr | - | - | - | - | - | - | - | 2.5 |
| Mode 3 | DC sub-grid load—10 kW | 6.6 | 10 | −9 | 2.4 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | −2.4 | 10 | 26 | 38.9 | - |
| Mode 4 | DC sub-grid load—11 kW | 6.6 | 10 | - | −5.6 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | 5.6 | 10 | 26 | 30.9 | - |

PPV*—photovoltaic, PWT-DC*—wind turbine in DC grid, PB*—battery, PIC*—interlinking converter, PWT-AC*—wind turbine in AC grid, PFC*—fuel cell, PG*—grid, PDG*—diesel generator.
**Figure 15. DC sub-grid voltage and power with 10% load reduction in the HMG.**
**Figure 16. AC sub-grid active power with 10% load reduction in the HMG.**
**Figure 17. AC sub-grid reactive power with 10% load reduction in the HMG.**
**Figure 18. AC sub-grid VPCC, frequency, and power flow of battery with 10% load reduction in the HMG.**
_4.7. Performance of HMG with Increment in Load_
The load on the DC microgrid is increased by 10% and the performance of the HMG
is analyzed for power sharing, seamless transition, and power-quality improvement. The
details of the load connected in each mode and the power shared by each renewable energy
source and energy-storage element are tabulated in Table 7.
**Table 7. Power sharing of the HMG for various modes of operation at 10% increment in rated load.**

| Mode | Details of Load Connected | PPV* (kW) | PWT-DC* (kW) | PB* (kW) | PIC* (kW) | PWT-AC* (kW) | PFC* (kW) | PG* (kW) | PDG* (kW) |
|---|---|---|---|---|---|---|---|---|---|
| Mode 1 | DC sub-grid load—27.5 kW | 6.6 | 10 | 7.1 | 3.8 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | −3.8 | 10 | 26 | 40.3 | - |
| | Non-linear loads in AC sub-grid—Q = 11 kVAr | - | - | - | 12 | - | - | −1 | - |
| Mode 2 | DC sub-grid critical load—22 kW | 6.6 | 10 | 5.4 | - | - | - | - | - |
| | Non-linear critical loads in AC sub-grid—P = 50 kW | - | - | - | - | 10 | 26 | - | 14 |
| | Non-linear critical loads in AC sub-grid—Q = 2.5 kVAr | - | - | - | - | - | - | - | 2.5 |
| Mode 3 | DC sub-grid load—12.5 kW | 6.6 | 10 | −7 | 2.9 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | −2.9 | 10 | 26 | 39.4 | - |
| Mode 4 | DC sub-grid load—12.5 kW | 6.6 | 10 | - | −4.1 | - | - | - | - |
| | Non-linear loads in AC sub-grid—P = 72.5 kW | - | - | - | 4.1 | 10 | 26 | 32.4 | - |

PPV*—photovoltaic, PWT-DC*—wind turbine in DC grid, PB*—battery, PIC*—interlinking converter, PWT-AC*—wind turbine in AC grid, PFC*—fuel cell, PG*—grid, PDG*—diesel generator.
Figures 19–21 show the power in the DC sub-grid, the real power in the AC sub-grid,
and the grid voltage, frequency, and power flow of the battery, respectively. The details
presented in Table 7 and the traces in Figures 19–21 show the efficient power sharing of
the HMG from RES. The excess power to be supplied to the load in the DC sub-grid is
shared by the battery and AC sub-grid through the IC. Similarly, the load is supplied from
the wind turbine and fuel cell in the AC sub-grid, and the additional power requirement
is compensated from the grid. Even during the load increment, the IC manages the reactive
power demand and improves the power quality of the grid, and the decentralized control
maintains the VPCC and frequency of the AC sub-grid.
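As a cross-check of the power balance reported in Table 7, the source powers in each row should sum to the stated load demand. The short script below verifies this; the values are transcribed from the table, and the grouping of numbers into rows is our reading of the flattened layout (signs: negative means the element is absorbing power).

```python
# Power-balance sanity check for Table 7 (values in kW).
rows = [
    # (label, load, per-source powers)
    ("Mode 1, DC sub-grid load 27.5 kW", 27.5, [6.6, 10, 7.1, 3.8]),
    ("Mode 1, AC sub-grid P 72.5 kW",    72.5, [-3.8, 10, 26, 40.3]),
    ("Mode 2, DC critical load 22 kW",   22.0, [6.6, 10, 5.4]),
    ("Mode 2, AC critical P 50 kW",      50.0, [10, 26, 14]),
    ("Mode 3, DC sub-grid load 12.5 kW", 12.5, [6.6, 10, -7, 2.9]),
    ("Mode 3, AC sub-grid P 72.5 kW",    72.5, [-2.9, 10, 26, 39.4]),
    ("Mode 4, DC sub-grid load 12.5 kW", 12.5, [6.6, 10, -4.1]),
    ("Mode 4, AC sub-grid P 72.5 kW",    72.5, [4.1, 10, 26, 32.4]),
]

for label, load, sources in rows:
    balance = sum(sources)
    # Tolerance absorbs floating-point rounding of the decimal values.
    assert abs(balance - load) < 1e-6, (label, balance, load)
    print(f"{label}: sources sum to {balance:.1f} kW")
```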
**Figure 19. DC sub-grid voltage and power with 10% load increment in the HMG.**
**Figure 20. AC sub-grid active power with 10% load increment in the HMG.**
-----
_SustainabilitySustainability 20222022, 14, 14, 7777, 7777_ 25 of 28 25 of 28
**Figure 21. AC sub-grid VPCC, frequency, and power flow of battery with 10% load increment in the HMG.**
**5. Conclusions**

The proposed HMG system is modeled and simulated. The simulation results verify
that various types of renewables can be integrated efficiently into the AC and DC microgrid
system with maximum power extraction. The system can effectively utilize power from
renewable sources during load demand or store power and utilize it during islanded mode.
Apart from power exchange between AC and DC microgrids, the modified control algorithm
enables the IC to act as a virtual APF for improving power quality during unbalanced
and non-linear load conditions. The %THD of the grid current is maintained at less than
5%, as specified by IEEE 519 standards. The decentralized control supports the seamless
switching between grid-connected and islanded modes. The system is stable during all
modes of operation, meeting all load demands at reference voltages and frequency. The
HMG with the proposed control performs efficiently with variations in load in terms of
power sharing, seamless transition, power-quality improvement, and maintenance of VPCC
and frequency of the AC grid.

The key findings of the paper are:
- The proposed controller efficiently coordinates the AC/DC hybrid microgrid in all four modes of operation.
- The required power is transferred between the AC and DC microgrid via the interlinking converter. With an energy-storage system, the power exchange between the microgrids is efficiently managed by the controller and only the excess power demand is obtained from the utility grid.
- The modified control technique for the interlinking converter improves the power quality under unbalanced and non-linear load conditions.
- The interlinking converter supports AC/DC voltage bidirectionally during the islanded mode of operation. This reduces the need for additional voltage sources.
- The proposed controller helps in the seamless transfer between grid-connected and isolated modes.
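The %THD figure cited above is the ratio of the RMS harmonic content of the current to its fundamental component. A minimal sketch of the computation follows; the harmonic magnitudes used are illustrative, not values from the paper.

```python
import math

def thd_percent(fundamental_rms, harmonic_rms):
    """%THD = 100 * sqrt(sum of squared harmonic RMS values) / fundamental RMS."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms

# Example: a 100 A fundamental with 3 A of 5th- and 4 A of 7th-harmonic current.
print(thd_percent(100.0, [3.0, 4.0]))  # 5.0 -> right at the IEEE 519 limit
```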
The future works to be carried out are:
- The proposed controller can be extended to a multi-microgrid approach.
- The multi-parallel interlinking converter can be utilized in place of the interlinking converter, and an analysis can be carried out.
- The proposed controller can be applied for real-time applications.
- Economic analysis and the impact of the proposed microgrid on the present microgrid setup can be analyzed through HOMER software.
- Degradation of the hybrid components can be included in the analysis.
**Author Contributions: Formal analysis, J.J.; Methodology, J.J.; Supervision, T.S.; Validation, M.S.,**
N.P. and T.S.; Writing—original draft, M.S.; Writing—review & editing, N.P. All authors have read
and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Data sharing is not applicable to this article.**
**Acknowledgments: The authors would like to express their gratitude to the management of SASTRA**
Deemed University for providing renewable energy lab facilities.
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
APF Active power filter
BESS Battery energy storage system
DG Distributed generation
HMG Hybrid microgrid
IC Interlinking converter
IRPT Instantaneous reactive power theory
MG Microgrid
MGCC Microgrid centralized controller
MPPT Maximum power point tracking
PCC Point of common coupling
PI Proportional integral
PLL Phase locked loop
PMSG Permanent magnet synchronous generator
P&O Perturb and observe
RES Renewable energy sources
STS Static transfer switch
THD Total harmonic distortion
WT Wind turbine
**Appendix A**
**Table A1. Simulation parameters for the proposed system.**
Utility Grid
Three-phase four-wire system with balanced voltages 415 V, 50 Hz
Source impedance R = 1 Ω, L= 6 mH
PMSG Wind Turbine
Nominal mechanical power—Pm 12 kW
Nominal generator electrical power—Pg 12/0.9 kVA
Nominal wind speed—Vm 12 m/s
Maximum power at base speed 0.8 (p.u)
**Table A1. Cont.**
Wind Turbine Inverter
DC link voltage—VDC 677.49~700 V
DC link capacitor—CDC 4685 µF~4700 µF
Coupling inductor—(R + L) 0.026 + 8.22 mH
Ripple filter—(P + Q) 20 W + 1 kVAr
Fuel Cell
Voltage at (0 A, 1 A) (450, 442.5) V
Nominal current—Inom 40 A
Nominal voltage—Vnom 350 V
Maximum current—Iend 140 A
Power obtained—Pobt 27 kW
Boost Converter
Inductor—L 3.9 mH
Capacitor—C 70 µF
Switching frequency—fs 10 kHz
Duty cycle—D 50%
Buck–Boost Converter
Inductor—L 3 mH
Capacitor—C 70 µF
Switching frequency—fs 10 kHz
Duty cycle—D 50%
Fuel Cell Inverter
DC link voltage—VDC 677.49~700 V
DC link capacitor—CDC 4685~4700 µF
Coupling inductor—(R + L) 0.01722 + 5.48 mH
Ripple filter—(P + Q) 30 W + 1.5 kVAr
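The converter parameters in Table A1 can be sanity-checked against the ideal steady-state boost relations. The sketch below assumes the boost converter steps the 350 V fuel-cell nominal voltage up to the DC link (this input voltage is our assumption, not stated in the table); with D = 50% it is consistent with the ~700 V DC-link voltage listed for the inverters.

```python
def boost_vout(vin, duty):
    """Ideal continuous-conduction-mode boost converter: Vout = Vin / (1 - D)."""
    return vin / (1.0 - duty)

def inductor_ripple(vin, duty, inductance, fsw):
    """Peak-to-peak inductor current ripple: dI = Vin * D / (L * fsw)."""
    return vin * duty / (inductance * fsw)

# Table A1 values: L = 3.9 mH, fs = 10 kHz, D = 50%; Vin = 350 V assumed.
print(boost_vout(350.0, 0.5))                      # 700.0 V
print(inductor_ripple(350.0, 0.5, 3.9e-3, 10e3))   # ~4.49 A peak-to-peak
```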
**References**
1. Amirthalingam, M. A Novel Technology utilizing Renewable energies to mitigate air pollution, global warming & climate change.
In Proceedings of the 1st International Conference on the Developments in Renewable Energy Technology (ICDRET), Dhaka,
Bangladesh, 17–19 December 2009; pp. 1–3.
2. Bauer, N.; Mouratiadou, I.; Luderer, G.; Baumstark, L.; Brecha, R.J.; Edenhofer, O.; Kriegler, E. Global fossil energy markets and
[climate change mitigation—An analysis with REMIND. Clim. Chang. 2016, 136, 69–82. [CrossRef]](http://doi.org/10.1007/s10584-013-0901-6)
3. Tiwari, S.K.; Singh, B.; Goel, P.K. Design and control of micro-grid fed by renewable energy generating sources. IEEE Trans. Ind.
_[Appl. 2018, 54, 2041–2050. [CrossRef]](http://doi.org/10.1109/TIA.2018.2793213)_
4. Bose, B.K. Global Energy Scenario and Impact of Power Electronics in 21st Century. IEEE Trans. Ind. Electron. 2013, 60, 2638–2651.
[[CrossRef]](http://doi.org/10.1109/TIE.2012.2203771)
5. Singh, B.; Pathak, G.; Panigrahi, B.K. Seamless Transfer of Renewable-Based Microgrid between Utility Grid and Diesel Generator.
_[IEEE Trans. Power Electron. 2018, 33, 8427–8437. [CrossRef]](http://doi.org/10.1109/TPEL.2017.2778104)_
6. Ochs, D.S.; Mirafzal, B.; Sotoodeh, P. A Method of Seamless Transitions Between Grid-Tied and Stand-Alone Modes of Operation
[for Utility-Interactive Three-Phase Inverters. IEEE Trans. Ind. Appl. 2014, 50, 1934–1941. [CrossRef]](http://doi.org/10.1109/TIA.2013.2282761)
7. Tang, F.; Guerrero, J.M.; Vasquez, J.C.; Wu, D.; Meng, L. Distributed Active Synchronization Strategy for Microgrid Seamless
[Reconnection to the Grid Under Unbalance and Harmonic Distortion. IEEE Trans. Smart Grid 2015, 6, 2757–2769. [CrossRef]](http://doi.org/10.1109/TSG.2015.2406668)
8. Olivares, D.E.; Mehrizi-Sani, A.; Etemadi, A.H.; Cañizares, C.A.; Iravani, R.; Kazerani, M.; Hajimiragha, A.H.; Gomis-Bellmunt, O.;
[Saeedifard, M.; Palma-Behnke, R.; et al. Trends in Microgrid Control. IEEE Trans. Smart Grid 2014, 5, 1905–1919. [CrossRef]](http://doi.org/10.1109/TSG.2013.2295514)
9. Fathima, H.; Prabaharan, N.; Palanisamy, K.; Kalam, A.; Mekhilef, S.; Justo, J.J. Hybrid-Renewable Energy Systems in Microgrids:
_Integration, Developments and Control; Woodhead Publishing: Thorston, UK, 2018._
10. Pannala, S.; Patari, N.; Srivastava, A.K.; Padhy, N.P. Effective Control and Management Scheme for Isolated and Grid Connected
[DC Microgrid. IEEE Trans. Ind. Appl. 2020, 56, 6767–6780. [CrossRef]](http://doi.org/10.1109/TIA.2020.3015819)
11. Rezkallah, M.; Chandra, A.; Singh, B.; Singh, S. Microgrid: Configurations, Control and Applications. IEEE Trans. Smart Grid
**[2019, 10, 1290–1302. [CrossRef]](http://doi.org/10.1109/TSG.2017.2762349)**
12. Liu, X.; Wang, P.; Loh, P.C. A Hybrid AC/DC Microgrid and Its Coordination Control. IEEE Trans. Smart Grid 2011, 2, 278–286.
13. Li, H.; Li, Y.; Guerrero, J.M.; Cao, Y. AComprehensive Inertial Control Strategy for Hybrid AC/DC Microgrid with Distributed
Generations. IEEE Trans. Smart Grid 2020, 11, 1737–1747.
14. Zolfaghari, M.; Gharehpetian, G.B.; Shafie-khan, M.; Catalao, J.P.S. Comprehensive review on the strategies for controlling the
[interconnection of AC and DC microgrids. Int. J. Electr. Power Energy Syst. 2021, 136, 107742. [CrossRef]](http://doi.org/10.1016/j.ijepes.2021.107742)
15. Wang, L.; Fu, X.; Wong, M. Operation and Control of a Hybrid Coupled Interlinking Converter for Hybrid AC/Low Voltage DC
[Microgrids. IEEE Trans. Ind. Electron. 2021, 68, 7104–7114. [CrossRef]](http://doi.org/10.1109/TIE.2020.3001802)
16. Phan, D.; Lee, H. Interlinking Converter to Improve Power Quality in Hybrid AC–DC Microgrids With Nonlinear Loads. IEEE J.
_[Emerg. Sel. Top. Power Electron. 2019, 7, 1959–1968. [CrossRef]](http://doi.org/10.1109/JESTPE.2018.2870741)_
17. Hussain, M.N.; Melath, G.; Agarwal, V. An Active Damping Technique for PI and Predictive Controllers of an Interlinking
[Converter in an Islanded Hybrid Microgrid. IEEE Trans. Power Electron. 2021, 36, 5521–5529. [CrossRef]](http://doi.org/10.1109/TPEL.2020.3030875)
18. Shoeb, M.A.; Shahnia, F.; Shafiullah, G.M. A Multilayer and Event-Triggered Voltage and Frequency Management Technique for
Microgrid’s Central Controller Considering Operational and Sustainability Aspects. IEEE Trans. Smart Grid 2019, 10, 5136–5151.
[[CrossRef]](http://doi.org/10.1109/TSG.2018.2877999)
19. Jha, S.; Hussain, I.; Singh, B.; Mishra, S. Optimal operation of PV-DG-battery based microgrid with power quality conditioner.
_[IET Renew. Power Gener. 2019, 13, 418–426. [CrossRef]](http://doi.org/10.1049/iet-rpg.2018.5648)_
20. Jayachandran, J.; Sachithanandam, R.M. Neural network-based control algorithm for DSTATCOM under non-ideal source voltage
[and varying load conditions. Can. J. Electr. Comput. Eng. 2015, 38, 307–317. [CrossRef]](http://doi.org/10.1109/CJECE.2015.2464109)
21. Brandao, D.I.; Santos, R.P.d.; Silva, W.W.A.G.; Oliveira, T.R.; Donoso-Garcia, P.F. Model-Free Energy Management System for
[Hybrid Alternating Current/Direct Current Microgrids. IEEE Trans. Ind. Electron. 2021, 68, 3982–3991. [CrossRef]](http://doi.org/10.1109/TIE.2020.2984993)
22. IEEE Recommended Practice and Requirements for Harmonic Control in Electric Power Systems. In IEEE Std 519-2014 (Revision
_of IEEE Std 519-1992); IEEE: Piscataway, NJ, USA, 11 June 2014; pp. 1–29._
23. Jayachandran, J.; Sachithanandam, R.M. Performance investigation of artificial intelligence-based controller for three-phase
[four-leg shunt active filter. Front. Energy 2015, 9, 446–460. [CrossRef]](http://doi.org/10.1007/s11708-015-0378-2)
24. Malathi, S.; Jayachandran, J. FPGA Implementation of NN based LMS-LMF Control Algorithm in DSTATCOM for Power Quality
[Improvement. Control. Eng. Pract. 2020, 98, 104378. [CrossRef]](http://doi.org/10.1016/j.conengprac.2020.104378)
25. Kaur, S.; Dwivedi, B. Power quality issues and their mitigation techniques in microgrid system-a review. In Proceedings of the
7th India International Conference on Power Electronics (IICPE), Patiala, India, 17–19 November 2016; pp. 1–4.
26. Chang, J.-W.; Lee, G.-S.; Moon, S.-I.; Hwang, P.-I. A Novel Distributed Control Method for Interlinking Converters in an Islanded
[Hybrid AC/DC Microgrid. IEEE Trans. Smart Grid 2021, 12, 3765–3779. [CrossRef]](http://doi.org/10.1109/TSG.2021.3074706)
27. Bhim, S.; Chandra, A.; Al-Haddad, K. Power Quality: Problems and Mitigation Techniques; John Wiley & Sons: Hoboken, NJ, USA, 2014.
28. Malathi, S.; Sachithanandam, R.M.; Jayachandran, J. Performance comparison of neural network based multi output SMPS with
improved power quality and voltage regulation. Control Eng. Appl. Inform. 2018, 20, 86–97.
29. Khederzadeh, M.; Sadeghi, M. Virtual active power filter: A notable feature for hybrid ac/dc microgrids. IET Gener. Transm.
_[Distrib. 2016, 10, 3539–3546. [CrossRef]](http://doi.org/10.1049/iet-gtd.2016.0217)_
30. Shafiullah, G.M.; Oo, A.M.T.; Ali, A.B.M.S.; Jarvis, D.; Wolfs, P. Economic Analysis of Hybrid Renewable Model for Subtropical
[Climate. Int. J. Therm. Environ. Eng. 2010, 1, 57–65. [CrossRef]](http://doi.org/10.5383/ijtee.01.02.001)
31. Shafiullah, G.M.; Arif, M.T.; Oo, A.M.T. Mitigation strategies to minimize potential technical challenges of renewable energy
[integration. Sustain. Energy Technol. Assess. 2018, 25, 24–42. [CrossRef]](http://doi.org/10.1016/j.seta.2017.10.008)
32. Khomsi, C.; Bouzid, M.; Champenois, G.; Jelassi, K. Improvement of the Power Quality in Single Phase Grid Connected
Photovoltaic System Supplying Nonlinear Load. In Advanced Technologies for Solar Photovoltaics Energy Systems; Motahhir, S.,
[Eltamaly, A.M., Eds.; Green Energy and Technology; Springer: Cham, Switzerland, 2021. [CrossRef]](http://doi.org/10.1007/978-3-030-64565-6_13)
33. Eltamaly, A.M. A novel musical chairs algorithm applied for MPPT of PV systems. Renew. Sustain. Energy Rev. 2021, 146, 111135.
[[CrossRef]](http://doi.org/10.1016/j.rser.2021.111135)
34. Liu, Q.; Caldognetto, T.; Buso, S. Flexible control of interlinking converters for DC microgrids coupled to smart AC power
[systems. IEEE Trans. Ind. Electron. 2019, 66, 3477–3485. [CrossRef]](http://doi.org/10.1109/TIE.2018.2856210)
35. Wang, J.; Jin, C.; Wang, P. A uniform control strategy for the inter-linking converter in hierarchical controlled hybrid AC/DC
[microgrids. IEEE Trans. Ind. Electron. 2018, 65, 6188–6197. [CrossRef]](http://doi.org/10.1109/TIE.2017.2784349)
36. Li, X.; Li, Y.; Guo, Z.; Hong, C.; Zhang, Y.; Wang, C. A unified control for the DC-AC interlinking converters in hybrid AC/DC
[microgrids. IEEE Trans. Smart Grid 2018, 9, 6540–6553. [CrossRef]](http://doi.org/10.1109/TSG.2017.2715371)
37. Yang, P.; Xia, Y.; Yu, M.; Wei, W.; Peng, Y. A decentralized coordination control method for parallel bidirectional power converters
[in a hybrid AC-DC microgrid. IEEE Trans. Ind. Electron. 2018, 65, 6217–6228. [CrossRef]](http://doi.org/10.1109/TIE.2017.2786200)
38. Xia, Y.; Peng, Y.; Yang, P.; Wei, W. Distributed coordination control for multiple bidirectional power converters in a hybrid AC/DC
[microgrid. IEEE Trans. Power Electron. 2017, 32, 4949–4959. [CrossRef]](http://doi.org/10.1109/TPEL.2016.2603066)
39. Heier, S. Grid Integration of Wind Energy Conversion Systems; Wiley: Hoboken, NJ, USA, 2006.
40. Valverde, L.; Bordons, C.; Rosa, F. Integration of Fuel Cell Technologies in Renewable -Energy-Based Microgrids Optimizing
[Operational Costs and Durability. IEEE Trans. Ind. Appl. 2016, 63, 167–177. [CrossRef]](http://doi.org/10.1109/TIE.2015.2465355)
41. Ucar, M.; Ozdemir, E. Control of a 3-phase 4-leg active power filter under non-ideal mains voltage condition. Electr. Power Syst.
_[Res. 2008, 78, 58–73. [CrossRef]](http://doi.org/10.1016/j.epsr.2006.12.008)_
42. Villanueva-Rosario, J.A.; Santos-Garcia, F.; Aybar-Mejia, M.E.; Mendoza-Araya, P.; Molina-García, A. Coordinated ancillary
[services, market participation and communication of multi-microgrids: A review. Appl. Energy 2022, 308, 118332. [CrossRef]](http://doi.org/10.1016/j.apenergy.2021.118332)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su14137777?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su14137777, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2071-1050/14/13/7777/pdf?version=1656398560"
}
| 2022
|
[] | true
| 2022-06-25T00:00:00
|
[
{
"paperId": "ec36771cb94d37e3be3d03e23033605ac457ea06",
"title": "Coordinated ancillary services, market participation and communication of multi-microgrids: A review"
},
{
"paperId": "19e737fdc24a0e6066e3d6a30725a52909089061",
"title": "A novel musical chairs algorithm applied for MPPT of PV systems"
},
{
"paperId": "d09bb3dccd39802aedf2761d38ec878641e607d7",
"title": "FPGA implementation of NN based LMS–LMF control algorithm in DSTATCOM for power quality improvement"
},
{
"paperId": "9eab946fdbf9c1f239a240b95c4ecf36165e3ea0",
"title": "A Comprehensive Inertial Control Strategy for Hybrid AC/DC Microgrid With Distributed Generations"
},
{
"paperId": "89f4d321b1f6af8ec1b6a6387702e058be9073de",
"title": "Optimal operation of PV‐DG‐battery based microgrid with power quality conditioner"
},
{
"paperId": "24883bf9e3b57ef7780291095823bb290122e3b8",
"title": "Performance Comparison of Neural Network Based Multi Output SMPS with Improved Power Quality and Voltage Regulation."
},
{
"paperId": "177d01296c52ce6ce84da150635ca2885793ee0b",
"title": "Mitigation strategies to minimize potential technical challenges of renewable energy integration"
},
{
"paperId": "1cfd48f4b26aba272559caa463569274d9efafeb",
"title": "Virtual active power filter: a notable feature for hybrid ac/dc microgrids"
},
{
"paperId": "718a2c43e0c81e3f504cd47c27ef928fd7636958",
"title": "Power quality issues and their mitigation techniques in microgrid system-a review"
},
{
"paperId": "e6f886a0ebbeb7a8312a9ca509109c7e5468e4a8",
"title": "Performance investigation of artificial intelligence based controller for three phase four leg shunt active filter"
},
{
"paperId": "c77322a3a9975a3f205c3544c03497e6ba47fd88",
"title": "Global fossil energy markets and climate change mitigation – an analysis with REMIND"
},
{
"paperId": "4d7ce124bb3f33731bd4b774d55db6f8f8a4d313",
"title": "A Hybrid AC/DC Microgrid and Its Coordination Control"
},
{
"paperId": "17c4caca73488ea5fbf524033e0409900ad7f613",
"title": "A Novel Technology utilizing Renewable energies to mitigate air pollution, global warming & climate change"
},
{
"paperId": "a48220ad36e40bd76e62a95217b55535bd90985e",
"title": "Comprehensive review on the strategies for controlling the interconnection of AC and DC microgrids"
},
{
"paperId": "aa5b90e07060e26581564641d76d38a39e0a763d",
"title": "Economic Analysis of Hybrid Renewable Model for Subtropical Climate"
},
{
"paperId": "7b8b31a7d698c85588dd59885c9cb5afeb1f2498",
"title": "Control of a 3-phase 4-leg active power filter under non-ideal mains voltage condition"
},
{
"paperId": null,
"title": "IEEE Recommended Practice and Requirements for Harmonic Control in Electric Power Systems"
}
] | 21,658
|
en
|
[
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02e3a4ac38814e992ae078c5ea7fd2380955a868
|
[] | 0.90778
|
AN INTERACTIVE DRUG SUPPLY CHAIN TRACKING SYSTEM USING BLOCKCHAIN 2.0
|
02e3a4ac38814e992ae078c5ea7fd2380955a868
|
Indian Journal of Computer Science and Engineering
|
[
{
"authorId": "2123667740",
"name": "P. U."
},
{
"authorId": "2239822",
"name": "Narendran Rajagopalan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Indian J Comput Sci Eng"
],
"alternate_urls": null,
"id": "d24f39a4-e18a-408f-b6ae-e2b611b150f9",
"issn": "0976-5166",
"name": "Indian Journal of Computer Science and Engineering",
"type": "journal",
"url": "http://www.ijcse.com/index.html"
}
|
The modern pharmaceutical supply chain is a complex process in which research is carried out to produce drugs; based on that research, drugs are manufactured and distributed from the manufacturer to the pharmacies. This process involves a number of stakeholders, from manufacturer, wholesaler, distributor, and pharmacies to the end-users, the patients. At each stage, inventory must be managed based on demand for pharmaceutical products. An inventory management system is vital because an excess of inventory could cost the pharmacy money, and it is more complex because it needs to track the lot numbers and expiration dates of medicines. The pharmaceutical supply chain suffers from various issues such as drug shortages, temperature control of drugs, lack of visibility in shipment and storage, inventory management, lack of coordination and, most importantly, drug counterfeiting. The principal regulatory bodies responsible for drug quality try to solve these issues in various ways, but these efforts are largely unregulated, expensive and fragmented. The blossoming technology, Blockchain, by its inherent properties such as immutability, transparency, and distributed nature, could solve the problems of the pharmaceutical supply chain. This paper starts with an introduction to the drug supply chain and its problems. It elucidates how Blockchain technology could come to the rescue of the pharmaceutical supply chain. This paper proposes a novel drug supply chain management system along with inventory management based on Blockchain 2.0, specifically Hyperledger Fabric. The proposed system records all transactions using Blockchain, thus helping in tracking the drug along its supply chain as well as solving the issues associated with the drug supply chain in an efficient manner. It also measures the performance of the proposed system using the benchmark tool Hyperledger Caliper. The system proves its efficiency in terms of success rate, throughput and transaction latency.
|
# AN INTERACTIVE DRUG SUPPLY CHAIN TRACKING SYSTEM USING BLOCKCHAIN 2.0
## U. Padmavathi[1]
### Department of Computer Science & Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry, India.
udayarajepadma@gmail.com
## Narendran Rajagopalan[2]
### Department of Computer Science & Engineering, National Institute of Technology Puducherry, Karaikal, Puducherry, India.
narendran@nitpy.ac.in
**Abstract**
**The modern pharmaceutical supply chain is a complex process in which research is carried out to produce drugs; based on that research, drugs are manufactured and distributed from the manufacturer to the pharmacies. This process involves a number of stakeholders, from manufacturer, wholesaler, distributor, and pharmacies to the end-users, the patients. At each stage, inventory must be managed based on demand for pharmaceutical products. An inventory management system is vital because an excess of inventory could cost the pharmacy money, and it is more complex because it needs to track the lot numbers and expiration dates of medicines. The pharmaceutical supply chain suffers from various issues such as drug shortages, temperature control of drugs, lack of visibility in shipment and storage, inventory management, lack of coordination and, most importantly, drug counterfeiting. The principal regulatory bodies responsible for drug quality try to solve these issues in various ways, but these efforts are largely unregulated, expensive and fragmented. The blossoming technology, Blockchain, by its inherent properties such as immutability, transparency, and distributed nature, could solve the problems of the pharmaceutical supply chain. This paper starts with an introduction to the drug supply chain and its problems. It elucidates how Blockchain technology could come to the rescue of the pharmaceutical supply chain. This paper proposes a novel drug supply chain management system along with inventory management based on Blockchain 2.0, specifically Hyperledger Fabric. The proposed system records all transactions using Blockchain, thus helping in tracking the drug along its supply chain as well as solving the issues associated with the drug supply chain in an efficient manner. It also measures the performance of the proposed system using the benchmark tool Hyperledger Caliper. The system proves its efficiency in terms of success rate, throughput and transaction latency.**
**_Keywords: Blockchain, Counterfeit Drugs, Drug Supply Chain, Hyperledger Fabric, Hyperledger Caliper._**
**1.** **Introduction**
In the current era, new drugs are introduced into the market daily as there is a rapid increase in the number of
diseases among human beings. These drugs help patients recover from illness, but sometimes they have
an exacerbating effect on humans. The adverse outcome of drugs is mainly due to drug counterfeiting. This is
because a lucrative drug changes hands many times along the supply chain, which provides an opportunity for
falsified drugs to enter the market easily. Identifying these counterfeit drugs is complex and expensive.
Even the physician who prescribes the drugs is not able to tell the difference between the licit and illicit ones.
Counterfeited drugs have a profound and devastating effect on human beings [1].
The term “counterfeit drug” refers to a pharmaceutical item that is fraudulently mislabeled with respect to
its source and/or identity. It could also be defined as a product with a wrong ingredient, an inactive
ingredient, or an active ingredient at a high dosage. According to a research report by the World Health
Organization (WHO), an estimated one in 10 medical products circulating in low-and-middle income countries
is either falsified or substandard [2].
The Indian pharma industry, which grows steadily, harbors a large market for spurious and counterfeit drugs. It is
expected that the pharma market in India will grow to $55 billion by 2020 [3]. Another report by WHO
states that India is the third largest producer of generic drugs in the world and plays a significant role in
counterfeit pharmaceutical manufacturing. Drug counterfeiting persists because it is difficult to detect,
public awareness is low, the supply chain has leakage points, and it requires little investment.
Counterfeited drugs also find their way into the market when there is a shortage of a specific drug that is in high
demand [4]. Falsified drug producers take this opportunity and rush to fill the gap between supply and demand.
Counterfeit medicines can result in failure to treat diseases, aid the evolution of the organisms that cause diseases,
and may sometimes be poisonous, resulting in fatalities. In low-income countries, antibiotics and
anti-malarials are the most commonly counterfeited medicines. Fraudsters are keen to manufacture
exact copies of expensive prescription drugs such as those used in the treatment of AIDS and
cancer. They do not bother about the quality and effectiveness of the counterfeited drugs.
Another major problem faced by the Pharmaceutical Supply chain system is the management of inventory at
each stakeholder’s point. Inventory management is vital in order to improve operational efficiency as well as to
reduce costs and wastage of medicines [5]. This process involves tracking inventories, preparing the
inventory based on demand, tracking the expiration dates of medicines, and so on. If the inventory is not properly
maintained, the result is wastage of medications, money wasted on excess medications, and a
cut in the marginal profit of the company.
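As a toy illustration of the lot- and expiry-tracking requirement described above (hypothetical data and helper names, not the system proposed in this paper):

```python
import datetime

# Each lot carries the fields a pharmacy inventory must track.
inventory = [
    {"drug": "amoxicillin", "lot": "AMX-101", "qty": 200,
     "expires": datetime.date(2024, 3, 1)},
    {"drug": "amoxicillin", "lot": "AMX-102", "qty": 500,
     "expires": datetime.date(2025, 1, 1)},
]

def expiring_lots(inventory, on_date, window_days=90):
    """Return lots that expire within the given window, so stock can be rotated."""
    cutoff = on_date + datetime.timedelta(days=window_days)
    return [lot for lot in inventory if lot["expires"] <= cutoff]

today = datetime.date(2024, 1, 15)
for lot in expiring_lots(inventory, today):
    print(f"Rotate stock: {lot['drug']} lot {lot['lot']} ({lot['qty']} units)")
```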
Other than drug counterfeiting and lack of inventory management, the Pharmaceutical supply chain due to
its convoluted nature, suffers from the following issues.
1. Adverse Drug Reaction on patients
2. Drug shortages
3. Contaminated drug manufacturing
4. Improper Cold chain management
5. Lack of visibility in shipment and storage
**1.1.** **_Blockchain_**
Blockchain, a distributed ledger technology, can simply be defined as a chain of blocks that are linked to each
other. Each block contains a record of transactions along with a block header. The block header comprises the
current block hash, the previous block hash, the difficulty, a timestamp, and a nonce. The concept of blockchain
originated with Bitcoin, the digital currency introduced by Satoshi Nakamoto in 2008 as an alternative to physical
currency [6]. Figure 1 illustrates the blockchain structure.
Figure 1. Structure of Blockchain [7]
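To make the hash-linking in Figure 1 concrete, the following minimal sketch (illustrative only, not any production blockchain) builds blocks whose headers embed the previous block's hash, so altering an earlier block invalidates every later link:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash all block fields except the stored hash itself."""
    content = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def new_block(prev_hash, transactions):
    block = {"prev_hash": prev_hash, "timestamp": time.time(),
             "nonce": 0, "transactions": transactions}
    block["hash"] = block_hash(block)
    return block

genesis = new_block("0" * 64, ["coinbase"])
b1 = new_block(genesis["hash"], ["A pays B 5"])
assert b1["prev_hash"] == block_hash(genesis)      # chain intact

# Tampering with the earlier block breaks the link to every later block.
genesis["transactions"] = ["A pays C 5"]
assert b1["prev_hash"] != block_hash(genesis)      # tamper detected
```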
The blocks in the blockchain are linked to the next block using the hash value. This provides the immutability
property, a distinguishing feature of blockchain. The Bitcoin blockchain achieves consensus using Proof of
Work (PoW), a resource-consuming mechanism in which miners solve cryptographic puzzles in order to
meet their difficulty target. However, the Bitcoin blockchain suffers from a scalability problem and cannot
readily be applied to other applications. This gave rise to Blockchain 2.0, which makes use of smart contracts.
Smart contracts are computer codes that contain the actual logic of the business process. They are executed
automatically and help businesses use blockchain for their applications. A blockchain may be public, private,
or consortium. A public blockchain is one that anyone in the world can join and access at any time, whereas a
private blockchain is one in which the single entity or organization that owns the blockchain has full control
over it. In the case of a consortium blockchain, a consortium of organizations controls the blockchain; this is
also called a federated blockchain.
Drug Supply chain involves a number of stakeholders along its way which rises the need to make use of
consortium blockchain so that every stakeholder involved in the system could have their control over the
blockchain network. The authors found that Hyperledger Fabric Blockchain best suits this application.
Hyperledger Fabric, one of the projects under the Linux Foundation's Hyperledger umbrella (initially contributed by IBM), is an open-source, permissioned blockchain.
Hyperledger Fabric differs from other blockchains in many ways. It uses an execute-order-validate mechanism, whereas other blockchain networks use order-execute logic. This helps Hyperledger Fabric eliminate non-deterministic transactions and increases the overall performance of the system. Further, this architecture helps Hyperledger Fabric deliver a high degree of confidentiality, resiliency, flexibility and scalability. Hyperledger Fabric offers a pluggable consensus mechanism, a pluggable membership service provider, pluggable endorsement and validation policies and an optional peer-to-peer gossip service, and it can store ledger data in multiple formats. In Hyperledger Fabric, the term consensus is defined as "the full circle verification of the correctness of a set of transactions comprising a block". The pluggable consensus option allows Fabric to fit different use cases and trust models efficiently. The members of Hyperledger Fabric enroll through a Membership Service Provider (MSP), a component that offers an abstraction of all the cryptographic mechanisms and protocols behind issuing and validating certificates and authenticating users [8][9].
Fabric calls smart contracts "chaincode"; the business logic of the application is deployed in them. These smart contracts run within a container environment for isolation. Hyperledger Fabric supports smart contracts written in general-purpose programming languages such as Java, Go, and Node.js, which helps developers write applications easily and quickly.
The other distinguishing feature of Hyperledger Fabric is that it provides confidentiality through its channel architecture. A channel is established between a subset of participants, allowing only that subset to have visibility over a particular set of transactions. The privacy and confidentiality of transactions in a Fabric network are preserved by allowing only the nodes that participate in the channel to access the chaincode and transaction data. The ledger subsystem of Hyperledger Fabric comprises two components, the world state and the transaction log. The world state is the database of the ledger and describes its current state, while the transaction log records all transactions that have resulted in the current value of the world state; it can be viewed as the update history of the world state.
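The relationship between the two ledger components can be modeled as follows. This is a plain Node.js illustration with names of our own choosing, not Fabric's actual ledger implementation.

```javascript
// Minimal sketch of Fabric's ledger subsystem: a world state holding the
// current value per key, plus an append-only transaction log that records
// every update that produced the current world state.
class Ledger {
  constructor() {
    this.worldState = new Map(); // current value per key
    this.txLog = [];             // full update history
  }
  put(key, value, txId) {
    this.txLog.push({ txId, key, value }); // log first, never rewritten
    this.worldState.set(key, value);       // then update the current view
  }
  get(key) {
    return this.worldState.get(key);
  }
  history(key) {
    // Replaying the log for a key recovers its entire update history.
    return this.txLog.filter(t => t.key === key);
  }
}

const ledger = new Ledger();
ledger.put('DRUG001', { owner: 'Manufacturer' }, 'tx1');
ledger.put('DRUG001', { owner: 'Wholesaler' }, 'tx2');
console.log(ledger.get('DRUG001').owner);      // current world state
console.log(ledger.history('DRUG001').length); // two log entries
```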
Peers in Hyperledger Fabric assume different roles, and based on the role assigned to them they are called by different names.
1. Endorsing peer – every peer with smart contracts installed is called an endorsing peer.
2. Ordering peer – a peer that receives transactions from clients and orders them into blocks.
3. Leader peer – the peer that takes responsibility for collecting ordered transactions from the orderer and distributing them to the various committing peers.
4. Committing peer – a peer that receives blocks of transactions and validates them before they are committed to the ledger.
5. Anchor peer – a peer that enables communication with peers in other organizations.
This paper mainly focuses on the drug supply chain, along with inventory management at each stakeholder's point, using the Hyperledger Fabric blockchain platform. First, the authors designed a drug supply chain system that stores details about the drugs manufactured, the demand from wholesalers, distributors and pharmacies, and the supply of drugs to these demand points. The Hyperledger Fabric platform is used for this work since it is an open-source permissioned blockchain designed for business use cases and supports a modular architecture and general-purpose programming languages. Further, CouchDB is deployed to store the large number of transactions made between the stakeholders. Finally, the system is tested and its performance is measured using the benchmarking tool Hyperledger Caliper.
The paper is structured as follows. Section 2 elaborates on related work on applications of blockchain in the supply chain field, the health care area and other related areas. Section 3 details the proposed system architecture. Section 4 describes the system implementation. Section 5 elucidates the execution results of the proposed system. Section 6 measures the performance in terms of success rate, transaction latency and throughput and presents the results. Section 7 concludes the paper.
**2.** **Related Work**
**2.1.** **_In Supply Chain_**
[10] uses a generic stochastic model to investigate the impact of Blockchain Technology on the performance of the supply chain. It concludes that leveraging Blockchain Technology proves beneficial only for some types of goods, and recommends that the technology be adopted at an earlier stage and to a higher degree.
[11] uses a mapping study method to explore and analyze the applications of Blockchain Technology in supply chain management. The study revealed that the majority of the research focused on traceability, security and finance. Moreover, it concludes that real performance evaluation in an industrial context is lacking in almost all proposed frameworks and needs to be taken into account by researchers.
[12] investigates the applications and benefits of Blockchain Technology in supply chain management. It develops a comprehensive framework to analyze the various applications of Blockchain Technology and identifies five emerging use case clusters that could clearly extend the scope of blockchain beyond tracking and tracing.
A synthesis of the challenges that exist in global supply chain and trade operations is provided in [13]. It discusses how blockchain could remediate the pain points of the supply chain and fulfill the needs of supply chain and logistics. It concludes that, despite the benefits, several legal and regulatory challenges stand in the way of wide adoption of this emerging technology in the global supply chain market.
[14] gives a report on blockchain in the supply chain that organizations can use to understand whether Blockchain Technology is feasible for their applications and how to implement this booming technology. It also discusses how the supply chain has become more complex over the years and the ability of blockchain to remove these constraints.
A reference implementation named BLMS (Blockchain-Based Logistics Monitoring System), based on Ethereum, was programmed and tested to provide a solution for parcel tracking in a supply chain environment [15]. The system employs software components to record transaction entries for logistics and supply chain operations. The results show that blockchain is a promising technology that could increasingly streamline the supply chain environment by enabling sharing of and access to product-related information in real time.
A systematic analysis of 20-25 recent peer-reviewed journal articles is carried out to identify the innovations that Blockchain Technology could bring to the execution of supply chain and logistics management [16]. The study reveals that Blockchain Technology stands as the best option to innovate today's business centers, especially those with up-to-date machinery such as online offering of products and services.
[17] presents the results of a Blockchain Technology use case, in particular fresh food delivery, designed using a standard methodology. It evaluates the critical aspects of implementing Blockchain Technology and discusses how this groundbreaking technology helps in reducing logistics costs and optimizing logistics operations. It also gives a quick depiction of issues such as scalability and the costs of implementing this technology, and concludes that adopting Blockchain Technology in the supply chain could be a promising enhancement that benefits all stakeholders involved in the system.
**2.2.** **_In Health Care_**
MedRec [18] uses blockchain technology to handle Electronic Medical Records (EMR), giving patients a log of their medical history. The record is immutable, comprehensive, accessible and credible. Being source agnostic, MedRec is able to manage authenticity, confidentiality and data sharing, and it provides incentives for medical researchers. The system aims to provide granularity, record flexibility and easy access to medical information between providers and treatment sites.
MediLedger is a project that developed a network combining a look-up directory accessed through a blockchain with a permissioned messaging network in order to meet the demands of the DSCSA (Drug Supply Chain Security Act). It allows only authorized companies to place their products in the look-up directory and helps companies request and respond to product identifier verification requests in a secure manner. Being an industry-owned permissioned blockchain network, it is able to address the sensitive issue of data privacy. This decentralized network creates an open environment for the pharmaceutical industry to overcome the limitations of the current approach [19].
**2.3.** **_Other works_**
[20] presented a permissioned blockchain environment that provides a trusted and cost-efficient approach to academic publishing. It describes the benefits academic publishing could gain from the adoption of Blockchain Technology in terms of trust and collaboration between globally distributed participants without the need for centralized management.
A conceptual model for the fusion of blockchain with cloud computing is proposed in [21]. It comprises Blockchain over Cloud (BoC), Cloud over Blockchain (CoB) and Mixed Blockchain Cloud (MBC) deployment models and highlights the potential benefits of this fusion. The discussion focuses on secure data transfer and the privacy issues associated with it. It also proposes a three-layer model to reengineer cloud data centers using Blockchain Technology.
A blockchain-based data logging and integrity management system for cloud forensics, which proves the integrity of evidence collection and storage in the cloud environment, is proposed in [22]; its performance is measured by comparison with other blockchain-based systems. The proposed system outperforms the others in terms of transaction processing and guarantees data integrity.
In [23], a framework based on blockchain and QR (Quick Response) codes is proposed to provide drug safety and manufacturer authenticity. The proposed medical storage makes use of a permissioned private blockchain based on PKI and digital signatures. It discusses how counterfeit drugs could be traced using the proposed blockchain methodology and shows that the methodology prevents replay and man-in-the-middle attacks.
[24] describes how blockchain could make a substantial difference to the current pharmaceutical supply chain model. It enables barcodes to be scanned and recorded on the blockchain ledger at every stakeholder point. The record stored on the ledger helps to create an audit trail of the drug's journey and allows the drug to be tracked from the time it is manufactured to the moment the patient receives it. The authors also highlight the advantages of adopting blockchain technology in the pharmaceutical supply chain. In the future, biometric measures could be used to record dispenser and pharmacist details, which could also be stored on the ledger for tracking purposes.
In [25], the Gcoin blockchain is used to create transparent drug transaction data. The double-spending prevention mechanism provided by Gcoin's consortium Proof-of-Work approach helps to alleviate the counterfeit drug problem. In addition, the regulation model used in this work is a surveillance-net model, which differs from the usual inspection-and-examination model. The surveillance-net model allows every unit in the drug supply chain to participate simultaneously and helps to prevent counterfeit drugs and to track and trace drugs without entering factories, warehouses or pharmacies.
A Hyperledger Fabric based drug supply chain to manage integrity is proposed in [26]. The permissioned nature of Hyperledger Fabric allows only valid participants to join the supply chain and make transactions. The system uses a proof of concept that keeps track of drug records in a decentralized way and helps to achieve transparency, security and privacy. Its performance is tested through a number of experiments and analyzed in terms of transaction response time, throughput, latency and resource utilization using the benchmarking tool Hyperledger Caliper. The paper concludes that using Blockchain Technology increases the performance of the system in terms of throughput and minimizes latency with low resource utilization.
A quantitative analysis of leveraging blockchain across the entire drug supply chain of India is proposed in [27]. A private, permissioned blockchain database maintained by the Department of Pharmaceuticals is used as the distributed drug inventory and maintains the transactional records of the supply chain. This helps to trace and track drugs at any level, from the extraction unit until they reach the patient. The work concludes that the journey of a drug becomes more secure and streamlined through the use of Blockchain Technology.
DrugLedger, a blockchain system for drug regulation and drug tracing, stores records efficiently and guarantees sustainable service delivery by reconstructing the whole service architecture. The system is more resilient than traditional systems and provides data privacy and authentication. DrugLedger also tackles various packaging problems and uses the expiration date of drugs to efficiently prune blockchain storage [28].
A blockchain-based e-prescription system that utilizes cryptocurrency principles is proposed in [29]. The authors investigated the requirements for this system to run on a blockchain network. The concept of a mint is the fundamental concept of the proposed system. Rxcoin, a transferable currency on the Ethereum blockchain, is utilized to create a database of prescription data on the blockchain that ensures integrity. The work also argues that the proposed e-prescription system could be a viable solution for combating the opioid crisis.
The impact of blockchain technology on agriculture and the food supply chain is examined in [30]. It surveys current projects and initiatives and the various challenges associated with them. The authors elaborate that blockchain technology establishes a proven and trusted environment for many projects and initiatives. The findings of the study reveal that blockchain stands as a promising technology for a transparent food supply chain, but many barriers and challenges still need to be solved before adopting it.
[31] identifies that the transparent nature of blockchain technology, when adopted in the supply chain, furnishes the ability to secure favorable financing transactions. In this work, the authors develop b_verify, which utilizes Bitcoin to provide a transparent supply chain both at scale and at lower cost. The analysis demonstrates which types of firms or supply chains would benefit from adopting this technology. Finally, it concludes that blockchain technology provides an efficient way to alleviate the problems of financing operations in small and medium-sized enterprises (SMEs) by furnishing the verifiability of input transactions in supply chains.
BRUSCHETTA is a blockchain-based application for the traceability and certification of the Extra Virgin Olive Oil (EVOO) supply chain [32]. It provides a tamper-proof record of the product from the plantation until it reaches the shop, and utilizes Internet of Things (IoT) devices to interconnect the sensors used in EVOO quality control. From the results obtained, a mechanism for dynamic auto-tuning of BRUSCHETTA is proposed to optimize its performance under high loads. The work concludes that, since the transaction arrival rate varies over time, a dynamically configurable blockchain is preferable to a static configuration of Fabric.
**3.** **Proposed System Architecture**
**3.1.** **_Blockchain based Pharmaceutical Supply Chain_**
Figure 2 illustrates the proposed blockchain-based drug supply chain. It involves the regulatory authority, manufacturer, wholesaler, retailer, pharmacist and consumer. Every participant in the network updates, verifies and manages the supply chain data using the smart contracts deployed within it. Smart contracts are computer programs that define the roles and responsibilities of every participant in the network and the relationships among them. They facilitate every participant's interaction with the distributed ledger and also help each participant endorse transaction proposals and update the ledger. A smart contract receives a transaction request, executes the request and sends the response back to the client; in the process it queries the ledger and updates it by appending information about the transaction. The working of a smart contract is illustrated in Figure 3.
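The request-execute-respond cycle of a smart contract can be sketched as follows. This is an illustrative stand-in written in plain Node.js: `stub`, `createDrug` and `queryDrug` are hypothetical names of our own, and real chaincode would use the fabric-shim API rather than an in-memory map.

```javascript
// Illustrative contract: receives a transaction request, queries and
// updates the ledger through a stub, and returns a response to the client.
const contract = {
  invoke(stub, fn, args) {
    if (fn === 'createDrug') {
      const [drugId, name, ownerId] = args;
      if (stub.getState(drugId)) {
        return { status: 400, message: 'drug already exists' };
      }
      stub.putState(drugId, { drugId, name, ownerId }); // ledger update
      return { status: 200, message: 'drug recorded' };
    }
    if (fn === 'queryDrug') {
      return { status: 200, payload: stub.getState(args[0]) }; // read only
    }
    return { status: 404, message: 'unknown function' };
  },
};

// In-memory stand-in for the chaincode stub backed by a world state.
const state = new Map();
const stub = {
  getState: key => state.get(key),
  putState: (key, value) => state.set(key, value),
};

const res = contract.invoke(stub, 'createDrug',
  ['DRUG001', 'Paracetamol', 'MFG01']);
console.log(res.message); // drug recorded
```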
Figure.2 Proposed Drug Supply Chain System using Blockchain
The feature that makes the proposed system unique is that it is designed using a permissioned consortium network, Hyperledger Fabric. It allows only authenticated participants to join the network and make transactions. The authenticity of the participants is verified using the certificates issued by the certificate authority.
Figure.3 Working of Smart Contract
**3.2.** **_Role of Participants_**
The regulatory authority is responsible for setting up the secure network by allowing only trusted parties to join and access the network while giving view-only permission to others. The regulatory authority also acts as the certificate authority and issues identity certificates to valid participants in the network.
The manufacturer is responsible for initiating the supply chain. After enrolling in the network, the manufacturer purchases the raw materials and manufactures the drug according to the formulary. He then enters the details of the manufactured drug product into the blockchain, where it will subsequently be tracked by the participants in the supply chain network. The consistency of the information entered by the manufacturer is checked and endorsed by the regulatory authority. The manufacturer then sells the drug product according to the needs of the wholesalers.
The wholesaler sends a drug request to the manufacturer specifying the name and quantity of the drugs needed. The manufacturer satisfies the request by sending the requested drug product and updates the information in the blockchain. On receiving the items, the wholesaler first verifies the genuineness of the product using the information in the blockchain and then accepts and pays for it. If the drug products received are not found to be genuine, the wholesaler rejects the product and no payment is made. All these transactions are updated in the blockchain.
The retailer receives the drugs he needs from the wholesaler and verifies that the drug products received are genuine using the information in the blockchain. If the information is found to be correct, the product is accepted and the payment is made; otherwise, the product is discarded. This information is then updated in the blockchain.
The pharmacist receives the product either directly from the manufacturer or from the retailer. He then verifies the integrity of the received product using the blockchain information and updates the ledger.
The consumer purchases the drug prescribed by the doctor from the pharmacy. He can trace back the path of the drug using its identity. If the consumer finds any mismatch in the information about the drug, the drug can be identified as counterfeit and reported to the regulatory authority. This is illustrated in Figure 4.
Figure.4 Role of Participants in the Drug Supply chain system
**3.3.** **_Drug Movement_**
In the supply chain, the movement of drugs starts at the manufacturing point, and the drugs change hands until they reach the end user. This distribution chain involves a number of participants, including the manufacturer, wholesaler, retailer and pharmacist. A manufactured drug has certain properties associated with it, such as the drug name, drug form, dosage, manufacturing date, expiry date and manufacturer name. In addition, in the proposed system the drug has further properties: drug id, drug owner, drug location, certificate number and temperature. When the drug is manufactured, it is given a unique id, name, form, dosage, manufacturing date and expiry date; these properties do not change until it reaches the end user. Information such as the drug owner, certificate number and drug location changes as the drug changes hands. At manufacture, the drug owner is the manufacturer, the certificate number is his registered certificate number, and the drug location is the place where the drug is manufactured; the location can be given as the latitude and longitude of the storage place. As the drug moves to the wholesaler, the owner information is changed to the wholesaler id, the certificate number to his certificate number, and the location information is updated. When the drug reaches the retailer, the retailer's certificate number, retailer id and location information are updated in the ledger. The same kind of update takes place as the drug reaches the hands of the pharmacist. The unique id of a drug helps the consumer trace its path back to its origin, and counterfeiting can be identified at any stage if there is a mismatch in the information provided by the blockchain. This is illustrated in Figure 5.
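The ownership hand-off described above can be sketched as follows. The field names follow the drug properties listed in the text; the values, ids and helper names are illustrative, not taken from the authors' implementation.

```javascript
// Create a drug asset. The identity fields (id, name, form, dosage,
// dates) are fixed at manufacture and never change along the chain.
function manufactureDrug(id) {
  return {
    drugId: id, drugName: 'Paracetamol', form: 'tablet', dosage: '500mg',
    mfgDate: '2021-01-10', expiryDate: '2023-01-10',  // immutable fields
    owner: 'MFG01', certificateNo: 'CERT-MFG01',      // change per hand-off
    location: { lat: 13.0827, lon: 80.2707 },
  };
}

// A transfer updates only the mutable fields (owner, certificate,
// location); the identity fields are carried over unchanged.
function transferDrug(drug, newOwner, certificateNo, location) {
  return { ...drug, owner: newOwner, certificateNo, location };
}

let drug = manufactureDrug('DRUG001');
drug = transferDrug(drug, 'WHL01', 'CERT-WHL01', { lat: 12.97, lon: 77.59 });
console.log(drug.owner, drug.expiryDate); // owner changed, expiry unchanged
```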
Figure.5 Drug Movement using Blockchain
**3.4.** **_Flow of Transaction in the Proposed System_**
During the movement of drugs in the supply chain, the nodes perform read and write transactions. After the nodes become members of the network by obtaining certificates from the certificate authority, they are allowed to perform transactions according to the access control policy. The manufacturer, wholesaler, retailer and pharmacist nodes are given both read and write permission, while the consumer node is given only view permission. Communication between the nodes in the network starts when a client node sends a transaction proposal over the network. The endorsing nodes take the transaction proposal and execute the request using smart contracts; the smart contracts simulate the request against the world state and compute the resulting update without committing it to the ledger. The proposal response is signed by the endorsing nodes with their certificates and returned to the client. The client node collects these responses and sends them to the ordering nodes, which collect proposal responses from various clients and order them into blocks. The blocks are then communicated to the committing nodes, which validate the transaction responses and commit the transactions by updating the ledger and the world state. The committing node can also generate an event indicating whether the transaction submitted by the client completed successfully. The transaction flow for read and write transactions in the proposed drug supply chain is shown in Figure 6.
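The endorse, order and commit steps above can be walked through in a simplified form. This is a plain Node.js illustration of the execute-order-validate pattern; all names are ours and no real Fabric APIs are used, and the signature check is reduced to a placeholder.

```javascript
// Endorsers simulate the transaction against the world state and sign
// the result without committing anything.
function endorse(proposal, worldState) {
  const value = { ...worldState.get(proposal.key), ...proposal.update };
  return { proposal, value, signature: `sig(${proposal.txId})` };
}

// The ordering node batches endorsed responses into a block.
function order(responses) {
  return { blockNo: 1, txs: responses };
}

// Committing peers validate each transaction in the block, then update
// the world state and append to the transaction log.
function commit(block, worldState, txLog) {
  for (const tx of block.txs) {
    if (!tx.signature) continue; // invalid endorsement: skip
    worldState.set(tx.proposal.key, tx.value);
    txLog.push(tx.proposal.txId);
  }
}

const worldState = new Map([['DRUG001', { owner: 'MFG01' }]]);
const txLog = [];
const resp = endorse(
  { txId: 'tx42', key: 'DRUG001', update: { owner: 'WHL01' } }, worldState);
commit(order([resp]), worldState, txLog);
console.log(worldState.get('DRUG001').owner); // 'WHL01'
```

Note that the world state changes only in `commit`; the endorsement step produces a signed result without side effects, which is what lets Fabric reject non-deterministic transactions before they touch the ledger.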
Figure.6 Transaction Flow for Read and Write operations in the proposed system
**4.** **System Implementation**
**4.1.** **_Environmental Set up_**
The proposed prototype makes use of Hyperledger Fabric, an open-source permissioned blockchain system, to implement the business logic behind the pharmaceutical drug supply chain system. The system is implemented on Ubuntu 18.04 with 16 GB of memory and an Intel Core i5 processor. Docker version 18.09 is used to run the Docker environment, and Docker Compose version 18.09 is used. The development environment of the proposed drug supply chain system using Hyperledger Fabric is described in Table 1.
|Component|Description|
|---|---|
|Operating system|Ubuntu 18.04|
|CPU|i5 Core Processor|
|Memory|16 GB|
|Browser|Chrome/Firefox|
|Hyperledger Fabric|Version 1.4.3|
|Docker Compose|Version 18.09|
|Docker Engine|Version 18.09|
|Programming Language|Node.js|
|Node|Version 8.11.3|
|IDE|Visual Studio Code|

Table.1 Development environment
**4.2.** **_Network Structure of the Proposed System_**
The network structure of the proposed system contains four organizations namely Manufacturer, Wholesaler,
Retailer and the Pharmacist. These four organizations are connected through channel C1 and the ledger L1
associated with the channel C1 is maintained in the peer node P1. Each of the organization has a client
application A1, A2, A3, A4 through which they communicate with the network. There is also another private
channel C2 between Manufacturer and the Pharmacist and the Ledger L2 is maintained in the Peer node P2.
Peer P3 present in the network maintains both L1 and L2 and is connected to both C1 and C2. Channel
configuration policy (CCP1) governs Channel C1 and (CCP2) govern C2. There is an ordering node O1 which
orders the transactions from various organizations into blocks. Certificate Authority CA1, CA2, CA3, CA4 is
responsible for issuing the certificate to the members of the network. Figure 7 demonstrates the network
structure of the proposed system.
Figure.7 Proposed Network Structure
**5.** **Execution Results**
In this section, the execution of the proposed system is illustrated with the help of snapshots. The environmental set-up of the proposed system is given in Table 1. After successfully setting up the environment, the chaincode is invoked, as shown in Figure 8. Following this, the admin and other members are registered in the network. Figure 9 shows the manufacturer login page, through which the manufacturer can log in to the network. After manufacturing the drugs, the manufacturer can enter details about each drug, such as the drug id, drug name, dosage, latitude and longitude values, manufacturing date, expiry date, manufacturer certificate number and temperature. Figures 10a and 10b illustrate the manufacturer dashboard. When the manufacturer submits the details, a transaction id is created and the data is added to the database, as shown in Figure 11. Figure 12 shows the detailed database of the drugs manufactured. This database helps the manufacturer manage inventory: on viewing it, the manufacturer knows the amount of drugs manufactured and their expiry dates, and based on it he is able to sell drugs to the wholesalers and others in the network.
Figure.8 Invoking Chaincode
Figure.9 Manufacturer Login page
Figure.10a Manufacturer Dashboard
Figure.10b Manufacturer Dashboard
Figure.11 Transaction id created
Figure.12 Drug database
Similarly, Figure 13 illustrates the wholesaler login page. After receiving drugs from the manufacturer, the wholesaler can check the authenticity of the drugs using the drug database. If he finds that a drug comes from a proper source, he accepts it and changes the drug holder name, the drug holder certificate number and the latitude and longitude values of the drug. Figure 14 illustrates the wholesaler dashboard. After successfully changing the drug holder details, a new transaction id is created, as shown in Figure 15.
Figure.13 Wholesaler login page
Figure.14 Wholesaler dashboard
Figure.15 Transaction id generated
Figures 16, 17, 18 and 19 show the retailer login page, retailer dashboard, pharmacist login page and pharmacist dashboard, respectively. After logging in, the retailer enters his dashboard. On receiving drugs from the manufacturer or wholesaler, the retailer is able to check their authenticity and change certain attributes of the drug, such as the drug holder name, the certificate number of the drug holder and the latitude and longitude values of the drug. Similarly, on receiving the required drugs, the pharmacist logs in to the network and updates certain properties of the drugs through his dashboard.
Figure.16 Retailer Login page
Figure.17 Retailer Dashboard
Figure.18 Pharmacist Login page
Figure.19 Pharmacist Dashboard
A customer who purchases a drug can check its authenticity using the drug id and can trace the path of the drug from the manufacturer until it reaches him. The customer is provided with the details about the drug from the point of manufacturing onward. The customer does not have the right to update the drug details; he can only read them. Figure 20 illustrates the customer dashboard, and Figure 21 illustrates the path of the drug from the manufacturer to the pharmacist.
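The read-only trace the customer performs amounts to replaying the transaction log for one drug id. The sketch below illustrates this in plain Node.js; the log records and names are illustrative, not the system's actual data.

```javascript
// Illustrative transaction log as it might accumulate on the ledger.
const txLog = [
  { drugId: 'DRUG001', owner: 'Manufacturer', location: 'Chennai' },
  { drugId: 'DRUG001', owner: 'Wholesaler',   location: 'Bengaluru' },
  { drugId: 'DRUG001', owner: 'Pharmacist',   location: 'Mysuru' },
  { drugId: 'DRUG999', owner: 'Manufacturer', location: 'Pune' },
];

// Filter the log by drug id to recover the chain of custody in order.
function traceDrug(log, drugId) {
  return log.filter(t => t.drugId === drugId).map(t => t.owner);
}

console.log(traceDrug(txLog, 'DRUG001').join(' -> '));
// Manufacturer -> Wholesaler -> Pharmacist
```

Any gap or mismatch in this chain (an unknown owner, a missing hand-off) is what signals a possible counterfeit to the customer.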
Figure.20 Customer Dashboard
Figure.21 Path of the Drug from Manufacturer to Pharmacist
**6.** **Performance Measurement**
Measuring the performance of a blockchain solution is a key concern. Hyperledger Caliper is a blockchain benchmark tool that helps measure the performance of a blockchain implementation. Currently, it supports Hyperledger Besu, Hyperledger Fabric, Hyperledger Iroha, Hyperledger Burrow, Hyperledger Sawtooth and Ethereum [33]. The reports produced by Hyperledger Caliper contain a number of performance indicators, such as transaction latency, transactions per second and success rate. The environmental set-up used to measure the performance is shown in Table 2. After successfully setting up the environment, the performance is measured by running Caliper, and the results are reported against the following metrics.
1. Success Rate
2. Transaction Latency
3. Throughput
|S.No|Component|
|---|---|
|1|Node-gyp|
|2|Python|
|3|Node.js v8.11.4|
|4|Docker engine v18.06.1-ce|
|5|Caliper v0.2.0|

Table.2 Environmental set up to measure performance
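For reference, the three metrics can be computed from raw per-transaction timing records as follows. This Node.js sketch uses illustrative sample records and field names; it is not Caliper output or Caliper's internal computation.

```javascript
// Sample records: submit/commit times in seconds and a success flag.
const txs = [
  { submitted: 0.0, committed: 1.2, ok: true },
  { submitted: 0.1, committed: 1.5, ok: true },
  { submitted: 0.2, committed: 2.0, ok: false },
  { submitted: 0.3, committed: 2.2, ok: true },
];

// Success rate: successful transactions as a percentage of all attempts.
const successRate = 100 * txs.filter(t => t.ok).length / txs.length;

// Transaction latency: time from submission until the transaction is
// available across the network (here: committed - submitted).
const latencies = txs.map(t => t.committed - t.submitted);
const avgLatency = latencies.reduce((a, b) => a + b, 0) / latencies.length;
const maxLatency = Math.max(...latencies);

// Throughput: committed transactions per second over the test duration.
const duration = Math.max(...txs.map(t => t.committed)) -
  Math.min(...txs.map(t => t.submitted));
const throughput = txs.filter(t => t.ok).length / duration;

console.log(successRate, avgLatency.toFixed(2), throughput.toFixed(2));
```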
**6.1.** **_Success Rate_**
The success rate is the percentage of successful versus failed transactions in a test cycle. We measured the success rate using five groups of users: 10 users in the first round, 30 in the second, 50 in the third, 70 in the fourth and 100 in the last round. Figure 22 shows the success rate for all five rounds. With 10 and 30 users, the success rate is 100 percent. When the number of users sending requests at the same time is increased to 50, the success rate drops to 99 percent, and it falls to 97.5 percent with 70 users. With 100 users in the system, the success rate is about 96 percent. It can be concluded that the success rate begins to decrease as the number of users increases.
Figure.22 Success Rate of the Proposed system
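The success rate reported per round is simply the share of committed transactions. A minimal sketch of the computation (not Caliper's implementation):

```python
def success_rate(successful: int, total: int) -> float:
    """Percentage of transactions in a round that committed successfully."""
    return round(100.0 * successful / total, 1)
```

For example, 96 committed out of 100 submitted transactions yields the 96 percent figure seen in the last round.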
**6.2.** **_Transaction Latency_**
It is defined as the time from the point of submitting a transaction until it is available across the network. The transaction latency of the proposed system is measured by invoking the transaction with 10, 30, 50, 70 and 100 users. The latency for invoke transactions is higher since they involve the endorsement function. Figure 23 shows the minimum, average and maximum latency in each round for the different user groups. The minimum and average latency differ little for 10 and 30 users. When the number of users is increased to 50, the latency begins to rise, and for 70 and 100 users the maximum latency is found to be very high. It is concluded that transaction latency is high when the number of users in the system increases.
Figure.23 Minimum, Average and Maximum Latency for the Invoke transaction of proposed system
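Per-round latency statistics can be derived from submit and commit timestamps. A simplified sketch of what the benchmark reports:

```python
def latency_stats(submit_times, commit_times):
    """Minimum, average and maximum transaction latency, where each
    latency is the commit time minus the submit time (in seconds)."""
    lats = [c - s for s, c in zip(submit_times, commit_times)]
    return min(lats), sum(lats) / len(lats), max(lats)
```

The timestamps here are illustrative; Caliper collects them per transaction while driving the network.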
**6.3.** **_Throughput_**
It measures the rate at which transactions flow through the system, expressed in transactions per second (TPS). In the proposed system, this metric is evaluated using the five groups of users and is illustrated in Figure 24. The throughput is found to be nearly constant for up to 50 users. When the number of users is increased beyond 50, the throughput starts to decline, and it is very low with 100 users in the system. It can be concluded that the proposed system shows a good throughput, averaging 95 TPS, for up to 50 concurrent users.
Figure.24 Throughput of the Proposed System
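Throughput is the number of committed transactions divided by the round duration. A minimal sketch (again, not Caliper's actual code; the example numbers are illustrative):

```python
def throughput_tps(committed: int, duration_seconds: float) -> float:
    """Transactions per second (TPS) over one benchmark round:
    committed transactions divided by the round's wall-clock duration."""
    return committed / duration_seconds
```

For example, 950 committed transactions over a 10-second round would correspond to 95 TPS.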
**7.** **Conclusion and Future Work**
Blockchain, a revolutionary technology, has the capacity to reshape the traditional supply chain system, and it could have a particularly strong impact on drug supply chain management. This paper described the pitfalls of the current drug supply chain system and proposed a novel solution using the Hyperledger Fabric blockchain. The proposed system is a proof-of-concept application that helps track drugs from the point of manufacturing until they reach the consumer. In this system, the manufacturer, wholesaler, retailer and pharmacist have the right to update records in the database, whereas the consumer may only view them; the consumer is not allowed to perform any update operation. A web application has been developed to provide interaction between the blockchain platform and the users of the system. The proposed system is secure since only registered members can access it, and its authenticity is ensured with the help of digital certificates. The drug supply chain system proposed using Blockchain 2.0 promises the supply of drugs in a secure and accountable manner. The performance of the proposed system is measured using Hyperledger Caliper, a blockchain benchmark tool. The system underwent a number of rounds of experiments with different user groups, and metrics such as success rate, transaction latency and transactions per second were measured. It is found that the success rate decreases, the number of transactions per second declines, and the maximum latency grows as the number of users increases. In the future, the system could be extended to support cross-chain platforms and to improve throughput and success rate under an increasing number of users in real time.
**Funding**
This research did not receive any specific grant from funding agencies in the public, commercial or not-for-profit sectors.
**References**
[1] W. Davies, “The escalating pharma counterfeit problem,” 2018.
[2] “1 in 10 medical products in developing countries is substandard or falsified.” [Online]. Available: https://www.who.int/news-room/detail/28-11-2017-1-in-10-medical-products-in-developing-countries-is-substandard-or-falsified. [Accessed: 13-Oct-2020].
[3] “India’s pharma industry expected to grow to $55 bn by 2020.” [Online]. Available: https://www.inoxpa.in/company/news/indias-pharma-industry-expected-to-grow-to-55-bn-by-2020. [Accessed: 13-Oct-2020].
[4] D. Kapoor, “An Overview on Pharmaceutical Supply Chain: A Next Step towards Good Manufacturing Practice,” _Drug Des. Intellect. Prop. Int. J._, vol. 1, no. 2, pp. 49–54, 2018, doi: 10.32474/ddipij.2018.01.000107.
[5] “Pharmacy Inventory Management & Removal Processes | Study.com.” [Online]. Available: https://study.com/academy/lesson/pharmacy-inventory-management-removal-processes.html. [Accessed: 13-Oct-2020].
[6] S. Nakamoto, “Bitcoin: A Peer-to-Peer Electronic Cash System.”
[7] P. U and N. Rajagopalan, “Pharmaceutical Cold Chain Using Blockchain 3.0,” _Int. J. Psychosoc. Rehabil._, vol. 23, no. 1, pp. 202–209, Feb. 2019, doi: 10.37200/ijpr/v23i1/pr190230.
[8] “Key Concepts — hyperledger-fabricdocs master documentation.” [Online]. Available: https://hyperledger-fabric.readthedocs.io/en/release-1.4/key_concepts.html. [Accessed: 13-Oct-2020].
[9] “Architecture Reference — hyperledger-fabricdocs master documentation.” [Online]. Available: https://hyperledger-fabric.readthedocs.io/en/release-1.4/architecture.html. [Accessed: 13-Oct-2020].
[10] J. Chang, M. N. Katehakis, B. Melamed, and J. Shi, “Blockchain Design for Supply Chain Management,” _SSRN Electron. J._, 2018, doi: 10.2139/ssrn.3295440.
[11] Y. Tribis, A. El Bouchti, and H. Bouayad, “Supply chain management based on blockchain: A systematic mapping study,” _MATEC Web Conf._, vol. 200, 2018, doi: 10.1051/matecconf/201820000020.
[12] G. Blossey, J. Eisenhardt, and G. Hahn, “Blockchain Technology in Supply Chain Management: An Application Perspective,” _Proc. 52nd Hawaii Int. Conf. Syst. Sci._, vol. 6, pp. 6885–6893, 2019, doi: 10.24251/hicss.2019.824.
[13] Y. Chang, E. Iakovou, and W. Shi, “Blockchain in global supply chains and cross border trade: a critical synthesis of the state-of-the-art, challenges and opportunities,” _Int. J. Prod. Res._, pp. 1–18, 2019, doi: 10.1080/00207543.2019.1651946.
[14] “Does blockchain hold the key to a new age of supply chain transparency and trust? How organizations have moved from blockchain hype to reality.”
[15] P. Helo and Y. Hao, “Blockchains in operations and supply chains – a review and reference implementation,” _Proc. Int. Conf. Comput. Ind. Eng. CIE_, vol. 2018-Decem, no. July, 2018, doi: 10.1016/j.cie.2019.07.023.
[16] M. Shamout, “Understanding blockchain innovation in supply chain and logistics industry,” _Int. J. Recent Technol. Eng._, vol. 7, no. 6, pp. 616–622, 2019.
[17] G. Perboli, S. Musso, and M. Rosano, “Blockchain in Logistics and Supply Chain: A Lean Approach for Designing Real-World Use Cases,” _IEEE Access_, vol. 6, pp. 62018–62028, 2018, doi: 10.1109/ACCESS.2018.2875782.
[18] A. Azaria, A. Ekblaw, T. Vieira, and A. Lippman, “MedRec: Using blockchain for medical data access and permission management,” _Proc. 2016 2nd Int. Conf. Open Big Data (OBD)_, pp. 25–30, 2016, doi: 10.1109/OBD.2016.11.
[19] “MediLedger - Blockchain solutions for Pharma companies.” [Online]. Available: https://www.mediledger.com/. [Accessed: 04-Nov-2019].
[20] P. Novotny et al., “Permissioned blockchain technologies for academic publishing,” _Inf. Serv. Use_, vol. 38, no. 3, pp. 159–171, 2018, doi: 10.3233/ISU-180020.
[21] K. Gai, K. K. R. Choo, and L. Zhu, “Blockchain-Enabled reengineering of cloud datacenters,” _IEEE Cloud Comput._, vol. 5, no. 6, pp. 21–25, 2018, doi: 10.1109/MCC.2018.064181116.
[22] J. H. Park, J. Y. Park, and E. N. Huh, “Block Chain Based Data Logging and Integrity Management System for Cloud Forensics,” pp. 149–159, 2017, doi: 10.5121/csit.2017.71112.
[23] R. Kumar and R. Tripathi, “Traceability of counterfeit medicine supply chain through Blockchain,” _2019 11th Int. Conf. Commun. Syst. Networks (COMSNETS)_, pp. 568–570, 2019, doi: 10.1109/COMSNETS.2019.8711418.
[24] “(No Title).” [Online]. Available: https://www.pwc.co.uk/healthcare/pdf/health-blockchain-supplychain-report v4.pdf. [Accessed: 13-Oct-2020].
[25] J. H. Tseng, Y. C. Liao, B. Chong, and S. W. Liao, “Governance on the drug supply chain via gcoin blockchain,” _Int. J. Environ. Res. Public Health_, vol. 15, no. 6, 2018, doi: 10.3390/ijerph15061055.
[26] F. Jamil, L. Hang, K. H. Kim, and D. H. Kim, “A novel medical blockchain model for drug supply chain integrity management in a smart hospital,” _Electron._, vol. 8, no. 5, pp. 1–32, 2019, doi: 10.3390/electronics8050505.
[27] A. Kumar, D. Choudhary, M. S. Raju, D. K. Chaudhary, and R. K. Sagar, “Combating Counterfeit Drugs: A quantitative analysis on cracking down the fake drug industry by using Blockchain technology,” _2019 9th Int. Conf. Cloud Comput. Data Sci. Eng._, pp. 174–178, 2019, doi: 10.1109/confluence.2019.8776891.
[28] K. Fan, Y. Ren, and Z. Yan, “on Blockchain,” _2018 IEEE Int. Conf. Internet Things, IEEE Green Comput. Commun., IEEE Cyber Phys. Soc. Comput., IEEE Smart Data_, pp. 1349–1354, 2018, doi: 10.1109/Cybermatics.
[29] C. Thatcher and S. Acharya, “Pharmaceutical uses of Blockchain Technology,” _Int. Symp. Adv. Networks Telecommun. Syst. (ANTS)_, pp. 1–6, 2019, doi: 10.1109/ANTS.2018.8710154.
[30] A. Kamilaris, A. Fonts, and F. X. Prenafeta-Boldύ, “The rise of blockchain technology in agriculture and food supply chains,” _Trends Food Sci. Technol._, vol. 91, pp. 640–652, 2019, doi: 10.1016/j.tifs.2019.07.034.
[31] J. Chod, N. Trichakis, G. Tsoukalas, H. Aspegren, and M. Weber, “On the Financing Benefits of Supply Chain Transparency and Blockchain Adoption,” pp. 1–35, 2019.
[32] A. Arena and C. Vallati, “BRUSCHETTA: An IoT Blockchain-Based Framework for Certifying Extra Virgin Olive Oil Supply Chain,” 2019, doi: 10.1109/SMARTCOMP.2019.00049.
[33] “Hyperledger Caliper | Caliper is a blockchain performance benchmark framework, which allows users to test different blockchain solutions with predefined use cases, and get a set of performance test results.” [Online]. Available: https://hyperledger.github.io/caliper/. [Accessed: 13-Oct-2020].
**Authors Profile**
**U. Padmavathi** received her M.E. degree in Computer Science & Engineering from Annamalai University, Chidambaram, India, in 2011. She is currently pursuing her Ph.D. at the National Institute of Technology Puducherry, Karaikal, India. Her research interests include blockchain, networks and security.

**Narendran Rajagopalan** completed his Ph.D. at NIT Tiruchirappalli in 2013 and is currently serving as Assistant Professor and Head of the Department of Computer Science and Engineering, National Institute of Technology Puducherry, India. His research interests include networking, security and quality of service.
Received December 21, 2020, accepted January 3, 2021, date of publication January 6, 2021, date of current version January 28, 2021.
_Digital Object Identifier 10.1109/ACCESS.2021.3049544_
# QIACO: A Quantum Dynamic Cost Ant System for Query Optimization in Distributed Database
SAYED A. MOHSIN 1, SAAD MOHAMED DARWISH 1, AND AHMED YOUNES 2
1Department of Information Technology, Institute of Graduate Studies and Research, Alexandria University, Alexandria 21526, Egypt
2Department of Mathematics & Computer Science, Faculty of Science, Alexandria University, Alexandria 21568, Egypt
Corresponding author: Sayed A. Mohsin (sayed.abdelmohsin@gmail.com)
**ABSTRACT Query optimization is considered the most significant part of a distributed database model.**
The optimizer tries to find an optimal join order, which reduces the query execution cost. Several factors may affect the cost of query execution, including the number of relations, communication costs, resources, and access to large distributed data sets. The success of a processed query depends heavily on the search methodology implemented by the query optimizer. Query processing is considered an NP-hard problem, and many researchers are focusing on it, trying to find an appropriate algorithm to seek an ideal solution, especially when the size of the database increases. In the case of large queries, classical heuristic methods such as ant colony and genetic algorithms cannot cover the whole search space and may fall into a local minimum. In this paper, the quantum-inspired ant colony algorithm (QIACO), a hybrid probabilistic strategy, is utilized to improve the query join cost in the distributed database model. The ability of quantum computing to diversify covers the query's large search space, which helps in selecting the best trail, improves the slow convergence speed, and avoids falling into a local optimum. Using this strategy, the algorithm aims to find an optimal join order which minimizes the total execution time. Experimental results show that the proposed model converges faster, with better goodness, than the classic ant colony model for the same number of ants.
**INDEX TERMS** Distributed database system, quantum computing, query optimization, ant colony
optimization.
**I. INTRODUCTION**
A Distributed Database is a group of interrelated entities that are physically distributed over a network to improve the performance, reliability, availability and modularity of distributed systems [1]. Optimizing queries in databases,
centralized or distributed, continues to be an important issue
and main problem in commercial and academic fields for
quite a long period of time [2]. Many approaches have been
discussed earlier on query optimization that uses different
technologies, but suffer from the problem of dimension and
accuracy [3], [4]. The importance of optimization arises from the flexibility provided by modern user interfaces to databases, which help users specify queries easily and effectively. The purpose of the optimizer, in this case, is to determine the best Query Execution Plan (QEP) from among many equivalent QEPs, one that reduces the execution cost with less time complexity and utilizes minimal resources [5].

(The associate editor coordinating the review of this manuscript and approving it for publication was Radu-Emil Precup.)
With a large number of entities (large queries), the number of equivalent QEPs increases exponentially, and the optimizer cannot explore all the query plans in such a huge search space. In this case, the selection of the best QEP by applying a search strategy is classified as an NP-hard optimization problem [6], [7].
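The blow-up can be made concrete: even restricting attention to left-deep join trees, n relations admit n! orderings (bushy trees admit even more). A small illustrative sketch, not taken from the paper:

```python
from math import factorial

def left_deep_plans(n: int) -> int:
    """Number of left-deep join orderings of n relations: n!.
    Bushy plans are even more numerous, so exhaustive enumeration
    quickly becomes infeasible as n grows."""
    return factorial(n)
```

Already at n = 10 there are 3,628,800 candidate left-deep orders.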
The search strategy typically falls into one of three categories – exhaustive search, heuristic-based or randomized [2], [4]. Exhaustive Search algorithms have exponential
worst-case running time and exponential space complexity,
which can lead to an algorithm requiring an infeasible amount
of time to optimize large user queries [5]. Since exhaustive
algorithms enumerate over the entire search space the algorithm will always find the optimal plan based upon the given
cost model. The traditional dynamic programming (DP) enumeration algorithm is a popular exhaustive search algorithm,
which has been used in a number of commercial database
management systems [8].
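Exhaustive DP enumeration can be sketched in Held-Karp style over subsets of entities. The sum-of-edge-costs model below mirrors the Hamiltonian-path formulation used in this paper's problem definition; it is an illustrative toy, not the DP used in any specific DBMS:

```python
from itertools import combinations

def best_join_order_cost(C):
    """Exhaustive dynamic programming over join orders, where C[i][j] is
    the join cost between entities i and j and a plan's cost is the sum
    of costs along the join sequence. Exponential in n, as the text notes."""
    n = len(C)
    # dp[(subset, last)] = cheapest cost of joining `subset`, ending at `last`
    dp = {(frozenset([i]), i): 0 for i in range(n)}
    for size in range(2, n + 1):
        for subset in combinations(range(n), size):
            s = frozenset(subset)
            for last in subset:
                rest = s - {last}
                dp[(s, last)] = min(dp[(rest, prev)] + C[prev][last]
                                    for prev in rest)
    full = frozenset(range(n))
    return min(dp[(full, last)] for last in range(n))
```

Because every subset is enumerated, the running time is O(2^n · n^2), which is exactly the exponential behaviour described above.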
**_Heuristic-based algorithms_** were proposed with the intention of addressing the exponential running time problem of exhaustive enumeration algorithms. Heuristic-based algorithms follow a particular heuristic or rule in order to guide the
search into a subset of the entire search space [4]. Typically,
these algorithms have polynomial worst-case running time
and space complexity but the quality of the plans obtained
can be orders of magnitude worse than the best possible
plan. Iterative Dynamic Programming (IDP) is an example of a heuristic-based algorithm [9].
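A minimal heuristic of this kind is a greedy nearest-neighbour rule over the join-cost matrix: polynomial time, but possibly suboptimal. This is an illustrative sketch under the same toy cost model, not IDP itself:

```python
def greedy_join_order(C, start=0):
    """Greedy heuristic: always join next the unvisited entity with the
    cheapest join cost from the last entity. Runs in O(n^2) but may miss
    the optimal order, as the text notes for heuristic-based algorithms."""
    n = len(C)
    order, visited = [start], {start}
    while len(order) < n:
        last = order[-1]
        nxt = min((j for j in range(n) if j not in visited),
                  key=lambda j: C[last][j])
        order.append(nxt)
        visited.add(nxt)
    return order
```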
**_Randomized algorithms_** consider the search space as a set of points, each of which corresponds to a unique QEP [2]. A set of moves M is defined as a means to transform one point in space into another, i.e., a move allows the algorithm to jump from one point in space to another. If a point p can be reached from a point q using a move m ∈ M, then we say that an edge exists between p and q. Randomized models and algorithms have been applied with success to several optimization issues: Simulated Annealing (SA), Iterative Improvement (II), Genetic Algorithms (GA), and Hybrid Swarm Algorithms [39]–[42] have been suggested to optimize large-scale recursive queries [10].
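The move-based search can be sketched as Iterative Improvement over join orders, with a random swap as the move set M. The cost model (sum of adjacent join costs) and the parameter values are illustrative assumptions:

```python
import random

def iterative_improvement(C, order, moves=200, seed=1):
    """Randomized search: apply random swap moves to a join order and keep
    a move only if it lowers the plan cost (Iterative Improvement).
    C[i][j] is the join cost between entities i and j."""
    def cost(o):
        return sum(C[o[k]][o[k + 1]] for k in range(len(o) - 1))
    rng = random.Random(seed)
    best, best_cost = list(order), cost(order)
    for _ in range(moves):
        i, j = rng.sample(range(len(best)), 2)   # a random move m in M
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        if cost(cand) < best_cost:               # accept improving moves only
            best, best_cost = cand, cost(cand)
    return best, best_cost
```

Simulated Annealing differs only in that it sometimes accepts worsening moves, with a probability that decreases over time.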
The Ant Colony Optimization (ACO) algorithm is an apt
and effective solution for optimizing query in distributed
database because of its features and characteristics, including its robustness, global optimization, parallelism obtained
from the ability to act concurrently and independently, and
capability to integrate with other methods [3]. To utilize
ACO algorithm for addressing the issue associated with query
optimization, the issue is described as a graph. In this case the graph is symbolized as G = (N, E), where the parameters N and E formulate the number of entities (tables) and the
relations (edges) between these entities. The edges that link
the nodes together on the graph G represent the join relations
among entities. In such a case, the purpose of the query
optimizer would be to seek out the best Hamiltonian path
for G.
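On this graph, the standard ACO construction step picks the next entity at random with probability proportional to pheromone and heuristic desirability, and pheromone evaporates between iterations. A hedged sketch of those two rules; the parameter values are assumptions:

```python
import random

def next_node(current, unvisited, tau, eta, alpha=1.0, beta=2.0, rng=random):
    """ACO transition rule: choose next node j with probability
    proportional to tau[i][j]**alpha * eta[i][j]**beta
    (pheromone strength times heuristic desirability)."""
    weights = [tau[current][j] ** alpha * eta[current][j] ** beta
               for j in unvisited]
    total = sum(weights)
    r, acc = rng.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]

def evaporate(tau, rho=0.5):
    """Evaporation step of the pheromone update: tau <- (1 - rho) * tau."""
    return [[(1 - rho) * t for t in row] for row in tau]
```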
The most significant advantage of quantum computing is
its ability to potently resolve specific issues faster and more
efficiently than classical computing, such as problems with
a high computational cost [11]. Quantum-inspired exploration procedures employ the ability of parallel processing
by adopting the superposition principle to overcome the
limitation of the classical mechanism and to fulfil a higher
performance [12]. It is noteworthy that superposition is the
aptitude of a quantum system to be in numerous positions
(states) simultaneously while waiting for measuring. It is
often customary to employ such ability of carrying out parallel processing to solve issues that require the exploration of
huge solutions spaces.
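Superposition and measurement can be simulated classically in a few lines: for a state a|0⟩ + b|1⟩, the outcome probabilities are |a|² and |b|². This is a generic illustration, not the paper's algorithm:

```python
import random

def measure(a, b, rng=random):
    """Simulate measuring a qubit a|0> + b|1>: outcome 0 with
    probability |a|**2, outcome 1 with probability |b|**2
    (assuming |a|**2 + |b|**2 == 1)."""
    return 0 if rng.random() < abs(a) ** 2 else 1
```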
This paper is a substantial extension of our conference paper [15]. Compared with that shorter version, further details of the suggested method are presented, and a more extensive performance evaluation is conducted. This paper also gives a more comprehensive literature review to introduce the background of the offered method and make the paper more self-contained; it therefore provides a more comprehensive and systematic report of the previous work. This paper investigates how the Quantum-Inspired Ant Colony Optimization (QIACO) algorithm can be used to overcome the problem of join query optimization in distributed databases for search spaces where entities (tables) are not replicated, using total time as the cost model. Because the processing of queries is considered an NP-hard problem, current traditional approaches, especially as the size of the database increases, suffer from large computational cost, non-convergence to a global optimum, and premature convergence. To address these problems, the Quantum-Inspired Ant Colony (QIACO) paradigm is used to try to reach optimum query optimization. Here, quantum inspiration is employed to change the seeking procedure used by the classical ant colony algorithm to move from one node to another: instead of using a probabilistic mechanism while building the ant solution, our algorithm uses a quantum partial negation gate, controlled by pheromone values, to control the ant movement. Our model was tested using a synthetic data set and modified TPC-H benchmark queries. This paradigm is able to improve the slow convergence speed and avoid falling into a local optimum. The results show that our model behaves better than the classical model, especially for queries containing many entities.
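One way to picture a pheromone-controlled partial negation is to let the pheromone level set the gate angle, so that stronger trails put more amplitude on the "take this edge" state. The linear pheromone-to-angle mapping below is an assumption for illustration only; the paper's exact gate parametrization may differ:

```python
import math

def move_probability(tau, tau_max):
    """Illustrative pheromone-to-amplitude mapping (assumption): a partial
    negation of |0> by angle theta = (pi/2) * tau / tau_max leaves amplitude
    sin(theta) on |1>, so the probability of taking the edge, sin(theta)**2,
    grows from 0 (no pheromone) to 1 (maximum pheromone)."""
    theta = (math.pi / 2) * (tau / tau_max)
    return math.sin(theta) ** 2
```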
The rest of this paper is divided into four sections. ‘‘Section 2’’ formulates the ACO algorithm and describes the basics of quantum computing used in the proposed algorithm. ‘‘Section 3’’ reviews the previous related works. ‘‘Section 4’’ presents our algorithm for query optimization. ‘‘Section 5’’ describes the experimental results that evaluate the algorithm. The paper concludes with ‘‘Future Works’’ and ‘‘Conclusion’’ sections.
**II. PRELIMINARIES**
_A. ANT COLONY OPTIMIZATION_
Ant colony optimization (ACO) is one of the many approaches of swarm intelligence, a field wherein specialists consider the collective behavior of insects as an inspiration for approaches to mimic. At first, ACO was utilized to find a solution for the traveling salesman and quadratic assignment problems [16]. Ants solve their issues by traversing the graph that represents the problem and leaving behind pheromone to lead the remaining ants. Pheromone trails give the ants a chance to cooperate and to benefit from the experience of other ants by providing positive feedback. On the contrary, negative feedback, represented by pheromone evaporation, is needed to avoid stagnation. The first ACO algorithm, called Ant System, was utilized to solve the TSP [16]. Many different ACO techniques and algorithms have been created and suggested since then. One of the main features of ACO is that, at each iteration, the pheromone values raised by every ant are modified by the ants at the same site that
-----
provide a solution. The pheromone τij, attached to the edge
joining nodes i and j, is updated as follows [16]:
�m
_τij = (1 −_ _ρ) .τij +_ _k=1_ _[�τ][ k]ij_ (1)
where ρ is the evaporation rate for pheromone quantity �τij[k]
placed on edge (i, j) by ant k from m ants:
_�τ_ _[k]ii_ [=]
_Q_
_if ant k used edge(i, j) in its tour,_
_Lk_
0 _otherwise_
(2)
where Q is a constant and Lk is the length of the round created
by ant k. When building the solution, the ants choose the
next node that should be visited according to a randomized
mechanism. When ant k is in node i and has so far constructed
the partial solution s[p], the probability of going to node j is
given by:
_p[k]ij_ [=]
_τij[α][.η]ij[β]_
_if cij ∈_ _N (S[p]),_
�
_cil_ ∈N (S[p]) _[τ]ij[ α][.η]ij[β]_
0 _otherwise_
(3)
where N(s^p) is the set of feasible nodes. The relative significance of the pheromone in contrast to the heuristic information ηij is controlled by the parameters α and β, with ηij obtained from the distance dij by:

ηij = 1 / dij    (4)

where dij formulates a distance or cost from node i to the connected node j.

_B. QUANTUM INSPIRED EVOLUTIONARY ALGORITHMS_

Quantum Inspired Evolutionary Algorithms (QIEA) are population-based metaheuristics that draw inspiration from quantum-mechanical principles to improve the efficiency of the search in evolutionary optimization algorithms. The parallelism offered by quantum computing, with its simultaneous evaluation of all represented states, has driven the development of models that integrate features of quantum computing with evolutionary computation [19]. These models are designed to execute on classical computers, not on quantum computers, and are therefore categorized as ‘‘quantum inspired’’. One of the early attempts was made by Han and Kim [19], who proposed a general model of QIEA.

Rather than the binary, numeric, and symbolic representations used in classical computers, QIEA uses the Q-bit (qubit) as its smallest unit of data. A qubit may be in the ‘‘1’’ state, in the ‘‘0’’ state, or in any superposition of ‘‘1’’ and ‘‘0’’. The qubit’s state is described mathematically as [20]:

|ψ⟩ = a|0⟩ + b|1⟩    (5)

where a and b are complex numbers that give the probability amplitudes of the corresponding states [20]. The probability that the qubit is found in the state |0⟩ is |a|², and the probability that it is found in the state |1⟩ is |b|² [20]. Normalization of the state to unity ensures

|a|² + |b|² = 1    (6)

The qubit’s state can be modified by an operation called a quantum gate. A quantum gate is reversible and can be represented as a unitary operator U acting on the qubit basis states, satisfying U†U = UU† = I, where U† is the complex conjugate transpose of U [20]. There are many quantum gates, such as the NOT gate, the rotation gate, and the Hadamard gate [18].

**FIGURE 1. Overall structure of QIEA [19].**

Figure 1 shows the overall structure of QIEA, where Q(t) is the Q-bit representation of the individuals in the search population at time t, P(t) is the solution acquired by measuring the states of Q(t), and B(t) is the best solution at time t. More details regarding the complete steps of QIEA can be found in [19].

_C. PROBLEM DEFINITION_

A distribution allocation scheme is used in distributed databases to dispense data, which may be propagated to different locations. The objective of the query optimizer, in this case, is to provide an execution plan (chosen from among many equivalent plans) that reduces the cost of query execution, based on either response time or total time, to a minimum. The solution space for a query that contains many entities and, at the same time, many database locations grows exponentially. Searching for the best query execution plan then becomes computationally difficult and is classified as an NP-hard optimization problem. Finding the best execution plan depends on the search strategy used to explore the solution space.

The query search space can be represented as a graph G = (N, E) with a set of vertices N, representing the entities in the query, and a set of edges E, representing the joins between entities, such that each edge e ∈ E is assigned a cost Ce. Let H be the set of all Hamiltonian cycles, a cycle that visits each
vertex exactly once, in G. The optimizer’s problem is to find the path h ∈ H in G such that the sum of the costs Ce is minimized. Given a set of n entities enumerated as 0, 1, 2, . . ., n−1 to be joined, with the join cost between entities i and j given by Cij, we introduce a decision variable yij for each pair (i, j) such that

yij = 1 if entity j is joined to entity i, and yij = 0 otherwise.

The objective function in this case is

min Σᵢⁿ Σⱼⁿ Cij · yij
Here, this objective function is guided by two parameters: the number of entities n and the join cost C, which is affected by the number of database locations, because the entities are transferred between different locations.
To establish a solution for the query optimization problem, three important parts should be studied: the search space, the search strategy, and the cost model. The search space refers to the generation of sets of alternative, equivalent QEPs that differ in the execution order of the operators. The search strategy refers to the algorithms applied to explore the search space and determine the best QEP, based on join selectivity and join sites, so as to reduce the cost of query optimization. The cost model refers to the model used to predict the cost of every QEP. In this paper, a quantum-inspired ant colony algorithm is used as the search strategy, together with a dynamic cost technique [15], to identify the best QEP. Here, the ant colony algorithm is used to identify the routing path, and quantum computing is used to enrich the search process for identifying the join entity order.
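The combinatorial growth described above can be illustrated with a short, self-contained sketch; the entity names and the values in `COST` are hypothetical, and a join order is modeled as a Hamiltonian path rather than a cycle for simplicity:

```python
from itertools import permutations

# Hypothetical symmetric join-cost matrix C_e for 4 entities.
COST = {
    ("A", "B"): 10, ("A", "C"): 25, ("A", "D"): 40,
    ("B", "C"): 15, ("B", "D"): 30, ("C", "D"): 5,
}

def join_cost(i, j):
    """Cost of the edge joining entities i and j (undirected)."""
    return COST.get((i, j)) or COST.get((j, i))

def plan_cost(order):
    """Total cost of one left-deep join order (a Hamiltonian path)."""
    return sum(join_cost(a, b) for a, b in zip(order, order[1:]))

entities = ["A", "B", "C", "D"]
plans = list(permutations(entities))   # n! candidate orders: 24 for n = 4
best = min(plans, key=plan_cost)
print(len(plans), best, plan_cost(best))   # 24 ('A', 'B', 'C', 'D') 30
```

Exhaustively enumerating all n! orders is feasible only for tiny n, which is why randomized and quantum-inspired search strategies are needed.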
**III. RELATED WORK**
The key component of a query optimizer is the employed search methodology. There is an extensive and rich literature describing the optimization process and studying the search techniques used, indicating its significance. There are mainly three search approaches used to determine the best QEP: exhaustive, heuristic-based, and randomized strategies. Dynamic Programming (DP) is one of the best-known exhaustive search strategies, and it is used as the search technique in most commercial databases. The basic DP algorithm for query optimization is introduced in [9], [21]. The optimizer repeatedly creates composite search plans from smaller sub-plans until the overall plan is complete. Through pruning, a high-cost plan is discarded early whenever an equivalent alternative plan with a lower cost exists. Although this technique performs better than a randomized strategy for queries with a small number of entities, randomized strategies are a much better fit for queries with a large number of entities.
Iterative Improvement (II) is one of the best-known techniques categorized as randomized algorithms [22]. II initially chooses a random starting point. The solution is then improved by repeatedly accepting random downhill moves until a local minimum is reached. This procedure is repeated until a predetermined halting condition is met, at which point the algorithm returns the solution with the minimal cost found. One of the primary disadvantages of II is that the final outcome can be unsatisfactory even when a large number of starting points is used. When the solution set includes a large number of high-cost local minima, II easily gets trapped in one of them.
Genetic Algorithm (GA), another randomized algorithm, is introduced in [23]. There, GA is presented as a technique for solving the query optimization problem and is tested against other methodologies; however, that methodology does not consider modified crossover and mutation processes. In [24], the authors combined GA with the Min-Max Ant System to create a query optimization methodology that enhances query efficiency. The advantage of parallelism shows clearly when GA and the Max-Min Ant System are run together over a large number of relations. In comparison with other algorithms, this approach requires less optimization time, and the execution time of the generated optimal plan is also reduced. Its shortcoming is that the computation time and computation cost increase because two algorithms are processed in parallel.
As one of the stochastic algorithms, ACO has been used as a search methodology for optimizing queries in both centralized and distributed database environments [25]. In [26], the authors proposed a multi-colony ant algorithm to improve join queries in distributed systems in which tables can be replicated but cannot be partitioned or fragmented. In this algorithm, four types of ants cooperate to create an execution plan, so each iteration involves four ant colonies. To locate the optimal plan, every ant performs dynamic decision-making. Two kinds of cost models, centered on total time and response time, are used to assess the quality of the generated plan. Although this algorithm reduces the total time and increases the convergence speed, it performs worse for small queries and can fall into a local optimum.
Quantum-inspired evolutionary algorithms are one of the key areas of research linking quantum computing with evolutionary algorithms. The theoretical applications of quantum-inspired evolutionary algorithms in various fields were presented for the first time in [28]. In [29], the authors apply QIEA to locating minimum assignment costs in the quadratic assignment problem (a mathematical model for assigning a collection of economic activities to a collection of locations). The main contribution of that paper is to show how the algorithm is tailored to the problem, including crossover and mutation operators, and to set the overall framework for applying quantum ideas in varied applications. In addition, QIEA has been applied along with genetic programming to improve the prediction accuracy of the toxicity degree of chemical compounds [30]. In that work, the accuracy of the linear equation used to calculate the degree of toxicity increased as a result of using genetic programming; moreover, quantum computing helped in improving the selection of the best-of-run individuals and in handling parsimony pressure to decrease the complexity of solutions. Also, in [37] the authors created a new technique to find optimal threshold values at different levels of thresholding for color images, using a minimum cross-entropy-based thresholding method as the objective function. The results are described in terms of the best threshold value, the fitness measure, and the computational time of each technique at various levels. The convergence curves show that using quantum-inspired concepts along with the ACO technique outperforms the results obtained by the classical ACO technique.
Our proposed algorithm represents an extension of the work presented in [15]. That work employs the total query time, calculated for a distributed query optimization model applied to non-replicated entities, as the model’s cost. The processing and communication costs of the query plan are calculated dynamically and depend on the path used by the ants and the locations of the entity sites. No fixed cost exists on the edges of the problem graph; instead, the cost is calculated dynamically as an intermediate outcome while the query’s joins are applied. In the suggested model, a quantum gate is used as a replacement for the ACO stochastic search to enhance the total cost and accelerate search convergence. The suggested model can also be used as an optimization technique for SPARQL queries. SPARQL allows users to write queries against what can loosely be called ‘‘key-value’’ data or, more specifically, data that follow the Resource Description Framework (RDF) specification, where RDF is a directed, labeled graph data format for representing information on the Web.
**IV. THE PROPOSED TECHNIQUE**
Our developed methodology for optimizing queries in a distributed database environment is presented based on the implementation of the search space, the method used to obtain the cost, and the search methodology used to find the best QEP. The search space implementation and the cost calculation follow the same concepts as [15], but the search methodology employs quantum computing to get the best QEP. Figure 2 shows the main components of the suggested optimization model and the way these components are linked together. In this figure, the SQL statement is analyzed to identify the involved entities (tables). Then, the database statistics associated with the identified tables are extracted from the database catalog tables. These statistics include the fields’ lengths and types, the entity tuple length, the number of tuples in every entity, and the number of pages required to store the entity data. The entities’ site locations and the relations between entities are also obtained from the database catalog tables. The next step is to use the information obtained from the catalog tables to create the search space. Finally, our search method is applied to the search space to obtain the best join order. The major components of the model are described in the following sections.
_A. COST MODEL_
As in [15], the cost calculation method is based on total time (the sum of all component costs). Here, the total cost is obtained as the sum of the I/O cost, calculated for all join processes, and the data transfer (communication) cost, incurred whenever transferring entities between sites is necessary:

Total Cost = IOjoin + COM_Ri    (7)

where IOjoin is the cost of the join process and COM_Ri represents the cost of transferring entity Ri between site locations. The IOjoin cost is computed as:

IOjoin = (Pjoin + Pwrite) × IO(Sk)    (8)

where IO(Sk) is the disk I/O time at location Sk, Pwrite is the page count required to save the join outcome, and Pjoin is the page count accessed to perform the join between Ri and Rj. Pjoin is computed as:

Pjoin = PRi × PRj    (9)

where PRi and PRj are the page counts of entities Ri and Rj. Pwrite is computed as:

Pwrite = card(Ri join Rj) × len(Ri join Rj) / ps    (10)

where card(Ri) is the tuple count of Ri, len(Ri) is the average tuple length in Ri, and ps is the page size. The cost required to transfer relation Ri from location Sk to location Sp is computed by:

COM_Ri = card(Ri) × len(Ri) × COM(Sk, Sp)    (11)

where COM(Sk, Sp) is the time needed to move one byte from location Sk to location Sp.
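The cost model of Eqs. (7)-(11) can be sketched directly in code. The constants below reuse the page size and timing rates reported later in the experimental section; the helper names and the use of `ceil` when counting pages are our own assumptions:

```python
import math

PS = 1024            # page size ps, in bytes
IO_TIME = 0.98e-4    # I/O access time IO(S_k) per page
COM_TIME = 0.98e-3   # COM(S_k, S_p): time to move one byte between sites

def pages(card, tuple_len, ps=PS):
    """Pages needed to store `card` tuples of average length `tuple_len`."""
    return math.ceil(card * tuple_len / ps)

def p_join(p_ri, p_rj):
    """Eq. (9): pages accessed to join R_i with R_j."""
    return p_ri * p_rj

def p_write(card_join, len_join, ps=PS):
    """Eq. (10): pages needed to save the join outcome."""
    return math.ceil(card_join * len_join / ps)

def io_join(p_ri, p_rj, card_join, len_join, io_time=IO_TIME):
    """Eq. (8): I/O cost of a single join at site S_k."""
    return (p_join(p_ri, p_rj) + p_write(card_join, len_join)) * io_time

def com_ri(card_ri, len_ri, com_time=COM_TIME):
    """Eq. (11): cost of shipping relation R_i from S_k to S_p."""
    return card_ri * len_ri * com_time

def total_cost(io_costs, com_costs):
    """Eq. (7): total time = sum of join I/O costs plus communication costs."""
    return sum(io_costs) + sum(com_costs)

print(total_cost([io_join(3, 4, 100, 50)], [com_ri(100, 50)]))
```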
_B. BUILD SEARCH SPACE_
The catalog in database is used to primarily store the schema,
which contains information regarding the tables, indices and
views [1]. The information about tables includes the table
name, the field names, the field length, the field types and
the integrity constraints between tables. Various statistics are
also stored in the catalog such as the number of distinct
keys in a given attribute and the cardinality of a particular
relation. In addition, the catalog includes information about
the resident site for tables, the number of sites (locations)
in the system along with their identifiers and the replication
status. This information extracted from the catalog tables will
be employed to build the search space. In our algorithm,
the search space will be implemented as a graph G _(N, A)_
=
**FIGURE 2. Quantum-inspired ant colony model for distributed database query optimization.**
where N represents the collection of vertices (nodes) and A represents the collection of edges (arcs) [4]. Every vertex in the graph denotes an entity (table) in the query specification. Two graph vertices are linked by an edge if the corresponding tables are joined in the query. Every vertex in the graph is represented as a class data structure and has a set of attributes such as the number of tuples, the tuple length, the keys, and the site location. Figure 3 shows a graph representation of a search space that contains a set of entities and the relations among them.
**FIGURE 3. Graph representation for the search space.**
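The entity graph of Figure 3 can be sketched as a small class-based structure; the class and attribute names below are illustrative, not prescribed by the paper:

```python
from dataclasses import dataclass, field

# Each vertex is an entity (table) carrying its catalog statistics;
# edges link entities that are joined in the query.
@dataclass
class EntityNode:
    name: str
    tuples: int          # cardinality of the relation
    tuple_len: int       # average tuple length in bytes
    site: int            # site (location) where the entity resides
    keys: list = field(default_factory=list)

class SearchSpace:
    def __init__(self):
        self.nodes = {}      # N: entity name -> EntityNode
        self.edges = set()   # A: unordered pairs of joined entities

    def add_entity(self, node):
        self.nodes[node.name] = node

    def add_join(self, a, b):
        self.edges.add(frozenset((a, b)))

    def joined(self, a, b):
        return frozenset((a, b)) in self.edges

g = SearchSpace()
g.add_entity(EntityNode("CUSTOMER", tuples=1500, tuple_len=180, site=1))
g.add_entity(EntityNode("ORDERS", tuples=15000, tuple_len=120, site=2))
g.add_join("CUSTOMER", "ORDERS")
print(g.joined("ORDERS", "CUSTOMER"))   # True: join edges are undirected
```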
_C. SEARCH STRATEGY_
The search methodology of our proposed model depends on the QIACO metaheuristic. The cost of each ant’s journey to locate the minimal spanning graph, which represents the query search space, is calculated dynamically while building that graph, as in [15]. The dynamic calculation of the query cost depends on an additional virtual vertex that is added to the graph to carry the intermediate join outcome. The cost, in this case, is calculated between the virtual node and the next chosen entity in the join order. In the proposed algorithm, each entity is reformed as a one-qubit representation in the form of Eq. (5), and a quantum partial negation gate is used to identify the next entity in the join order. The flow of QIACO is described in the following steps:
steps:
**Step 1. Initialization.** In this phase, all the parameters used in the model are initialized experimentally, based on the work in [27]. Minor changes were made to the constant values α, β, and ρ: the value of α is set to 3 instead of 1, the value of β to 2 instead of 5, and the value of ρ to 0.02 instead of 0.1. These changes increase the dependence of our model on the cost, rather than the pheromone, and give a better result when identifying the next entity in the join order. The number of ants is determined, and the pheromone trails are initialized. All pheromone values are initialized with an arbitrarily small value equal to 1/√(No. of entities). The query graph that links the tables (entities) is generated such that every table is connected to all other tables. In this phase all entities’ qubit probability amplitudes are initialized as ai = bi = 1/√2, which satisfies Eq. (5) and Eq. (6):

|ψi⟩ = (1/√2)|0⟩ + (1/√2)|1⟩

All the parameters of the algorithm are initialized in this phase according to Table 1.

**TABLE 1. Parameters for ACO algorithm [27].**

**Step 2.** For each ant, select a random entity and use it as the starting point of the ant’s journey. This entity is transferred to the virtual vertex, where it waits for the next entity to be chosen for the join process.

**Step 3.** Use the partial negation quantum gate to choose the next entity in the join sequence. The selection is performed by applying the negation gate, as an operator, on the qubits of all entities that have a connection path with the current entity. This operation is applied a number of times that depends on the amount of pheromone deposited on the path between the current entity and the connected entities. Then, the entity with the best probability is selected as the next entity. Let X be the Pauli-X gate, the quantum equivalent of the NOT gate, represented as:

X = [0 1; 1 0]    (12)

The c-th partial negation operator V is the c-th root of the X gate and can be calculated using diagonalization as follows:

V = X^(1/c) = (1/2) [1+t 1−t; 1−t 1+t]    (13)

where t = (−1)^(1/c), and

V† = (1/2) [1−t 1+t; 1+t 1−t]    (14)

V·V = V†·V† = X  and  V·V† = V†·V = I    (15)

This gate is represented as in Figure 4.

**FIGURE 4. Partial negation gate.**

Applying the operator V on a qubit d times is equivalent to:

V^d = (1/2) [1+t^d 1−t^d; 1−t^d 1+t^d]    (16)

so that if d = c, then V^d = X. When d = c = 2, this gives t² = −1, and so

V² = V·V = (1/2)[1+t 1−t; 1−t 1+t] · (1/2)[1+t 1−t; 1−t 1+t] = (1/2)[1+t² 1−t²; 1−t² 1+t²] = (1/4)[0 4; 4 0] = [0 1; 1 0] = X

In our model, the V gate is used as an operator and is conditionally applied c times on every entity’s qubit. The number of times, c, that the V gate is applied on an entity’s qubit
is based on the pheromone value and the join cost. Here, c is defined using Eq. (3) as:

c = τij^α · ηij^β    (17)

where τij^α is the pheromone trail on the arc connecting entity i with entity j, and ηij^β is the heuristic desirability, computed as the inverse of the intermediate join cost between entities i and j. Here, the total join cost is identified by Eq. (4) and Eq. (7). The amplitudes of each entity are updated at time t+1 as:

|ψi^(t+1)⟩ = [a^(t+1); b^(t+1)] = V [a^t; b^t]    (18)

After that, the suggested system uses the tensor product ⊗, a way of putting vector spaces together to obtain a larger vector space, so that for n entities:

|W⟩ = |ψ1⟩ ⊗ |ψ2⟩ ⊗ . . . ⊗ |ψn⟩    (19)

The vector obtained in Eq. (19) is then normalized so that the number of elements corresponds to the number of entities in the model. In this case, each element of the normalized vector represents the probability of the corresponding entity being the next entity in the query join order.
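The gate algebra of Step 3 can be verified numerically. The sketch below assumes t = exp(iπ/c), i.e. the principal c-th root of −1 (the paper only states that t is a c-th root of −1), and checks that V is unitary and that applying it c times yields X, before performing the amplitude update of Eq. (18) and the tensor product of Eq. (19):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)        # Pauli-X gate, Eq. (12)

def partial_negation(c):
    """Eq. (13): V, the c-th root of the X gate, with t = exp(i*pi/c)."""
    t = np.exp(1j * np.pi / c)
    return 0.5 * np.array([[1 + t, 1 - t], [1 - t, 1 + t]])

c = 4
V = partial_negation(c)
assert np.allclose(np.linalg.matrix_power(V, c), X)  # V^c = X (Eq. (16), d = c)
assert np.allclose(V @ V.conj().T, np.eye(2))        # V is unitary (Eq. (15))

# Eq. (18): update the amplitudes of one entity qubit (starting from |0> here).
psi = np.array([1, 0], dtype=complex)
psi = V @ psi

# Eq. (19): combine two entity qubits into one larger state vector.
W = np.kron(psi, psi)
print(np.round(np.abs(W) ** 2, 4))                   # measurement probabilities
```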
**Step 4.** Instead of selecting the entity with the highest probability in the normalized vector as the next entity in the join order, the roulette wheel method is used to give entities with small probabilities a chance to take part in building the query join order.
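Step 4’s roulette-wheel selection can be sketched as follows; the probability vector is illustrative:

```python
import random

def roulette_wheel(probs, rng=random.random):
    """Select an index with probability proportional to its weight, so
    low-probability entities still have a chance of being chosen."""
    total = sum(probs)
    pick = rng() * total
    running = 0.0
    for i, p in enumerate(probs):
        running += p
        if pick <= running:
            return i
    return len(probs) - 1   # guard against floating-point round-off

# Illustrative normalized vector for four candidate entities.
probs = [0.55, 0.25, 0.15, 0.05]
counts = [0, 0, 0, 0]
random.seed(7)
for _ in range(10_000):
    counts[roulette_wheel(probs)] += 1
print(counts)   # counts are roughly proportional to probs
```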
**Step 5.** After choosing the next entity, the join cost is determined and the outcome of the join process is transferred to the virtual vertex, which is afterwards used as the starting entity for the ant’s next cycle. Equations (7) to (11) are used to determine the join cost.
**Step 6.** Repeat steps 3 to 5 until all entities are handled, and then calculate the journey cost for the ant.
**Step 7.** Select the best journey cost among all the ants’ journeys.
**Step 8. Pheromone update.** When all the ants have built their solutions, the pheromone update is performed at the end of each cycle. Every pheromone amount is reduced, to simulate pheromone evaporation, and raised, to simulate the ants’ pheromone deposits on the trail. The modification of the pheromone on all graph arcs is done using Eq. (1). The modification of the pheromone also depends on Lk in Eq. (2), which symbolizes the total cost of the best tour created by ant k.
**Step 9.** Repeat steps 2 to 8 until the maximum number of iterations is reached; when the cycles are complete, the best trail over all ants is chosen.
The pseudo-code for the suggested model is presented in Algorithm 1 and the partial negation gate in Algorithm 2.
1) CONVERGENCE OF THE MODEL
In [38], the author proves that the convergence of ACO depends on the pheromone value τij from Eq. (1) and the heuristic
**Algorithm 1 QIACO**
(1) Initialize values for α, β, ρ, Q, Max-Iteration, Ant-Numbers // (step 1)
(2) Initialize pheromone by τij = 1/√(No. of entities) // (step 1)
(3) Initialize all qubits by |ψi⟩ = [1/√2; 1/√2] // (step 1)
(4) t = 0
(5) Loop (cycle < Max-Iteration)
(6)     t = t + 1
(7)     StartEntity = random beginning entity // (step 2)
(8)     Loop (ant < Ant-Numbers)
            partialStep = NegationGate(ant, StartEntity) // (step 3)
            NextEntity = ChooseNextEntity(partialStep) // (step 4)
            CalculateCost(StartEntity, NextEntity) // (step 5)
            StartEntity = NextEntity
(9)     EndLoop // (step 6)
(10) Identify the best trail for the ants. // (step 7)
(11) Modify the pheromones. // (step 8)
(12) EndLoop // (step 9)
(13) Identify the best trail for the solution

**Algorithm 2 Partial Negation Gate**
Function NegationGate(antX, currentEntity)
(1) QGate = [0 1; 1 0]
(2) For (entity = 1 to Number-of-Entities)
        If entity is visited then
            partialStep[entity] = 0.1 × 10⁻¹⁰ // very small value
            continue
        End If
        cost = CalcCost(currentEntity, entity)
        ph = GetPheromone(currentEntity, entity)
        tau = ph^α ∗ cost^β
        |ψi^(t+1)⟩ = QGate^tau ∗ |ψi^t⟩
        partialStep[entity] = |ψi^(t+1)⟩
(3) End For
(4) Return partialStep

**Algorithm 3 Choose Next Entity**
Function ChooseNextEntity(partialStep)
    NRV = NormalizeVector(partialStep)
    SelectedEntity = RouletteWheel(NRV)
    Return SelectedEntity
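Algorithms 1-3 can be combined into a compact runnable sketch. The join-cost matrix, the parameter values, the pheromone update rule (evaporation plus a Q/L deposit), and the use of a scalar τ^α·η^β attractiveness in place of the full qubit amplitude update are all simplifications of our own, not the paper’s exact implementation:

```python
import math
import random

ALPHA, BETA, RHO, Q = 3, 2, 0.02, 100
N = 5                                         # number of entities
random.seed(1)
COST = [[0 if i == j else random.randint(10, 100) for j in range(N)]
        for i in range(N)]
for i in range(N):
    for j in range(i):
        COST[i][j] = COST[j][i]               # symmetric join costs

tau = [[1 / math.sqrt(N)] * N for _ in range(N)]   # pheromone init (Step 1)

def negation_gate(current, visited):
    """Algorithm 2 stand-in: attractiveness of each entity, approximating
    the amplitude produced by applying the V gate tau-many times."""
    steps = []
    for e in range(N):
        if e in visited:
            steps.append(1e-11)               # visited entities get ~zero weight
        else:
            steps.append(tau[current][e] ** ALPHA *
                         (1 / COST[current][e]) ** BETA)
    return steps

def choose_next(steps):
    """Algorithm 3: roulette-wheel selection over the (unnormalized) weights."""
    pick, running = random.random() * sum(steps), 0.0
    for e, s in enumerate(steps):
        running += s
        if pick <= running:
            return e
    return len(steps) - 1

best_order, best_cost = None, float("inf")
for _ in range(100):                          # Step 9: iterate the cycles
    for _ant in range(2):                     # two ants per cycle
        order = [random.randrange(N)]         # Step 2: random start entity
        cost = 0
        while len(order) < N:                 # Steps 3-6: build the join order
            nxt = choose_next(negation_gate(order[-1], set(order)))
            cost += COST[order[-1]][nxt]
            order.append(nxt)
        if cost < best_cost:                  # Step 7: keep the best journey
            best_order, best_cost = order, cost
        for a, b in zip(order, order[1:]):    # Step 8: evaporation + deposit
            tau[a][b] = (1 - RHO) * tau[a][b] + Q / cost
            tau[b][a] = tau[a][b]
print(best_order, best_cost)
```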
information ηij from Eq. (4). The suggested model still depends on Eq. (1) to update the pheromone. Also, Eq. (4) represents the inverse of the cost, which is used in the suggested model to find the best route obtained by the ants. Therefore, the convergence of the suggested model is guaranteed.
2) COMPLEXITY ANALYSIS
As in [39], the computational complexity of most classical ant colony algorithms depends on the number of nodes n, the number of ants m, and the number of iterations T (the colony lifetime). Considering Equation (3) in more detail, we can notice that the computational complexity of the algorithm also depends on the parameters α and β. The computational complexity is given as:

O(T m n²(log₂α + log₂β))

In our proposed QIACO algorithm, a single quantum superposed state is prepared to encode each node n in the search space, so exploring all nodes in a single iteration takes O(n) time. The selection of the next entity in the join order depends on the negation gate, which is represented mathematically by a vector product taking O(n) time. This vector product is applied to all n nodes, so the selection process runs in O(n × n) = O(n²) time. The overall computational time for the quantum part of our model is therefore O(n) + O(n²), which reduces to O(n²). Hence, the computational complexity of our QIACO algorithm is:

O(T m n²(log₂α + log₂β)) + O(n²)

QIACO has a computational complexity greater than that of the classical ant colony algorithm, but gives better results, as explained in the experimental results section.
_D. LIMITATIONS_
1) The algorithm uses the total query time calculated for distributed query optimization as the model’s cost.
2) The algorithm applies only to non-replicated entities.
**V. EXPERIMENTAL RESULTS**
In this section, several experiments are presented to study the efficiency of our proposed model. Furthermore, to evaluate the performance, the accuracy of our proposed model is compared with traditional optimization techniques. The classical distributed database query optimizer in [15] is built on a modified version of the C# code implementing an Ant Colony Optimization (ACO) algorithm to solve the Traveling Salesman Problem (TSP), created by Microsoft MSDN Magazine [35]. The quantum version is implemented using C# along with MATLAB.
To test the proposed model, two types of datasets are used: synthetic and benchmark. The first group of tests, experiments 1, 2, and 3, is performed on a synthetic dataset randomly generated by a problem generator to simulate joins over different numbers of entities. The problem generator has two parts: a database generator and a query generator. The first part generates a synthetic database depending on the number of relations; during this part the cardinality, the tuple length, and the join attributes of the relations are defined.

**TABLE 2. TPC-H tables [36].**

**TABLE 3. Data distribution.**

The number of the site where each entity resides is also identified. The relation cardinalities, the tuple sizes, and the number of sites are randomly generated in the ranges [10, 100], [10, 50], and [2, 10], respectively. The second part generates a chain query depending on the schema produced by the first part, using the number of required joins as an input. For example, the generator can produce a query with four joins (e.g. QJ1, QJ2, QJ3, and QJ4). The queries generated in this way are random and do not depend on a specific application or database. The second group of tests, experiments 4 and 5, is performed on the TPC-H benchmark dataset, as used in [36]. TPC-H is a relational database that is, as in our experiments, distributed vertically over different site locations. It is a decision support benchmark consisting of a suite of business-oriented ad-hoc queries and concurrent data modifications, used to examine large volumes of data and execute queries with a high degree of complexity. Tables 2 and 3 show the table sizes as given in [36] and the data distribution over the different sites.
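The two-part problem generator can be sketched as below; the cardinality, tuple-size, and site-count ranges come from the text, while the field names and the chain-query representation are illustrative:

```python
import random

def generate_database(num_relations, seed=None):
    """Database generator: draw relation statistics and site count from the
    ranges stated in the text ([10, 100], [10, 50], and [2, 10])."""
    rng = random.Random(seed)
    num_sites = rng.randint(2, 10)
    relations = [
        {
            "name": f"R{i}",
            "cardinality": rng.randint(10, 100),
            "tuple_len": rng.randint(10, 50),
            "site": rng.randint(1, num_sites),
        }
        for i in range(num_relations)
    ]
    return relations, num_sites

def generate_chain_query(relations, num_joins):
    """Query generator: a chain query of num_joins joins over num_joins + 1
    relations, represented as a list of joined pairs."""
    names = [r["name"] for r in relations[: num_joins + 1]]
    return list(zip(names, names[1:]))      # e.g. QJ1..QJ4 for 4 joins

db, sites = generate_database(5, seed=42)
query = generate_chain_query(db, 4)
print(sites, query)
```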
All tests and experiments are conducted on a PC with an Intel Core i5 2.4 GHz processor and 8 GB DDR main memory running Microsoft Windows 7 Enterprise 64-bit. All experiments are run with the fixed parameters α = 3, β = 2, ρ = 0.02, as explained in Table 1. The page size, the network transfer rate, and the I/O access rate are set to 1024, 0.98 × 10⁻³, and 0.98 × 10⁻⁴, respectively.
In the first experiment, as shown in Figures 5, 6, 7, and 8, the minimum costs obtained by the quantum-inspired model are compared with the minimum costs obtained by classical ACO, as in [15], with a fixed number of sites equal to 5, a varying number of ants (from one to five), and entity counts of 5, 10, 15, and 20.

In Figure 5, a small search space results from the small number of entities (5), and QIACO with one ant is enough to cover the whole search space. The optimum solution (cost) is therefore obtained with a single ant, and adding extra ants to the model does not lead to a better solution.

In the case of 10, 15, and 20 entities, the search space becomes larger and adding more ants positively affects the
**FIGURE 5. Comparison between classical ACO and QIACO (5 entities).**
**FIGURE 6. Comparison between classical ACO and QIACO (10 entities).**
**FIGURE 7. Comparison between classical ACO and QIACO (15 entities).**
obtained result. The worst, average, and best costs for the different numbers of ants used in this experiment are summarized in Table 4. The performance is calculated as the improvement percentage between the average cost of ACO and the average cost of QIACO. From Table 4, we can conclude that the QIACO algorithm produces a better cost than ACO for different numbers of ants and different numbers of join entities. When the number of entities increases, the search space grows exponentially, and classical ACO with few iterations cannot cover it. Therefore, in the case of 15 and 20 entities, classical ACO with 300 iterations is compared with QIACO with only 100 iterations. Also, in this
**FIGURE 8. Comparison between classical ACO and QIACO (20 entities).**
**FIGURE 9. Comparison between classical ACO and QIACO (Fixed number**
of sites =1 and different entities from 3 to 15).
case, QIACO reaches a better cost than ACO. For a query with 5 entities, the search space contains few alternative solutions, and the improvement percentage of quantum ACO over classical ACO is not more than 13%. But when the number of entities increases, the corresponding search space grows exponentially and contains a huge number of alternative solutions; in this case the effect of the quantum search appears. Here, the improvement percentage of QIACO over classical ACO ranges from 77% to 99%.
In the second experiment, to test the efficiency of the proposed algorithm with one central database and different numbers of join entities, the number of sites is set to one and the number of join entities varies from 3 up to 15. Many database applications have complicated queries with a massive number of joins that may reach 100 or more. Moreover, some business applications, such as banking and retail systems, contain queries with fewer than 10 joins. The maximum number of joins used in [26] and [32] was 10 and 15, respectively; we therefore used a maximum of 15 joins in our experiment, which seems appropriate. Over 10 runs, a comparison is conducted between the average costs of classical ACO and QIACO with a fixed number of ants equal
**TABLE 4. Comparative results between ACO and QIACO.**
**FIGURE 10. Comparison between minimum cost for classical ACO and**
QIACO (fixed number of entities = 10 and sites from 1 to 5).
to 2 and a fixed number of iterations equal to 100, as shown in Figure 9.

Here, with fewer joins (less than 5), the same best cost is obtained by ACO and QIACO. When extra entities are added to the joined query, the search-space complexity increases exponentially. Starting from 9 entities, QIACO gives a better average cost than classical ACO. Classical ACO cannot cover the whole search space and may fall into a local minimum, but the diversity in QIACO covers a much larger space and leads to a better cost.
In the third experiment, the suggested model is tested with a fixed number of entities (10) and a varying number of sites (from 1 to 5). Over 10 runs, two ants are used with a fixed number of iterations equal to 100. Figures 10 and 11 display the minimum and average costs for QIACO and classical ACO. As shown in the figures, as the number of sites increases, the minimum and average costs also increase, but QIACO still gives a better cost than classical ACO. The results reveal that QIACO achieves better performance than classical ACO regarding both the minimum cost and the average cost, regardless of the number of sites used.
In the fourth experiment, four queries from the TPC-H benchmark, shown in Figure 12, are used to test the performance of QIACO versus classical ACO. These queries were chosen because they fetch data from many tables. Here, different numbers of ants, from one to five, were employed to get
**FIGURE 11. Comparison between average cost for classical ACO and**
QIACO (fixed number of entities = 10 and sites from 1 to 5).
the cost of each query using both methodologies, classical ACO and QIACO, and the average cost of each query was calculated. As shown in Figure 13, although the average cost in all queries tends to favour quantum ACO, this appears most clearly in the case of query No. 8. Query No. 8 produces a larger search space because it involves more tables than the other queries. In this case, QIACO can cover a larger search space, which leads to a better QEP. The effect of using more ants to seek a better QEP for query No. 8 is shown in Figure 14. Although QIACO gives a better cost than ACO for all ant numbers, increasing the number of ants used in both methods leads to a better result. When the number of ants is five, a larger space can be covered by both methodologies, and ACO and QIACO reach the same QEP with the same optimum cost.
In the next experiment, the average time used by different numbers of ants to get the best QEP for query No. 8 with ACO and QIACO is compared, as shown in Figure 15. Although adding more ants leads to better cost results, it comes at the expense of complexity and therefore time. In QIACO, the complexity of computing the tensor product used while merging the qubits negatively affects the execution time, and this complexity increases with the number of ants added to the model. As shown in Figure 15, when the number of ants is one, the effect of the tensor product in QIACO does not exist, and in this case the time required to get the best QEP with QIACO is less than with classical ACO. When the number of ants increases,
-----
**FIGURE 12.** TPC-H benchmark queries [36].
**FIGURE 13. Comparison between average cost for ACO and QIACO using**
TPC-H benchmark queries.
**FIGURE 14. Effect of using different number of ants on ACO and QIACO**
(query No. 8).
the tensor product clearly effects on the search time and in this
case QIACO needs more time to reach to the best QEP than
classical ACO.
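The exponential tensor-product overhead discussed here can be illustrated with a short sketch (purely illustrative; the paper's implementation is not reproduced): merging n single-qubit state vectors into one joint state doubles the vector length per qubit, so the joint state grows as 2^n with the number of ants merged.

```python
import numpy as np

def merge_qubits(qubits):
    """Merge single-qubit state vectors into one joint state via the tensor product."""
    state = np.array([1.0])
    for q in qubits:
        state = np.kron(state, q)  # vector length doubles with every qubit merged
    return state

# Each ant contributes one qubit; the joint state grows as 2**n_ants.
ket0 = np.array([1.0, 0.0])  # |0>
for n_ants in range(1, 6):
    joint = merge_qubits([ket0] * n_ants)
    print(n_ants, joint.size)  # sizes 2, 4, 8, 16, 32
```

This doubling is why the search time in Figure 15 worsens for QIACO as ants are added, even though more ants improve the cost.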
**FIGURE 15.** Time to get best QEP by ACO and QIACO (query No. 8).
**VI. CONCLUSION**
In this work, we proposed a novel ACO methodology based
on the quantum-inspired paradigm to optimize a query by
finding the best execution path, in terms of cost, for running
the query on a distributed database. The QIACO model is built
on the quantum partial negation gate, which updates the
amplitudes of each entity's qubit; the gate is applied conditionally
to an entity's qubit based on the pheromone value and the join cost.
In our model, the average cost obtained by QIACO
is better than the cost obtained by
classical ACO, but at the expense of time. This is because
QIACO can analyze a much wider solution space: the structure of
the model is not prescribed in advance but emerges from
qubit superposition via quantum gates. By merging ACO with the
quantum superposition concept, we successfully reduced the cost of
running queries over a distributed database. The results show
that the cost calculated by QIACO is better than
that of classical ACO; the improvement in performance
ranged, in some cases, between 77% and 99%, but at the expense
of time. The tensor product used in QIACO affects the
search time, so QIACO needs more time than classical ACO to reach
the best QEP. The results also imply that, although
classical ACO is used successfully with simple joins that
have a small number of join entities, QIACO can be used
with both simple and complex queries with numerous join entities.
In future work, the plan is to use a hyper-graph, instead of
a classical graph, to represent the search space. Exploiting the
set properties and algorithms available for hyper-graphs when representing the search space may change the search methodology
and thereby further reduce the query cost. In addition, the complexity of the
proposed algorithm and its results will be
compared with other quantum versions of swarm intelligence
algorithms, such as Particle Swarm Optimization, Artificial
Bee Colony Optimization, and the Firefly Algorithm.
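To make the amplitude-update idea concrete, the sketch below computes a fractional power of the Pauli-X (NOT) gate from its eigendecomposition. This is a hedged illustration only: the exponent `p` is a free parameter here, whereas in QIACO the degree of negation would be driven by the pheromone value and join cost, and the paper's exact operator and parameterization are not reproduced.

```python
import numpy as np

def partial_negation(p):
    """Fractional power of the Pauli-X (NOT) gate: X**p.

    p = 0 leaves the qubit unchanged (identity); p = 1 is a full NOT.
    Derived from the eigendecomposition of X (eigenvalues +1 and -1)."""
    phase = np.exp(1j * np.pi * p)
    a = (1 + phase) / 2
    b = (1 - phase) / 2
    return np.array([[a, b], [b, a]])

# Start in |0> and apply increasing degrees of negation.
ket0 = np.array([1.0, 0.0], dtype=complex)
for p in (0.0, 0.25, 0.5, 1.0):
    amp = partial_negation(p) @ ket0
    probs = np.abs(amp) ** 2
    print(p, np.round(probs, 3))
# p = 1.0 fully flips |0> to |1>; intermediate p shifts probability gradually.
```

A partial negation therefore nudges the measurement probability of an entity's qubit toward or away from inclusion, rather than flipping it outright, which is what lets the pheromone and cost signals steer the search smoothly.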
**REFERENCES**
[1] R. Ramakrishnan, Database Management Systems, 3rd ed. New York, NY,
USA: McGraw-Hill, 2003.
[2] M. P. Tiwari and S. V. Chande, ‘‘Query optimization strategies in distributed databases,’’ Int. J. Adv. Eng. Sci., vol. 3, no. 3, pp. 23–29, Jul. 2013.
[3] T. Dökeroglu, and A. Cosar, ‘‘Dynamic programming with ant colony optimization metaheuristic for optimization of distributed database queries,’’
in Proc. 26th Int. Symp. Comput. Inf., London, U.K., 2011, pp. 107–113.
[4] M. Sharma, G. Singh, and R. Singh, ‘‘A review of different cost-based
distributed query optimizers,’’ Prog. Artif. Intell., vol. 8, no. 1, pp. 45–62,
Apr. 2019.
[5] A. Hameurlain and F. Morvan, ‘‘Evolution of query
optimization methods,’’ in Transactions on Large-Scale Data- and
_Knowledge-Centered Systems (Lecture Notes in Computer Science),_
vol. 5740. Berlin, Germany: Springer, 2009, pp. 211–242.
[6] M.-S. Chen and P. S. Yu, ‘‘Using join operations as reducers in distributed
query processing,’’ in Proc. 2nd Int. Symp. Databases Parallel Distrib.
_Syst., 1990, pp. 116–123._
[7] S. Pramanik and D. Vineyard, ‘‘Optimizing join queries in distributed
databases,’’ IEEE Trans. Softw. Eng., vol. 14, no. 9, pp. 1391–1426,
Sep. 1988.
[8] A. Raipurkar and G. R. Bamnote, ‘‘Query processing in distributed
database through data distribution,’’ Int. J. Adv. Res. Comput. Commun.
_Eng., vol. 2, no. 2, pp. 1134–1139, Feb. 2013._
[9] D. Kossmann and K. Stocker, ‘‘Iterative dynamic programming: A new
class of query optimization algorithms,’’ ACM Trans. Database Syst.,
vol. 25, no. 1, pp. 43–82, Mar. 2000.
[10] Y. E. Ioannidis and Y. Kang, ‘‘Randomized algorithms for optimizing large
join queries,’’ ACM SIGMOD Rec., vol. 19, no. 2, pp. 312–321, May 1990.
[11] D. Ristè, M. P. D. Silva, C. A. Ryan, A. W. Cross, A. D. Córcoles,
J. A. Smolin, J. M. Gambetta, J. M. Chow, and B. R. Johnson, ‘‘Demonstration of quantum advantage in machine learning,’’ NPJ Quantum Inf.,
vol. 3, no. 1, p. 16, Dec. 2017.
[12] S.-Y. Kuo, Y.-H. Chou, and C.-Y. Chen, ‘‘Quantum-inspired algorithm for
cyber-physical visual surveillance deployment systems,’’ Comput. Netw.,
vol. 117, pp. 5–18, Apr. 2017.
[13] J.-T. Horng, C.-Y. Kao, and B.-J. Liu, ‘‘A genetic algorithm for database
query optimization,’’ in Proc. 1st IEEE Conf. Evol. Comput. IEEE World
_Congr. Comput. Intell., Orlando, FL, USA, Jun. 1994, pp. 432–444._
[14] E. Sevinc and A. Cosar, ‘‘An evolutionary genetic algorithm for optimization of distributed database queries,’’ Comput. J., vol. 54, no. 5,
pp. 717–725, Jan. 2010.
[15] S. Mohsin, S. Darwish, and A. Younis, ‘‘Dynamic cost ant colony algorithm for optimize distributed database query,’’ in Proc. Artif. Intell. Com_put. Vis. (AICV), Cairo, Egypt, 2020, pp. 170–181._
[16] M. Dorigo, M. Birattari, and T. Stutzle, ‘‘Ant colony optimization,’’ IEEE
_Comput. Intell. Mag., vol. 1, no. 4, pp. 28–39, Nov. 2006._
[17] M. Dorigo and T. Stuzle, Ant Colony Optimization. Cambridge, MA, USA:
MIT Press, 2004.
[18] P. Kaye, R. Laflamme, and M. Mosca, An Introduction to Quantum Com_puting. New York, NY, USA: Oxford Univ. Press, 2007._
[19] K.-H. Han and J.-H. Kim, ‘‘Quantum-inspired evolutionary algorithm for
a class of combinatorial optimization,’’ IEEE Trans. Evol. Comput., vol. 6,
no. 6, pp. 580–593, Dec. 2002.
[20] A. Narayan and C. Patvardhan, ‘‘A novel quantum evolutionary algorithm
for quadratic knapsack problem,’’ in Proc. IEEE Int. Conf. Syst., Man
_Cybern., Oct. 2009, pp. 1388–1392._
[21] D. Kossmann, ‘‘The state of art in distributed query optimization,’’ ACM
_Comput. Surv., vol. 32, no. 4, pp. 422–469, Sep. 2000._
[22] M. Steinbrunn, G. Moerkotte, and A. Kemper, ‘‘Heuristic and randomized
optimization for the join-ordering problem,’’ Int. J. Very Large Data Bases,
vol. 6, no. 3, pp. 191–208, 1997.
[23] Z. Zhou, ‘‘Using heuristics and genetic algorithms for large-scale database
query optimization,’’ J. Inf. Comput. Sci., vol. 2, no. 4, pp. 261–280, 2007.
[24] W. Ban, J. Lin, J. Tong, and S. Li, ‘‘Query optimization of distributed
database based on parallel genetic algorithm and max-min ant system,’’
in Proc. 8th Int. Symp. Comput. Intell. Design (ISCID), vol. 2, Dec. 2015,
pp. 581–585.
[25] N. Li, Y. Liu, Y. Dong, and J. Gu, ‘‘Application of ant colony optimization
algorithm to multi-join query optimization,’’ in Proc. 3rd Int. Symp. Intell.
_Comput. Appl., Berlin, Germany, 2008, pp. 189–197._
[26] L. Golshanara, S. M. T. R. Rankoohi, and H. Shah-Hosseini, ‘‘A multicolony ant algorithm for optimizing join queries in distributed database
systems,’’ Knowl. Inf. Syst., vol. 39, no. 1, pp. 175–206, Apr. 2014.
[27] P. Tiwari and S. Chande, ‘‘Optimal ant and join cardinality for distributed
query optimization using ant colony optimization algorithm,’’ in Proc.
_2nd Int. Symp. Emerg. Trends Expert Appl. Secur., Singapore, Feb. 2019,_
pp. 385–392.
[28] G. Zhang, ‘‘Quantum-inspired evolutionary algorithms: A survey and
empirical study,’’ J. Heuristics, vol. 17, no. 3, pp. 251–303, Jun. 2011.
[29] W. Chmiel and J. Kwiecień, ‘‘Quantum-inspired evolutionary approach
for the quadratic assignment problem,’’ Entropy, vol. 20, no. 10, p. 781,
Oct. 2018.
[30] S. M. Darwish, T. A. Shendi, and A. Younes, ‘‘Quantum-inspired genetic
programming model with application to predict toxicity degree for chemical compounds,’’ Expert Syst., vol. 36, no. 4, p. e12415, May 2019.
[31] A. Younes, ‘‘Reading a single qubit system using weak measurement with
variable strength,’’ Ann. Phys., vol. 380, pp. 93–105, May 2017.
[32] E. Sevinc and A. Cosar, ‘‘An evolutionary genetic algorithm for optimization of distributed database queries,’’ Comput. J., vol. 54, no. 5,
pp. 717–725, May 2011.
[33] P. Stone, P. Dantressangle, A. Mowshowitz, and G. Bent, ‘‘Review of
relational algebra for dynamic distributed federated databases,’’ IBM UK,
Winchester, U.K., Tech. Rep. SO21 2JN, 2009.
[34] B. Cao and A. Badia, ‘‘SQL query optimization through nested relational
algebra,’’ ACM Trans. Database Syst., vol. 32, no. 3, pp. 18–35, Aug. 2007.
[35] J. McCaffrey, ‘‘Test run—Ant colony optimization,’’ MSDN Mag.,
vol. 27, no. 2, Feb. 2012. Accessed: Dec. 30, 2020. [Online].
Available: https://docs.microsoft.com/en-us/archive/msdnmagazine/2012/february/test-run-ant-colony-optimization
[36] A. Deshpande and J. M. Hellerstein, ‘‘Decoupled query optimization for
federated database systems,’’ in Proc. 18th Int. Conf. Data Eng., San Jose,
CA, USA, Feb./Mar. 2002, pp. 716–727.
[37] S. Dey, S. Bhattacharyya, and U. Maulik, ‘‘New quantum inspired metaheuristic techniques for multi-level colour image thresholding,’’ Appl. Soft
_Comput., vol. 46, pp. 677–702, Sep. 2016._
[38] H. Duan, ‘‘Ant colony optimization: Principle, convergence and application,’’ in Handbook of Swarm Intelligence. Berlin, Germany: Springer,
2011, pp. 373–388.
[39] H. Zapata, N. Perozo, W. Angulo, and J. Contreras, ‘‘A hybrid swarm
algorithm for collective construction of 3D structures,’’ Int. J. Artif. Intell.,
vol. 18, no. 1, pp. 1–18, 2020.
[40] R.-E. Precup and R.-C. David, _Nature-Inspired Optimization
Algorithms for Fuzzy Controlled Servo Systems_. Oxford, U.K.:
Butterworth-Heinemann, 2019.
[41] B. H. Abed-Alguni, ‘‘Island-based cuckoo search with highly disruptive
polynomial mutation,’’ Int. J. Artif. Intell., vol. 17, no. 1, pp. 57–82, 2019.
[42] R.-E. Precup, R.-C. David, E. M. Petriu, A.-I. Szedlak-Stinean, and
C.-A. Bojan-Dragos, ‘‘Grey wolf optimizer-based approach to the tuning of
pi-fuzzy controllers with a reduced process parametric sensitivity,’’ IFAC_PapersOnLine, vol. 49, no. 5, pp. 55–60, 2016._
SAYED A. MOHSIN received the B.Sc. degree in
statistics and computer science from the Faculty
of Science, Alexandria University, Egypt, in 1995,
the diploma degree in information technology
from the Department of Information Technology, Institute of Graduate Studies and Research
(IGSR), Alexandria University, in 2002, and the
M.Sc. degree in information from Alexandria University, in 2014. He has 20 years of extensive
experience in software architecture, design, and
development. He combines his Software Development experience and Standard Software Process (SSP) with technical expertise using a variety of
programming languages and database systems, including C#, Java, Power
Builder, Microstrategy, Teradata, SQL Server, and ORACLE. Since 2018, he
has been a Principal Business Intelligence Manager with Misr Technology
Services Company.
SAAD MOHAMED DARWISH received the
B.Sc. degree in statistics and computer science
from the Faculty of Science, Alexandria University, Egypt, in 1995, the M.Sc. degree in
information technology from the Department of
Information Technology, Institute of Graduate
Studies and Research (IGSR), Alexandria University, in 2002, and the Ph.D. degree from Alexandria
University, for a thesis in image mining and image
description technologies. Since June 2017, he has
been a Professor with the Department of Information Technology, IGSR.
He is the author or coauthor of more than 100 articles publications in
prestigious journals and top international conferences. He has supervised
around 60 M.Sc. and Ph.D. students. His research and professional interests
include image processing, optimization techniques, security technologies,
database management, machine learning, biometrics, digital forensics, and
bioinformatics. He received several citations. He has served as a reviewer
for several international journals and conferences.
AHMED YOUNES received the Ph.D. degree
from the University of Birmingham, U.K., in 2004.
He is currently a Professor of quantum computing with Alexandria University. He introduced the
partial diffusion operator for amplitude amplification and the representation of Boolean quantum
circuits as Reed–Muller expansions. He has publications in quantum algorithms, quantum cryptography, and synthesis and optimization of reversible
circuits. He is also the Founder and a Leader of the
Alexandria Quantum Computing Group (AleQCG).
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2021.3049544?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2021.3049544, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://ieeexplore.ieee.org/ielx7/6287639/9312710/09316285.pdf"
}
| 2,021
|
[
"JournalArticle"
] | true
| null |
[
{
"paperId": "54317098bb142b9fb3d54047e138adfe0866938c",
"title": "Ant Colony Optimization"
},
{
"paperId": "4b82fed7edcb7f4bb3624294fc258b0887768826",
"title": "Dynamic Cost Ant Colony Algorithm for Optimize Distributed Database Query"
},
{
"paperId": "310358140187367a553ec2d90694379e8c4f6b7c",
"title": "A Hybrid Swarm Algorithm for Collective Construction of 3D Structures"
},
{
"paperId": "eff0ba0a9fe21bd62ec531e28d3ab6b0b67c161d",
"title": "Quantum‐inspired genetic programming model with application to predict toxicity degree for chemical compounds"
},
{
"paperId": "9057c5da888055fbe90124e1d4eeb5c43559d9d0",
"title": "Island-based Cuckoo Search with Highly Disruptive Polynomial Mutation"
},
{
"paperId": "e3a494b74311e9e3392c6c367d6cc1f755323d22",
"title": "Optimal Ant and Join Cardinality for Distributed Query Optimization Using Ant Colony Optimization Algorithm"
},
{
"paperId": "c106f1b869432dc3a0492d0aefaf688e9b689881",
"title": "Quantum-Inspired Evolutionary Approach for the Quadratic Assignment Problem"
},
{
"paperId": "8f021f7f9388dca0fccc31065067a061eb8e037e",
"title": "A review of different cost-based distributed query optimizers"
},
{
"paperId": "903168901c1432d9b78bd343c31057c0311aa1fe",
"title": "Quantum-inspired algorithm for cyber-physical visual surveillance deployment systems"
},
{
"paperId": "d8d4debd5f825a73a98252d5f0bad898600d74e4",
"title": "Reading a Single Qubit System Using Weak Measurement with Variable Strength"
},
{
"paperId": "f806ad81847e978a0d92cd6f7c358149128619d3",
"title": "New quantum inspired meta-heuristic techniques for multi-level colour image thresholding"
},
{
"paperId": "b652a3a63403ade57a2aa93347fadbb66c8a964d",
"title": "Demonstration of quantum advantage in machine learning"
},
{
"paperId": "a75835708db9ff402bc7283e8b225ecfe19ba47d",
"title": "Query Optimization of Distributed Database Based on Parallel Genetic Algorithm and Max-Min Ant System"
},
{
"paperId": "0147d174e2b30cbfa4f0555e34f7f352e4e04c37",
"title": "A multi-colony ant algorithm for optimizing join queries in distributed database systems"
},
{
"paperId": "19863d7f57b6fc479e46c954ccd38adc5e5b2a2c",
"title": "Query Optimization Strategies in Distributed Databases"
},
{
"paperId": "98f66e4597fb51a1f9990d30856f2e190dc44da1",
"title": "Quantum-inspired evolutionary algorithms: a survey and empirical study"
},
{
"paperId": "b4512685ea98133ffa903eb4737844cc3b43462c",
"title": "A novel quantum evolutionary algorithm for quadratic Knapsack problem"
},
{
"paperId": "c04d611e9e36bd0e1f66b158b8b6a544903813ba",
"title": "An evolutionary genetic algorithm for optimization of distributed database queries"
},
{
"paperId": "0ba20540d4c8e7666077bcef90f07b1c8c5f9364",
"title": "Evolution of Query Optimization Methods"
},
{
"paperId": "c2fbf654eec6d6c1a8559ccd87d5385517cb6c14",
"title": "Application of Ant Colony Optimization Algorithm to Multi-Join Query Optimization"
},
{
"paperId": "dc6a494f316a51cda807e4f0962279f6f1a70b4a",
"title": "An Introduction to Quantum Computing"
},
{
"paperId": "8465c1a8de222b44760785b5ecabe5c54a280f4e",
"title": "SQL query optimization through nested relational algebra"
},
{
"paperId": "f20a5269c425d7154ce250b0bf4074d60555686f",
"title": "Quantum-inspired evolutionary algorithm for a class of combinatorial optimization"
},
{
"paperId": "10ba0cfb64de29fb1036482ef5015f9359fd6722",
"title": "Decoupled query optimization for federated database systems"
},
{
"paperId": "b8e23cfbf5a9c195730de76e6f3bb365f29f4ca5",
"title": "Iterative dynamic programming"
},
{
"paperId": "fc11be43ea70148f5488453dd76fd10e158f33c4",
"title": "Heuristic and randomized optimization for the join ordering problem"
},
{
"paperId": "f6a17a05c615fdd971e4b873715a74b148582b7e",
"title": "Using join operations as reducers in distributed query processing"
},
{
"paperId": "4bca561a9bc360135bfc364b666687b0e394361b",
"title": "Randomized algorithms for optimizing large join queries"
},
{
"paperId": "cf549500a3b9f60b495fdb668298eeacfb1d3057",
"title": "Optimizing Join Queries in Distributed Databases"
},
{
"paperId": "9e4e45c6497cfd66c2a185812d0ae3387592f8d7",
"title": "Nature-inspired Optimization Algorithms for Fuzzy Controlled Servo Systems"
},
{
"paperId": "6400db8b3b439efed59f3b2b4e21d37b9b998e34",
"title": "Grey Wolf Optimizer-Based Approach to the Tuning of Pi-Fuzzy Controllers with a Reduced Process Parametric Sensitivity"
},
{
"paperId": "ecc1f8ab50ee84faf9eed8ef43c6ec6d668c0a7b",
"title": "Query Optimization Strategies in Distributed Databases"
},
{
"paperId": "dd5ba49cafbe6f062fa7d7b45bf82eb06c500f9d",
"title": "Query Processing In Distributed Database Through Data Distribution"
},
{
"paperId": "ed2e8227e0c2b411a4d79a2d921e816f0f540174",
"title": "Dynamic Programming with Ant Colony Optimization Metaheuristic for Optimization of Distributed Database Queries"
},
{
"paperId": "cb8e134ec4a2b146bc5be5a77abfdc09630e680e",
"title": "Ant Colony Optimization: Principle, Convergence and Application"
},
{
"paperId": "92e53bfd6f87393601396c2df0606761f1a0e47c",
"title": "Review of Relational Algebra for Dynamic Distributed Federated Databases"
},
{
"paperId": "e9cbcfbf3e5b34c5fbdc4a8c8f9c9a82b965a9db",
"title": "Using Heuristics and Genetic Algorithms for Large-scale Database Query Optimization"
},
{
"paperId": null,
"title": "DatabasesManagementSystems .3rded.NewYork,NY, USA"
},
{
"paperId": "f7b9584494cb04118c7ad27c4e80b9fa39bb912d",
"title": "A Genetic Algorithm for Database Query Optimization"
}
] | 15,347
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02e9efffa5cff47f161fabbd54a8cd25965341cb
|
[
"Computer Science"
] | 0.897898
|
Box2Box - A P2P-based file-sharing and synchronization application
|
02e9efffa5cff47f161fabbd54a8cd25965341cb
|
IEEE P2P 2013 Proceedings
|
[
{
"authorId": "2020760",
"name": "Andri Lareida"
},
{
"authorId": "2059452",
"name": "T. Bocek"
},
{
"authorId": "32631658",
"name": "Sebastian Golaszewski"
},
{
"authorId": "2994540",
"name": "Christian Luthold"
},
{
"authorId": "2110605930",
"name": "Marc Weber"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
**Zurich Open Repository and**
**Archive**
University of Zurich
University Library
Strickhofstrasse 39
CH-8057 Zurich
www.zora.uzh.ch
Year: 2013
## Box2Box - A P2P-based File-Sharing and Synchronization Application
Lareida, Andri ; Bocek, Thomas ; Golaszewski, Sebastian ; Lüthold, Christian ; Weber, Marc
Abstract: Due to an increasing number of devices connected to the Internet, data synchronization becomes
more important. Centrally managed storage services, such as Dropbox, are popular for synchronizing
data between several devices. P2P-based approaches that run fully decentralized, such as BitTorrent-Sync, are starting to emerge. This paper presents Box2Box, a new P2P file synchronization application
which supports novel features not present in BitTorrent-Sync. Box2Box is demonstrated in several use
cases each targeted at another feature.
DOI: https://doi.org/10.1109/P2P.2013.6688736
Posted at the Zurich Open Repository and Archive, University of Zurich
ZORA URL: https://doi.org/10.5167/uzh-91891
Conference or Workshop Item
Originally published at:
Lareida, Andri; Bocek, Thomas; Golaszewski, Sebastian; Lüthold, Christian; Weber, Marc (2013).
Box2Box - A P2P-based File-Sharing and Synchronization Application. In: 13th IEEE International
Conference on Peer-to-Peer Computing, Trento, Italy, 9 September 2013 - 11 September 2013. IEEE
International Conference, 1-2.
DOI: https://doi.org/10.1109/P2P.2013.6688736
# Box2Box - A P2P-based File-Sharing and Synchronization Application
### Andri Lareida, Thomas Bocek, Sebastian Golaszewski, Christian L¨uthold, Marc Weber
University of Zurich, Department of Informatics (IFI), Communication Systems Group (CSG), Zurich Switzerland
Email: [lareida|bocek]@ifi.uzh.ch, [sebastian.golaszewski|christian.luethold|marc.weber]@uzh.ch
Abstract—Due to an increasing number of devices connected
to the Internet, data synchronization becomes more important.
Centrally managed storage services, such as Dropbox, are popular for synchronizing data between several devices. P2P-based
approaches that run fully decentralized, such as BitTorrent-Sync,
are starting to emerge. This paper presents Box2Box, a new P2P
file synchronization application which supports novel features not
present in BitTorrent-Sync. Box2Box is demonstrated in several
use cases each targeted at another feature.
I. INTRODUCTION
P2P systems are still popular and account for a large portion of Internet traffic [1], [2]. Most of this P2P traffic is related
to file sharing (BitTorrent [3] (BT)), but new types of P2P
applications are also emerging. Another trend is that the number of
devices connected to the Internet is increasing [4], especially
mobile devices [5]. Therefore, synchronization between these
devices becomes more important, since users tend to have
more than one device on which data is accessed, modified,
or created.
Centralized systems, such as Dropbox [6] or Google Drive [7], offer synchronization solutions which enable multiple devices to synchronize their data. However, users are
bound to their pricing and terms of service, and lose control
over their data when uploading to one of these centralized
solutions. Furthermore, recent events leading to the shutdown
of the Megaupload service [8] show that the single-point-of-failure
property of centralized systems is a problem. BitTorrent-Sync [9] (BT-Sync), on the other hand, offers a decentralized
solution for synchronizing files of any size among several
devices. However, BT-Sync does not currently offer any versioning features that allow restoring accidentally deleted or
modified content. Furthermore, synchronization is only possible if the devices are online. To ensure privacy, BT-Sync
uses folder-based secrets which can be used to share
folders among several devices or users. These secrets have to
be exchanged out-of-band.
Box2Box is a P2P solution similar to BT-Sync, but it supports
novel features such as friend lists and recommendations, versioning, and high-availability peers running on user-controlled nano data centers (UNaDas). A UNaDa uses home
routers, which are always online, to offer services to the user.
It differs from the nano data center approach [10] in that
it is controlled by the user instead of the service provider.
II. DESIGN
Designing a distributed sharing and synchronization system
imposes challenges, such as consistency, conflict resolution,
versioning, and management of shared files, which are harder
to solve compared to a centrally managed system. The focus
of this section is the design of the following mechanisms: synchronization and sharing, versioning, friend recommendation,
and deploying a stable peer on a UNaDa.
To share and synchronize files in the P2P network, meta
data about the file, its location, and its version is created and
stored in the DHT. The key under which the meta data is stored in the DHT
is randomly selected to prevent guessing of keys. The peer that
stores the meta data is responsible for notifying observing peers
of changes in sharing and versioning information on a per-file
basis. When a (new) file needs to be stored in the network,
the peer keeping the meta data is instructed to update its meta
data and to add the requesting peer to its observer list. If the
file upload succeeds, the peer with the meta data updates the
location of this file and notifies all observing peers, including
all peers of the file owner and of friends that share this file. A
new version is stored in a different location, so the old
version remains accessible.
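A minimal, hypothetical sketch of this bookkeeping (class and method names are illustrative, not Box2Box's actual implementation): meta data is created under a random key, each new version is recorded with its own location, and every recording notifies the current set of observers.

```python
import secrets

class MetaDataPeer:
    """Toy model of the peer responsible for a file's meta data in the DHT."""

    def __init__(self):
        self.meta = {}       # key -> list of (version, location) entries
        self.observers = {}  # key -> set of observing peer ids

    def create_entry(self):
        # A random key prevents other peers from guessing where meta data lives.
        key = secrets.token_hex(20)
        self.meta[key] = []
        self.observers[key] = set()
        return key

    def record_version(self, key, version, location, requesting_peer):
        # Each version keeps its own location, so old versions stay accessible.
        self.meta[key].append((version, location))
        self.observers[key].add(requesting_peer)
        # Notify every observer of the new version's location.
        return {peer: (version, location) for peer in self.observers[key]}

dht = MetaDataPeer()
key = dht.create_entry()
dht.record_version(key, 1, "peer-A:/store/abc", "peer-A")
notified = dht.record_version(key, 2, "peer-B:/store/def", "peer-B")
print(sorted(notified))  # both observers learn about version 2
```

The design choice to keep every `(version, location)` pair, instead of overwriting, is what makes the versioning feature possible on top of a plain DHT.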
Consistency and conflict resolution are achieved on a best-effort basis. When a user modifies a file, a modification
containing the new and the previous version information of
the file is announced to the responsible peer. If a second user
tries to update the same file during this procedure, another
modification with the same previous version is announced.
In that case, a version has two or more successors, and the
user who was second to update receives a conflict report.
Furthermore, the file is marked as conflicting at this peer. Although
a conflict is reported, Box2Box can cope with it and leaves it
to the user to decide which version should be used for further
modifications.
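The successor-counting rule can be sketched as follows (a hypothetical model, not the actual protocol): each announced modification names its predecessor version, and a predecessor that ends up with more than one successor signals a conflict to the later writer.

```python
class VersionTracker:
    """Detects concurrent updates: a version with two successors is a conflict."""

    def __init__(self, initial="v0"):
        self.successors = {initial: []}

    def announce(self, previous, new):
        """Record 'new' as a successor of 'previous'; return True on conflict."""
        self.successors.setdefault(new, [])
        self.successors[previous].append(new)
        # The first successor is fine; any later one means a concurrent edit.
        return len(self.successors[previous]) > 1

tracker = VersionTracker()
print(tracker.announce("v0", "v1-alice"))  # False: first update succeeds
print(tracker.announce("v0", "v1-bob"))    # True: Bob receives a conflict report
```

Both branches remain recorded, mirroring how Box2Box keeps conflicting versions available until the user chooses one for further modifications.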
A peer in Box2Box is able to share its files with a friend.
A friend can be added in two different ways. First, a friend can
be added via the GUI, which triggers a friend request; if the friend
request is acknowledged, the two peers are in a friend relation
and can share files. Second, a friend can be recommended based
on the friends-of-friends approach: a peer ranks unknown
friends of friends by the number of occurrences in
its friends' friend lists, which peers exchange upon request.
The user can then decide to send a friend request.
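A compact sketch of this ranking step (illustrative only; function and variable names are not from Box2Box): collect the friend lists returned by each direct friend, discard peers already known, and rank the remainder by how often they occur.

```python
from collections import Counter

def recommend_friends(me, my_friends, friend_lists):
    """Rank unknown friends-of-friends by occurrences in friends' lists.

    friend_lists maps each direct friend to the list of peers it returned."""
    counts = Counter()
    for friend in my_friends:
        for candidate in friend_lists.get(friend, []):
            if candidate != me and candidate not in my_friends:
                counts[candidate] += 1
    # Most frequently seen candidates first.
    return [peer for peer, _ in counts.most_common()]

lists = {
    "bob":   ["carol", "dave", "alice"],
    "carol": ["dave", "erin"],
}
print(recommend_friends("alice", ["bob", "carol"], lists))  # ['dave', 'erin']
```

Here "dave" appears in two friend lists and "erin" in one, so "dave" ranks first; the user would still confirm any recommendation with an explicit friend request.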
The master peer feature of Box2Box allows a user to
deploy a high-availability peer on a UNaDa. This super peer
is responsible for storing all the user's files (large and small),
since it has high availability and large disk space for
storage. A distinction between large and small files is made
because large files may not be stored in the DHT due to bandwidth
restrictions. Thus, large files are stored on UNaDas exclusively,
while small files can be stored both in the DHT and on a
UNaDa.
III. DEMO SCENARIO
The scenario is based on one user running three peers on
three devices, two peers on a laptop (P1 and P2) and one peer
on a UNaDa (P3). In addition, a large network of 100 peers is
running in the background on a single laptop. In this scenario
the demo presents five use cases. These use cases are depicted
in Figure 1 and show the setup of the demonstration and the
peer interaction in each use case. The gray box that includes
P1, P2, and P3 represents one user. The peers P4-P104 have
one user per peer, resulting in 100 users. The GUI and console
log of P1 and P2 will be shown on screens.
1) P1 is online and P2 is offline. The user, on P1,
adds a large file to B2B, since P2 is offline no
synchronization is possible. P1 goes offline and P2
comes online. Still, no synchronization will happen,
since no direct transfer is possible. P2 will be notified
about the new file though. P1 comes online again
and the large file is transferred to P2, similar to
BitTorrent-Sync (cf. Figure 1a). The synchronized file
appears on P2 and the synchronization process can be
observed in the log files.
2) Only P1 is online to the network and adds a small
file to B2B. This file is encrypted and stored in the
underlying P2P-network (P1 - P104). P1 goes offline
and P2 comes online. Because the small file is stored
in the network it can be transferred to P2 without P1
being online. Therefore, P2 downloads and decrypts
the file (cf. Figure 1a).
3) The user installs Box2Box on his UNaDa (P3), acting
as the super peer which stores all files (cf. Figure 1a).
This means that all the files of the user are directly
stored on the super peer.
4) The user on P2 requests a friend recommendation
and receives a list of friends of friends. After the two
users have mutually established their friend relation,
they can share files. The user on P2 shares a file with
the selected friend. Since it is a small file, it can be
synchronized offline (cf. Figure 1b).
5) The two peers P1 and P2 update a file at the same
time, resulting in a conflict. The peer that was first
to upload its file continues as usual. The peer
that was second receives a conflict notification (cf.
Figure 1b). After the user resolves the conflict, a new
version of the file is created and uploaded.
IV. FUTURE WORK
The next step is to release Box2Box to the public as open
source software. For the initial release several improvements
are still necessary.
Future work includes backup storage on friend peers. The
user will be able to set the redundancy ratio and split the
backup in a way that allows reconstructing it if a
given fraction of the friends are online (e.g., the backup can be
reconstructed if 3 of 5 friends are online). Furthermore, friends
will have the status of trusted peers, and data storage can
prefer trusted peers in order to mitigate security concerns.

Fig. 1. Demonstration Setup of the five Box2Box Use Cases: (a) Use Cases 1-3; (b) Use Cases 4-5.
Currently a JavaFX front-end is used, which will be replaced by a web-based GUI that is work in progress. For
future work, a desktop integration that allows the user to
use Box2Box transparently is foreseen. Furthermore, mobile
versions with a reduced set of functions will be supported.
Besides encryption, future work investigates more security aspects and potential attack scenarios, such as access
restriction or colluding peers. An interesting security aspect
is to consider friend peers as trusted entities. Relying on these
trusted peers can mitigate many attack scenarios.
ACKNOWLEDGMENTS
This work was supported partially by the SmartenIT and
the FLAMINGO projects funded by the EU FP7 Program
under Contract No. FP7-2012-ICT-317846 and No. FP7-2012-ICT-318488, respectively.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/P2P.2013.6688736?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/P2P.2013.6688736, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://www.zora.uzh.ch/id/eprint/91891/1/P2P2013_demo_16.pdf"
}
| 2,013
|
[
"JournalArticle"
] | true
| 2013-12-19T00:00:00
|
[] | 3,425
|
en
|
[
{
"category": "Political Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02ea0769a0460e00e1a54fcd763ee94d4f576916
|
[
"Political Science"
] | 0.889936
|
Report of The Second Eastern European Conference on Cryptocurrencies (4 March 2019, Białystok, Poland)
|
02ea0769a0460e00e1a54fcd763ee94d4f576916
|
[
{
"authorId": "2122572177",
"name": "Hanna Deilidka"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
DOI: 10.15290/acr.2019-2020.12-13.13
**Hanna Deilidka**
University of Bialystok
Poland
annadejl@mail.ru
# REPORT OF THE SECOND EASTERN EUROPEAN CONFERENCE ON CRYPTOCURRENCIES (4 MARCH 2019, BIAŁYSTOK, POLAND)
On March 4, 2019, the Second Eastern European Conference on Cryptocurrencies took place at the Faculty of Law of the University of Bialystok, organized by the Scientific
Circle of Financial Law, Scientific Circle of Commercial
Law and Scientific Circle of Tax Law operating at the Faculty of Law of the University of Bialystok. The conference
was covered by the Honorary Patronage of the Ministry
of Science and Higher Education, Marshal of the Podlasie
Voivodship Artur Kosicki, Podlasie Voivode Bohdan
Paszkowski and the Rector of the University of Bialystok
prof. dr hab. Robert W. Ciborowski. Among the Honorary
Patrons of the event were also the Deputy Marshal of the
Podlasie Voivodeship Stanisław Derehajło, the Temida 2
Publishing House and Euronet Norbert Saniewski sp. j.,
which made it possible to display a cryptocurrency mining machine, the so-called cryptocurrency mining rig.
The conference began at 9:00 in the Hall of the Faculty
of Law and was opened by prof. zw. dr hab. Ewa Monika
Guzik-Makaruk, Deputy Dean for Science of the Faculty
of Law, and prof. dr hab. Eugeniusz Ruśkowski, head of
the Department of Public Finance and Financial Law and
the Supervisor of the Scientific Circle of Financial Law.
After the presentation and welcome of the invited guests,
the substantive part of the conference began, divided
into expert panels led by dr Ewa Lotko and dr Urszula
Zawadzka-Pąk from the Department of Public Finance
and Financial Law, and doctoral and student panels led
by Magdalena Olchanowska, Ewelina Marcińczyk and
Hanna Deilidka - representatives of the Scientific Circle
of Financial Law.
The lecture opening the conference “Using cryptocurrencies in money laundering” was given by dr hab. Wojciech
Filipkowski, prof. UwB, representing the Department of
Criminal Law and Criminology at the Faculty of Law,
UwB. In his speech, prof. Filipkowski referred to threats
resulting from the popularity of cryptocurrencies being transformed into a tool for committing crimes.
Another speech at the expert panel of the conference
was by dr hab. Sławomir Presnarowicz, prof. UwB from
the Department of Public Finance and Financial Law at
the Faculty of Law of the University of Bialystok, entitled “Cryptocurrencies in Poland in 2019 - selected tax
aspects”. The idea behind the presentation was to show
this issue based on an outline of the changes that have
occurred in recent months as a result of the tax administration's interest in cryptocurrencies.
Then the next speaker was mgr Grzegorz Jarosiewicz, representing the Tax Department at the Faculty of Law, who
delivered a lecture entitled “Legal regulations concerning
taxation of virtual currencies trading”. In his speech, mgr
Jarosiewicz presented all issues raising questions about
the tax treatment of cryptocurrencies.
The next lecture, "Cryptocurrencies in the Russian Federation", was given by prof. Olga Lutova from N.I. Lobachevsky State University in Nizhny Novgorod (Russia); in it she presented the situation of virtual money in the Russian system.
Another speaker - prof. Aleksander Morozov (also from
the N.I. Lobachevsky State University in Nizhny
-----
Novgorod) - presented his speech entitled “Existing approaches to legal regulation of cryptocurrency taxation
in the Russian jurisdiction”, from which it was possible to
draw differences between the Polish and Russian system
of virtual money taxation.
The next lecture, "Cryptocurrency: problematic aspects of legal regulation", delivered by prof. Imed Tsindeliani from the Russian Academy of Justice in Moscow, focused on obstacles to the recognition of cryptocurrency by the legislature.
Then a lecture entitled “Cryptocurrency and state sovereignty” was presented by prof. Dmitry Szczerbik from the
Polotsk State University in Polotsk, in which he presented
possible disturbances in the state relationship with the
newly created type of coins.
Another speaker was also a representative of the Polotsk
State University. Professor Aliaksej Radziuk delivered
a lecture entitled “The Social Impact of Cryptocurrency
Experiment in Belarus” thanks to which it was possible
to learn how bitcoin influenced the Belarusian society
and what were their first reactions to entering the crypto
market.
The last speech of the expert panel was by prof. Ryma
Kluczko (Yanka Kupala State University of Grodno)
“Criminal legal assessment of crimes involving cryptocurrency”. The lecture focused on the possible use of
cryptocurrencies in the broadly defined crime, presenting
the most serious threats associated with it.
The expert panel ended at 12:10, then the second part of
the conference began, during which doctoral students
gave their speeches.
The first presentation in this part was delivered by mgr
Maksymilian Szal, a PhD student at the Department of
Civil and Commercial Law at the Faculty of Law, entitled “Cryptocurrencies as the subject of contribution to
a commercial company”. Other PhD students who took
part in the conference included representatives of universities from Poland and abroad:
– Masaryk University in Brno (Czech Republic)
– mgr Richard Bartes (“Selected legal aspects of
cryptocurrencies in the Czech Republic”)
– Polotsk State University (Belarus) – mgr Viktoria
Dorina (“International legal regulation of cryptocurrencies”), mgr Pavel Salauyou (“Legal regulation of Blockchain technology and cryptocurrency:
the problem of choosing lawmaking strategies”)
– University of Wroclaw – mgr Łukasz Cymbaluk
(“Political implications of cryptocurrencies”)
– Cardinal Stefan Wyszyński University in Warsaw –
mgr Sylwia Szutko (“Consequences of new taxation
rules for cryptocurrencies and qualify income from
virtual currency trading to income from cash capitals”), mgr Ida Jóźwiak (“Cryptocurrencies as the
subject of regulations in the field of counteracting
money laundering and terrorism financing”)
– University of Warsaw – mgr Katarzyna Ziółkowska (“ICO and crypto-assets in the EU regulatory
framework - conclusions from the position of the
European Securities and Markets Authority published on January 9, 2019”), mgr Konrad Sukojaj
(“Taxation of trading in cryptocurrencies with tax
on goods and services”)
– Maria Curie-Skłodowska University in Lublin –
mgr Maciej Błotnicki (“Selected aspects of the
functioning of virtual currencies on the basis of
applicable criminal law regulations - adequacy, or
lack thereof in the proper protection of legal goods
in the 21st century?")
– The Jagiellonian University in Kraków – mgr Wiktor Podsiadło (“Taxation of virtual currencies, tax
on natural persons”)
– University of Bialystok – mgr Cezary Pachnik
(“Cryptocurrency as an instrument for the pursuit of autonomy or independence of indigenous
peoples and national minorities in the light of the
principles of international law"), mgr Izabela Grens-Trykoszko ("Bitcoin as an object of property security"), mgr Paweł Szorc ("Regulations on protection
of personal data as a barrier to the development
of blockchain and cryptocurrency technologies”),
mgr Katarzyna Jarnutowska (“Cryptocurrencies as
an object of private law relations”), mgr Magdalena
Anna Kropiwnicka (“What is bitcoin? Legal character of bitcoin”), mgr Justyna Omeljaniuk (“Cryptocurrencies as subject of crime”), mgr Paweł
Czaplicki (“Initial Coin Offering - legal aspects of
capital acquisition by entrepreneurs using digital
currencies”), mgr Agnieszka Godlewska and mgr
Paulina Grodzka (“Taxation of cryptocurrencies
and tax honesty”), mgr Łukasz Presnarowicz (“Actions of the Office of Competition and Consumer
Protection in the field of cryptocurrency”).
The doctoral panel ended at 15:00 followed by a lunch
break, which lasted until 16:00. Then a practical panel
discussion started.
-----
The speeches ended at 17:00 and then a 10-minute coffee break began. At 17:10 the student panel started, during
which representatives of both foreign and Polish universities were also present:
– Polotsk State University (Belarus) – Palina Kavalchuk (“Using cryptocurrency in Belarus”)
– Siedlce University of Natural Sciences and Humanities - Dominik Kowalczyk (“Cryptocurrencies and
threats that they introduce to social security”)
– Nicolaus Copernicus University in Torun - Paulina Wysocka (“Money laundering and cryptocurrencies”), Jakub Rolirad (“Cryptocurrencies as an
excellent thesaurization of property or a financial
pyramid?”)
– University of Lodz - Agnieszka Sobierajska (“Legal
aspects of tokens - civil law analysis including personal tokens”)
– Maria Curie-Skłodowska University in Lublin - Piotr Jackiewicz, Dominika Gozdalska (“Legal and
tax aspects of cryptocurrencies in the Republic of
Poland”)
– University of Warsaw - Ewa Tokarewicz (“Prudence
or short-sightedness - about the approach of a Polish employer to cryptocurrency on the example
of tax on civil law transactions”), Filip Sobociński
(“Is the token equity issue a public offer?”), Kamil
Węgliński (“Cryptocurrencies - groundbreaking
technology of the Internet”)
– Odessa (Ukraine) - Masenko Yaroslav (“EOS Blockchain 3.0”)
– Minsk (Belarus) - Alina Bańkowskaja (“Cryptocurrencies: a role in the modern world”), Kiril Kozal
(“Cryptocurrency and Bitcoin. Money of the new
generation”), Daria Umanskaja (“Prospects for recognition and development of cryptocurrencies in
European countries”), Julia Karazej (“Perspectives
of using cryptocurrencies in the legal sphere”)
– Turkey (Erasmus student at University of Białystok) - Abdullah Bahçe (“Big increase in interest in
bitcoin in Turkey - the fall of the lira”), Onur Duran
(“Currency crisis, and the introduction of the crypto-currency exchange”), Elif Sila Cesur (“Solutions
for introducing cryptocurrencies”), Burhan Budak
(“Abduction of cryptoblogs - forensic aspects of
cryptocurrencies trading”), Barış Gökler (“Bitcoin
vs. alcony – comparison”)
– University of Bialystok - Bartłomiej Korolczuk
(“Potential of the Blockchain Model”)
After thanking all invited guests as well as all participants
of the conference, the conference was closed at 20:40 by
Magdalena Olchanowska, president of the Scientific Circle of Financial Law.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.15290/ACR.2019-2020.12-13.13?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.15290/ACR.2019-2020.12-13.13, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYSA",
"status": "HYBRID",
"url": "https://repozytorium.uwb.edu.pl/jspui/bitstream/11320/10870/1/ACR_12_13_2019_2020_H_Deilidka_Report.pdf"
}
| 2,020
|
[] | true
| null |
[
{
"paperId": null,
"title": "At 17:10 the student panel started, during which representatives of both foreign and Polish universities were also present: -Polotsk State University (Belarus) -Palina Kavalchuk"
},
{
"paperId": null,
"title": "After thanking all invited guests as well as all participants of the conference, the conference was closed at 20:40 by Magdalena Olchanowska, president of the Scientific Circle of Financial Law"
},
{
"paperId": null,
"title": "Cryptocurrencies as an excellent thesaurization of property or a financial pyramid?\") -University of Lodz -Agnieszka Sobierajska"
},
{
"paperId": null,
"title": "Prudence or short-sightedness -about the approach of a Polish employer to cryptocurrency on the example of tax on civil law transactions\")"
}
] | 2,583
|
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02ebfba4f4b6a6e825ece2c841bec48bb1c05e35
|
[
"Computer Science"
] | 0.922464
|
Special issue on network-based high performance computing
|
02ebfba4f4b6a6e825ece2c841bec48bb1c05e35
|
Journal of Supercomputing
|
[
{
"authorId": "30963515",
"name": "H. Sarbazi-Azad"
},
{
"authorId": "1924106",
"name": "A. Shahrabi"
},
{
"authorId": "1761129",
"name": "H. Beigy"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"The Journal of Supercomputing",
"J Supercomput"
],
"alternate_urls": [
"https://link.springer.com/journal/11227",
"https://www.springer.com/computer/swe/journal/11227?changeHeader"
],
"id": "26ed29a9-64ce-4d6c-9024-8b022fd2fe22",
"issn": "0920-8542",
"name": "Journal of Supercomputing",
"type": "journal",
"url": "http://www.springer.com/computer/programming/journal/11227"
}
| null |
# Special issue on network-based high performance computing
**H. Sarbazi-Azad · A. Shahrabi · H. Beigy**
Published online: 19 March 2010
© Springer Science+Business Media, LLC 2010
Over the past decade, ever-increasing demands for greater computational power have
necessitated the development of High Performance Computing (HPC) systems with
high availability and reliability. This trend has transformed the traditional model of
parallel processing into a model of computing where all components of HPC systems
are applied together in a cooperative network in order to solve scientific problems of
unprecedented complexity. The key element of HPC architectures is, of course, the
underlying network, since it aims to provide low-latency communication for parallelism. Network-based computing is now a subject of interest across the complete
range of scales in which distributed systems operate, from those comprising multiple
engines on a single chip to those harnessing the power of large numbers of powerful computers using wide area connections to implement grids or other cluster-based
structures.
This special issue is devoted to presenting a range of relevant topics in the area
of network-based HPC, covering a selection of its many aspects. We invited authors
of selected papers from International CSI Computer Conference (CSICC2008) to
H. Sarbazi-Azad (✉) · H. Beigy
Department of Computer Engineering, Sharif University of Technology, Tehran, Iran
[e-mail: azad@sharif.edu](mailto:azad@sharif.edu)
H. Beigy
[e-mail: beigy@sharif.edu](mailto:beigy@sharif.edu)
H. Sarbazi-Azad
School of Computer Science, Institute for Research in Fundamental Sciences (IPM), Tehran, Iran
[e-mail: azad@ipm.ir](mailto:azad@ipm.ir)
A. Shahrabi
School of Computing and Engineering, Glasgow Caledonian University, Glasgow, UK
[e-mail: a.shahrabi@gcal.ac.uk](mailto:a.shahrabi@gcal.ac.uk)
-----
extend their papers for consideration in the special issue. Also, we widely announced
the special issue and invited researchers of the field to submit papers to the special
issue. As a result, initially, 24 papers were submitted of which 18 papers were selected
for review. After a rigorous review process, 13 papers were selected for publication
in the special issue. What follows, immediately below, is a brief summary of these
contributions intended to give a preliminary indication of the scope and variety of the
issue.
Job scheduling has always been a fundamental issue as it has a direct impact on the
performance of any multicomputer network. Focusing on both two-dimensional and
three-dimensional mesh-connected multicomputers, a new non-FCFS job-scheduling
scheme is proposed by Ababneh, Bani-Mohammad, and Ould-Khaoua. The proposed
scheme aims to bound job waiting delays, while achieving superior performance in
terms of higher system utilization and lower job turnaround times.
Peer-to-peer networking has recently emerged as a distributed network architecture composed of participants that make a portion of their resources available to
other network participants without requiring any centralized coordination. The quality of live streaming in peer-to-peer multipath networks is the subject of the research
conducted by Liu and Chen. Their study analyzes the key considerations and factors influencing live stream quality during system operations and improves present
P2P (peer-to-peer) live streaming systems by allowing users to enjoy high quality of
service under the limitations of network resources.
As another study under the peer-to-peer networks umbrella, Xhafa, Barolli, Caballe and Fernandez address the efficient management of peer groups in JXTA-based
P2P systems as a key issue in many P2P applications that use peer group as a unit.
Motivated by the need to support online teams of real virtual campuses, they propose
the management of peer groups in JXTA-Overlay, a JXTA-based P2P middleware for
the development of P2P applications.
The third paper in peer-to-peer networks area focuses on the effectiveness and
scalability of search algorithms. By combining search and trust systems, Mashayekhi
and Habibi propose a robust and efficient trust-based search framework for unstructured P2P networks. The proposed framework maintains limited size routing indexes,
combining search and trust data to guide queries to most reputable nodes.
CONFIIT (Computation Over Network with Finite number of Independent and
Irregular Tasks) is a purely decentralized peer-to-peer middleware for Grid computing, which has already been proposed. The main features and reaction of CONFIIT
to topology changes is the subject of another study conducted by Flauzac, Krajecki
and Steffenel. They demonstrate how the car-sequencing problem can be solved in a
distributed environment to illustrate CONFIIT operation.
One of the more significant developments in network computing in recent times
has been the emergence of Grid computing as a mechanism for harnessing processing
resources and bringing them to bear on compute-intensive tasks. In such highly distributed environments, estimating the available bandwidth between clusters is a key
issue for efficient task scheduling. In a research by Batista, Chaves, da Fonseca and
Ziviani, the performance of two well-known available bandwidth estimation tools has
been analyzed.
Distributed Real Time (DRT) systems are increasingly in demand of object profiling, scheduling and migration algorithms to respond to unpredictable transient
-----
changes in load and availability of resources in an open environment. Du and Ruan
propose a robust DRT model that does not require precise system parameters such
as task execution times. By using the proposed model, it is easy to achieve coupling
among processors and include various periodic and aperiodic tasks, load migration,
and disturbance effects.
Load balancing for emerging and future highly distributed high-performance computing systems has always been an attractive research area and much progress has
been reported relating to the design of new methods and algorithms. In another study
conducted by Randles, Abu-Rahmeh, Johnson and Taleb-Bendiab, a scalable and reliable load-balancing scheme for the distributed resources accessible on Grid networks
is demonstrated through matching the load on a resource network to approach regular
connectivity on a network graph. The proposed scheme provides a distributed load
balancing system by generating almost regular networks.
Scheduling parallel applications with precedence-constraints is emerging as a new
challenge in volunteer computing systems. Choon Lee, Zomaya, and Siegel propose two novel robust task-scheduling heuristics, which identify best task-resource
matches in terms of makespan and robustness. Both approaches are based on a
proactive reallocation scheme enabling output schedules to tolerate a certain degree
of performance degradation. Schedules are initially generated by focusing on their
makespan. These schedules are scrutinized for possible rescheduling using additional
volunteer computing resources to increase their robustness.
A striking feature of recent developments in networking has been the rise of wireless connectivity, but while this yields great benefits especially for roaming nodes,
some issues such as network sustainability have also to be addressed. Focusing on ad
hoc networks, Xu, Wang, and Srimani propose a self-stabilizing protocol for weakly
connected dominating set in a given network graph. Self-stabilization is a protocol
design paradigm that is especially useful in resource-constrained infrastructure-less
networks since nodes can make moves based on local knowledge only and yet a
global task is accomplished in a fault-tolerant manner.
Another research in the area of mobile ad hoc networks investigates the enhancement of routing protocols using the improvement achieved during broadcasting route
requests in distance vector routing protocols. Yasin, Khalaf and Al-Dubai propose a
new probabilistic method to improve the performance of existing on-demand routing
protocols by reducing the route requests overhead during the route discovery operation.
Within the same networking environment, mobile ad hoc network, and in order to
improve Media Access Control (MAC) under self-similar traffic, Abu-Tair, Min, Ni,
and Liu propose an adaptive MAC scheme which dynamically adjusts the increasing
function and resetting mechanism of contention window based on the status of network load. The performance of this scheme is investigated in comparison with the
legacy of DCF under self-similar traffic and different mobility models.
Finally, Yasami and Mozaffari move the focus to anomaly detection algorithms
used to obtain sufficient information about complex network traffic in intrusion detection systems. They propose, based on the k-means clustering and the ID3 decision
tree learning approaches in machine learning theory, a combinatorial approach for
unsupervised classification of anomalous and normal activities in computer network
ARP traffic.
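The clustering stage such a k-means-plus-ID3 pipeline begins with can be sketched with a minimal one-dimensional Lloyd's algorithm; the traffic-rate data and initial centers below are purely illustrative and are not taken from the paper.

```python
# Minimal one-dimensional k-means (Lloyd's algorithm), the clustering
# stage a k-means + ID3 pipeline would begin with. The traffic-rate
# data and initial centers are illustrative, not taken from the paper.

def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        # assignment step: attach each point to its nearest center
        groups = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda c: abs(p - centers[c]))
            groups[i].append(p)
        # update step: move each center to the mean of its group
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers, groups

# "Normal" traffic rates cluster low; anomalous bursts cluster high.
rates = [1.0, 1.2, 0.9, 1.1, 9.8, 10.2, 9.9]
centers, groups = kmeans_1d(rates, centers=[0.0, 5.0])
```

In the combined approach, a decision-tree learner such as ID3 would then be trained within each cluster to separate anomalous from normal activity.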
-----
In closing, as guest co-editors, we express our thanks to the editor-in-chief of
the Journal of Supercomputing, Professor H. Arabnia, for hosting this special issue
devoted to network-based high performance computing and for his support and advice throughout the process of bringing the original conception to fruition. We also
thank all the authors for their contributions, including those whose papers were not
included in this special issue and, last, the many reviewers who contributed their time
and energy to providing valuable evaluations and recommendations.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s11227-010-0423-1?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s11227-010-0423-1, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://link.springer.com/content/pdf/10.1007/s11227-010-0423-1.pdf"
}
| 2,010
|
[
"JournalArticle"
] | true
| 2010-07-01T00:00:00
|
[] | 2,074
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Economics",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02ecf3b5def3e709f9eca48b0f8760355f3356bb
|
[
"Medicine"
] | 0.83466
|
A Study of the Machine Learning Approach and the MGARCH-BEKK Model in Volatility Transmission
|
02ecf3b5def3e709f9eca48b0f8760355f3356bb
|
Journal of Risk and Financial Management
|
[
{
"authorId": "40867975",
"name": "Prashant Joshi"
},
{
"authorId": "2109643967",
"name": "Jinghua Wang"
},
{
"authorId": "152562846",
"name": "M. Busler"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Risk Financial Manag"
],
"alternate_urls": [
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-318032"
],
"id": "5bccd387-3836-42f2-9b60-9c190333ae01",
"issn": "1911-8066",
"name": "Journal of Risk and Financial Management",
"type": "journal",
"url": "https://www.mdpi.com/journal/jrfm"
}
|
This study analyzes the volatility spillover effects in the US stock market (SP500) and cryptocurrency market (BGCI) using intraday data during the COVID-19 pandemic. As the potential drivers of portfolio diversification, we measure the asymmetric volatility transmission on both markets. We apply MGARCH-BEKK and the algorithm-based GA2M machine learning model. The negative shocks to returns impact the SP500 and the cryptocurrency market more than the positive shocks on both markets. This study also indicates evidence of unidirectional cross-market asymmetric volatility transmission from the cryptocurrency market to the SP500 during the COVID-19 pandemic. The research findings show the potential benefit of portfolio diversification between the SP500 and BGCI.
|
Journal of
### Risk and Financial Management
_Article_
# A Study of the Machine Learning Approach and the MGARCH-BEKK Model in Volatility Transmission
**Prashant Joshi** **[1,]*, Jinghua Wang** **[2]** **and Michael Busler** **[3]**
1 School of Business, Saint Martin’s University, 5000 Abbey Way SE, Lacey, WA 98503, USA
2 Martin Tuchman School of Management, New Jersey Institute of Technology, University Heights,
Newark, NJ 07102, USA; jinghua.wang@njit.edu
3 School of Business, Stockton University, 101 Vera King Farris Drive, Galloway, NJ 08205, USA;
michael.busler@stockton.edu
***** Correspondence: pjoshi@stmartin.edu
**Abstract: This study analyzes the volatility spillover effects in the US stock market (S&P500) and**
cryptocurrency market (BGCI) using intraday data during the COVID-19 pandemic. As the potential
drivers of portfolio diversification, we measure the asymmetric volatility transmission on both
markets. We apply MGARCH-BEKK and the algorithm-based GA2M machine learning model. The
negative shocks to returns impact the S&P500 and the cryptocurrency market more than the positive
shocks on both markets. This study also indicates evidence of unidirectional cross-market asymmetric
volatility transmission from the cryptocurrency market to the S&P500 during the COVID-19 pandemic.
The research findings show the potential benefit of portfolio diversification between the S&P500
and BGCI.
**Citation:** Joshi, Prashant, Jinghua Wang, and Michael Busler. 2022. A Study of the Machine Learning Approach and the MGARCH-BEKK Model in Volatility Transmission. _Journal of Risk and Financial Management_ 15: 116. https://doi.org/10.3390/jrfm15030116
Academic Editor: Jong-Min Kim
Received: 4 February 2022
Accepted: 28 February 2022
Published: 2 March 2022
**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Keywords: MGARCH-BEKK; GA2M; machine learning; volatility spillovers; robustness; cryptocurrency**
**JEL Classification: C32; C58; C63**
**1. Introduction**
The cryptocurrency market reshaped the traditional concepts of the financial markets.
Blockchain technology using computer science knowledge created a financial revolution,
impacting the financial markets from every perspective. Cryptocurrencies have received considerable attention from scholars, investors, and policymakers. As of April 2021, the global market capitalization of cryptocurrency was approximately $1983 billion[1], and the market is becoming widespread. Despite this growth, the cryptocurrency market is still new. It is therefore interesting to examine the relationship between
the stock market and the cryptocurrency market and analyze the spillover effects within
them. Scholars and industry practitioners have many debates about this topic. Symitsi
and Chalvatzis (2018) found unilateral volatility transmission from energy and technology
stocks to Bitcoin. Conrad et al. (2018) found that Bitcoin volatility is impacted by the
S&P500 volatility.
Our research is the first study that combines the machine learning approach and
the MGARCH-BEKK model to study the strength and linkage of volatility transmission
between the cryptocurrency and US stock markets. The current literature mainly focuses
on the application of the various GARCHs on the volatility transmission of the financial
markets (Wang and Ngene 2020; Ji et al. 2019; Mensi et al. 2019). There is little literature
applying the machine learning approach in finance research. Bertomeu et al. (2021) identify
and interpret the patterns present in ongoing accounting misstatements. They use an
algorithmic machine learning approach to address the importance of a wide set of variables
to detect material misstatements. However, there is a lack of research using machine
-----
_J. Risk Financial Manag. 2022, 15, 116_ 2 of 9
learning in the study of volatility transmission. In our study, we use an advanced machine
learning method to conduct the robustness test in support of the empirical findings.
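For context, the conditional-covariance recursion underlying a bivariate BEKK(1,1) specification, H_t = C'C + A' e_{t-1} e_{t-1}' A + B' H_{t-1} B, can be sketched in a few lines of plain Python; the parameter matrices and shock history below are hypothetical illustrations, not estimates from this study.

```python
# Sketch of the bivariate BEKK(1,1) conditional-covariance recursion
#   H_t = C'C + A' e_{t-1} e_{t-1}' A + B' H_{t-1} B
# following Engle and Kroner (1995). All parameter values and shocks
# below are hypothetical illustrations, not estimates from this study.

def transpose(m):
    # transpose of a 2x2 matrix
    return [[m[0][0], m[1][0]], [m[0][1], m[1][1]]]

def matmul(a, b):
    # product of two 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def outer(e):
    # e e' for a 2-vector of return shocks
    return [[e[i] * e[j] for j in range(2)] for i in range(2)]

def bekk_step(H_prev, e_prev, C, A, B):
    """One update of the 2x2 conditional covariance matrix."""
    const = matmul(transpose(C), C)                        # C'C
    arch = matmul(matmul(transpose(A), outer(e_prev)), A)  # A' e e' A
    garch = matmul(matmul(transpose(B), H_prev), B)        # B' H B
    return matadd(matadd(const, arch), garch)

# Hypothetical parameters (C lower triangular) and shock history.
C = [[0.10, 0.00], [0.05, 0.10]]
A = [[0.30, 0.05], [0.05, 0.30]]
B = [[0.90, 0.02], [0.02, 0.90]]
H = [[1.00, 0.20], [0.20, 1.00]]          # initial covariance
for e in [[0.5, -0.3], [-1.2, 0.8], [0.1, 0.4]]:
    H = bekk_step(H, e, C, A, B)
```

By construction each term of the recursion is symmetric, so H stays a valid (symmetric, positive-variance) covariance matrix at every step; the off-diagonal elements of A and B are what carry cross-market volatility spillovers.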
With the broader application of big data in industry and academia, machine learning has become an effective algorithmic technology in analyzing data and developing
automated applications. Machine learning is a learning process that is conducted in
an intelligent manner for the purpose of making comprehensive data-driven decisions.
Witten et al. (2005) point out that data-driven systems are effectively boosted by machine learning algorithms in terms of classification analysis, regression, data clustering,
and dimensionality reduction. Machine learning algorithms can process and train a model
to learn the trends and use that knowledge to make predictions or inferences from
real-world data. It is different from classical statistical methods in two ways. First, the
classical statistical methods mainly focus on inference to discover relationships between
the variables describing the effects of the model with no white noise (Bzdok et al. 2018). In
contrast, machine learning concentrates on prediction to find the future movement patterns
(Witten et al. 2005). Additionally, the algorithmic process is also helpful to capture the
complex relationships in a forecasting context. Secondly, machine learning algorithms can
deal with large and wide datasets at a fast pace. Classical statistical methods are useful for
datasets with fewer input variables and comparatively smaller sizes.
The algorithmic machine learning approach attracts the attention of scholars and
practitioners in finance (Warin and Stojkov 2021). However, there is a lack of research using
the generalized additive model (GAM) in finance (Hastie and Tibshirani 1987). Machine learning models have a strong ability to learn previous price movement patterns over both short and long time periods and to use the learned information to predict future price movements. In our study, we use the generalized additive model with pairwise interactions (GA²M) to estimate the impact power of one market on another in line with the forecasting framework.
This model is intelligible and provides more accurate results when ranking the impact features. The algorithm-based machine learning approach is accompanied by a study
of robustness. We evaluate the impact features of the regression coefficients in the GA²M machine learning model (Lou et al. 2012, 2013) in support of an investigation of the volatility dynamics between the S&P500 and the Bloomberg Galaxy Crypto Index (BGCI).
Our research question is about whether there are intraday volatility interactions
between the crypto and the stock markets. To answer this question, we employ the MGARCH-BEKK model (Engle and Kroner 1995) and a machine learning GA²M framework to investigate volatility transmission between the BGCI and S&P500. Our intraday analysis addresses
the fundamental mechanism between the markets from the perspective of asymmetric
estimation of the volatility spillover. The intraday data is superior to the daily data in
studying the dynamic relationships of the cryptocurrency market (Wang and Ngene 2020),
because it helps us to find patterns in prices at shorter time intervals.
Several empirical studies analyzed the stock price movements across different stock
markets. Previous studies (Cardona et al. 2017; Bollerslev et al. 1988; Kroner and Ng 1998)
have shown how returns are related between stock markets and examine their influence on
pricing and trading strategies.
The global linkage of emerging markets allows for the information and shocks to flow
easily across the markets as demonstrated by Li and Majerowska (2008). The integration of
stock markets reduces the benefits of portfolio diversification.
We question how the volatility of the cryptocurrency market is affected by the stock
market. Symitsi and Chalvatzis (2018) pointed out that there is unilateral volatility from
stocks to Bitcoin to some extent. Market integration influences volatility in stock markets
and the risk of the assets. Therefore, it is important to analyze volatility and its transmission.
Shi et al. (2020) found that the price volatility of Ethereum, Ripple, Dash, Stellar, Bitcoin,
and Litecoin are related. Aslanidis et al. (2021) assessed market linkages across seventeen
major cryptocurrencies by employing the daily returns from August 2015 to July 2020
using principal component analysis and a vector autoregression framework. The results
suggested strong linkages between returns and volatility.
There are some researchers who have tried to examine the volatility behavior and interactions in cryptocurrency markets. Yousaf and Ali (2020) identified no significant volatility
spillover between cryptocurrencies during the pre-COVID-19 period but found bidirectional volatility spillover during the COVID-19 pandemic using the DCC-GARCH model.
Canh et al. (2019) found substantial volatility interactions among the cryptocurrencies with
the DCC-GARCH model.
Bouri et al. (2018) found volatility spillover in the Bitcoin market. Cardona et al. (2017) found volatility spillover in North and South American stock markets using MGARCH-BEKK models. Liu and Serletis (2019) found volatility spillover across cryptocurrencies and financial markets in the United States, Germany, the United Kingdom and Japan.
Intraday analysis is better suited to the cryptocurrency market. However, previous researchers have focused on daily volatility analysis instead of intraday volatility analysis. Using the MGARCH-BEKK framework, Worthington and Higgs (2004)
found return spillovers in the major stock markets of Asia. Their study revealed weaker
cross-volatility spillover than the spillover from own markets.
Most of the studies have focused on stock markets to examine volatility relationships
but a few studies examine volatility dynamics across cryptocurrencies. There are a few
such studies on cryptocurrencies and equity markets during the recent period. This
paper tries to further the cryptocurrency literature in numerous ways. Firstly, it uses
a multivariate asymmetric GARCH model to examine the intraday asymmetric volatility
spillover. Secondly, it studies the most recent period, during the COVID-19 pandemic.
Thirdly, it uses high-frequency (hourly) intraday data to examine the linkage. Fourth, it is
the first study to apply a machine learning approach to the study of the return and volatility
transmission between the S&P500 and BGCI using the MGARCH-BEKK (1,1) model.
The remainder of this paper is structured as follows: Section 2 covers the data and a preliminary examination. Section 3 discusses the methodology employed. The empirical analysis is presented in Section 4, and Section 5 offers a summary.
**2. Preliminary Examination**
This study utilizes the intraday hourly closing value of the Bloomberg Galaxy Crypto
Index and S&P500 from 1 June 2020 to 11 December 2020. The Bloomberg Galaxy Crypto
Index (BGCI) is a benchmark index for cryptocurrencies in the US. S&P500 is one of the
most commonly referenced equity indices. It tracks the performance of stock prices of the
500 largest companies.
Returns are measured as the difference between the natural logarithm of closing prices.
Figures 1 and 2 display the returns of the share price indices. They indicate volatility
clustering in the data.
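The log-return computation used here can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code, and the price values below are hypothetical:

```python
import math

def log_returns(prices):
    """Returns r_t = ln(P_t) - ln(P_{t-1}) for a sequence of closing prices."""
    return [math.log(p1) - math.log(p0) for p0, p1 in zip(prices, prices[1:])]

# Hypothetical hourly closing values of an index
hourly_closes = [3750.0, 3762.5, 3758.1, 3771.0]
returns = log_returns(hourly_closes)
```

The return series is one observation shorter than the price series, which is why the paper's 869 hourly observations yield 868 usable returns per index.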
**Figure 1. Returns of S&P500.**
**Figure 2. Returns of BGCI.**
Table 1 summarizes the returns. The Jarque–Bera statistics suggest that the returns are non-normal. Excess kurtosis suggests the returns are leptokurtic. Multivariate GARCH (MGARCH) is a valid model to analyze volatility transmission (Li 2007) with the dataset.

**Table 1. Summary Statistics.**

| Summary Statistics | Mean | Std. Dev. | Skewness | Excess Kurtosis | Jarque–Bera | Probability |
| --- | --- | --- | --- | --- | --- | --- |
| RSP | 0.0134 | 0.3066 | −0.1568 | 0.8725 | 31.1151 | 0 |
| RBGCI | 0.0162 | 0.6207 | −0.01491 | 1.3667 | 67.6623 | 0 |

Notes: The results of the summary statistics rely on the intraday one-hour data with 869 observations from 1 June 2020 to 11 December 2020. The data source is the Bloomberg terminal.
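The Jarque–Bera statistic reported in Table 1, JB = n/6 · (S² + K²/4) with S the skewness and K the excess kurtosis, is straightforward to compute from a return series. The routine below is an illustrative sketch (not the authors' code):

```python
def jarque_bera(returns):
    """Skewness, excess kurtosis and Jarque-Bera statistic of a sample."""
    n = len(returns)
    mean = sum(returns) / n
    m2 = sum((r - mean) ** 2 for r in returns) / n  # central moments
    m3 = sum((r - mean) ** 3 for r in returns) / n
    m4 = sum((r - mean) ** 4 for r in returns) / n
    skew = m3 / m2 ** 1.5
    ex_kurt = m4 / m2 ** 2 - 3.0                     # kurtosis minus 3
    jb = n / 6.0 * (skew ** 2 + ex_kurt ** 2 / 4.0)
    return skew, ex_kurt, jb

s, k, jb = jarque_bera([1.0, 2.0, 3.0, 4.0, 5.0])   # toy symmetric sample
```

Under normality, JB is asymptotically chi-squared with 2 degrees of freedom; the large JB values in Table 1 are what justify rejecting normality.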
**3. Methodology**
In accordance with the discussions of the preliminary data analysis and the literature review, we developed a framework consisting of a time series model and a machine learning approach. Based on the earlier literature, we aimed to test the hypothesis that there exists a bidirectional volatility transmission between the BGCI and S&P500. The MGARCH-BEKK model (Engle and Kroner 1995) provides a convenient way to analyze cross-market spillover. The application of machine learning is a promising methodology in asset pricing, corporate governance, international finance and accounting (Warin and Stojkov 2021, etc.). We used the algorithm-based GA²M as an alternative model for the robustness test. It helped to identify the importance of individual features to evaluate the strength and linkages between the S&P500 and the BGCI. There is no existing research applying a machine learning GA²M model to the study of volatility transmission across financial markets. Our study is the first to implement it in support of the empirical findings.

_3.1. MGARCH-BEKK Model_

Volatility transmission is mainly examined by the MGARCH-VEC, DCC-GARCH, and MGARCH-BEKK models (Bauwens et al. 2006). The MGARCH-VEC and DCC-GARCH models have limitations. The MGARCH-VEC model requires estimation of several parameters and a positive conditional variance matrix, while the DCC-GARCH model requires a positive conditional correlation matrix. These models lack usefulness in analyzing cross-market volatility spillover. We therefore employed the following MGARCH-BEKK model, developed by Engle and Kroner (1995), to overcome the above problems in analyzing the volatility spillover:

D_t = A′A + V′e′_{t−1}e_{t−1}V + W′D_{t−1}W (1)

Kroner and Ng (1998) extended the BEKK model to examine asymmetric volatility:

D_t = A′A + V′e′_{t−1}e_{t−1}V + W′D_{t−1}W + K′f′_{t−1}f_{t−1}K (2)
The diagonal parameters of matrices V and W capture their own stock market’s shocks
and volatility, while the off-diagonal elements of the matrices assess volatility transmission
effects across the markets. The matrix K measures the asymmetric volatility response.
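To make the role of these matrices concrete, the following toy simulation generates a bivariate BEKK(1,1) conditional-covariance path. This is our sketch with made-up parameter values, not the authors' estimation code; the off-diagonal entries of V and W are the cross-market spillover channels, and shocks are drawn ignoring the conditional correlation for simplicity:

```python
import math, random

def mat_mul(X, Y):
    """2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_T(X):
    """2x2 transpose."""
    return [[X[j][i] for j in range(2)] for i in range(2)]

def bekk_simulate(T, A, V, W, seed=0):
    """Simulate D_t = A'A + V' e_{t-1} e'_{t-1} V + W' D_{t-1} W."""
    rng = random.Random(seed)
    C = mat_mul(mat_T(A), A)                  # constant term A'A
    D = [row[:] for row in C]                 # initial covariance
    e = [0.0, 0.0]
    variances = []
    for _ in range(T):
        ee = [[e[i] * e[j] for j in range(2)] for i in range(2)]  # outer product of shocks
        arch = mat_mul(mat_mul(mat_T(V), ee), V)
        garch = mat_mul(mat_mul(mat_T(W), D), W)
        D = [[C[i][j] + arch[i][j] + garch[i][j] for j in range(2)] for i in range(2)]
        # draw next shocks with the implied conditional variances (correlation ignored)
        e = [math.sqrt(max(D[k][k], 1e-12)) * rng.gauss(0, 1) for k in range(2)]
        variances.append((D[0][0], D[1][1]))
    return variances

A = [[0.1, 0.0], [0.0, 0.1]]
V = [[0.3, 0.05], [0.05, 0.3]]   # diagonals: own-market ARCH; off-diagonals: shock spillover
W = [[0.9, 0.01], [0.01, 0.9]]   # diagonals: own-market GARCH; off-diagonals: volatility spillover
var_path = bekk_simulate(500, A, V, W)
```

The quadratic (BEKK) form guarantees that D_t stays positive semidefinite by construction, which is the practical reason the paper prefers it over the VEC parameterization.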
_3.2. Generalized Additive Models (GAMs)_
Lou et al. (2012) proposed a new method, presenting a large-scale empirical comparison of methods for learning traditional generalized additive models (GAMs)[2]. Lou and co-authors explained the different shape models that influence the additive model. In 2013, Lou, Caruana, Gehrke and Hooker developed the GA²M model by adding selected interacting pairs of features to traditional GAMs. This new model was intelligible and accurate when ranking all possible pairs of variables. We applied this new model to test the
interactions between the stock market and cryptocurrency market.
RSS = Σ_{i=1}^{n} (y_i − M_{kj}(x_i))²
    = Σ_{i=1}^{n} y_i² − 2 Σ_p M_{kj.p} Q_{t.p} + Σ_p (M_{kj.p})² Q_{w.p} (3)

In the above equation, {(x_i, y_i)}_{i=1}^{n} denotes a sample of size n, where y_i is the response variable and x_i = (x_{i1}, . . ., x_{in}) has n features. M_{kj.p} is the prediction value on region p, where p ∈ {e, f, g, h}. (v_i^{1}, . . ., v_i^{d_i}) is a sorted set of possible values for variable x_i, where d_i = |dom(x_i)|. Q_w(g_k, g_j) = [e, f, g, h] is the lookup table for the sum of weights on cuts (g_k, g_j), and Q_t(g_k, g_j) = [e, f, g, h] is the lookup table for the sum of targets on cuts (g_k, g_j). The cut points with the lowest RSS can replace the feature values to obtain the best M_{kj}, assigning a weight to the pair (x_k, x_j) to assess the strength of the interaction.
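The search over cut pairs in Equation (3) can be illustrated with a toy routine. This is our sketch of the idea behind the interaction ranking of Lou et al. (2013), not their implementation: for each candidate pair of cuts, predict the mean residual in each of the four resulting regions, keep the cuts with the lowest RSS, and take the drop relative to the no-interaction RSS as the strength of the pair (x_k, x_j):

```python
def pair_interaction_strength(xk, xj, resid):
    """RSS reduction from the best single cut pair on features xk, xj.

    `resid` stands for the residuals left after fitting the additive
    (single-feature) part of the model; a larger value means a stronger
    pairwise interaction.
    """
    n = len(resid)
    best_rss = float("inf")
    for ck in sorted(set(xk)):              # candidate cut on feature k
        for cj in sorted(set(xj)):          # candidate cut on feature j
            regions = {}
            for a, b, r in zip(xk, xj, resid):
                regions.setdefault((a <= ck, b <= cj), []).append(r)
            rss = sum(
                sum((r - sum(rs) / len(rs)) ** 2 for r in rs)
                for rs in regions.values()
            )
            best_rss = min(best_rss, rss)
    base_rss = sum((r - sum(resid) / n) ** 2 for r in resid)
    return base_rss - best_rss

# A pure XOR-style interaction: neither feature alone explains the residual,
# but one cut pair explains it perfectly.
strength = pair_interaction_strength([0, 0, 1, 1], [0, 1, 0, 1], [0.0, 1.0, 1.0, 0.0])
```

In the full algorithm this score is computed for every feature pair and the top-ranked pairs are added to the additive model, which is what produces the feature-importance rankings reported in Section 4.3.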
**4. Empirical Analysis**
In this study, we used a unit root test to test for nonstationarity. We examined volatility
spillover effects with the MGARCH model. Lastly, we conducted the robustness test using
the machine learning approach to examine volatility transmission among the S&P500 and
the cryptocurrency market.
_4.1. Unit Root Test_
We used the augmented Dickey–Fuller Test (ADF)[3] to check for stationarity in the
data. The test is presented below.
Δr_t = α + δr_{t−1} + Σ_{i=1}^{p} β_i Δr_{t−i} + ε_t (4)
Here, r denotes the return series. Table 2 contains the results of the unit root test.
**Table 2. Unit root test.**

| Stock Markets | Return Series |
| --- | --- |
| SP | −30.7039 |
| BGCI | −29.5112 |

Notes: critical values at the 1%, 5% and 10% levels are −3.441, −2.865 and −2.569, respectively.
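With p = 0 lags, Equation (4) reduces to the simple Dickey–Fuller regression. The toy routine below (our sketch, not the authors' code; the full ADF adds the lagged-difference terms) computes the t-statistic on δ that is compared against the critical values in Table 2:

```python
import random

def dickey_fuller_tstat(r):
    """t-statistic on delta in the regression dr_t = alpha + delta * r_{t-1} + e_t."""
    y = [r[t] - r[t - 1] for t in range(1, len(r))]   # dependent variable: Δr_t
    x = r[:-1]                                        # regressor: r_{t-1}
    n = len(y)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    delta = sxy / sxx                                 # OLS slope
    alpha = my - delta * mx
    resid = [yi - alpha - delta * xi for xi, yi in zip(x, y)]
    s2 = sum(e * e for e in resid) / (n - 2)          # residual variance
    return delta / (s2 / sxx) ** 0.5

# A white-noise series is stationary, so the statistic is strongly negative.
rng = random.Random(1)
white_noise = [rng.gauss(0.0, 1.0) for _ in range(500)]
t_stat = dickey_fuller_tstat(white_noise)
```

A t-statistic far below the critical values (as for both return series in Table 2) rejects the unit-root null, i.e., the series is stationary.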
The results reported in Table 2 show that the return series are stationary. Now, we
proceed to examine the volatility linkages.
_4.2. MGARCH-BEKK Effects_
The results of the MGARCH-BEKK are presented in Table 3. The stock indices of the
BGCI and S&P500 are indexed 1 and 2, respectively.
**Table 3. Asymmetric MGARCH.**

| | S&P500 (i = 1) | BGCI (i = 2) |
| --- | --- | --- |
| v_i1 | 0.224 (0.00) | −0.019 (0.24) |
| v_i2 | −0.006 (0.93) | −0.170 (0.00) |
| w_i1 | 0.969 (0.00) | 0.003 (0.43) |
| w_i2 | 0.006 (0.70) | 0.972 (0.00) |
| k_i1 | 0.127 (0.08) | −0.056 (0.00) |
| k_i2 | −0.031 (0.79) | 0.163 (0.00) |
| Multivariate ARCH test (Lags = 12) | 94.27 (0.36) | |
| Multivariate Q-test (Lags = 12) | 24.91 (0.97) | |

Notes: the probability values are presented in parentheses. The coefficients v, w and k measure the ARCH, GARCH and asymmetric GARCH effects.
Multivariate ARCH and Q statistics tests suggested that the asymmetric BEKK model
is a suitable model. The study implements the fluctuations test proposed by Nyblom (1989).
This test is recommended for detecting possible changes in the parameters or structural
breaks when observations are obtained sequentially in time. The results of the test are
presented in Table 4.
**Table 4. Results of the fluctuations test.**

| Test | Statistic | p-Value |
| --- | --- | --- |
| Joint | 3.451 | 0.22 |
| 1 | 0.327 | 0.11 |
| 2 | 0.110 | 0.52 |
| 3 | 0.202 | 0.25 |
| 4 | 0.308 | 0.13 |
| 5 | 0.389 | 0.08 |
| 6 | 0.248 | 0.19 |
| 7 | 0.034 | 0.96 |
| 8 | 0.024 | 0.99 |
| 9 | 0.221 | 0.22 |
| 10 | 0.188 | 0.28 |

Notes: the p-value measures the significance of the statistic. No significant results are shown by the test.
All the parameters reported in Table 4 are statistically insignificant, which suggests
that there is no structural break and that the estimated MGARCH-BEKK model is a
proper model.
The matrices V and W, shown in Table 3, refer to the volatility relationships between
the stock indices. The diagonal elements in matrix V and in matrix W measure the ARCH
and GARCH effects respectively. As shown in Table 3, the parameters v11 and v22 suggest
the existence of ARCH effects, while the statistically significant values of parameters w11
and w22 indicate the presence of a GARCH effect. The statistically significant own market
GARCH parameter implies their own volatility influences the conditional variance. The
negative ARCH parameter of the BGCI shows that greater past shocks in BGCI have had
less effect on its current volatility.
The statistically insignificant off-diagonal elements of matrices V and W indicate that there are no shock or volatility transmissions between the markets. Own-market volatility spillovers, as measured by the GARCH parameters, are statistically significant. The volatility is
more pronounced in the BGCI (0.972) than in the S&P500 (0.969). The current conditional
volatility of both indices depends on their own past volatility. It does not depend on past
volatility of the other index.
We detected evidence of asymmetric responses for the S&P500 and BGCI, suggesting that negative news induces more volatility. There exists a moderate indication of asymmetric volatility transmission from the BGCI to the S&P500, implying that the good
news in the crypto market causes more volatility in S&P500 than the bad news. The absence
of bidirectional shocks and volatility spillover suggests an absence of interdependence
between the markets. It implies that it is difficult to predict the volatility of one market
using information from the other market.
_4.3. Robustness Test_
We use the algorithm-based GA²M for the robustness test for both returns and volatility between the S&P500 and BGCI. Table 5 reflects the importance of the relationships between the explanatory variables and the target variable in terms of returns. In the GA²M forecasting machine learning model, the importance of the explanatory variables is ranked in terms of their contributions to explaining the target variable. Two financial indexes, RSP and RBGCI, are constructed as the target variables in the GA²M forecasting model separately. We observed that the S&P500 has a negative power to explain the BGCI (−0.005) and that the BGCI also has a negative explanatory power for the S&P500 (−0.014). The most important explanatory feature is ranked at 100%; the least important feature is ranked at 0. The results in Table 5 fall below 0, indicating that both indexes lack connectedness from the perspective of returns.
**Table 5. Feature importance on returns.**

| | Target Variable: RSP | Target Variable: RBGCI |
| --- | --- | --- |
| RSP | - | −0.0050 |
| RBGCI | −0.0104 | - |

Notes: this table indicates feature importance in the context of the forecasting model. The target variable is the dependent variable. The variables in the first column are the independent variables. A higher number suggests a stronger explanatory power of the independent variable for the target variable.
Table 6 further identifies the connectedness in terms of volatility. The BGCI had a stronger positive power in explaining the S&P500 (0.1773) than the S&P500 had in explaining the BGCI (0.028), indicating asymmetric volatility transmission between the BGCI and S&P500. Both markets contributed explanatory power in explaining the volatility transmission to each other. Our results on volatility transmission are robust.
**Table 6. Feature importance on volatility.**

| | Target Variable: VSP | Target Variable: VBGCI |
| --- | --- | --- |
| VSP | - | 0.028 |
| VBGCI | 0.1773 | - |

Notes: this table indicates feature importance in the context of the forecasting model. The target variable is the dependent variable. The variables in the first column are the independent variables. A higher number in the table suggests a stronger explanatory power of the independent variable for the target variable.
**5. Summary**
This is the first study combining both a machine learning approach and an MGARCH-BEKK model to identify the volatility spillover and transmission across markets. We answered our research question, finding that there is insignificant volatility spillover across the stock indices. The empirical findings provide implications for practitioners and researchers in portfolio diversification and policy study. More importantly, the study explores the application of the new GA²M technology in finance beyond the classical time series approaches.
The MGARCH-BEKK model found a lack of volatility spillover between the markets.
The MGARCH-BEKK results showed that the past shocks and volatility of own markets
have more influence on the recent volatility. The algorithmic machine learning approach confirmed that there was no positive impact power between the returns of the S&P500 and the BGCI. We found that the volatility spillover from the BGCI to the S&P500 is slightly higher
than the transmission in the opposite direction. The study detected a unidirectional, low-magnitude asymmetric-response spillover from the BGCI to the S&P500. The analysis demonstrated evidence of asymmetric responses in both markets. It also suggests that the past volatility of own markets carries useful information for forecasting volatility. The empirical results of the GA²M show that our findings are robust.
Overall, we discovered a lack of interdependence in volatility, indicating a possible
portfolio diversification advantage for investors. Asset allocation or hedging will be useful
to portfolio managers. Our results also provide a theoretical framework for policymakers
when making regulations.
**Author Contributions: Conceptualization, P.J. and J.W.; methodology, P.J., J.W. and M.B.; software,**
P.J. and J.W.; validation, P.J., J.W. and M.B.; formal analysis, P.J. and J.W.; investigation, P.J. and J.W.;
resources, P.J., J.W. and M.B.; data curation, P.J. and J.W.; writing—original draft preparation, P.J.,
J.W. and M.B.; writing—review and editing, P.J., J.W. and M.B.; visualization, P.J., J.W. and M.B.;
supervision, P.J., J.W. and M.B.; project administration, P.J., J.W. and M.B. All authors have read and
agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: The data presented in this study are available on request from the**
corresponding author. The data are not publicly available due to privacy concern.
**Conflicts of Interest: The authors declare no conflict of interest.**
**Notes**
1 CoinMarketCap, April 2021.
2 Hastie and Tibshirani (1987) created Generalized Additive Models, which combine generalized linear models and additive models.
3 Dickey and Fuller (1979, 1981) proposed the ADF test.
**References**
Aslanidis, Nektarios, Aurelio Bariviera, and Alejandro Perez-Laborda. 2021. Are cryptocurrencies becoming more interconnected? _Economics Letters_ 199: 109725.
Bauwens, Luc, Sébastien Laurent, and Jeroen V. K. Rombouts. 2006. Multivariate GARCH Models: A Survey. _Journal of Applied Econometrics_ 21: 79–109.
Bertomeu, Jeremy, Edwige Cheynel, Eric Floyd, and Wenqiang Pan. 2021. Using Machine Learning to Detect Misstatements. _Review of Accounting Studies_ 26: 468–519.
Bollerslev, Tim, Robert Engle, and Jeffrey Wooldridge. 1988. A Capital Asset Pricing Model with Time-Varying Covariances. _Journal of Political Economy_ 96: 116–31.
Bouri, Elie, Mahamitra Das, Rangan Gupta, and David Roubaud. 2018. Spillovers between Bitcoin and other assets during bear and bull markets. _Applied Economics_ 50: 5935–49.
Bzdok, Danilo, Naomi Altman, and Martin Krzywinski. 2018. Statistics versus machine learning. _Nature Methods_ 15: 233–34.
Canh, Nguyen Phuc, Udomsak Wongchoti, Su Dinh Thanh, and Nguyen Trung Thong. 2019. Systematic risk in cryptocurrency market: Evidence from DCC-MGARCH model. _Finance Research Letters_ 29: 90–100.
Cardona, Laura, Marcela Gutiérrez, and Diego A. Agudelo. 2017. Volatility transmission between US and Latin American stock markets: Testing the decoupling hypothesis. _Research in International Business and Finance_ 39: 115–27.
Conrad, Christian, Anessa Custovic, and Eric Ghysels. 2018. Long- and Short-Term Cryptocurrency Volatility Components: A GARCH-MIDAS Analysis. _Journal of Risk and Financial Management_ 11: 23.
Dickey, David A., and Wayne A. Fuller. 1979. Distribution of the Estimators for Autoregressive Time Series With a Unit Root. _Journal of the American Statistical Association_ 74: 427–31.
Dickey, David A., and Wayne A. Fuller. 1981. Likelihood Ratio Statistics for Autoregressive Time Series with a Unit Root. _Econometrica_ 49: 1057–72.
Engle, Robert F., and Kenneth F. Kroner. 1995. Multivariate Simultaneous Generalized ARCH. _Econometric Theory_ 11: 122–50.
Hastie, Trevor, and Robert Tibshirani. 1987. Generalized Additive Models: Some Applications. _Journal of the American Statistical Association_ 82: 371–86.
Ji, Qiang, Elie Bouri, Chi Keung Marco Lau, and David Roubaud. 2019. Dynamic connectedness and integration in cryptocurrency markets. _International Review of Financial Analysis_ 63: 257–72.
Kroner, Kenneth F., and Victor K. Ng. 1998. Modeling Asymmetric Comovements of Asset Returns. _Review of Financial Studies_ 11: 817–44.
Li, Hong. 2007. International linkages of the Chinese stock exchanges: A multivariate GARCH analysis. _Applied Financial Economics_ 17: 285–97.
Li, Hong, and Ewa Majerowska. 2008. Testing stock market linkages for Poland and Hungary: A multivariate GARCH approach. _Research in International Business and Finance_ 22: 247–66.
Liu, Jinan, and Apostolos Serletis. 2019. Volatility in the Cryptocurrency Market. _Open Economies Review_ 30: 779–811.
Lou, Yin, Rich Caruana, and Johannes Gehrke. 2012. Intelligible Models for Classification and Regression. Paper presented at the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Beijing, China, August 12–16; pp. 150–58.
Lou, Yin, Rich Caruana, Johannes Gehrke, and Giles Hooker. 2013. Accurate Intelligible Models with Pairwise Interactions. Paper presented at the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Chicago, IL, USA, August 11–14; pp. 623–31.
Mensi, Walid, Yun-Jung Lee, Khamis Hamed Al-Yahyaee, Ahmet Sensoy, and Seong-Min Yoon. 2019. Intraday downward/upward multifractality and long memory in Bitcoin and Ethereum markets: An asymmetric multifractal detrended fluctuation analysis. _Finance Research Letters_ 31: 19–25.
Nyblom, Jukka. 1989. Testing for the Constancy of Parameters Over Time. _Journal of the American Statistical Association_ 84: 223–30.
Shi, Yongjing, Aviral Kumar Tiwari, Giray Gozgor, and Zhou Lu. 2020. Correlations among cryptocurrencies: Evidence from multivariate factor stochastic volatility model. _Research in International Business and Finance_ 53: 101231.
Symitsi, Efthymia, and Konstantinos Chalvatzis. 2018. Return, volatility and shock spillovers of Bitcoin with energy and technology companies. _Economics Letters_ 170: 127–30.
Wang, Jinghua, and Geoffrey Ngene. 2020. Does the Bitcoin Still Own Its Dominant Power? An Intraday Analysis. _International Review of Financial Analysis_ 71: 101551.
Warin, Thierry, and Aleksandar Stojkov. 2021. Machine Learning in Finance: A Metadata-Based Systematic Review of the Literature. _Journal of Risk and Financial Management_ 14: 302.
Witten, Ian H., Eibe Frank, Mark A. Hall, and Christopher Pal. 2005. _Data Mining: Practical Machine Learning Tools and Techniques_, 4th ed. Burlington: Morgan Kaufmann.
Worthington, Andrew, and Helen Higgs. 2004. Transmission of equity returns and volatility in Asian developed and emerging markets: A multivariate GARCH analysis. _International Journal of Finance & Economics_ 9: 71–80.
Yousaf, Imran, and Shoaib Ali. 2020. The COVID-19 outbreak and high frequency information transmission between major cryptocurrencies: Evidence from the VAR-DCC-GARCH approach. _Borsa Istanbul Review_ 20: S1–S10.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/jrfm15030116?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/jrfm15030116, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1911-8074/15/3/116/pdf?version=1647937605"
}
| 2,022
|
[] | true
| 2022-03-02T00:00:00
|
[
{
"paperId": "908dde691ff196118380d7c1b6b96fce3427cda1",
"title": "Machine Learning in Finance: A Metadata-Based Systematic Review of the Literature"
},
{
"paperId": "214803edc14e67a3c3d478f44716826eaeb90d1e",
"title": "The COVID-19 outbreak and high frequency information transmission between major cryptocurrencies: Evidence from the VAR-DCC-GARCH approach"
},
{
"paperId": "26ce108aa012635bf8dacd8e48f369565b12eb0c",
"title": "Correlations among cryptocurrencies: Evidence from multivariate factor stochastic volatility model"
},
{
"paperId": "31dfe6ced6cbace1a479f34c4ccb3d6caad10650",
"title": "Does Bitcoin still own the dominant power? An intraday analysis"
},
{
"paperId": "1040d65f3ecc9411dcb3fece96955591738541f0",
"title": "Are Cryptocurrencies Becoming More Interconnected?"
},
{
"paperId": "8035f758353a67d41d977e54fef01fae5aa07dba",
"title": "Using machine learning to detect misstatements"
},
{
"paperId": "adb7c8ae9b566ecd4b583629c86e9ca7e5bc3671",
"title": "Intraday downward/upward multifractality and long memory in Bitcoin and Ethereum markets: An asymmetric multifractal detrended fluctuation analysis"
},
{
"paperId": "157eb63e98df046a3afcc0d0c45f94600da9f181",
"title": "Volatility in the Cryptocurrency Market"
},
{
"paperId": "16d3a0aeab0bd3301dcd05850bcadfdc2832ac4e",
"title": "Systematic risk in cryptocurrency market: Evidence from DCC-MGARCH model"
},
{
"paperId": "0336acd79011c0ef2bdea9fe97b6f81686adf3dd",
"title": "Dynamic connectedness and integration in cryptocurrency markets"
},
{
"paperId": "04a959ec5322f19902defe339d84d309c69cb669",
"title": "Return, volatility and shock spillovers of Bitcoin with energy and technology companies"
},
{
"paperId": "a08695531eaedd6d5ec2786662d8659e2659f57e",
"title": "Spillovers between Bitcoin and other assets during bear and bull markets"
},
{
"paperId": "2acf7458d9a3b3c4b1186ddc05bc0e4374cccb6c",
"title": "Long- and Short-Term Cryptocurrency Volatility Components: A GARCH-MIDAS Analysis"
},
{
"paperId": "fa9906b466bbbff3a8c206b499cd34323a91d1b2",
"title": "Points of Significance: Statistics versus machine learning"
},
{
"paperId": "e183560fed714189b6b399f3c8f27c770cf08f8b",
"title": "Volatility Transmission between US and Latin American Stock Markets: Testing the Decoupling Hypothesis"
},
{
"paperId": "b86cc51cb2ad625439ca9623fb86fa7ff0573b84",
"title": "Testing stock market linkages for Poland and Hungary: A multivariate GARCH approach"
},
{
"paperId": "44a776ab9accc5a66a4952f78c0b6fa43ab6d26d",
"title": "International linkages of the Chinese stock exchanges: a multivariate GARCH analysis"
},
{
"paperId": "abb5e72426abd553ab123f5353a1b5b3fcf46b7b",
"title": "Multivariate GARCH Models: A Survey"
},
{
"paperId": "82cf0a3648b056856e63b0d5818da0977f18316b",
"title": "Modeling Asymmetric Comovements of Asset Returns"
},
{
"paperId": "ee3a97da9b16c88e82de4426b41d62884e837b5a",
"title": "Multivariate Simultaneous Generalized ARCH"
},
{
"paperId": "4cb999f3aed181df3bf37ae612a68193017d2eec",
"title": "Testing for the Constancy of Parameters over Time"
},
{
"paperId": "2a4f45782ac678ba2846ae2af5c7b9b1da325256",
"title": "A Capital Asset Pricing Model with Time-Varying Covariances"
},
{
"paperId": "04d8c91dd81919c886d5b392772e67cac0788594",
"title": "Generalized Additive Models: Some Applications"
},
{
"paperId": "b1468cd54465fbe8ac00810ddca1bcb77cfced5c",
"title": "LIKELIHOOD RATIO STATISTICS FOR AUTOREGRESSIVE TIME SERIES WITH A UNIT ROOT"
},
{
"paperId": "5cbbb5deb4d92dc0504fb7f2af0f6fe7da355d98",
"title": "Distribution of the Estimators for Autoregressive Time Series with a Unit Root"
},
{
"paperId": "9c444da345a86e5227921ad969b7c31bd280b775",
"title": "Transmission of equity returns and volatility in Asian developed and emerging markets : a multivariate Garch analysis"
}
] | 8,995
# Continuous Timestamping for Efficient Replication Management in DHTs

Reza Akbarinia, Mounir Tlili, Esther Pacitti, Patrick Valduriez, Alexandre A. B. Lima

To cite this version: Reza Akbarinia, Mounir Tlili, Esther Pacitti, Patrick Valduriez, Alexandre A. B. Lima. Continuous Timestamping for Efficient Replication Management in DHTs. GLOBE'10: Third International Conference on Data Management in Grid and P2P Systems, Bilbao, Spain. pp. 38-49. lirmm-00607932.
HAL Id: lirmm-00607932 (https://hal-lirmm.ccsd.cnrs.fr/lirmm-00607932), submitted on 11 Jul 2011.
Reza Akbarinia (1), Mounir Tlili (2), Esther Pacitti (3), Patrick Valduriez (4), Alexandre A. B. Lima (5)

(1, 2) INRIA and LINA, Univ. Nantes, France
(3) LIRMM and INRIA, Univ. Montpellier, France
(4) INRIA and LIRMM, Montpellier, France
(5) COPPE/UFRJ, Rio de Janeiro, Brazil

Emails: (1, 4) Firstname.Lastname@inria.fr; (2) pacitti@lirmm.fr; (3) Firstname.Lastname@univ-nantes.fr; (5) assis@cos.ufrj.br
**Abstract.** Distributed Hash Tables (DHTs) provide an efficient solution for data location and lookup in large-scale P2P systems. However, it is up to the applications to deal with the availability of the data they store in the DHT, and to improve data availability most DHT applications rely on data replication. Efficient replication management is quite challenging, however, in particular because of concurrent and missed updates. In this paper, we propose an efficient solution to data replication in DHTs: a new service, called Continuous Timestamp based Replication Management (CTRM), which deals with the efficient storage, retrieval and updating of replicas in DHTs. To perform updates on replicas, we propose a new protocol that stamps update actions with timestamps generated in a distributed fashion. The timestamps are not only monotonically increasing but also continuous, i.e. without gaps. Monotonicity allows applications to determine a total order on updates; continuity enables applications to deal with missed updates. We evaluated the performance of our solution through simulation and experimentation, and the results show its effectiveness for replication management in DHTs.
### 1 Introduction
Distributed Hash Tables (DHTs), e.g. CAN [7] and Chord [10], provide an efficient solution for data location and lookup in large-scale P2P systems. While there are significant implementation differences between DHTs, they all map a given key k onto a peer p using a hash function, and can look up p efficiently, usually in O(log n) routing hops, where n is the number of peers [2]. One of the main characteristics of DHTs (and other P2P systems) is the dynamic behavior of peers, which can join and leave the system frequently, at any time. When a peer goes offline, its data becomes unavailable. To improve data availability, most applications built on top of DHTs rely on data replication, storing the (key, data) pairs at several peers, e.g. using several hash functions. If one peer is unavailable, its data can still be retrieved from the other peers that hold a replica. However, update management is difficult
because of the dynamic behaviour of peers and concurrent updates. There may be replica holders (i.e. peers that maintain replicas) that do not receive the updates, e.g. because they are absent during the update operation. Thus, we need a mechanism that efficiently determines whether a replica on a peer is up-to-date, despite missed updates. In addition, to deal with concurrent updates, we need to determine a total order on the update operations.
In this paper, we give an efficient solution to replication management in DHTs. We
propose a new service, called Continuous Timestamp based Replication Management
(CTRM), which deals with the efficient storage, retrieval and updating of replicas in
DHTs. To perform updates on replicas, we propose a new protocol that stamps the
updates with timestamps which are generated in a distributed fashion using groups of
peers managed dynamically. The updates’ timestamps are not only monotonically
increasing but also continuous, i.e. without gap. The property of monotonically
increasing allows CTRM to determine a total order on updates and to deal with
concurrent updates. The continuity of timestamps enables replica holders to detect the
existence of missed updates by looking at the timestamps of the updates they have
received. Examples of applications that can take advantage of continuous
timestamping are P2P collaborative text editing applications, e.g. P2P Wiki [11],
which need to reconcile the updates done by collaborating users. We evaluated our
CTRM service through experimentation and simulation; the results show its
effectiveness. In our experiments, we compared CTRM with two baseline services,
and the results show that with a low overhead in update response time, CTRM
supports fault tolerant data replication using continuous timestamps. The results also
show that data retrieval with CTRM is much more efficient than with the baseline services. We also investigated the effect of peer failures on the correctness of CTRM; the results show that it works correctly even in the presence of peer failures.
The rest of this paper is organized as follows. In Section 2, we define the problem
we address in this paper. In Section 3, we propose our replication management
service, CTRM. Section 4 describes a performance evaluation of our solution. Section
5 discusses related work. Section 6 concludes.
### 2 Problem Definition
In this paper we deal with improving data availability in DHTs. Like several other protocols and applications designed over DHTs, e.g. [2], we assume that the lookup service of the DHT behaves properly. That is, given a key k, it either correctly finds the peer responsible for k or reports an error, e.g. in the case of network partitioning where the responsible peer is not reachable.
To improve data availability, we replicate each object (for instance a file) at a group of peers of the DHT, which we call replica holders. Each replica holder keeps a replica copy of a DHT object. Each replica may be updated locally by a replica holder or remotely by other peers of the DHT. This model is in conformance with the multi-master replication model [5]. Updates on replicas are asynchronous, i.e., an update is applied first to one replica and afterwards (after the update's commitment) to the other replicas of the same object.
The problem that arises is that a replica holder may fail or leave the system at any
time. Then, when re-joining (or recovering) it may need to retrieve the updates it
missed when gone. Furthermore, updates on different replica copies of an object may
be performed in parallel. To ensure consistency, updates must be applied to all
replicas in a specific total order.
In this model, to ensure consistency of replicas, we need a distributed mechanism that determines 1) a total order for the updates; and 2) the number of missed updates at a replica holder. Such a mechanism allows dealing with concurrent updates, i.e. committing them in the same order at all replica holders. In addition, it allows a rejoining (recovering) replica holder to determine whether its local replica is up-to-date, and how many updates should be applied to the replica if it is not.
One solution for realizing such a mechanism is to stamp the updates with
timestamps that are monotonically increasing and continuous. We call such a
mechanism update with continuous timestamps.
Let patch be the action (or set of actions) generated by a peer during one update operation. Then, the property of update with continuous timestamps can be defined as follows.

**Definition 1: Update with continuous timestamps (UCT).** A mechanism of update is UCT iff the patches of updates are stamped by increasing real numbers such that, for any two consecutive committed updates, the difference between their timestamps is one.

Formally, consider two consecutive committed updates u1 and u2 on a data d, and let pch1 and pch2 be the patches of u1 and u2, respectively. Assume that u2 is done after u1, and let t1 and t2 be the timestamps of pch1 and pch2, respectively. Then we should have t2 = t1 + 1.
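A minimal sketch of checking the UCT property over a history of committed patch timestamps (the function name is illustrative, not from the paper):

```python
def is_uct(timestamps):
    """Check the UCT property of Definition 1: timestamps of consecutive
    committed updates must be monotonically increasing AND continuous,
    i.e. each timestamp is exactly one more than its predecessor."""
    return all(t2 == t1 + 1 for t1, t2 in zip(timestamps, timestamps[1:]))

# A gap-free history satisfies UCT; a gap reveals a missed update.
assert is_uct([1, 2, 3, 4])
assert not is_uct([1, 2, 4])   # timestamp 3 is missing: a missed update
assert not is_uct([2, 1])      # not monotonically increasing
```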
To support the UCT property in a DHT, we must deal with two challenges: 1) generating continuous timestamps in the DHT in a distributed fashion; and 2) ensuring that any two consecutively generated timestamps are used for two consecutive updates. The first challenge is hard, in particular due to the dynamic behavior of peers, which can leave or join the system frequently and at any time. This behavior makes timestamping solutions based on physical clocks inappropriate, because distributed clock synchronization algorithms do not guarantee good synchronization precision if the nodes are not linked together long enough [6]. The second challenge is difficult as well, because some generated timestamps may end up used for no update, e.g. because the timestamp requester peer may fail before performing the update.
### 3 Replication Management
In this section, we propose a replication management service, called Continuous Timestamp based Replication Management (CTRM), that deals with efficient storage, retrieval and updating of replicas on top of DHTs, while supporting the UCT property. The rest of this section is organized as follows. We first give an overview of CTRM. Second, we introduce the concept of replica holder groups, an efficient approach for replica storage in CTRM. Third, we propose a new protocol used by CTRM for performing updates on replicas. Finally, we show how CTRM deals with peer faults that may happen during the execution of the protocol.
#### 3.1 Overview
To provide high data availability, CTRM replicates each data item in the DHT at a group of peers, called its replica holder group, determined by using a hash function. After each update on a data item, the corresponding patch is sent to the group, where a monotonically increasing timestamp is generated by one of the members, i.e. the responsible of the group. Then the patch and its timestamp are published to the members of the group using an update protocol, called the UCT protocol (see the details in Section 3.3).
To retrieve an up-to-date replica of a data item, the request is sent to the responsible of the data's replica holder group. The responsible peer sends the data and the latest generated timestamp to the group members, one by one, and the first member that has received all patches returns its replica to the requester. To verify whether all patches have been received, replica holders check the two following conditions, called up-to-date conditions: 1) the timestamps of the received patches are continuous, i.e. there is no missed update; and 2) the latest generated timestamp is equal to the timestamp of the latest patch received by the replica holder.

The above up-to-date conditions are also verified periodically by each member of the group. If the conditions do not hold, the member updates its replica by retrieving the missed patches and their corresponding timestamps from the responsible of the group or from other members that hold them.
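The two up-to-date conditions can be sketched as a small check (hypothetical function names; timestamps are assumed to start at 1, since the group counter starts at 0 and is incremented before use):

```python
def is_up_to_date(received_ts, latest_generated_ts):
    """Check CTRM's two up-to-date conditions for a replica holder.

    received_ts: sorted timestamps of the patches this holder has received.
    latest_generated_ts: latest timestamp generated by the group responsible.
    """
    if not received_ts:
        return latest_generated_ts == 0  # no update committed yet
    # Condition 1: timestamps are continuous, i.e. no missed update.
    continuous = all(b == a + 1 for a, b in zip(received_ts, received_ts[1:]))
    # Condition 2: the last received patch carries the latest timestamp.
    return continuous and received_ts[-1] == latest_generated_ts

assert is_up_to_date([1, 2, 3], 3)
assert not is_up_to_date([1, 3], 3)   # patch 2 is missing
assert not is_up_to_date([1, 2], 3)   # patch 3 not yet received
```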
#### 3.2 Replica Holder Groups
Let Gk be the group of peers that maintain the replicas of a data item whose ID is k. We call these peers the replica holders for k. As replica holders for each data item, we use peers that are relatively close in the overlay network. For each group, there is a responsible peer, which is also one of its members. For choosing the responsible of the group Gk, we use a hash function hr: the peer p that is responsible for key=hr(k) in the DHT is the responsible of Gk. In this paper, the peer that is responsible for key=hr(k) is denoted by rsp(k, hr), i.e. called the responsible of k with respect to hash function hr. In addition to rsp(k, hr), some of the peers that are close to it, e.g. its neighbors, are members of Gk. Each member of the group knows the addresses of the other members. The number of members of a replica holder group, i.e. |Gk|, is a system parameter.

If the peer p that is responsible for a group leaves the system or fails, another peer, say q, becomes responsible for the group, i.e. becomes the new responsible of key=hr(k) in the DHT. In almost all DHTs (e.g. CAN [7] and Chord [10]), the new responsible peer is one of the neighbors of the previous one.
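The choice of the group responsible via rsp(k, hr) can be sketched as a successor lookup on a toy ring (illustrative assumptions: SHA-1 as the hash, a small identifier space, and a Chord-style successor rule; none of this is prescribed by the paper):

```python
import hashlib

def h(value, salt):
    """A stand-in for a DHT hash function: salted SHA-1 truncated
    to a small toy identifier space."""
    digest = hashlib.sha1(f"{salt}:{value}".encode()).hexdigest()
    return int(digest, 16) % 2**16

def rsp(k, peers, salt="hr"):
    """Return the peer responsible for key = hr(k): the first peer whose
    ID is >= hr(k) on the ring, wrapping around (Chord-style successor)."""
    key = h(k, salt)
    ring = sorted(peers, key=lambda p: h(p, "peer"))
    for p in ring:
        if h(p, "peer") >= key:
            return p
    return ring[0]  # wrap around the ring

peers = [f"peer{i}" for i in range(8)]
# The same data ID deterministically maps to the same group responsible.
assert rsp("objectA", peers) == rsp("objectA", peers)
assert rsp("objectA", peers) in peers
```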
1. On the update requester:
   - Send {k, pch} to rsp(k, hr).
   - Monitor rsp(k, hr) using a failure detector.
   - Go to Step 8 if rsp(k, hr) fails.
2. On rsp(k, hr), upon receiving {k, pch}:
   - Set ck = ck + 1 (increase the counter by one; initially ck = 0).
   - Let ts = ck; send {k, pch, ts} to the other replica holders.
   - Set a timer, called ackTimer, to a default time.
3. On each replica holder, upon receiving {k, pch, ts}:
   - Maintain {k, pch, ts} in a temporary memory on disk.
   - Send an ack to rsp(k, hr).
4. On rsp(k, hr), upon expiry of ackTimer:
   - If (number of received acks ≥ threshold δ), send a "commit" message to the replica holders;
   - Else set ck = ck - 1, and send an "abort" message to the update requester.
5. On each replica holder, upon receiving "commit":
   - Maintain {pch, ts} as a committed patch for k.
   - Update the local replica using pch.
   - Send a "terminate" message to rsp(k, hr).
6. On rsp(k, hr), upon receiving the first "terminate" message:
   - Send "terminate" to the update requester.
7. On the update requester, upon receiving "terminate" from rsp(k, hr):
   - Commit the update operation.
8. On the update requester, upon detecting a failure of rsp(k, hr):
   - If the "terminate" message has been received, commit the update operation;
   - Else, check the replica holders: if at least one of them received the "commit" message, commit the update operation;
   - Else, abort the update operation.

**Figure 1. The UCT protocol**
If a responsible peer p leaves the system normally, i.e. without failing, it sends to the next responsible peer, i.e. q, the last timestamps of all data replicated in the group. If p fails, then the next responsible peer, say q, contacts the members of the group (most of which are its neighbors) and asks them to return the timestamps which they maintain for the data replicated on them. Then, for each replicated data item, q initializes a timestamp equal to the highest timestamp received from the group members.

Each group member p periodically sends alive messages to the responsible of the group, and the responsible peer returns to it the current list of members. If the responsible peer does not receive an alive message from a member, it assumes that the member has failed. When a member of a group leaves the system or fails, after becoming aware of this departure, the responsible of the group invites a close peer, e.g. one of its neighbors, to join the group. The new member receives from the responsible peer a list of the other members as well as up-to-date replicas of all data replicated by the group.
Each peer can belong to several groups, but it can be responsible for only one
group. Each group can hold the replicas of several data items.
#### 3.3 Update with Continuous Timestamps
In this section, we propose a protocol, called UCT (Update with Continuous
Timestamps) that deals with updating replicas in CTRM.
To simplify the description of our update protocol, we assume the existence of (not perfect) failure detectors [3], which can be implemented as follows. When we set up a failure detector on a peer p to monitor a peer q, the failure detector periodically sends ping messages to q in order to test whether q is still alive (and connected). If the failure detector receives no response from q, then it considers q as a failed peer, and triggers an error message to inform p about this failure.
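A minimal in-process sketch of such a ping-based failure detector (threading-based and purely illustrative; a real deployment would ping over the network, and all names here are assumptions):

```python
import threading

class FailureDetector:
    """Periodically ping a monitored peer; report failure after
    max_misses consecutive missed pings."""

    def __init__(self, ping, on_failure, interval=1.0, max_misses=3):
        self.ping = ping            # callable: True if the peer answered
        self.on_failure = on_failure
        self.interval = interval
        self.max_misses = max_misses
        self.misses = 0
        self.stopped = threading.Event()

    def _loop(self):
        while not self.stopped.is_set():
            self.misses = 0 if self.ping() else self.misses + 1
            if self.misses >= self.max_misses:
                self.on_failure()   # peer is suspected to have failed
                return
            self.stopped.wait(self.interval)

    def start(self):
        threading.Thread(target=self._loop, daemon=True).start()

# Example: a peer that stops answering is eventually suspected.
suspected = threading.Event()
answers = iter([True, False, False, False])
fd = FailureDetector(lambda: next(answers, False), suspected.set,
                     interval=0.01)
fd.start()
suspected.wait(timeout=2)
assert suspected.is_set()
```

Note the detector is not perfect, matching the paper's assumption: a slow peer may be wrongly suspected, which the protocol tolerates by double-checking the update status at the replica holders.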
Let us now describe the UCT protocol. Let p0 be the peer that wants to update a data item whose ID is k. The peer p0 is called the update requester. Let pch be the patch of the update performed by p0. Let p1 be the responsible of the replica holder group for k, i.e. p1 = rsp(k, hr). The protocol proceeds as follows (see Figure 1):

- **Update request.** In this phase, the update requester, i.e. p0, obtains the address of the responsible of the replica holder group, i.e. p1, by using the DHT's lookup service, and sends to it an update request containing the pair (k, pch). Then, p0 waits for a terminate message from p1. It also uses a failure detector to monitor p1. The wait time is limited by a default value, e.g. by using a timer. If p0 receives the terminate message from p1, then it commits the operation. If the timer times out or the failure detector reports a fault of p1, then p0 checks whether the update has been done or not, i.e. by checking the data at the replica holders. If the answer is positive, the operation is committed; else it is aborted.

- **Timestamp generation and replica publication.** After receiving the update request, p1 generates a timestamp ts for k by increasing a local counter that it keeps for k, say ck. Then, it sends (k, pch, ts) to the replica holders, i.e. the members of its group, and asks them to return an acknowledgement. When a replica holder receives (k, pch, ts), it returns an acknowledgement to p1 and maintains the data in a temporary memory on disk. The patch is not considered as an update before receiving a commit message from p1. If the number of received acknowledgements is greater than or equal to a threshold δ, then p1 starts the update confirmation phase. Otherwise p1 sends an abort message to p0. The threshold δ is a system parameter, e.g. it is chosen in such a way that the probability that δ peers of the group fail simultaneously is almost zero.

- **Update confirmation.** In this phase, p1 sends the commit message to the replica holders. When a replica holder receives the commit message, it labels {pch, ts} as a committed patch for k. Then, it executes the patch on its local replica, and sends a terminate message to p1. After receiving the first terminate message from the replica holders, p1 sends a terminate message to p0. If a replica holder does not receive the commit message for a patch, it discards the patch upon receiving a new patch containing the same or a greater timestamp value.
Notice that the goal of our protocol is not to provide eager replication, but to have
at least δ replica holders that receive the patch and its timestamp. If this goal is
attained, the update operation is committed. Otherwise it is aborted, and the update
requester should try its update later.
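As an illustrative sketch of the responsible peer's side of the UCT protocol (steps 2 and 4 of Figure 1), with message passing abstracted into direct function calls and all names hypothetical:

```python
def handle_update(k, pch, counters, replica_holders, delta):
    """Sketch of rsp(k, hr)'s role: generate the next continuous
    timestamp, collect acks, and commit only if >= delta acks arrive;
    on abort, roll the counter back so timestamps stay continuous."""
    counters[k] = counters.get(k, 0) + 1   # next continuous timestamp
    ts = counters[k]
    acks = sum(1 for holder in replica_holders if holder.receive(k, pch, ts))
    if acks >= delta:
        for holder in replica_holders:
            holder.commit(k, pch, ts)
        return ("commit", ts)
    counters[k] -= 1                       # abort: undo the timestamp
    return ("abort", None)

class Holder:
    """Toy replica holder: acks iff alive, stores committed patches."""
    def __init__(self, alive=True):
        self.alive, self.pending, self.committed = alive, {}, []
    def receive(self, k, pch, ts):
        if self.alive:
            self.pending[k] = (pch, ts)
        return self.alive               # ack iff alive
    def commit(self, k, pch, ts):
        if self.alive:
            self.committed.append((pch, ts))

counters = {}
holders = [Holder(), Holder(), Holder(alive=False)]
assert handle_update("d", "p1", counters, holders, delta=2) == ("commit", 1)
assert handle_update("d", "p2", counters, holders, delta=4) == ("abort", None)
assert handle_update("d", "p3", counters, holders, delta=2) == ("commit", 2)
```

Note how the aborted second update leaves the committed timestamps continuous (1, then 2), which is the heart of the UCT guarantee.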
Let us now consider the case of concurrent updates, e.g. two or more peers want to update a data item d at the same time. In this case, the concurrent peers send their requests to the responsible of d's group, say p1. The peer p1 determines an order for the requests, e.g. depending on their arrival time, or on the distance of the requesters if the requests arrive at the same time. Then it processes the requests one by one according to this order, i.e. it commits or aborts one request and then starts the next one. Thus, concurrent updates cause no inconsistency problem for our replication management service.
#### 3.4 Fault Tolerance
Let us now study the effect of peer failures on the UCT protocol and discuss how they
are handled. By peer failures, we mean the situations where a peer crashes or gets
disconnected from the network abnormally, e.g. without informing the responsible of
the group. We show that these failures do not block our update protocol. We also
show that even in the presence of these failures, the protocol guarantees continuous timestamping, i.e. when an update is committed, the timestamp of its patch is only one unit greater than that of the previous one. For this, it is sufficient to show that each generated timestamp is either attached to a committed patch or aborted. By aborting a timestamp, we mean returning the counter to its value before the update operation.
During our update protocol, a failure may happen on the responsible of the group
or on a replica holder. We first study the case of the responsible of the group. In this
case, the failure may happen in one of the following time intervals:
- **I1: after receiving the update request and before generating the timestamp.** If the responsible of the group fails in this interval, then after some time, the failure detector detects the failure or the timer times out. Afterwards, the update requester checks the update at the replica holders, and since it has not been done, the operation is aborted. Therefore, a failure in this interval does not block the protocol, and continuous timestamping is assured, i.e. because no update is performed.

- **I2: after I1 and before sending the patch to replica holders.** In this interval, as in the previous one, the failure detector detects the failure or the timer times out, and thus the operation is aborted. The timestamp ts, which was generated by the failed responsible peer, is aborted as follows. When the responsible peer fails, its counters become invalid, and the next responsible peer initializes its counter using the greatest timestamp of the committed patches at the replica holders. Thus, the counter returns to its value before the update operation. Therefore, in the case of a crash in this interval, continuous timestamping is assured.

- **I3: after I2 and before sending the commit message to replica holders.** If the responsible peer fails in this interval, since the replica holders have not received the commit, they do not consider the received data as a valid replica. Thus, when the update requester checks the update, they answer that the update has not been done, and the operation is aborted. Therefore, in this case, continuous timestamping is not violated.

- **I4: after I3 and before sending the terminate message to the update requester.** In this case, after detecting the failure or timeout, the update requester checks the status of the update in the DHT and finds out that the update has been done, thus it commits the operation. In this case, the update is done with a timestamp which is one unit greater than that of the previous update, thus the property of continuous timestamping is enforced.
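The counter-recovery step used in interval I2 (the next responsible peer initializing its counter from the greatest committed timestamp at the replica holders) can be sketched as follows; names are illustrative:

```python
def recover_counter(replica_holders):
    """When the group responsible fails, the new responsible initializes
    its counter from the greatest committed timestamp reported by the
    group members. Each holder is modeled here as the list of committed
    timestamps it maintains."""
    return max((max(h, default=0) for h in replica_holders), default=0)

# Patch 3 was never committed (the old responsible crashed before sending
# "commit"), so the recovered counter is 2 and the next update is stamped
# 3, keeping the committed timestamps continuous.
holders = [[1, 2], [1], [1, 2]]
assert recover_counter(holders) == 2
```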
### 4 Experimental Validation
In this section, we evaluate the performance of CTRM through experimentation over a 64-node cluster and through simulation. The experimentation over the cluster was useful to validate our algorithm and calibrate our simulator. The simulation allows us to study scalability up to high numbers of peers (up to 10,000).
#### 4.1 Experimental and Simulation Setup
Our experimentation is based on an implementation of the Chord [10] protocol. We
tested our algorithms over a cluster of 64 nodes connected by a 1-Gbps network. Each
node has two Intel Xeon 2.4 GHz processors, and runs the Linux operating system. To
study the scalability of CTRM far beyond 64 peers, we also implemented a simulator
using SimJava. After calibration of the simulator, we obtained simulation results
similar to the implementation results up to 64 peers.
Our default settings for different experimental parameters are as follows. The
latency between any two peers is a random number with normal distribution and a
mean of 100 ms. The bandwidth between peers is also a random number with normal
distribution and a mean of 56 Kbps (as in [1]). The simulator allows us to perform tests with up to 10,000 peers, after which the simulation data no longer fit in RAM and make our tests difficult. Therefore, the default number of peers is set to 10,000.
In our experiments, we consider a dynamic P2P system, i.e. there are peers that leave or join the system. Peer departures are timed by a random Poisson process (as in [8]). The average rate for events of the Poisson process is λ = 1/second. At each event, we select a peer to depart uniformly at random. Each time a peer goes away, another joins, thus keeping the total number of peers constant (as in [8]).

We also consider peer failures. Let fail rate be a parameter that denotes the percentage of peers that leave the system due to a failure. When a peer departure event occurs, the simulator decides on the type of this departure, i.e. normal leave or fail: it generates a random number uniformly distributed in [0..100]; if the number is greater than fail rate, the departure is considered a normal leave, else a fail. In our tests, the default setting for fail rate is 5% (as in [1]). Unless otherwise specified, the number of replicas of each data item is 10.
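The departure model above can be sketched as follows (an illustrative re-implementation, not the paper's SimJava simulator; all names are assumptions):

```python
import random

def departure_events(peers, fail_rate=5, lam=1.0, n_events=1000, seed=42):
    """Generate peer-departure events: exponential inter-arrival times
    (a Poisson process with rate lam), a uniformly random departing peer,
    and a uniform draw in [0, 100] classifying the departure as a normal
    leave or a fail. A fresh peer joins after each departure, so the
    population size stays constant."""
    rng = random.Random(seed)
    t = 0.0
    events = []
    for _ in range(n_events):
        t += rng.expovariate(lam)        # time of the next Poisson event
        peer = rng.choice(peers)         # departing peer, chosen uniformly
        kind = "leave" if rng.uniform(0, 100) > fail_rate else "fail"
        events.append((t, peer, kind))
    return events

events = departure_events([f"p{i}" for i in range(100)])
fails = sum(1 for _, _, kind in events if kind == "fail")
# With fail rate = 5%, roughly 5% of departures are failures.
assert 0 < fails < len(events)
```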
Although they cannot provide the same functionality as CTRM, the closest prior works to CTRM are the BRICKS project [4], denoted as BRK, and the Update Management Service (UMS) [1]. The assumptions made by these two works are close to ours, e.g. they do not assume the existence of powerful servers. BRK stores the data in the DHT using multiple keys, which are correlated to the data key. To be able to retrieve an up-to-date replica, BRK uses versioning, i.e. each replica has a version number which is increased after each update. UMS has been proposed to support data currency in DHTs, i.e. the ability to return an up-to-date replica. It uses a set of m hash functions and replicates the data randomly at m different peers. UMS works based on timestamping, but the generated timestamps are not necessarily continuous.
#### 4.2 Update Cost
Let us first investigate the performance of CTRM’s update protocol. We measure the
performance of data update in terms of response time and communication cost. By
update response time, we mean the time needed to send the patch of an update
operation to the peers that maintain the replicas. By update communication cost, we
mean the number of messages needed to update a data item.
Using our simulator, we ran experiments to study how the communication cost and response time increase with the addition of peers. Figure 2 depicts the total number of messages while increasing the number of peers up to 10,000, with the other simulation parameters set to the defaults described in Section 4.1. In all three services, the communication cost increases logarithmically with the number of peers. However, the communication cost of CTRM is much lower than that of UMS and BRK. The reason is that UMS and BRK perform multiple lookups in the DHT, whereas CTRM does only one lookup, i.e. only for finding the responsible of the group. Notice that each lookup needs O(log n) messages, where n is the number of peers of the DHT.
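A rough message-count model behind this comparison (illustrative assumptions: about log2 n messages per lookup, a fixed number of intra-group messages for CTRM, r lookups for UMS, and m replicas for BRK; the constants are not the paper's measurements):

```python
import math

def lookup_cost(n):
    """Approximate DHT lookup cost in messages: O(log n) hops."""
    return math.log2(n)

def ctrm_msgs(n, group_msgs=20):
    # One lookup to reach the group responsible, plus intra-group traffic.
    return lookup_cost(n) + group_msgs

def ums_msgs(n, r=6):
    return r * lookup_cost(n)       # r lookups, one per contacted holder

def brk_msgs(n, m=10):
    return m * lookup_cost(n)       # one lookup per replica

n = 10_000
# CTRM's single lookup dominates the comparison at scale.
assert ctrm_msgs(n) < ums_msgs(n) < brk_msgs(n)
```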
Figure 3 shows the update response time with the addition of peers up to 10,000,
with the other parameters set as described in Section 4.1. The response time of CTRM
is a little bit higher than that of UMS and BRK. The reason is that for guaranteeing
continuous timestamping, the update protocol of CTRM performs two round-tips
between the responsible of the group and the other members of the group. But, UMS
and BRK only send the update actions to the replica holders by looking up the replica
holders in parallel (note that the impact of parallel lookups on response time is very
slight, but they have a high impact on communication cost). However, the difference
in the response time of CTRM and that of UMS and BRK is small because the roundtrips in the group are less time consuming than lookups. This slight increase in
response time of CTRM’s update operation is the price to pay for guaranteeing
continuous timestamping.
#### 4.3 Data Retrieval Response Time
We now investigate the data retrieval response time of CTRM. By data retrieval
response time, we mean the time to return an up-to-date replica to the user.
Figure 4 shows the response time of CTRM, UMS and BRK with the addition of peers up to 10,000, with the other parameters set to the defaults described in Section 4.1.
The response time of CTRM is much better than that of UMS and BRK. This
difference in response time can be explained as follows. Both CTRM and UMS
services contact some replica holders, say r, in order to find an up-to-date replica, e.g.
r=6. For contacting these replica holders, CTRM performs only one lookup (to find the responsible of the group) and some low-cost communications in the group. But UMS performs exactly r lookups in the DHT. BRK retrieves all replicas of the data from the DHT (to determine the latest version), and for each replica it performs one lookup. Thus the number of lookups done by BRK is equal to the total number of data replicas, i.e. 10 in our experiments.

**Figure 2.** Communication cost of updates vs. number of peers
**Figure 3.** Response time of update operation vs. number of peers
**Figure 4.** Response time of data retrievals vs. number of peers
**Figure 5.** Effect of the number of replicas on response time of data retrievals
**Figure 6.** Timestamp continuity vs. fail rate
**Figure 7.** Consistency of returned results vs. number of concurrent updates
Let us now study the effect of the number of replicas of each data item, say m, on the
performance of data retrieval. Figure 5 shows the response time of data retrieval for
the three solutions with varying the number of replicas up to 30. The number of
replicas has almost a linear impact on the response time of BRK, because to retrieve
an up-to-date replica it has to retrieve all replicas by doing one lookup for each
replica. But, it has a slight impact on CTRM, because for finding an up-to-date replica
CTRM performs only one lookup, and some low cost communications, i.e. in the
group.
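The lookup-count argument above can be captured in a small cost model (a sketch; the parameter names and defaults are illustrative, not measured values from the paper):

```python
# Illustrative DHT-lookup cost model for the three services.
# A DHT lookup is multi-hop; intra-group messages are cheap single hops,
# so lookups dominate the retrieval response time.

def dht_lookups(service, r=6, m=10):
    """Number of DHT lookups needed to find an up-to-date replica."""
    if service == "CTRM":
        return 1          # one lookup to reach the group's responsible peer
    if service == "UMS":
        return r          # one lookup per contacted replica holder
    if service == "BRK":
        return m          # one lookup per replica (all replicas retrieved)
    raise ValueError(service)

# Doubling the number of replicas only affects BRK's lookup count,
# matching the (almost) linear growth reported for BRK in Figure 5.
print(dht_lookups("BRK", m=10), dht_lookups("BRK", m=20))  # 10 20
print(dht_lookups("CTRM", m=20))  # 1
```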
#### 4.4 Effect of Peer Failures on Timestamp Continuity
Let us now study the effect of peer failures on the continuity of timestamps used for
data updates. This study is done only for CTRM and UMS that work based on
timestamping. In our experiments we measure timestamp continuity rate by which we
mean the percentage of the updates whose timestamps are only one unit higher than
that of their precedent update. We varied the fail rate parameter, and observed its
effect on timestamp continuity rate.
Figure 6 shows timestamp continuity rate for CTRM and UMS while increasing
the fail rate, with the other parameters set as described in Section 4.1. The peer
failures do not have any negative impact on the continuity of timestamps generated by
CTRM, because our protocol assures timestamp continuity. However, when
increasing the fail rate in UMS, the percentage of updates whose timestamps are not
continuous increases.
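The continuity metric described above can be computed directly from a sequence of update timestamps (a sketch of the measurement, not the authors' experimental code):

```python
def timestamp_continuity_rate(timestamps):
    """Percentage of updates whose timestamp is exactly one unit
    higher than that of the preceding update."""
    if len(timestamps) < 2:
        return 100.0
    continuous = sum(
        1 for prev, cur in zip(timestamps, timestamps[1:]) if cur == prev + 1
    )
    return 100.0 * continuous / (len(timestamps) - 1)

# CTRM guarantees continuity; a failed peer in UMS can leave a gap.
print(timestamp_continuity_rate([1, 2, 3, 4, 5]))  # 100.0
print(timestamp_continuity_rate([1, 2, 4, 5]))     # 2 of 3 transitions continuous
```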
#### 4.5 Effect of Concurrent Updates on Result Consistency
In this section, we investigate the effect of concurrent updates on the consistency of
the results returned by CTRM. In our experiments, we perform _u updates done_
concurrently by _u different peers using the CTRM service, and after finishing the_
concurrent updates, we invoke the service’s data retrieval operation from n randomly
chosen peers (n=50 in our experiments). If there is any difference between the data
returned to the _n peers, we consider the result as inconsistent. We repeat each_
experiment several times, and report the percentage of the experiments where the
results are consistent. We perform the same experiments using the BRK service.
Figure 7 shows the results with the number of concurrent updates, i.e. u, increasing
up to 8, and with the other parameters set as defaults described in Section 4.1. As
shown, in 100% of experiments the results returned by CTRM are consistent. This
shows that our update protocol works correctly even in the presence of concurrent
updates. However, the BRK service cannot guarantee the consistency of results in the
case of concurrent updates, because two different updates may have the same version
at different replica holders.
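The BRK failure mode mentioned here is easy to reproduce in a toy model (a hypothetical sketch, not BRK's actual code): two clients read the same version concurrently and both publish version+1, leaving different values under the same version number.

```python
# Toy model of BRK-style versioning under two concurrent updates.
replica = {"version": 3, "value": "v3"}

# Both clients read version 3 concurrently...
read_a = replica["version"]
read_b = replica["version"]

# ...and each writes its own update with version 4 to different holders.
holder1 = {"version": read_a + 1, "value": "update-A"}
holder2 = {"version": read_b + 1, "value": "update-B"}

# Same version number, different values: the latest replica is undecidable,
# which is exactly why BRK cannot guarantee consistent retrieval results.
assert holder1["version"] == holder2["version"]
assert holder1["value"] != holder2["value"]
print("version collision at version", holder1["version"])
```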
### 5 Related Work
Most existing P2P systems support data replication, but usually they do not deal with
concurrent and missed updates.
OceanStore [9] is a data management system designed to provide a highly
available storage utility on top of P2P systems. It allows concurrent updates on
replicated data, and relies on reconciliation to assure data consistency. The
reconciliation is done by a set of powerful servers using a consensus algorithm. The
servers agree on which operations to apply, and in what order. However, in the
applications, which we address, the presence of powerful servers is not guaranteed.
The BRICKS project [4] provides high data availability in DHTs through
replication. For replicating a data item, BRICKS stores it in the DHT using multiple keys, which are correlated with the data key, e.g., k. There is a function that, given k, determines its correlated keys. To be able to retrieve an up-to-date replica, BRICKS
uses versioning. Each replica has a version number which is increased after each
update. However, because of concurrent updates, it may happen that two different
replicas have the same version number, thus making it impossible to decide which
one is the latest replica.
In [1], an update management service, called UMS, was proposed to support data
currency in DHTs, i.e. the ability to return an up-to-date replica. However, UMS does
not guarantee continuous timestamping which is a main requirement for collaborative
applications which need to reconcile replica updates. UMS uses a set of m hash functions and randomly replicates the data at m different peers, which is more expensive than the groups we use in CTRM, particularly in terms of communication cost. A prototype based on UMS was demonstrated in [12].
### 6 Conclusion
In this paper, we addressed the problem of efficient replication management in DHTs.
We proposed a new service, called continuous timestamp based replication
management (CTRM), which deals with efficient and fault tolerant data replication,
retrieval and update in DHTs, by taking advantage of replica holder groups and
monotonically increasing and continuous timestamps.
### References
[1] Akbarinia, R., Pacitti, E., Valduriez, P.: Data Currency in Replicated DHTs.
_SIGMOD Conf., 211-222 (2007)_
[2] Chawathe, Y., Ramabhadran, S., Ratnasamy, S., LaMarca, A., Shenker, S.,
Hellerstein, J.M.: A case study in building layered DHT applications.
_SIGCOMM Conf., 97-108 (2005)_
[3] Dabek, F., Kaashoek, M.F., Karger, D., Morris, R., Stoica, I.: Wide-Area
Cooperative Storage with CFS. ACM Symp. on Operating Systems Principles,
202-215 (2001)
[4] Knezevic, P., Wombacher, A., Risse, T.: Enabling High Data Availability in a
DHT. Proc. of Int. Workshop on Grid and P2P Computing Impacts on Large
_Scale Heterogeneous Distributed Database Systems, 363-367 (2005)_
[5] Özsu, T., Valduriez, P.: _Principles of Distributed Database Systems. 2nd_
Edition, Prentice Hall, 1999.
[6] PalChaudhuri, S., Saha, A.K., Johnson, D.B.: Adaptive Clock Synchronization
in Sensor Networks. Int. Symp. on Information Processing in Sensor Networks,
340-348 (2004)
[7] Ratnasamy, S., Francis, P., Handley, M., Karp, R., Shenker, S.: A scalable
content-addressable network. SIGCOMM Conf., 161-172 (2001)
[8] Rhea, S.C., Geels, D., Roscoe, T., Kubiatowicz, J.: Handling churn in a DHT.
_USENIX Annual Technical Conf., 127-140 (2004)_
[9] Rhea, S.C., Eaton, P., Geels, D., Weatherspoon, H., Zhao, B., Kubiatowicz, J.:
Pond: the OceanStore Prototype. _USENIX Conf. on File and Storage_
_Technologies, 1-14 (2003)_
[10] Stoica, I., Morris, R., Karger, D.R., Kaashoek, M.F. Balakrishnan, H.: Chord:
a scalable peer-to-peer lookup service for internet applications. _SIGCOMM_
_Conf., 149-160 (2001)_
[11] Xwiki Concerto Project: http://concerto.xwiki.com
[12] Tlili, M., Dedzoe, W.K., Pacitti, E., Valduriez, P., Akbarinia, R., Molli, P.,
Canals, G., Laurière, S.: P2P logging and timestamping for reconciliation.
_PVLDB 1(2): 1420-1423 (2008)_
-----
Open-access PDF: https://hal-lirmm.ccsd.cnrs.fr/lirmm-00607932/file/2010_-_Globe_-_Contiuous_Timestamping_.pdf (DOI: https://doi.org/10.1007/978-3-642-15108-8_4; license: other-oa). Published 2010-09-01 (journal article).
-----
APPFL: Open-Source Software Framework for Privacy-Preserving Federated Learning
Minseok Ryu, Youngdae Kim, Kibaek Kim, Ravi Madduri
IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW)
https://www.semanticscholar.org/paper/02ef4d44394a3ba19bcf08e99cba018c39293bc8
# APPFL: Open-Source Software Framework for Privacy-Preserving Federated Learning
### Minseok Ryu
_Mathematics and Computer Science Division_
_Argonne National Laboratory_
Lemont, IL, USA
mryu@anl.gov
### Kibaek Kim
_Mathematics and Computer Science Division_
_Argonne National Laboratory_
Lemont, IL, USA
kimk@anl.gov
### Youngdae Kim
_Mathematics and Computer Science Division_
_Argonne National Laboratory_
Lemont, IL, USA
youngdae@anl.gov
**_Abstract—Federated learning (FL) enables training models at different sites and updating the weights from the training instead of transferring data to a central location and training as in classical machine learning. The FL capability is especially important to domains such as biomedicine and smart grid, where data may not be shared freely or stored at a central location because of policy regulations. Thanks to the capability of learning from decentralized datasets, FL is now a rapidly growing research field, and numerous FL frameworks have been developed. In this work we introduce APPFL, the Argonne Privacy-Preserving Federated Learning framework. APPFL allows users to leverage implemented privacy-preserving algorithms, implement new algorithms, and simulate and deploy various FL algorithms with privacy-preserving techniques. The modular framework enables users to customize the components for algorithms, privacy, communication protocols, neural network models, and user data. We also present a new communication-efficient algorithm based on an inexact alternating direction method of multipliers. The algorithm requires significantly less communication between the server and the clients than does the current state of the art. We demonstrate the computational capabilities of APPFL, including differentially private FL on various test datasets and its scalability, by using multiple algorithms and datasets on different computing environments._**

**_Index Terms—federated learning, data privacy, communication-efficient algorithm, open-source software_**
I. INTRODUCTION
Federated learning (FL) is a growing research field in
machine learning (ML). FL enables multiple institutions (or
devices) to collaboratively learn without sharing data. Specifically, FL is a form of distributed learning with the goal
of training a global ML model by systematically updating
weights from training on local and decentralized data. Partly
because of its learning capability without sharing data, FL
is listed as one of the key technologies to address the U.S.
This work was supported by the U.S. Department of Energy, Office of
Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357.
### Ravi K. Madduri
_Data Science and Learning Division_
_Argonne National Laboratory_
Lemont, IL, USA
madduri@anl.gov
Department of Energy’s and National Institutes of Health’s
grand challenges on adopting artificial intelligence (AI) to
tackle complex biomedical data (e.g., Bridge2AI program [1]).
Moreover, recent reports have highlighted the increasing need
to enable AI/ML for privacy-sensitive datasets (e.g., Chapter
15 of [2]).
FL by itself, however, does not guarantee the privacy of
data, because the information extracted from the communication of FL algorithms can be accumulated and effectively
utilized to infer the private local data used for training
(e.g., [3]–[6]). In addition to data privacy, challenges exist
in FL in areas of algorithm design, statistics, and software
architecture. Privacy-preserving techniques have been studied
and integrated into FL algorithms (e.g., [5]–[8]), often named
as privacy-preserving FL (PPFL). While FL algorithms have
been advanced to achieve greater accuracy and scalability,
practical use and integration of new techniques such as privacy preservation require much more careful algorithm design
to ensure efficient communication (see, e.g., [9], [10]). For
increased adoption of FL techniques, research communities
need to develop not only new PPFL open-source frameworks
but also new benchmarks using existing implementations.
In addition to the streamlined deployment of PPFL packages, simulation of PPFL is particularly important for quantification of model performance, learning, and privacy preservation. These simulations are compute intensive and challenging
to scale with the increasing number of FL clients and differences in the sample size at each client. Therefore, a scalable
simulation capability is necessary for PPFL packages.
In this paper we introduce the Argonne Privacy-Preserving
Federated Learning (APPFL) framework, an open-source
PPFL framework that (i) provides application programming
interfaces to easily implement and combine key algorithmic
components required for PPFL in a plug-and-play manner; (ii)
can be used for simulations on high-performance computing
(HPC) architecture with MPI; and (iii) runs on heterogeneous
architectures. Examples of algorithmic components include FL
algorithms, privacy techniques, communication protocols, FL
models to train, and data. We present a new communicationefficient FL algorithm based on an inexact alternating direction method of multipliers (IADMM), which significantly
reduces the data that is needed to iteratively communicate
between the server and clients, as compared with the inexact communication-efficient ADMM (ICEADMM) algorithm
recently developed in [9]. In APPFL, in addition to the well-known federated averaging (FedAvg) algorithm [11] as a
special case of IADMM algorithms, we have implemented our
new algorithm, as well as the ICEADMM algorithm.
We also present the performance results from the APPFL
framework with two communication protocols: remote procedure calls (gRPC [12]) and the Message Passing Interface
(MPI). While the quantitative results from using gRPC mimic
those from using PPFL on heterogeneous computing architectures, MPI allows scalable simulations of PPFL by utilizing
multiple GPUs on high-performance computing architecture.
In our demonstration we report and discuss (i) the strong-scaling results from APPFL on the Summit supercomputer
at Oak Ridge National Laboratory and (ii) communication
efficiency with gRPC in practical settings. We also discuss
the implications of using gRPC on heterogeneous computing
machines.
We summarize our contributions as follows.
1) We develop an open-source software package APPFL,
a PPFL framework that provides various capabilities
needed to implement and simulate PPFL.
2) We develop a new communication-efficient IADMM
algorithm that significantly reduces the communication
as compared with the ICEADMM algorithm [9].
3) We provide extensive numerical results by simulating
PPFL with gRPC and MPI with respect to the performance of PPFL algorithms and communication.
The rest of the paper is organized as follows. We present
the architecture of APPFL, PPFL algorithms implemented in
APPFL, and numerical demonstration of APPFL in Sections
II, III, and IV, respectively. In Section V we summarize our
conclusions and discuss future implementations to enhance the
capability of APPFL.
II. APPFL ARCHITECTURE
APPFL is an open-source Python package that provides
privacy-preserving federated learning tools for users in practice while allowing research communities to implement, test,
and validate various ideas for PPFL. The source code is available in a public GitHub repository, and APPFL v0.0.1 [13] has
been released as a distributed package that can be easily installed via pip (https://pypi.org/project/appfl/). In this section we
present an overview of the APPFL architecture and compare
APPFL with several existing FL frameworks.
_A. Overview_
Figure 1 presents an overview of the APPFL architecture.
The APPFL framework has five major components: (i) feder
ated learning algorithms, (ii) differential privacy schemes, (iii)
communication protocols, (iv) neural network models, and (v)
data for training and testing.
_1) FL algorithm:_ A federated learning algorithm determines how a server updates global model parameters based
on local model parameters trained and sent by clients. Our
framework assumes the following general structure of FL:
min_w ∑_{p=1}^{P} (I_p / I) [ (1/I_p) ∑_{i∈𝓘_p} f_i(w; x_i, y_i) ],   (1)

where w ∈ R^m is a model parameter, x_i and y_i are the i-th data input and data label, respectively, P is the number of clients, 𝓘_p is an index set of data points from a client p, I_p := |𝓘_p| is the total number of data points for a client p, I = ∑_{p=1}^{P} I_p is the total number of data points, and f_i is the loss of the prediction on (x_i, y_i) made with w. The objective function f_i can be convex (e.g., linear model) or nonconvex (e.g., neural network model).
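Concretely, the objective in (1) is the data-size-weighted average of the clients' average losses; a tiny sketch with made-up per-sample losses (illustrative values, not from the paper):

```python
# Global FL objective of Eq. (1): sum over clients p of (I_p / I) times
# the client's average loss (1/I_p) * sum_{i in I_p} f_i(w; x_i, y_i).

def global_objective(client_losses):
    """client_losses: one list of per-sample losses f_i(w) per client."""
    total = sum(len(losses) for losses in client_losses)  # I
    obj = 0.0
    for losses in client_losses:                          # one term per client p
        weight = len(losses) / total                      # I_p / I
        avg_local = sum(losses) / len(losses)             # (1/I_p) * sum f_i
        obj += weight * avg_local
    return obj

# With the I_p/I weighting, this equals the plain average over all I samples.
losses = [[1.0, 3.0], [2.0, 2.0, 2.0]]
assert abs(global_objective(losses) - (1 + 3 + 2 + 2 + 2) / 5) < 1e-12
```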
In APPFL, we currently have implemented the popular FedAvg algorithm [11] and two IADMM-based algorithms: our new improved training- and communication-efficient IADMM (IIADMM) algorithm (see Section III-A)
and the ICEADMM algorithm [9]. Additional user-defined
FL algorithms can be implemented by inheriting our Python
class BaseServer and implementing the virtual function
update(). For IADMM-based algorithms, clients may need
to perform additional work during training, such as forming an
augmented Lagrangian function and updating dual variables.
This additional work can be customized as well by inheriting
our BaseClient class and implementing the virtual function
update().
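The plug-in pattern described here might look as follows (a hypothetical sketch: the real APPFL `BaseServer`/`BaseClient` signatures may differ, so consult the package documentation):

```python
# Hypothetical sketch of extending an APPFL-style base class by
# overriding its virtual update() function; not APPFL's actual API.

class BaseServer:
    def __init__(self, num_clients):
        self.num_clients = num_clients

    def update(self, local_states):
        raise NotImplementedError  # virtual function to override

class FedAvgServer(BaseServer):
    def update(self, local_states):
        # Average the clients' local model parameters coordinate-wise.
        n = len(local_states)
        return [sum(ws) / n for ws in zip(*local_states)]

server = FedAvgServer(num_clients=2)
print(server.update([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```

A user-defined IADMM-style server would override `update()` the same way, combining the primal states with its own dual bookkeeping.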
_2) Differential privacy:_ Differential privacy schemes enable protecting data privacy by making it difficult to deduce confidential information from aggregated data such as the local model parameter ω_i of client i. The work [14] shows that one
can recover an original image with high accuracy using only
gradients sent to the server, without sharing the training data
between clients and a server. Therefore, additional privacy-preserving schemes such as a differential privacy capability are
critical for a privacy-preserving FL. We currently support the
output perturbation method based on the Laplace mechanism
[15], namely, adding Laplacian noise to the local model
parameters before sending them to the server. We plan to add
more advanced schemes in the near future. More details are
given in III-B. Users can also implement their own differential
privacy schemes by implementing them in the virtual function
update() of BaseClient class.
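Output perturbation via the Laplace mechanism amounts to adding zero-mean Laplacian noise with scale sensitivity/ε̄ to each parameter before it leaves the client. A minimal sketch (APPFL's actual implementation details may differ; the inverse-CDF sampler is used here because the stdlib `random` module has no direct Laplace sampler):

```python
import math
import random

def laplace_perturb(params, sensitivity, epsilon, rng=None):
    """Output perturbation: add Laplace(0, sensitivity/epsilon) noise to
    each local model parameter before it is sent to the server."""
    rng = rng or random.Random(0)
    scale = sensitivity / epsilon
    noisy = []
    for w in params:
        u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
        # Inverse-CDF sampling of the Laplace distribution.
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(w + noise)
    return noisy

local_model = [0.1, -0.4, 0.7]
noisy_model = laplace_perturb(local_model, sensitivity=1.0, epsilon=0.5)
assert len(noisy_model) == len(local_model)
```

Smaller ε̄ means a larger noise scale, i.e., stronger privacy at the cost of accuracy.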
_3) Communication protocol:_ Two communication protocols, MPI and gRPC, have been implemented in our APPFL
framework for sharing model parameters between clients and
a server. The MPI protocol provides an efficient communication method in a cluster environment by utilizing collective
communication and remote direct memory access capabilities.
On the other hand, gRPC enables communication between
multiple platforms and languages, which will likely be the case
-----
[Figure 1 diagram: a Server (BaseServer) and Clients (BaseClient) exchange the global model parameter ω and local parameters ω_1, ..., ω_P over a communication layer (MPI, gRPC, MQTT (TBD)); each side runs user-defined algorithms (FedAvg, ICEADMM, IIADMM), optionally with differential privacy (DP), over a user-defined Model (torch.nn.Module) and local Data.]
Fig. 1: Overview of APPFL architecture
for cross-silo FL. For an efficient cross-device FL involving
a massive number of devices (e.g., [16]), we plan to support
MQTT, a lightweight, publish-subscribe network protocol that
[transports messages between devices (https://mqtt.org/).](https://mqtt.org/)
_4) User-defined model:_ The user-defined model is a neural network model that inherits PyTorch's neural network module (torch.nn.Module). All clients are supposed to use the
same neural network model architecture for training and
testing. APPFL does not assume anything about the model
other than that it inherits torch.nn.Module; users can
freely specify their own neural network models.
_5) User-defined data:_ Each client is required to define the training data, which is typically not accessible from the
server or any other clients. We leverage the concept of the
PyTorch dataset in APPFL. Users can load their datasets to
our APPFL framework by using the Dataset class that
inherits the PyTorch Dataset class. This allows us to utilize
the PyTorch’s DataLoader that provides numerous useful
functions including data shuffling and mini-batch training.
When testing data is available at a server, APPFL provides
a validation routine that evaluates the accuracy of the current
global model. This validation can be used to monitor and
determine the convergence of an FL.
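The Dataset convention boils down to implementing `__len__` and `__getitem__`; anything that does so can be batched by a loader. A framework-free sketch (the real APPFL `Dataset` wraps `torch.utils.data.Dataset`, which this stand-in merely imitates):

```python
class Dataset:
    """Minimal stand-in for a PyTorch-style map dataset."""
    def __init__(self, inputs, labels):
        self.inputs, self.labels = inputs, labels

    def __len__(self):
        return len(self.inputs)

    def __getitem__(self, i):
        return self.inputs[i], self.labels[i]

def batches(dataset, batch_size):
    """Crude DataLoader stand-in: yield fixed-size mini-batches in order."""
    for start in range(0, len(dataset), batch_size):
        end = min(start + batch_size, len(dataset))
        yield [dataset[i] for i in range(start, end)]

ds = Dataset([0.0, 1.0, 2.0, 3.0, 4.0], ["a", "b", "c", "d", "e"])
print([len(b) for b in batches(ds, batch_size=2)])  # [2, 2, 1]
```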
_B. Existing FL frameworks_
A few open-source FL frameworks exist. These include
Open Federated Learning (OpenFL) [17], Federated Machine
Learning (FedML) [18], TensorFlow Federated (TFF) [19],
and PySyft [20]. In Table I we compare them based on
advanced functionality available in APPFL. See [21] for a
more detailed summary and comparison of the existing open-source FL frameworks.
Here we briefly discuss the capabilities of each framework
in terms of their relevance to APPFL.
TABLE I: Comparison of APPFL with some of the existing open-source FL frameworks

|              | OpenFL | FedML | TFF | PySyft | APPFL |
|--------------|--------|-------|-----|--------|-------|
| Data privacy |        |       | ✓   | ✓      | ✓     |
| MPI          |        | ✓     |     |        | ✓     |
| gRPC         | ✓      | ✓     |     |        | ✓     |
| MQTT         |        | ✓     | 1   |        |       |
_1) OpenFL:_ This is an open-source FL framework developed by Intel. It was initially developed as part of a
research project on FL for healthcare and designed for a
multi-institutional setting. In OpenFL, an FL environment is
constructed based on collaborator and aggregator nodes that
form a star topology; in other words, all collaborator nodes
are connected to an aggregator node. Communication between
nodes is through gRPC via a mutually authenticated transport
layer security network connection.
_2) FedML:_ This is an open research library to facilitate
FL algorithm development and fair performance comparison.
It supports on-device training for edge devices, distributed
computing, and single-machine simulation. It utilizes gRPC
and MQTT for device communication to simulate cross-device
FL on real-world hardware platforms. Also, it utilizes MPI for
simulating FL in a distributed-computing setting. Regarding
the privacy and security aspect, it implements weak differential
privacy that aims to prevent a backdoor attack, which requires
less noise in training data compared with what is required for
ensuring data privacy [22].
_3) TensorFlow Federated (TFF):_ This is an open-source framework from Google for machine learning and other computations on decentralized data [19]. In TFF, an FL environment is constructed by using multiple GPUs that are used
as clients. Also, TFF can be simulated on a Google Cloud
platform. Currently, TFF supports FedAvg and differential
privacy for private federated learning.
_4) PySyft:_ This is an open-source FL framework from OpenMined, an open-source community [20]. In PySyft, an
FL environment is constructed by Virtual Workers, WebSocket
_Workers, or GridNodes. While Virtual Workers live on the_
same machine and do not communicate over the network,
the others leverage WebSocket as a communication medium
to ensure that a broad range of devices can participate in a
PySyft network. Currently, PySyft supports FedAvg and
differential privacy for private federated learning.
III. PRIVACY-PRESERVING ALGORITHMS
In this section we present our new communication-efficient
algorithm, IIADMM. This algorithm significantly reduces the
amount of information transfer between the server and the
clients, as compared with the ICEADMM algorithm recently
developed in [9]. In Section III-A we also show that FedAvg
[11], the popular FL algorithm, is a special form of the
IADMM-based algorithms. Furthermore, we describe differential privacy (DP) techniques applied to the FL algorithms in
Section III-B.
_A. IIADMM_
We consider the reformulation of (1) given as follows:

min_{w, {z_p}_{p=1}^{P}} ∑_{p=1}^{P} (I_p / I) [ (1/I_p) ∑_{i∈𝓘_p} f_i(z_p; x_i, y_i) ]   (2a)

s.t. w = z_p, ∀p ∈ [P],   (2b)

where w ∈ R^m is a global model parameter and z_p ∈ R^m is a local model parameter defined for every client p ∈ [P] := {1, . . ., P}. The Lagrangian dual formulation of (2) is given by

max_{{λ_p}_{p=1}^{P}} min_{w, {z_p}_{p=1}^{P}} ∑_{p=1}^{P} { (1/I) ∑_{i∈𝓘_p} f_i(z_p; x_i, y_i) + ⟨λ_p, w − z_p⟩ },

where λ_p ∈ R^m is a dual vector associated with the consensus constraints (2b). Then, ADMM steps [23] are given by

w^{t+1} ← argmin_{w∈R^m} ∑_{p=1}^{P} { ⟨λ_p^t, w⟩ + (ρ^t/2) ‖w − z_p^t‖² },   (3a)

z_p^{t+1} ← argmin_{z_p∈R^m} (1/I) ∑_{i∈𝓘_p} f_i(z_p; x_i, y_i) − ⟨λ_p^t, z_p⟩ + (ρ^t/2) ‖w^{t+1} − z_p‖², ∀p ∈ [P],   (3b)

λ_p^{t+1} ← λ_p^t + ρ^t (w^{t+1} − z_p^{t+1}), ∀p ∈ [P],   (3c)

where ρ^t > 0 is a hyperparameter, the choice of which may be sensitive to the learning performance, similar to the learning rate of the stochastic gradient descent (SGD) method. In the context of FL, the global model parameter w^{t+1} in (3a) is updated at the central server based on the local model parameters z_p^t and λ_p^t, where z_p^t is local primal and λ_p^t is local dual information. For every client p, the local parameters z_p^t are updated at the clients by using the global model parameter w^{t+1} given from the server, as in (3b) and (3c), respectively. In ADMM, both local primal and dual information (z_p^t, λ_p^t) are required to be sent from each client p to the central server.

To reduce the computation burden of (3b) without affecting the overall convergence, the subproblem (3b) can be replaced with its inexact version:

z_p^{t+1} ← argmin_{z_p∈R[m]} ⟨g(z_p^t), z_p⟩ − ⟨λ_p^t, z_p⟩ + (ρ^t/2) ‖w^{t+1} − z_p‖² + (ζ^t/2) ‖z_p − z_p^t‖²,   (4)

where g(z_p^t) = (1/I) ∑_{i∈𝓘_p} ∇f_i(z_p^t) is a gradient at z_p^t, and ζ^t is a proximity parameter that controls the distance between the new iterate z_p^{t+1} and the previous iterate z_p^t.

A process {(3a) → (4) → (3c)}_{t=1}^{T} is denoted by IADMM. We improve the IADMM process by conducting (i) multiple local primal updates using batches of data, namely, iteratively solving (4) based on batches of data 𝓘_p, and (ii) two independent but identical dual updates at both server and clients, which eliminate the need for communicating dual information between clients and the server. We refer to the proposed algorithm as IIADMM and present its steps in Algorithm 1.

**Algorithm 1 IIADMM**

1: Initialize z_1^1, . . ., z_P^1, λ_1^1, . . ., λ_P^1 ∈ R^m
2: for each round t = 1, 2, . . ., T do
3:   w^{t+1} ← (1/P) ∑_{p=1}^{P} ( z_p^t − (1/ρ^t) λ_p^t )
4:   for each agent p ∈ [P] in parallel do
5:     z_p^{t+1} ← ClientUpdate(t, p, w^{t+1}; z_p^1, λ_p^1)
6:     λ_p^{t+1} ← λ_p^t + ρ^t (w^{t+1} − z_p^{t+1})
7:   end for
8: end for
9:
10: ClientUpdate(t, p, w^{t+1}; z_p^1, λ_p^1)
11:   Initialize z^{1,1} ← w^{t+1}
12:   Split 𝓘_p into a collection {B_1, . . ., B_{B_p}} of batches
13:   for local step ℓ = 1, . . ., L do
14:     for batch b = 1, . . ., B_p do
15:       gradient: g ← (1/|B_b|) ∑_{i∈B_b} ∇f_i(z^{ℓ,b})
16:       update: z^{ℓ,b+1} ← z^{ℓ,b} − ( g − λ_p^t − ρ^t (w^{t+1} − z^{ℓ,b}) ) / (ρ^t + ζ^t)
17:       if b = B_p: z^{ℓ+1,1} ← z^{ℓ,B_p+1}
18:     end for
19:   end for
20:   z_p^{t+1} ← z^{L+1,1}
21:   λ_p^{t+1} ← λ_p^t + ρ^t (w^{t+1} − z_p^{t+1})
22:   return z_p^{t+1}

Specifically, in line 3 the global model parameter w^{t+1} is updated based on a closed-form solution expression of (3a). This global model update is conducted at the central server (lines 1–8). The resulting global model parameter is distributed to all clients in line 5. Then, the local model parameters {z_p^{t+1}}_{p=1}^{P} are updated at each client side through the multiple local primal updates using the batches of data described in lines 10–22. Specifically, the global model parameter w^{t+1} received from the central server is set to be an initial point z^{1,1} for local model updates in line 11. In line 12, the local data is split into several batches. For every local step ℓ and batch b, the gradient of the loss function is computed in line 15, and the local model parameter is updated based on a closed-form solution expression of (4) in line 16. After updating the local primal parameters z_p^{t+1} in line 20, the dual parameters λ_p^t are updated via (3c) in line 21. Then, only the local primal parameters are sent to the central server, i.e., from line 22 to line 5. In line 6, the dual parameters λ_p^t are updated via (3c). Note that the two independent dual updates in line 21 and line 6 are identical for every round because the initial local primal and dual information (z^1, λ^1) is shared once at the beginning of the algorithm.
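The server step in line 3 and the client's closed-form updates in lines 16, 6, and 21 of Algorithm 1 can be sketched in a few lines for scalar parameters (illustrative only; the actual APPFL implementation handles tensors, batching, and communication):

```python
# Sketch of IIADMM's closed-form updates for scalar parameters (m = 1).

def server_update(z, lam, rho):
    """Line 3: w^{t+1} = (1/P) * sum_p (z_p - lam_p / rho)."""
    P = len(z)
    return sum(zp - lp / rho for zp, lp in zip(z, lam)) / P

def local_update(z_prev, g, lam_p, w_new, rho, zeta):
    """Line 16: one inexact proximal step on the local model."""
    return z_prev - (g - lam_p - rho * (w_new - z_prev)) / (rho + zeta)

def dual_update(lam_p, w_new, z_new, rho):
    """Lines 6 and 21: identical dual step at server and client."""
    return lam_p + rho * (w_new - z_new)

# Two clients, one round, with a made-up gradient g = 0.5 for both:
z, lam, rho, zeta = [1.0, 3.0], [0.0, 0.0], 2.0, 1.0
w = server_update(z, lam, rho)
z_new = [local_update(zp, 0.5, lp, w, rho, zeta) for zp, lp in zip(z, lam)]
lam = [dual_update(lp, w, zn, rho) for lp, zn in zip(lam, z_new)]
assert abs(w - 2.0) < 1e-12  # average of 1.0 and 3.0 with zero duals
```

Only `z_new` would cross the network; the dual step is replayed identically on both sides, which is the communication saving over ICEADMM.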
The proposed IIADMM is similar to ICEADMM [9] in that
both are variants of IADMM. However, ICEADMM conducts
multiple local primal and dual updates without using the
batches of data, namely, iteratively solving (4) and (3c) L
times while B_p = 1. This method of local updates results in
communicating not only primal but also dual information from
clients to the server for every communication round, which
can be a significant communication burden, particularly in an
FL setting, as discussed in Section IV-D. Nevertheless, a benefit of
utilizing the dual information is a potential improvement on
the performance of the algorithm, for example, by introducing
an adaptive penalty, as discussed in [24] and [25].
We also highlight that IADMM is a generalization of the
well-known FedAvg [11], composed of (i) averaging local
model parameters for a global update, namely, w^{t+1} =
(1/P) Σ_{p=1}^P z_p^t, and (ii) SGD steps for local updates, namely,
z_p^{t+1} = z_p^t − η g(z_p^t), where η is a learning rate (or step size).
This is because FedAvg utilizes only the primal information (i.e., z_p^t) for
updating a global model parameter, while IADMM utilizes
not only primal but also dual information (i.e., λ_p^t). One can
easily see from ICEADMM [9] that FedAvg is a special case
of ICEADMM obtained by setting λ^t = 0, ζ^t = 0, and ρ^t = 1/η for every
iteration t.
To summarize, the proposed IIADMM utilizes both primal
and dual information for updating a global parameter and
has the potential to improve learning performance by utilizing
the dual information (i.e., a benefit from ICEADMM) while
communicating only primal information (i.e., a benefit from
FedAvg).
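The FedAvg special case above can be sketched directly as averaging plus local SGD steps. The quadratic per-client losses and all constants below are illustrative, chosen so that the fixed point (the average of the client optima) is easy to verify:

```python
import numpy as np

P, d, eta, L = 4, 2, 0.1, 10
rng = np.random.default_rng(1)
centers = [rng.normal(size=d) for _ in range(P)]  # client p's optimum
grad = lambda z, c: z - c                          # grad of 0.5*||z - c||^2

z = [np.zeros(d) for _ in range(P)]
for t in range(200):                       # communication rounds
    w = np.mean(z, axis=0)                 # (i) global update: average primals
    z = [w.copy() for _ in range(P)]       # broadcast global model
    for p in range(P):
        for _ in range(L):                 # (ii) L local SGD steps
            z[p] = z[p] - eta * grad(z[p], centers[p])

w = np.mean(z, axis=0)
# For these quadratics, FedAvg's fixed point is the mean of the client optima.
assert np.allclose(w, np.mean(centers, axis=0), atol=1e-6)
```

Each round contracts the error toward mean(centers) by a factor (1 − η)^L, so the iterates converge geometrically in this toy setting.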
_B. Differential privacy_
In APPFL, DP techniques are integrated with the FL algorithms for learning while preserving data privacy against an
inference attack [26] that can take place in any communication
round.
_Definition 1:_ A randomized function A provides ϵ̄-DP if,
for any two datasets D and D′ that differ in a single entry and
for any set S,

    | ln( P(A(D) ∈ S) / P(A(D′) ∈ S) ) | ≤ ϵ̄,    (5)

where A(D) (resp. A(D′)) is a randomized output of A on
input D (resp. D′).
This implies that as ϵ̄ decreases, it becomes harder to distinguish
the two datasets D and D′ by analyzing the randomized
output of A. Here, ϵ̄ is a privacy budget, indicating that stronger
privacy is achieved with a lower ϵ̄.
A popular way of constructing a randomized function A
that ensures ϵ̄-DP is to add some noise directly to the true
output T(D), namely,

    A(D) = T(D) + ξ̃,    (6)
which is known as the output perturbation method. Several
types of noise ξ̃ lead to ϵ̄-DP. An example is Laplacian
noise extracted from a Laplace distribution with zero mean
and scale parameter b := ∆̄/ϵ̄, where ϵ̄ is from (5) and
∆̄ ≥ max_{D′ ∈ N(D)} ‖T(D) − T(D′)‖ is an upper bound on the
sensitivity of the output with respect to the collection N(D)
of datasets D′ differing in a single entry from the given dataset
D [15].
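The Laplace claim can be checked numerically: with scale b = ∆̄/ϵ̄, the log-ratio of the two output densities never exceeds ϵ̄ at any point. A small sketch, using illustrative values for the true outputs and sensitivity:

```python
import numpy as np

eps = 0.5                      # privacy budget eps-bar
t0, t1 = 3.0, 4.0              # T(D) and T(D') for neighboring datasets
delta = 1.0                    # sensitivity bound: |t0 - t1| <= delta
b = delta / eps                # Laplace scale b = delta / eps

def log_density(x, mu, b):
    # log pdf of the Laplace(mu, b) distribution
    return -np.log(2 * b) - np.abs(x - mu) / b

# The absolute log-ratio of the two output densities is bounded by eps
# everywhere, by the triangle inequality: ||x-t0| - |x-t1|| <= |t0-t1|.
xs = np.linspace(-20, 20, 10001)
ratios = np.abs(log_density(xs, t0, b) - log_density(xs, t1, b))
assert ratios.max() <= eps + 1e-12
```

The bound is tight: for points outside the interval [t0, t1] the log-ratio equals |t0 − t1|/b = ϵ̄ exactly, which is why the Laplace scale must grow with the sensitivity.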
In Section IV we demonstrate the Laplace-based output
perturbation method that guarantees ϵ̄-DP on data for any
communication round of the FL algorithms implemented in
APPFL. In the output perturbation method, the true output
(i.e., z_p^{t+1} in line 20 in Algorithm 1) is perturbed by adding
the noise ξ̃ generated by a Laplace distribution with zero
mean and the scale parameter b = ∆̄/ϵ̄, where ∆̄ should
satisfy ∆̄ ≥ (1/(ρ^t + ζ^t)) max_{D′ ∈ N(D)} ‖g(D) − g(D′)‖. Clipping the
gradient by a positive constant C leads to ‖g‖ ≤ C, which
allows us to set ∆̄ = 2C/(ρ^t + ζ^t). After the perturbation, the
resulting randomized outputs are sent to the server.
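A sketch of this clipping-based output perturbation, with illustrative parameter values; `z_true` stands in for the local primal z_p^{t+1} of line 20:

```python
import numpy as np

rng = np.random.default_rng(2)
C, rho, zeta, eps = 1.0, 2.0, 1.0, 1.0   # illustrative values

def clip(g, C):
    # Rescale g so its Euclidean norm is at most C.
    n = np.linalg.norm(g)
    return g if n <= C else g * (C / n)

g = rng.normal(size=5) * 10.0            # a raw (large) gradient
g = clip(g, C)
assert np.linalg.norm(g) <= C + 1e-12    # clipping bounds the norm by C

delta_bar = 2 * C / (rho + zeta)         # sensitivity bound after clipping
b = delta_bar / eps                      # Laplace scale for eps-bar DP
z_true = rng.normal(size=5)              # stand-in for z_p^{t+1} (line 20)
z_noised = z_true + rng.laplace(0.0, b, size=5)  # what is sent to the server
```

Note that larger ρ^t + ζ^t shrinks ∆̄ and hence the injected noise, which foreshadows the trade-off discussed in Section IV-B between these parameters, privacy, and accuracy.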
More advanced methods exist that guarantee DP other than
the output perturbation. For example, an objective perturbation
method [27] ensures DP on data by perturbing the objective
function of an optimization problem rather than perturbing the
output of the problem. As theoretically shown in [27], [28],
the objective perturbation provides more accurate learning than
does the output perturbation. As a future implementation of
APPFL, we plan to incorporate advanced DP methods for
improving the performance of the PPFL algorithms.
IV. DEMONSTRATION OF APPFL
In this section we demonstrate the capabilities of APPFL
by extensive experimentation using test datasets on different
computing architectures. For all experiments, we use APPFL
version 0.0.1, available through pip. The code for this demonstration is also available at https://github.com/APPFL/APPFL.
_A. Experimental settings_
Our demonstration uses four datasets: MNIST, CIFAR10,
FEMNIST, and CoronaHack. The MNIST and CIFAR10 data
-----
(a) MNIST (b) CIFAR10 (c) FEMNIST (d) CoronaHack
Fig. 2: Test accuracy under various ϵ̄ ∈ {3, 5, 10, ∞}, provided by FedAvg (1st row), ICEADMM (2nd row), and IIADMM
(3rd row) for various datasets
are available from Torchvision 0.11. The FEMNIST and
CoronaHack datasets are available from the LEAF [29] and
kaggle [1] projects, respectively. For MNIST, CIFAR10, and
CoronaHack, we split the entire training datasets into four,
each of which represents a client’s dataset. The FEMNIST
datasets are preprocessed to sample 5% of the entire 805,263
data points in a non-i.i.d. (independent and identically distributed) manner, resulting in 36,699 training and 4,176 testing
samples distributed over 203 clients.
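The client split described above can be done, for example, as a random disjoint partition of the training indices. The sketch below is one simple (i.i.d.) way to do it, not necessarily the exact preprocessing used in the paper:

```python
import numpy as np

n_train, n_clients = 60000, 4            # e.g. MNIST has 60,000 training points
idx = np.random.default_rng(3).permutation(n_train)
shards = np.array_split(idx, n_clients)  # one index shard per client

# The shards cover every training point exactly once.
assert sum(len(s) for s in shards) == n_train
assert len(np.unique(np.concatenate(shards))) == n_train
```

A non-i.i.d. split (as used for FEMNIST) would instead group indices by writer or label before sharding, so that each client's local distribution differs.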
We use the convolutional neural network model, consisting
of two 2D convolution layers, a 2D max pooling layer, the
elementwise rectified linear unit function, and two layers of
linear transformation. We note that the goal of this demonstration is not to find the best neural network architecture for
each test instance or to achieve a good learning performance.
To demonstrate the differential privacy, we add random
noise extracted from a Laplace distribution with zero mean and
a scale parameter b = ∆̄/ϵ̄ to local model parameters before
sending them to a central server. Note that ∆̄ is a sensitivity
of the local model parameters computed automatically based
on the dataset and algorithm chosen in APPFL; therefore ϵ̄,
from the definition of ϵ̄-DP in (5), controls the privacy level
of the FL algorithms in APPFL.
[1] The CoronaHack dataset has been obtained from kaggle, available at https://www.kaggle.com/praveengovi/coronahack-chest-xraydataset/version/3.
_B. Demonstration of different PPFL algorithms_
In this subsection we demonstrate and discuss the numerical behavior of our new algorithm IIADMM (Section III),
as compared with the behavior of two existing algorithms:
FedAvg and ICEADMM. We set L = 10 local updates and
_T = 50 iterations (i.e., communication rounds) for all FL_
algorithms. Also, we set each batch of the local training data
to have at most 64 data points for local updates in FedAvg
and IIADMM. Note that (i) the SGD with momentum [30]
is utilized for FedAvg and (ii) all data points are used for
calculating a gradient in ICEADMM as implemented in [9].
Experiments were run on Swing, a 6-node GPU computing
cluster at Argonne National Laboratory. Each node has 8
NVIDIA A100 40GB GPUs and 128 CPU cores. For all the
algorithms, we use 1 GPU for a central server computation
(i.e., global update) and 4 GPUs for clients’ computation (i.e.,
local update).
Figure 2 displays testing accuracy resulting from the FL
algorithms under changes of the privacy parameter ϵ̄ ∈ {3, 5, 10, ∞},
where decreasing ϵ̄ ensures stronger data privacy
and ϵ̄ = ∞ represents a non-private setting. For all FL
algorithms, the results show that the test accuracy decreases as
ϵ̄ decreases, which is a well-known trade-off between learning
performance and data privacy. As compared with ICEADMM,
our algorithm IIADMM provides better test accuracy in all
datasets considered. This result implies the ineffectiveness
-----
of multiple local dual updates in ICEADMM as IIADMM
conducts multiple local primal updates only. As compared
with FedAvg, IIADMM provides better test accuracy for most
datasets (e.g., MNIST, CIFAR10, and CoronaHack) when ϵ̄
is smaller. This result partly demonstrates the effectiveness of
the proximal term in (4) that mitigates the negative impact
of random noises generated for data privacy on the learning
performance.
We note that the magnitude of noise generated for ensuring
ϵ̄-DP may vary over FL algorithms because it also depends
on the sensitivity ∆̄, which varies over FL algorithms. For
example, the sensitivity in FedAvg depends on the learning
rate (or step size), whereas the sensitivity in IIADMM depends
on the penalty and proximity parameters, namely, ρ^t and ζ^t
in (4). Since these parameters affect not only the data privacy
but also the learning performance, they should be fine-tuned.
As part of future work, we plan to utilize both primal and dual
information to further improve the performance of IIADMM.
_C. Scaling results of PPFL simulations on Summit_
[Figure 3a plot: average local-update time (log scale, roughly 2^0 to 2^5 seconds) versus the number of MPI processes for clients (5, 11, 24, 50, 101, 203), with an "Ideal" perfect-scaling reference line and the measured "FL" curve.]
To demonstrate the scalability of PPFL over a large number
of clients, we measure the average time (computation + communication) for clients’ local updates over various numbers
of MPI processes in APPFL in a cluster. Our experiment
setup may be considered as an ideal situation where the
communication efficiency of an FL is maximized via the use of
InfiniBand and RDMA (remote direct memory access), which
provides low latency and no extra copies of data. As we will
show later in this section, even under such an ideal situation,
we observe the significance of communication efficiency in
overall computation time of an FL as we increase the number
of clients.
Our experiments were performed on the FEMNIST dataset
on the Summit supercomputer at Oak Ridge National Laboratory. Each compute node has 6 Nvidia V100 GPUs. A
total of 203 clients are equally divided into a number of MPI
processes, and each MPI process is assigned to a dedicated
GPU. One MPI process is reserved for a server to perform
its global update. For communications between a server and
clients for local updates, we use a collective communication
protocol, MPI.gather(), which is configured to use an
RDMA technology for a direct data transfer between GPUs.
Of the 50 communication rounds, we take the average of the
time over the last 49 rounds by excluding the time for the first
round, which includes the compile time of the Python code.
Figure 3a shows the strong-scaling results of APPFL with
an increasing number of MPI processes. In the figure, the
ideal plot is a reference having a perfect scaling. APPFL
shows almost perfect scaling with a smaller number of MPI
processes; however, its speedup decreases as we increase the
number of MPI processes. This deterioration is mainly because
of the relative increase in its communication time via MPI.
More specifically, Figure 3b presents the percentage[2] of
[2] The percentage of the communication time of each MPI process is computed by 100 × (time for MPI.gather() / (time for MPI.gather() + compute time for local model updates)).
Fig. 3: Scaling results of APPFL on FEMNIST dataset
MPI communication time by MPI.gather() in the total
elapsed time for local model updates for each MPI setting,
which has been computed by averaging the percentages of
all MPI processes. From the results we observe that the
MPI communication time does not scale as well as the pure
compute time for local model updates. While the size of data
to send has decreased by more than a factor of 40 (5 vs 203
MPI processes), its communication time has decreased only
by a factor of 8. In contrast, the compute time shows perfect
scaling.
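The footnote's percentage formula, applied to hypothetical per-round timings, illustrates why the communication share grows with the number of MPI processes: compute time shrinks roughly linearly with fewer clients per process, while gather time does not. All numbers below are illustrative, not measurements from the paper:

```python
def comm_percentage(t_gather, t_compute):
    # Footnote [2]: 100 * gather time / (gather time + local compute time)
    return 100.0 * t_gather / (t_gather + t_compute)

# Hypothetical per-round times (seconds) mimicking the trend in Fig. 3b:
# compute scales almost perfectly, communication time much less so.
share_5   = comm_percentage(t_gather=0.8, t_compute=16.0)   # 5 MPI processes
share_203 = comm_percentage(t_gather=0.1, t_compute=0.4)    # 203 MPI processes

# The communication share of the round grows as processes increase.
assert share_203 > share_5
```

With these stand-in numbers, the share rises from about 5% to 20%, mirroring the qualitative shape of Figure 3b.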
Our experimental results indicate that communication efficiency may significantly affect the overall computational
performance of FL as we increase the number of clients. We
plan to investigate ways to mitigate these adverse effects of
communication time on training efficiency when we employ
a large number of clients, such as an asynchronous update
scheme and a different training scheme (e.g., dynamically
controlling the number of local updates).
(a) Strong scaling of local updates
(b) Percentage of MPI.gather() in local update time
[Both panels are plotted over 5, 11, 24, 50, 101, and 203 MPI processes for clients; panel (b) ranges roughly from 5% to 30%.]
-----
_D. Simulations of PPFL with gRPC_
[Figure 4a plot: cumulative communication time (log scale, around 10^0 to 10^1 seconds) for gRPC vs. MPI across client IDs 0–200.]
node) runs 6 clients with a dedicated GPU being assigned
to each of them. Although these nodes are connected via
InfiniBand, our gRPC is not configured to use RDMA, so
communication via gRPC may not be as efficient as with MPI,
which is configured to directly transfer data between GPUs via
RDMA. Our configuration provides a more realistic network
environment.
Figure 4a presents a comparison of cumulative communication times between MPI and gRPC over 49 rounds excluding
the first round since it typically involves compile time. As
we see in the figure, MPI shows up to 10 times faster
communication time than does gRPC. We think that the main
reason for this performance degradation of gRPC is that (i) it
performs serialization and deserialization of user-given data
via protocol buffers and (ii) it involves copying data from
GPUs to CPUs, in contrast to RDMA-enabled MPI where we
directly transfer data between GPUs.
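The serialization and deserialization cost mentioned in (i) can be illustrated with Python's stdlib `pickle` standing in for gRPC's protocol buffers; the round trip is lossless, but it costs time and creates the extra byte-level copies that an RDMA path avoids:

```python
import pickle
import time
import numpy as np

# Stand-in for a local model update living on the client.
params = np.random.default_rng(4).normal(size=100_000)

# gRPC-style path: object -> bytes -> object (extra copies on CPU).
t0 = time.perf_counter()
blob = pickle.dumps(params, protocol=pickle.HIGHEST_PROTOCOL)
restored = pickle.loads(blob)
t_serial = time.perf_counter() - t0

assert np.array_equal(params, restored)  # lossless, but not free
```

On a GPU, this path additionally requires a device-to-host copy before serialization, which is the second overhead (ii) identified above.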
Another reason for such a degraded performance is that
gRPC tends to show inconsistent communication time between
rounds, as illustrated in Figure 4b. We sample 5 clients
with IDs 1, 5, 100, 150, and 200 and present a box plot
showing quantile information of communication times over 49
rounds. From the figure, we observe a significant difference in
communication time between rounds, by up to a factor of 30,
which we attribute to network traffic that varies between rounds.
Similar to the observations described in Section IV-C, we
believe an asynchronous update scheme of an FL will allow
us to more efficiently perform FL in the presence of this
communication inefficiency.
_E. Impact of heterogeneous architectures_
Client ID
(a) Cumulative communication time over 49 rounds
2[2]
2[0]
2[−][2]
2[−][4]
2[−][6]
1 50 100 150 200
Client ID
(b) Box plot of communication time of gRPC over 5 clients
Fig. 4: Communication times of gRPC and MPI on FEMNIST
dataset
We present and discuss the impact of using gRPC on
APPFL, as compared with MPI, over the FEMNIST dataset.
This comparison is important in order to understand the
communication efficiency of real-world FL settings because
they typically involve clients that are remote from one another,
with heterogeneous architectures, whereas MPI is available
only on a cluster environment. In such settings, we may not
be able to exploit fast network devices and protocols such
as InfiniBand and RDMA that we employed in Section IV-C.
These limitations may result in even worse efficiency of the
network communications in the total computation time than
the case for MPI, as we have seen in Figure 3b.
To perform the experiments, we have a setup similar to
that in Section IV-C, except that we now use gRPC for
communication. A total of 203 clients have been launched
on 34 compute nodes (physically apart from one another)
on the Summit cluster where each node (except for the last
We next discuss the impact of heterogeneous architectures
with some quantitative evidence. While most FL studies are
simulated on homogeneous computing architectures, a practical FL setting may be composed of many heterogeneous
computing machines. Consider a cross-silo setting where one
institution updates the local model on a machine with NVIDIA
A100 GPUs (e.g., on Argonne’s Swing) and the other institution updates the local model on a machine with NVIDIA
V100 GPUs (e.g., on Oak Ridge’s Summit). This can cause a
significant load imbalance between the two local updates. For
example, the local update on one A100 GPU is faster than that
on one V100 GPU by a factor of 1.64 (6.96 seconds vs. 4.24
seconds). This implies that the heterogeneous architectures in
FL will be an important factor for the design of efficient FL
algorithms.
V. CONCLUDING REMARKS AND FUTURE WORK
In this paper we introduced APPFL, our open-source PPFL
framework that allows research communities to develop, test,
and benchmark FL algorithms, data privacy techniques, and
neural network architectures for decentralized data. In addition
to the implementation of existing FL algorithms, we have
developed and implemented a new communication-efficient FL
algorithm that significantly reduces the communication data
-----
in every iteration. Two communication protocols, gRPC and MPI,
have been implemented and numerically demonstrated with
APPFL. In particular, we demonstrated the communication
performance of using gRPC and the scalability of distributed
training with MPI on the Summit supercomputer.
Many interesting and challenging questions are being actively investigated by the FL research communities. We conclude this paper by discussing our future technical work for
APPFL.
1) The current communication topology used in our framework is based on a client-server architecture, which
may suffer from load imbalance in local computations.
We plan to implement the asynchronous updates of an
FL model in our framework. We will also develop decentralized privacy-preserving algorithms that allow
neighbor-to-neighbor communication for learning without a
central server.
2) We will enhance the learning performance of IIADMM
by adaptively updating algorithm parameters such as
the penalty ρ^t and proximity ζ^t. In addition to existing
techniques, ML approaches (e.g., reinforcement learning [31]) can be used for updating such parameters.
3) Computation of the sensitivity parameter ∆̄ used in
Section IV is key to achieving greater learning performance while preserving data privacy. We will develop
efficient algorithms to compute the scale parameter for
differential privacy.
4) To better understand the communication bottleneck
among devices (vs. nodes on a cluster), we will test our
framework with large-scale deep neural network models
that require a large amount of data transfer between a
server and clients.
ACKNOWLEDGMENT
We gratefully acknowledge the computing resources provided on Swing, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory. This research also used resources
of the Oak Ridge Leadership Computing Facility at the Oak
Ridge National Laboratory, which is supported by the Office
of Science of the U.S. Department of Energy under Contract
No. DE-AC05-00OR22725.
REFERENCES
[1] “National Institutes of Health Bridge to Artificial Intelligence
[(Bridge2AI) program,” https://commonfund.nih.gov/bridge2ai, accessed:](https://commonfund.nih.gov/bridge2ai)
2022-01-24.
[2] E. Schmidt, B. Work, S. Catz, S. Chien, C. Darby, K. Ford, J.-M.
Griffiths, E. Horvitz, A. Jassy, W. Mark et al., “National Security Commission on Artificial Intelligence (AI),” National Security Commission
on Artificial Intelligence, Tech. Rep., 2021.
[3] W. Wei, L. Liu, M. Loper, K.-H. Chow, M. E. Gursoy, S. Truex, and
Y. Wu, “A framework for evaluating gradient leakage attacks in federated
learning,” arXiv preprint arXiv:2004.10397, 2020.
[4] M. Ryu and K. Kim, “A privacy-preserving distributed control of optimal
power flow,” IEEE Transactions on Power Systems, pp. 1–1, 2021.
[5] ——, “Differentially private federated learning via inexact ADMM,”
_arXiv preprint arXiv:2106.06127, 2021._
[6] ——, “Differentially private federated learning via inexact ADMM with
multiple local updates,” arXiv preprint arXiv:2202.09409, 2022.
[7] O. Choudhury, A. Gkoulalas-Divanis, T. Salonidis, I. Sylla, Y. Park,
G. Hsu, and A. Das, “Differential privacy-enabled federated learning
for sensitive health data,” arXiv preprint arXiv:1910.02578, 2019.
[8] K. Wei, J. Li, M. Ding, C. Ma, H. H. Yang, F. Farokhi, S. Jin,
T. Q. S. Quek, and H. V. Poor, “Federated learning with differential
privacy: Algorithms and performance analysis,” IEEE Transactions on
_Information Forensics and Security, vol. 15, pp. 3454–3469, 2020._
[9] S. Zhou and G. Y. Li, “Communication-efficient ADMM-based federated
learning,” arXiv preprint arXiv:2110.15318, 2021.
[10] M. Chen, N. Shlezinger, H. V. Poor, Y. C. Eldar, and S. Cui,
“Communication-efficient federated learning,” Proceedings of the Na_tional Academy of Sciences, vol. 118, no. 17, 2021._
[11] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas,
“Communication-efficient learning of deep networks from decentralized
data,” in Artificial intelligence and statistics. PMLR, 2017, pp. 1273–
1282.
[12] X. Wang, H. Zhao, and J. Zhu, “gRPC: A communication cooperation
mechanism in distributed systems,” ACM SIGOPS Operating Systems
_Review, vol. 27, no. 3, pp. 75–86, 1993._
[13] M. Ryu, K. Kim, and Y. Kim, "APPFL: Argonne Privacy-Preserving Federated Learning," Feb. 2022. [Online]. Available:
[https://doi.org/10.5281/zenodo.5976144](https://doi.org/10.5281/zenodo.5976144)
[14] J. Geiping, H. Bauermeister, H. Dr¨oge, and M. Moeller, “Inverting
gradients – how easy is it to break privacy in federated learning?”
in Advances in Neural Information Processing Systems, H. Larochelle,
M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, Eds., vol. 33. Curran
Associates, Inc., 2020, pp. 16 937–16 947.
[15] C. Dwork, A. Roth et al., “The algorithmic foundations of differential
privacy.” Found. Trends Theor. Comput. Sci., vol. 9, no. 3-4, pp. 211–
407, 2014.
[16] P. Beckman, R. Sankaran, C. Catlett, N. Ferrier, R. Jacob, and M. Papka,
“Waggle: An open sensor platform for edge computing,” in 2016 IEEE
_SENSORS._ IEEE, 2016, pp. 1–3.
[17] G. A. Reina, A. Gruzdev, P. Foley, O. Perepelkina, M. Sharma,
I. Davidyuk, I. Trushkin, M. Radionov, A. Mokrov, D. Agapov et al.,
“OpenFL: An open-source framework for federated learning,” arXiv
_preprint arXiv:2105.06413, 2021._
[18] C. He, S. Li, J. So, X. Zeng, M. Zhang, H. Wang, X. Wang,
P. Vepakomma, A. Singh, H. Qiu et al., “Fedml: A research library and benchmark for federated machine learning,” arXiv preprint
_arXiv:2007.13518, 2020._
[[19] “TensorFlow Federated: Machine learning on decentralized data,” https:](https://www.tensorflow.org/federated)
[//www.tensorflow.org/federated, accessed: 2022-01-24.](https://www.tensorflow.org/federated)
[20] A. Ziller, A. Trask, A. Lopardo, B. Szymkow, B. Wagner, E. Bluemke, J.M. Nounahon, J. Passerat-Palmbach, K. Prakash, N. Rose et al., “PySyft:
a library for easy federated learning,” in Federated Learning Systems.
Springer, 2021, pp. 111–139.
[21] I. Kholod, E. Yanaki, D. Fomichev, E. Shalugin, E. Novikova, E. Filippov, and M. Nordlund, “Open-source federated learning frameworks
for IoT: A comparative review and analysis,” Sensors, vol. 21, no. 1, p.
167, 2021.
[22] Z. Sun, P. Kairouz, A. T. Suresh, and H. B. McMahan, “Can you really
backdoor federated learning?” arXiv preprint arXiv:1911.07963, 2019.
[23] S. Boyd, N. Parikh, and E. Chu, Distributed optimization and statistical
_learning via the alternating direction method of multipliers._ Now
Publishers Inc, 2011.
[24] Z. Xu, G. Taylor, H. Li, M. A. Figueiredo, X. Yuan, and T. Goldstein,
“Adaptive consensus ADMM for distributed optimization,” in Interna_tional Conference on Machine Learning. PMLR, 2017, pp. 3841–3850._
[25] S. Mhanna, G. Verbiˇc, and A. C. Chapman, “Adaptive ADMM for
distributed AC optimal power flow,” IEEE Transactions on Power
_Systems, vol. 34, no. 3, pp. 2025–2035, 2018._
[26] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership
inference attacks against machine learning models,” in 2017 IEEE
_Symposium on Security and Privacy (SP)._ IEEE, 2017, pp. 3–18.
[27] K. Chaudhuri, C. Monteleoni, and A. D. Sarwate, “Differentially private
empirical risk minimization.” Journal of Machine Learning Research,
vol. 12, no. 3, 2011.
[28] T. Zhang and Q. Zhu, “Dynamic differential privacy for ADMM-based
distributed classification learning,” IEEE Transactions on Information
_Forensics and Security, vol. 12, no. 1, pp. 172–187, 2016._
[29] S. Caldas, P. Wu, T. Li, J. Koneˇcn`y, H. B. McMahan, V. Smith,
and A. Talwalkar, “LEAF: a benchmark for federated settings,”
-----
_[arXiv preprint arXiv:1812.01097, 2018. [Online]. Available: https:](https://arxiv.org/abs/1812.01097)_
[//arxiv.org/abs/1812.01097](https://arxiv.org/abs/1812.01097)
[30] N. Qian, “On the momentum term in gradient descent learning algorithms,” Neural networks, vol. 12, no. 1, pp. 145–151, 1999.
[31] S. Zeng, A. Kody, Y. Kim, K. Kim, and D. K. Molzahn, “A reinforcement
learning approach to parameter selection for distributed optimization in
power systems,” arXiv preprint arXiv:2110.11991, 2021.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2202.03672, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2202.03672"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-02-08T00:00:00
|
[
{
"paperId": "b839c5eb7ad65bbde66cf9ad36c8daeced57b440",
"title": "Communication-Efficient ADMM-based Federated Learning"
},
{
"paperId": "bb22e5158f473def73d6a8870b3c82da85c21ba1",
"title": "Differentially Private Federated Learning via Inexact ADMM"
},
{
"paperId": "a3893072ac8aaaa01cbd754a59fabc4d6c1adb2c",
"title": "Communication-efficient federated learning"
},
{
"paperId": "db2c008815d8f6dd9fe2d1ddd933caeb4d5c9e42",
"title": "A Privacy-Preserving Distributed Control of Optimal Power Flow"
},
{
"paperId": "2ad68fa2e33558d166f1a38c5980cc3a6addeb18",
"title": "Open-Source Federated Learning Frameworks for IoT: A Comparative Review and Analysis"
},
{
"paperId": "c5c4142a01981787a71bf6ebcb791520c458ab5d",
"title": "FedML: A Research Library and Benchmark for Federated Machine Learning"
},
{
"paperId": "9853a348f61aec83b410f307ab905a4ae001fcd4",
"title": "A Framework for Evaluating Gradient Leakage Attacks in Federated Learning"
},
{
"paperId": "698ab1cc02a79596a87f92d5a0882ab1a7aee266",
"title": "Inverting Gradients - How easy is it to break privacy in federated learning?"
},
{
"paperId": "fb4098dd30489715a7adc6a1f7839e283d1e37ee",
"title": "Can You Really Backdoor Federated Learning?"
},
{
"paperId": "afa778ba0ba6333e25671cfb691a4bdda13b2868",
"title": "Federated Learning With Differential Privacy: Algorithms and Performance Analysis"
},
{
"paperId": "ca8d34bc72e65364016034a8f105ba1cca601a69",
"title": "Differential Privacy-enabled Federated Learning for Sensitive Health Data"
},
{
"paperId": "c1d1a77468533c3456566ab78911990369383dc3",
"title": "Adaptive ADMM for Distributed AC Optimal Power Flow"
},
{
"paperId": "8dcbcaaf337d7bd22e580f1bb7a795ed4bb604fd",
"title": "LEAF: A Benchmark for Federated Settings"
},
{
"paperId": "301d1f3e87cd948aa47e7f9bad087d4b5702ba15",
"title": "Adaptive Consensus ADMM for Distributed Optimization"
},
{
"paperId": "f0dcc9aa31dc9b31b836bcac1b140c8c94a2982d",
"title": "Membership Inference Attacks Against Machine Learning Models"
},
{
"paperId": "b52a60ec1de00a81b6d09155f4587daa65fa067d",
"title": "Waggle: An open sensor platform for edge computing"
},
{
"paperId": "d1dbf643447405984eeef098b1b320dee0b3b8a7",
"title": "Communication-Efficient Learning of Deep Networks from Decentralized Data"
},
{
"paperId": "0023582fde36430c7e3ae81611a14e558c8f4bae",
"title": "The Algorithmic Foundations of Differential Privacy"
},
{
"paperId": "b57c54350769ffa59ff57f79ee5aad918844d298",
"title": "Differentially Private Empirical Risk Minimization"
},
{
"paperId": "9c13098a386b7854b97b3a5317b22d775d7d21df",
"title": "GRPC: a communication cooperation mechanism in distributed systems"
},
{
"paperId": "9c530d2e71f9a66319c924cef583897f01807e8b",
"title": "PySyft: A Library for Easy Federated Learning"
},
{
"paperId": "2bd0d37276bf38e3b7cda0e8eb26b607305ab6bc",
"title": "A Reinforcement Learning Approach to Parameter Selection for Distributed Optimization in Power Systems"
},
{
"paperId": null,
"title": "National Security Commission on Artificial Intelligence (AI)"
},
{
"paperId": "70c25a7b3b285d90dbe3d528d96638f6208279ca",
"title": "Dynamic Differential Privacy for ADMM-Based Distributed Classification Learning"
},
{
"paperId": "735d4220d5579cc6afe956d9f6ea501a96ae99e2",
"title": "On the momentum term in gradient descent learning algorithms"
},
{
"paperId": null,
"title": "TensorFlow Federated: Machine learning on decentralized data"
},
{
"paperId": null,
"title": "“National Institutes of Health Bridge to Artificial Intelligence (Bridge2AI) program,”"
}
] | 13,716
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02efdc6e9556c59cb992e4a6eed07f1fb6735974
|
[
"Computer Science"
] | 0.829562
|
Policy-Based Signatures
|
02efdc6e9556c59cb992e4a6eed07f1fb6735974
|
IACR Cryptology ePrint Archive
|
[
{
"authorId": "1703441",
"name": "M. Bellare"
},
{
"authorId": "1932793",
"name": "Georg Fuchsbauer"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IACR Cryptol eprint Arch"
],
"alternate_urls": null,
"id": "166fd2b5-a928-4a98-a449-3b90935cc101",
"issn": null,
"name": "IACR Cryptology ePrint Archive",
"type": "journal",
"url": "http://eprint.iacr.org/"
}
| null |
# Policy-Based Signatures
Mihir Bellare[1] and Georg Fuchsbauer[2]
1 Department of Computer Science and Engineering,
University of California San Diego, USA
2 Institute of Science and Technology Austria
**Abstract. We introduce policy-based signatures (PBS), where a signer**
can only sign messages conforming to some authority-specified policy.
The main requirements are unforgeability and privacy, the latter meaning that signatures do not reveal the policy. PBS offers value along two
fronts: (1) On the practical side, they allow a corporation to control
what messages its employees can sign under the corporate key. (2) On
the theoretical side, they unify existing work, capturing other forms of
signatures as special cases or allowing them to be easily built. Our work
focuses on definitions of PBS, proofs that this challenging primitive is realizable for arbitrary policies, efficient constructions for specific policies,
and a few representative applications.
## 1 Introduction
PBS. In a standard digital signature scheme [25,29], a signer who has established
a public verification key vk and a matching secret signing key sk can sign any
message that it wants. We introduce policy-based signatures (PBS), where a
signer’s secret key sk_p is associated to a policy p ∈ {0, 1}* that allows the signer
to produce a valid signature σ of a message m only if the message satisfies
the policy, meaning (p, m) belongs to a policy language L ⊆ {0, 1}* × {0, 1}*
associated to the scheme.
This cannot be achieved if the signer creates her keys in a standalone way.
In our model, a signer is issued a signing key skp for a particular policy p by
an authority, as a function of a master secret key msk held by the authority.
Verification that σ is a valid signature of m is then done with respect to the
authority’s public parameters pp.
Within this framework, we consider a number of security goals. The most
basic are unforgeability and privacy. Unforgeability says that producing a valid
signature for message m is infeasible unless one has a secret key sk_p for some
policy p such that (p, m) ∈ L. (You can only sign messages that you are allowed
to sign.) Privacy requires that signatures not reveal the policy under which they
were created. We will propose and explore different formalizations of these goals.
A trivial way of achieving PBS is via certificates. In more detail, to issue a secret key sk_p for policy p, the authority generates a fresh key pair (sk, pk) for an ordinary signature scheme, creates a certificate cert consisting of a signature of (p, pk) under the authority’s signing key msk, and returns sk_p = (sk, pk, p, cert)
H. Krawczyk (Ed.): PKC 2014, LNCS 8383, pp. 520–537, 2014.
© International Association for Cryptologic Research 2014
Policy-Based Signatures 521
to the signer. The latter’s signature on m is now an ordinary signature of m
under sk together with (pk, p, cert), and verification is possible given the public
verifying key pp of the authority. However, while this will provide unforgeability,
it does not provide privacy, because the policy must be revealed in the signature
to allow for verification. Similarly, privacy in the absence of unforgeability is
also trivial. The combination of the two requirements, however, results in a nontrivial goal.
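The certificate-based scheme just described is concrete enough to sketch in code. The toy below is our own: the policy language is fixed to "messages with prefix p", and HMAC stands in for the ordinary signature scheme to stay in the standard library, so `verify` needs the authority's master key and the scheme is neither publicly verifiable nor secure. It does make the privacy failure visible: the policy and certificate travel in the clear inside every signature.

```python
import hashlib
import hmac
import os

# Toy certificate-based PBS (illustrative sketch only; see caveats above).

def _mac(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

def setup() -> bytes:
    return os.urandom(32)                           # authority's master key msk

def keygen(msk: bytes, policy: bytes) -> dict:
    uk = os.urandom(32)                             # fresh user key (plays both sk and pk here)
    cert = _mac(msk, policy + b"|" + uk)            # authority certifies (p, pk)
    return {"uk": uk, "policy": policy, "cert": cert}

def sign(key: dict, message: bytes) -> dict:
    if not message.startswith(key["policy"]):       # enforce the prefix policy
        raise ValueError("message not permitted by policy")
    return {"sig": _mac(key["uk"], message),        # ordinary signature on m
            "uk": key["uk"], "policy": key["policy"], "cert": key["cert"]}

def verify(msk: bytes, message: bytes, s: dict) -> bool:
    cert_ok = hmac.compare_digest(s["cert"], _mac(msk, s["policy"] + b"|" + s["uk"]))
    sig_ok = hmac.compare_digest(s["sig"], _mac(s["uk"], message))
    return cert_ok and sig_ok and message.startswith(s["policy"])

msk = setup()
key = keygen(msk, b"invoice:")
assert verify(msk, b"invoice:2024-001", sign(key, b"invoice:2024-001"))
```

The structure mirrors the text: unforgeability comes from the certificate chain, while privacy fails because `s["policy"]` must be revealed for verification.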
PBS may be viewed as an authentication analogue of functional encryption [15]. We can view the latter as allowing decryption to be policy-restricted
rather than total, an authority issuing decryption keys in a way that enforces
the policy. Correspondingly, in PBS the signing capability is policy-restricted,
an authority issuing signing keys in a way that enforces the policy.
Why PBS? Given that there already exist many forms of signatures, one might
ask why another. PBS offers value along two fronts, practical and theoretical.
On the practical side, the setup of PBS is natural in a corporate or other hierarchical environment. For example, a corporation may want to allow employees to
sign under the company public key pp, but may want to restrict the signing capability of different employees based on their positions and privileges. However,
the company policies underlying the restrictions need to be kept private. On
the theoretical side, PBS decreases rather than increases complexity in the area
because it serves as an umbrella notion unifying existing notions by capturing
some as special cases and allowing others to be derived in simple and natural
ways. In particular, this is true for a significant body of work on signatures that
have privacy features, including group signatures [22,10], proxy signatures [35],
ring signatures [38,14], mesh signatures [17], anonymous proxy signatures [28],
attribute-based signatures [34] and anonymous credentials [19,6].
Policy languages. We wish to allow policies as expressive and general as
possible. We accordingly allow the policy language to be any language in P,
which captures most typical applications, where one can test in polynomial time
whether a given policy allows a given message. At first this may seem as general
as one can get, but we go further, allowing the policy language to be any language
in NP. This means that the policies that can be expressed and enforced are
restricted neither in form nor type, the only condition being that, given a witness,
one can test in polynomial time whether a policy allows a given message. We
will see applications where it is important that policy languages can be in NP
rather than merely in P.
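To make the NP (rather than P) case concrete, here is a small hypothetical policy checker of our own devising: the policy is a hash commitment to an allow-list and the witness is the list itself. Checking a given witness is polynomial time, while deciding (p, m) ∈ L(PC) without one would require inverting the hash.

```python
import hashlib
import json

def PC(policy: str, message: str, witness: bytes) -> bool:
    """NP policy checker: `policy` is a SHA-256 commitment to an allow-list,
    `witness` is the serialized list.  Verification of a witness is easy;
    membership without a witness is not -- the defining property of NP."""
    if hashlib.sha256(witness).hexdigest() != policy:
        return False
    return message in json.loads(witness)

allow = json.dumps(["pay rent", "pay tax"]).encode()
p = hashlib.sha256(allow).hexdigest()
assert PC(p, "pay rent", allow)
assert not PC(p, "pay bonus", allow)
```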
Definitions and relations. We first provide an unforgeability definition and
an indistinguishability-based privacy definition. Unforgeability says that an adversary cannot create a valid signature of a message m without having a key for
some policy p such that (p, m) ∈ L, even when it can obtain keys for other policies, and signatures for other messages under the target policy. Indistinguishability says that the verifier cannot tell under which of two keys a signature was
created assuming both policies associated to the keys permit the corresponding
522 M. Bellare and G. Fuchsbauer
message. Our definition also implies that the verifier cannot decide whether two
signatures were created using the same key.
However, indistinguishability may not always provide privacy. For example, if
for each message m there is only one policy p_m such that (p_m, m) ∈ L then even a scheme where a signature of m reveals p_m satisfies indistinguishability. We
provide a stronger, simulatability-based privacy notion that says that real signatures look like ones a simulator could generate without knowledge of the policy
or any key. This strong notion of privacy is not subject to the above-discussed
weaknesses of indistinguishability. The situation parallels that for functional encryption (FE), where an indistinguishability-based requirement was shown to
not always suffice [15,37] and stronger simulatability requirements have been defined and considered [15,37,11,23,2,5,36]. However, for FE, impossibility results
show that the strongest and most desirable simulation-based definitions are not
achievable [15,11,23,2,36]. In contrast, for PBS we show that our simulatability
notion is achievable in the standard model under standard assumptions.
We also strengthen unforgeability to provide an extractability notion for
PBS. We show that simulatability implies indistinguishability, and simulatability+extractability implies unforgeability. Simulatability+extractability emerges
as a powerful security notion that enables a wide range of applications.
Constructions. PBS for arbitrary NP policy languages achieving simulatability+extractability is an ambitious target. The first question that emerges is
whether this can be achieved, even in principle, let alone efficiently. We answer in
the affirmative via two generic constructions based on standard primitives. The
first uses ordinary signatures, IND-CPA encryption and standard non-interactive
zero-knowledge (NIZK) proofs. The second uses only ordinary signatures and
simulation(-sound) extractable NIZK proofs [30].
While our generic constructions prove the theoretical feasibility of PBS, their
use of general NIZKs makes them inefficient. We ask whether more efficient solutions may be given without resorting to the random-oracle model [12]. Combining
Groth-Sahai proofs [31] and structure-preserving signatures [1], we design efficient PBS schemes for policy languages expressible via equations over a bilinear
group. This construction requires a twist over usual applications of Groth-Sahai
proofs; namely, in order to hide the policy, we swap the roles of constants and
variables. This provides a tool that, like structure-preserving signatures, is useful
in cryptographic applications where policies may be about group elements.
Applications and implications. We illustrate applicability by showing how
to derive a variety of other primitives from PBS in simple and natural ways. This
shows how PBS can function as a unifying framework for signatures and beyond.
In Section 5 we show that PBS implies group signatures meeting the strong CCA
version of the definition of [10]. In the full version [7] we also show that PBS
implies attribute-based signatures [34] and signatures of knowledge [21]. These
applications are illustrative rather than exhaustive, many more being possible.
Our generic constructions discussed above show which primitives are sufficient
to build PBS. A natural question is which primitives are necessary, namely, which
fundamental primitives are implied by PBS? In [7], we address this and show
that PBS implies seemingly unrelated primitives like IND-CPA encryption and
simulation-extractable NIZK proofs [30]. By [39] this means PBS implies IND-CCA encryption. In particular, this means the assumptions we make for our
generic constructions are not only sufficient but necessary.
Delegatable PBS. In Section 6 we extend the PBS framework to allow delegation. This means that an entity receiving from the authority a key sk_{p1} for a policy p1 can then issue to another entity a key sk_{p1‖p2} that allows the signing of messages m which satisfy both policies p1 and p2. The holder of sk_{p1‖p2} can further delegate a key sk_{p1‖p2‖p3}, and so on. This is useful in a hierarchical setting, where a company president can delegate to vice presidents, who can then delegate to managers, and so on. We provide definitions which extend and strengthen those for the basic PBS setting; in particular, privacy must hold even when the adversary chooses the user keys. We then show how to achieve delegatable PBS for policy chains of arbitrary polynomial length. For simplicity, we base our construction, achieving sim+ext security, on append-only signatures [33], which can however be easily constructed from ordinary signatures.
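The delegation semantics can be sketched in a few lines: a key for the chain p1‖p2‖…‖pk permits exactly the messages satisfying every policy in the chain, so each delegation step can only restrict, never widen, the signing capability. The substring policies below are our own illustrative choice, not the paper's.

```python
# Delegation semantics in miniature: every policy in the chain must hold.

def chain_allows(chain, message):
    return all(p in message for p in chain)

president = ["budget"]                    # key issued by the authority
vp = president + ["Q3"]                   # delegated: budget AND Q3
manager = vp + ["marketing"]              # delegated again

assert chain_allows(manager, "Q3 marketing budget report")
assert not chain_allows(manager, "Q3 engineering budget report")
assert chain_allows(vp, "Q3 budget summary")       # wider capability upstream
assert not chain_allows(manager, "Q3 budget summary")
```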
Discussion. In the world of digital signatures, extensions of functionality typically involve some form of delegation of signing rights: group signatures allow
members to sign on behalf of a whole group, in attribute-based signatures (ABS)
and types of anonymous credentials, keys are also issued by an authority, and
(anonymous) proxy signatures model delegation and re-delegation explicitly. For
most of these primitives, anonymity or privacy notions have been considered. A
group signature, for example, should not reveal which group member produced
a signature on behalf of the group (while an authority can trace group signatures to their signer). In ABS, users hold keys corresponding to their attributes
and can sign messages with respect to a policy, which is a predicate over attributes. Users should only be able to make signatures for policies satisfied by their
attributes. Privacy for ABS means that a signature should reveal nothing about
the attributes of the key under which it was produced, other than the fact that
it satisfies the policy.
In the models of primitives such as ABS or mesh signatures, the policy itself
is always public, as is the warrant specifying the policy in (even anonymous)
proxy signatures. With PBS, we ask whether this is a natural limitation of
privacy notions, and whether it is inherently unavoidable that objects like the
policy (which specify why the message could be signed) need to be public.
Consider the example of a company implementing a scheme where each employee gets a signing key and there is one public key which is used by outsiders to
verify signatures in the name of the company. A group-signature scheme would
allow every employee holding a key to sign on behalf of the company, but there is
no fine-grained control over who is allowed to sign which documents. This can be
achieved using attribute-based signatures, where each user is assigned attributes,
and a message is signed with respect to a policy like (CEO or (board member
and general manager)). However, it is questionable whether a verifier needs to
know the company-internal policy used to sign a specific message, and there is
no apparent reason he should know; all he needs to be assured of is that the
message was signed by someone entitled to, but not who this person is, what she
is entitled to sign, nor whether two messages were signed by the same person.
This is what PBS provides.
Another issue is that when using ABS we have to assume that the verifier
can tell which messages can be signed under which policies. An attribute-based
signature which is valid under the policy (CEO or intern) tells a verifier that it
could have been produced by an intern, but it does not provide any guarantees
as to whether an intern would have been entitled to sign the message. We ask
whether it is possible to avoid having these types of public policies at all. PBS
answers this in the affirmative.
Related work. The use of NIZKs for signatures begins with [8], who built an
ordinary signature scheme from a NIZK, a pseudorandom function (PRF) and
a commitment scheme. Encryption and ordinary signatures were combined with
NIZKs to create group signatures in [10]. Our first generic construction builds
on these ideas. Our second generic construction, inspired by [26,9], exploits the
power of simulation-extractable NIZKs to give a conceptually simpler scheme
that, in addition to the NIZK, uses only an ordinary signature scheme.
In independent and concurrent work, Boyle, Goldwasser and Ivan (BGI) [18]
introduce functional signatures, where an authority can provide a key for a
function f that allows the signing of any message in the range of f . This can
be captured as a special case of PBS in which the policy is f and the policy
language is the set of all (f, m) such that m is in the range of f, a witness
for membership being a pre-image of m under f . BGI define unforgeability and
an indistinguishability-based privacy requirement, but not the stronger simulatability or extractability conditions that we define and achieve. BGI have a
succinctness condition which we do not have.
A related primitive is malleable signatures, introduced by Chase, Kohlweiss,
Lysyanskaya and Meiklejohn [20]. They are defined with respect to a set F of functions, so that given a signature of m, anyone can derive a signature of f(m) for f ∈ F. Concurrently to our work, Backes, Meiser and Schröder [3]
introduced delegatable functional signatures, but in their model delegatees have
public keys and signatures are verified under the authority’s and the delegatee’s
keys. Privacy means that signatures from delegatees are indistinguishable from
signatures from the authority.
Three recent works independently and concurrently introduce PRFs where
one may issue a key to evaluate the PRF on a subset of the points of the
domain [16,18,32]. These can be viewed as PRF analogues of policy-based signatures in which a policy corresponds to a set of inputs and a key allows computation of the PRF on the inputs in the set. Boneh and Waters [16] also provide
a policy-based key-distribution scheme.
In their treatment of policy-based cryptography, Bagga and Molva [4] mention
both policy-based encryption and policy-based signatures. However they do not
consider privacy, without which, as noted above, the problem is easy. Moreover,
they have no formal definitions of security requirements or proofs that their
bilinear-map-based schemes achieve any well-defined security goal.
## 2 Preliminaries
Notations and conventions. If S is a finite set then |S| denotes its size and s ←$ S denotes picking an element uniformly from S and assigning it to s. For i ∈ N we let [i] = {1, . . ., i}. We denote by λ ∈ N the security parameter and by 1^λ its unary representation. Algorithms are randomized unless otherwise indicated and “PT” stands for “polynomial-time”. By y ← A(x1, . . . ; R), we denote the operation of running algorithm A on inputs x1, . . . and coins R and letting y denote the output. By y ←$ A(x1, . . .), we denote letting y ← A(x1, . . . ; R) with R chosen at random. We denote by [A(x1, . . .)] the set of points that have positive probability of being output by A on inputs x1, . . . .
A map R : {0,1}* × {0,1}* → {0,1} is said to be an NP-relation if it is computable in time polynomial in the length of its first input. For x ∈ {0,1}* we let WS_R(x) = {w : R(x, w) = 1} be the witness set of x. We let L(R) = {x : WS_R(x) ≠ ∅} be the language associated to R. The fact that R is an NP-relation means that L(R) ∈ NP.
Game-playing framework. For our security definitions and proofs we use the code-based game-playing framework of [13]. A game Exp (Figure 1, for example) consists of a finite number of procedures. We execute a game with an adversary A and security parameter λ ∈ N as follows. The adversary gets 1^λ as input. It can then query game procedures. Its first query must be to Initialize with argument 1^λ, and its last to Finalize, and these must be the only queries to these oracles. The output of the execution, denoted Exp_A(λ), is the output of Finalize. The running time of the adversary A is a function of λ in which oracle calls are assumed to take unit time.
## 3 Policy-Based Signatures
Policy languages. A policy checker is an NP-relation PC : {0,1}* × {0,1}* → {0,1}. The first input is a pair (p, m) representing a policy p ∈ {0,1}* and a message m ∈ {0,1}*, while the second input is a witness w ∈ {0,1}*. The associated language L(PC) = {(p, m) : WS_PC((p, m)) ≠ ∅} is called the policy language associated to PC. That (p, m) ∈ L(PC) means that signing m is permitted under policy p. We say that (p, m, w) is PC-valid if PC((p, m), w) = 1.
PBS schemes. A policy-based signature scheme PBS = (Setup, KeyGen, Sign, Verify) is a 4-tuple of PT algorithms:
1. Setup: On input the unary-encoded security parameter 1^λ, setup algorithm Setup returns public parameters pp and a master secret key msk.
2. KeyGen: On input msk and p, where p ∈ {0,1}* is a policy, key-generation algorithm KeyGen outputs a signing key sk for p.
3. Sign: On input sk, m and w, where m ∈ {0,1}* is a message and w ∈ {0,1}* is a witness, signing algorithm Sign outputs a signature σ.
4. Verify: On input pp, m and σ, verification algorithm Verify outputs a bit.
Game Exp^UF_PBS

proc Initialize
  (pp, msk) ←$ Setup(1^λ) ; j ← 0
  Return pp
proc MakeSK(p)
  j ← j + 1 ; Q_j^1 ← p ; Q_j^2 ←$ KeyGen(msk, p) ; Q_j^3 ← ∅
proc RevealSK(i)
  If i ∉ [j] then return ⊥
  sk ← Q_i^2 ; Q_i^2 ← ⊥ ; Return sk
proc Sign(i, m, w)
  If i ∉ [j] or Q_i^2 = ⊥ then return ⊥
  Q_i^3 ← Q_i^3 ∪ {m}
  Return Sign(pp, Q_i^2, m, w)
proc Finalize(m, σ)
  If Verify(pp, m, σ) = 0 then return false
  For i = 1, . . ., j do
    If (Q_i^1, m) ∈ L(PC) then
      If Q_i^2 = ⊥ or m ∈ Q_i^3 then return false
  Return true

Game Exp^IND_PBS

proc Initialize
  b ←$ {0,1} ; (pp, msk) ←$ Setup(1^λ)
  Return (pp, msk)
proc LR(p0, p1, m, w0, w1)
  If PC((p0, m), w0) = 0 or PC((p1, m), w1) = 0 then return ⊥
  sk0 ← KeyGen(msk, p0) ; sk1 ← KeyGen(msk, p1)
  σ_b ← Sign(sk_b, m, w_b)
  Return (σ_b, sk0, sk1)
proc Finalize(b′)
  Return (b = b′)

**Fig. 1. Games defining unforgeability and indistinguishability for PBS**
We say that the scheme is correct relative to policy checker PC if for all λ ∈ N, all PC-valid (p, m, w), all (pp, msk) ∈ [Setup(1^λ)] and all σ ∈ [Sign(KeyGen(msk, p), m, w)] we have Verify(pp, m, σ) = 1.
Unforgeability. Our basic unforgeability requirement is that it be hard to create a valid signature of m without holding a key for some policy p such that (p, m) ∈ L(PC). The formalization is based on game Exp^UF_PBS in Figure 1. For λ ∈ N we let Adv^UF_PBS,A(λ) = Pr[Exp^UF_PBS,A ⇒ true]. We say that PBS is unforgeable, or UF-secure, if Adv^UF_PBS,A(·) is negligible for every PT A. Via a MakeSK query, the adversary can have the game create a key for a policy p. Then, via Sign, it can obtain a signature under this key for any message of its choice. (This models a chosen-message attack.) It may also, via its RevealSK oracle, obtain the key itself. (This models corruption of users or the formation of collusions of users who pool their keys.) These queries naturally give the adversary the capability of creating signatures for certain messages, namely messages m such that for some p with (p, m) ∈ L(PC), it either obtained a key for p or obtained a signature for m. Unforgeability asks that it cannot sign any other messages. Note that we did not explicitly specify how Sign behaves when run on a key for p, and m, w with PC((p, m), w) = 0. However, if it outputs a valid signature, this can be used to break UF-security.
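The bookkeeping of the UF game translates directly into a small harness. Everything below is our own sketch: `scheme` bundles the four algorithms as callables, `in_L(p, m)` decides the policy language (assumed efficiently decidable for this toy), and the demo "scheme" is deliberately broken so a forgery registers as a win.

```python
from types import SimpleNamespace

class UFGame:
    """Harness mirroring the UF game: Q[i] = [policy, key (None once
    revealed), set of messages signed under key i]."""

    def __init__(self, scheme, in_L):
        self.scheme, self.in_L = scheme, in_L
        self.pp, self.msk = scheme.setup()
        self.Q = []

    def make_sk(self, p):                       # MakeSK: key stays inside the game
        self.Q.append([p, self.scheme.keygen(self.msk, p), set()])
        return len(self.Q) - 1

    def reveal_sk(self, i):                     # RevealSK: models corruption
        sk, self.Q[i][1] = self.Q[i][1], None
        return sk

    def sign(self, i, m, w):                    # Sign: chosen-message queries
        if self.Q[i][1] is None:
            return None
        self.Q[i][2].add(m)
        return self.scheme.sign(self.Q[i][1], m, w)

    def finalize(self, m, sigma):
        if not self.scheme.verify(self.pp, m, sigma):
            return False
        for p, sk, signed in self.Q:            # exclude trivial wins
            if self.in_L(p, m) and (sk is None or m in signed):
                return False
        return True                             # a genuine forgery

# Demo against a deliberately broken scheme whose signatures always verify:
broken = SimpleNamespace(
    setup=lambda: ("pp", "msk"),
    keygen=lambda msk, p: ("sk", p),
    sign=lambda sk, m, w: "sig",
    verify=lambda pp, m, s: True,
)
game = UFGame(broken, in_L=lambda p, m: m.startswith(p))
game.make_sk("invoice:")
assert game.finalize("refund:1", "junk")        # adversary wins: UF is broken
```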
Indistinguishability. Privacy for policy-based signatures requires that a signature reveal neither the policy associated to the key nor the witness that was used to create the signature. A first idea would be the following formalization: an adversary outputs a message m, two policies p0, p1, and two witnesses w0, w1, such that (p0, m, w0) and (p1, m, w1) are PC-valid. For either p0 or p1 the
experiment computes a secret key and uses it to produce a signature on m, from
which the adversary has to determine which policy was used. It turns out that
this notion is too weak, as it does not guarantee that two signatures produced
under the same secret key do not link, as seen as follows. Consider a scheme
satisfying the security notion just sketched and modify it by attaching to each
secret key a random string during key generation and alter Sign to append to the
signature the random string contained in the secret key. Clearly, two signatures
under the same key are linkable, but yet the scheme satisfies the definition. We
therefore give the adversary both secret keys in addition to the signature.
Let Exp^IND_PBS,A be the game defined in Figure 1. We say that PBS has indistinguishability if for all PT adversaries A we have that Adv^IND_PBS,A(λ) = Pr[Exp^IND_PBS,A(λ) ⇒ true] − 1/2 is negligible in λ. We assume that either all policy descriptions p are of equal length, or that A outputs p0 and p1 with |p0| = |p1|.
Unlinkability could be formalized via a game where an adversary is given
two signatures and must decide whether they were created using the same key.
Indistinguishability implies unlinkability, as an adversary against the latter could
be used to build another one against indistinguishability, who can simulate the
unlinkability game by using the received signing keys to produce signatures.
Discussion. The unforgeability and indistinguishability notions we have defined
above are basic, intuitive, and suffice for many applications. However, they have
some weaknesses, and some applications call for stronger requirements.
First, we claim that indistinguishability does not always provide the privacy
we may expect. To see this, consider a policy checker PC such that for every
message m there is only one p with (p, m) ∈ L(PC). (See our construction of group signatures in Section 5 for an example of such a PC.) Now consider a
scheme which satisfies indistinguishability, and modify it so that the key contains
the policy and the signing algorithm appends the policy to the signature. This
scheme clearly does not hide the policy, yet still satisfies indistinguishability.
Indeed, in Exp^IND_PBS, in order to satisfy PC((p0, m), w0) = 1 = PC((p1, m), w1),
the adversary must return p0 = p1. If the signatures in the original scheme have
not revealed the bit b then attaching the same policy to both will not do so either.
The notion of simulatability we provide below will fill the gap. It asks that there
is a simulator which can create simulated signatures without having access to
any signing key or witness, and that these signatures are indistinguishable from
real signatures.
With regard to unforgeability, one issue is that in general it cannot be efficiently verified whether an adversary has won the game, as this involves checking whether (p, m) ∈ L(PC) for all p queried to MakeSK and m from the adversary’s final output, and membership in L(R) may not be efficiently decidable. (This is the case for L(R) defined in (4) in Section 5.) Although not a problem
Game Exp^SIM_PBS

proc Initialize
  b ←$ {0,1} ; j ← 0
  (pp0, msk0, tr) ←$ SimSetup(1^λ)
  (pp1, msk1) ←$ Setup(1^λ)
  Return (pp_b, msk_b)
proc Key(p)
  j ← j + 1 ; sk0 ←$ SKeyGen(tr, p)
  sk1 ←$ KeyGen(msk1, p)
  Q_j^1 ← p ; Q_j^2 ← sk1
  Return sk_b
proc Signature(i, m, w)
  If i ∉ [j] then return ⊥
  If PC((Q_i^1, m), w) = 1 then σ0 ←$ SimSign(tr, m)
  Else σ0 ← ⊥
  σ1 ←$ Sign(Q_i^2, m, w) ; Return σ_b
proc Finalize(b′)
  Return (b = b′)

Game Exp^EXT_PBS

proc Initialize
  (pp, msk, tr) ←$ SimSetup(1^λ)
  Q_K ← ∅ ; Q_S ← ∅ ; Return pp
proc SKeyGen(p)
  sk ←$ SKeyGen(tr, p)
  Q_K ← Q_K ∪ {p} ; Return sk
proc SimSign(m)
  σ ←$ SimSign(tr, m)
  Q_S ← Q_S ∪ {(m, σ)} ; Return σ
proc Finalize(m, σ)
  If Verify(pp, m, σ) = 0 then return false
  If (m, σ) ∈ Q_S then return false
  (p, w) ← Extr(tr, m, σ)
  If p ∉ Q_K or PC((p, m), w) = 0 then return true
  Return false

**Fig. 2. Games defining simulatability and extractability for PBS**
in itself, it can become one, for example when using the notion in a proof by
game hopping, as a distinguisher between two games must efficiently determine
whether an adversary has won the game. (See [7] for such a proof.) The extractability notion we will provide below will fill this gap as well as be more
useful in applications. It requires that from a valid signature, using a trapdoor
one can extract a policy and a valid witness. To satisfy this notion, a signature
must contain information on the policy and can thus not hide its length. For
simplicity, we assume from now on that all policies are of the same length.
Simulatability. We formalize simulatability by requiring that there exist the
following algorithms: SimSetup, which outputs parameters and a master key that
are indistinguishable from those output by Setup, as well as a trapdoor; SKeyGen,
which outputs keys indistinguishable from those output by KeyGen; and SimSign,
which on input the trapdoor and a message (but no signing key nor witness)
produces signatures that are indistinguishable from regular signatures.
Let Exp^SIM_PBS be the game defined in Figure 2. We require that for every PT adversary A we have that Adv^SIM_PBS,A(λ) = Pr[Exp^SIM_PBS,A(λ) ⇒ true] − 1/2 is negligible
in λ. Note that in all our constructions, tr contains msk and SKeyGen is defined
as KeyGen. We included SKeyGen to make the definition more general.
Extractability. We define our notion in the spirit of “sim-ext” security for signatures of knowledge [21]. Let Adv^EXT_PBS,A(λ) = Pr[Exp^EXT_PBS,A(λ) ⇒ true] with Exp^EXT_PBS defined in Figure 2. We say that PBS has extractability if there exists an algorithm Extr which, taking a trapdoor, a message and a signature, outputs a pair (p, w) ∈ {0,1}* × {0,1}*, such that Adv^EXT_PBS,A(·) is negligible for every PT A.

Although the definition might not seem completely intuitive at first, it implies that, as long as the adversary outputs a valid message/signature pair and does not simply copy a SimSign query/response pair, the only signed messages it can output are those that satisfy the policy of one of the queried keys: assume A outputs (m*, σ*) such that (∗) for all p ∈ Q_K: (p, m*) ∉ L(PC). Then let (p*, w*) ← Extr(tr, m*, σ*). If PC((p*, m*), w*) = 0, the adversary wins Exp^EXT_PBS. On the other hand, if PC((p*, m*), w*) = 1 then (p*, m*) ∈ L(PC), thus by (∗) we have p* ∉ Q_K and it wins too. Note that this notion corresponds to strong unforgeability for signature schemes.
Sim-ext security implies IND and UF. In [7] we show that our two latter
security notions are indeed strengthenings of the former two:
**Theorem 1.** _Any policy-based signature scheme which satisfies simulatability satisfies indistinguishability. Any PBS scheme which satisfies simulatability and extractability satisfies unforgeability._
## 4 Constructions of Policy-Based Signature Schemes
We first show that PBS satisfying SIM+EXT can be achieved for any language
in NP. Then we develop more efficient schemes for specific policy languages.
### 4.1 Generic Constructions
We now show how to construct policy-based signatures satisfying simulatability
and extractability (and, by Theorem 1, IND and UF) for any NP-relation PC.
In [7] we show that the assumptions we make are not only sufficient but necessary.
A first approach could be the following, similar to the generic construction
of group signatures in [10]: The issuer creates a signature key pair (mvk, msk) and publishes mvk as pp. When a user is issued a key for a policy p, the issuer creates a key pair (vk_U, sk_U), signs p‖vk_U and sends this certificate to the user together with (p, vk_U, sk_U). To sign a message m, the user first signs it under sk_U, thereby establishing a chain mvk → vk_U → m via the certificate and the signature. The actual signature is a (zero-knowledge) proof of knowledge of such a chain and the fact that the message satisfies the policy signed in the certificate.
While this approach yields a scheme satisfying IND and UF, it would fail to
achieve extractability. We thus choose a different approach: The user’s key is
simply a signature from the issuer on the policy. Now to sign a message, the user
first picks a key pair (ovk, osk) for a strongly unforgeable one-time signature
scheme[1] and makes a zero-knowledge proof π that he knows either (I) an issuer
1 In such a scheme it must be infeasible for an adversary, after receiving a verification key ovk and after obtaining a signature σ on one message m of his choice, to output a signature σ* on a message m*, such that (m, σ) ≠ (m*, σ*).
Setup(1^λ)
  crs ←$ Setup_nizk(1^λ)
  (pk, dk) ←$ KeyGen_pke(1^λ)
  (mvk, msk) ←$ KeyGen_sig(1^λ)
  Return pp ← (crs, pk, mvk) and msk

KeyGen(msk, p)
  s ←$ Sign_sig(msk, 1‖p)
  Return sk_p ← (pp, p, s)

Sign(sk_p, m, w)
  Parse ((crs, pk, mvk), p, s) ← sk_p
  If PC((p, m), w) = 0 then return ⊥
  (ovk, osk) ←$ KeyGen_ots(1^λ)
  ρ_p, ρ_s, ρ_w ←$ {0,1}^λ ; C_p ← Enc(pk, p; ρ_p)
  C_s ← Enc(pk, s; ρ_s) ; C_w ← Enc(pk, w; ρ_w)
  π ←$ Prove(crs, (pk, mvk, C_p, C_s, C_w, ovk, m), (p, s, w, ρ_p, ρ_s, ρ_w))
  τ ←$ Sign_ots(osk, (m, C_p, C_s, C_w, π))
  Return σ ← (ovk, C_p, C_s, C_w, π, τ)

Verify(pp, m, σ)
  Parse (crs, pk, mvk) ← pp
  Parse (ovk, C_p, C_s, C_w, π, τ) ← σ
  Return 1 iff
    Verify_nizk(crs, (pk, mvk, C_p, C_s, C_w, ovk, m), π) = 1 and
    Verify_ots(ovk, (m, C_p, C_s, C_w, π), τ) = 1

SimSetup(1^λ)
  crs ←$ Setup_nizk(1^λ)
  (pk, dk) ←$ KeyGen_pke(1^λ)
  (mvk, msk) ←$ KeyGen_sig(1^λ)
  Return pp ← (crs, pk, mvk), msk and tr ← (msk, dk)

SKeyGen((msk, dk), p)
  s ←$ Sign_sig(msk, 1‖p)
  Return sk_p ← (pp, p, s)

SimSign((msk, dk), m)
  (ovk, osk) ←$ KeyGen_ots(1^λ)
  s ←$ Sign_sig(msk, 0‖ovk)
  ρ_p, ρ_s, ρ_w ←$ {0,1}^λ
  C_p ← Enc(pk, 0; ρ_p) ; C_s ← Enc(pk, s; ρ_s) ; C_w ← Enc(pk, 0; ρ_w)
  π ←$ Prove(crs, (pk, mvk, C_p, C_s, C_w, ovk, m), (0, s, 0, ρ_p, ρ_s, ρ_w))
  τ ←$ Sign_ots(osk, (m, C_p, C_s, C_w, π))
  Return σ ← (ovk, C_p, C_s, C_w, π, τ)

Extr((msk, dk), m, σ)
  Parse (ovk, C_p, C_s, C_w, π, τ) ← σ
  p ← Dec(dk, C_p) ; w ← Dec(dk, C_w)
  Return (p, w)

**Fig. 3. Generic construction of PBS**
signature on a policy p such that (p, m) ∈ L(PC) or (II) an issuer signature on ovk. Finally, he adds a signature under ovk of both the message and the proof. As we will see, this construction satisfies both SIM (where the simulator can make a signature on ovk and use clause (II) for the proof) and EXT (as π is a proof of knowledge).
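The one-time signature component of this construction can be instantiated, for illustration, with Lamport's classic hash-based scheme. This is our choice, not the paper's (which only assumes some strongly unforgeable OTS); Lamport's scheme is one-time unforgeable, and the strong variant needs extra care, so treat this purely as a sketch.

```python
import hashlib
import os

# Lamport one-time signatures over the low bits of a SHA-256 message digest.

def _h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def ots_keygen(bits: int = 32):
    # Secret key: a pair of random preimages per digest bit; the public key
    # is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(bits)]
    vk = [(_h(a), _h(b)) for a, b in sk]
    return sk, vk

def _digest_bits(m: bytes, n: int):
    d = int.from_bytes(_h(m), "big")
    return [(d >> i) & 1 for i in range(n)]

def ots_sign(sk, m: bytes):
    # Reveal one preimage per bit of H(m).
    return [pair[b] for pair, b in zip(sk, _digest_bits(m, len(sk)))]

def ots_verify(vk, m: bytes, sig) -> bool:
    return all(_h(s) == pair[b]
               for s, pair, b in zip(sig, vk, _digest_bits(m, len(vk))))

osk, ovk = ots_keygen()
tau = ots_sign(osk, b"message with proof")
assert ots_verify(ovk, b"message with proof", tau)
```

Signing a second message would reveal further preimages and destroy security, which is exactly why the construction draws a fresh (ovk, osk) per signature.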
We formalize the above: Let Sig = (KeyGen_sig, Sign_sig, Verify_sig) be a signature scheme which is unforgeable under chosen-message attacks (UF-CMA), OtSig = (KeyGen_ots, Sign_ots, Verify_ots) a strongly unforgeable one-time signature scheme and let PKE = (KeyGen_pke, Enc, Dec) be an IND-CPA-secure public-key encryption scheme. For a policy checker PC we define the following NP-relation:

  ((pk, mvk, C_p, C_s, C_w, ovk, m), (p, s, w, ρ_p, ρ_s, ρ_w)) ∈ R_NP
    ⟺ C_p = Enc(pk, p; ρ_p) ∧ C_s = Enc(pk, s; ρ_s) ∧ C_w = Enc(pk, w; ρ_w)
       ∧ ((Verify_sig(mvk, 1‖p, s) = 1 ∧ PC((p, m), w) = 1)
          ∨ Verify_sig(mvk, 0‖ovk, s) = 1)     (1)
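Stripped of the encryption-consistency checks, the disjunction in (1) can be mirrored as a plain predicate over the opened values. The callables `verify_sig` and `pc` below are hypothetical stand-ins for Verify_sig and the policy checker PC, and the tag bytes mirror the 1‖p / 0‖ovk domain separation.

```python
# Plaintext part of relation (1): clause (I) is a certified policy that
# permits m; clause (II) is the simulator's escape hatch, an issuer
# signature on the one-time key ovk.

def relation_holds(mvk, ovk, m, p, s, w, verify_sig, pc):
    clause_I = verify_sig(mvk, b"1|" + p, s) and pc(p, m, w)
    clause_II = verify_sig(mvk, b"0|" + ovk, s)
    return clause_I or clause_II

# Toy stand-ins: a "signature" is just the signed bytes themselves.
vs = lambda vk, msg, sig: sig == msg
pc = lambda p, m, w: m.startswith(p)

assert relation_holds(b"vk", b"ovk", b"ab", b"a", b"1|a", b"", vs, pc)      # clause (I)
assert relation_holds(b"vk", b"ovk", b"zz", b"a", b"0|ovk", b"", vs, pc)    # clause (II)
assert not relation_holds(b"vk", b"ovk", b"zz", b"a", b"1|a", b"", vs, pc)  # neither
```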
Setup(1^λ)
  crs ←$ Setup_nizk(1^λ)
  (mvk, msk) ←$ KeyGen_sig(1^λ)
  Return pp ← (crs, mvk), msk

KeyGen(msk, p)
  c ←$ Sign_sig(msk, p)
  Return sk ← (pp, p, c)

Sign(sk = ((crs, mvk), p, c), m, w)
  σ ←$ Prove(crs, (mvk, m), (p, c, w))
  Return σ

Verify(pp = (crs, mvk), m, σ)
  Return Verify_nizk(crs, (mvk, m), σ)

SimSetup(1^λ)
  (crs, tr) ←$ SimSetup_nizk(1^λ)
  (mvk, msk) ←$ KeyGen_sig(1^λ)
  Return pp ← (crs, mvk), msk, tr_pbs ← (pp, msk, tr)

SKeyGen((pp, msk, tr), p)
  c ←$ Sign_sig(msk, p) ; Return sk ← (pp, p, c)

SimSign(((crs, mvk), msk, tr), m)
  Return σ ←$ SimProve(crs, tr, (mvk, m))

Extr(((crs, mvk), msk, tr), m, σ)
  (p, c, w) ← Extr_nizk(tr, (mvk, m), σ)
  Return (p, w)

**Fig. 4. PBS based on SE-NIZKs**
Let NIZK = (Setupnizk, Prove, Verifynizk) be a non-interactive zero-knowledge
(NIZK) proof system for L(RNP). Our construction PBS for a policy checker
PC is detailed in Figure 3, and in [7] we prove the following:
**Theorem 2.** _If PKE satisfies IND-CPA, Sig is UF-CMA, OtSig is a strongly
unforgeable one-time signature scheme and NIZK is a NIZK proof system for
L(RNP), then PBS, defined in Figure 3, satisfies simulatability and extractability._
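Theorem 2 assumes a strongly unforgeable one-time signature scheme OtSig; such schemes can be built from a hash function alone. As a self-contained illustration (our own minimal Lamport-style instantiation with toy parameters, not part of the paper's construction), the following Python sketch signs one message per key, matching the one-time use of (ovk, osk) in Fig. 3:

```python
import hashlib, os

def H(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def keygen_ots():
    # Secret key: two rows of 256 random preimages (one row per bit value);
    # public key: their hashes. Each key must sign at most one message.
    sk = [[os.urandom(32) for _ in range(256)] for _ in range(2)]
    vk = [[H(x) for x in row] for row in sk]
    return vk, sk

def bits_of(msg: bytes):
    d = H(msg)  # sign the 256-bit hash of the message
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign_ots(sk, msg: bytes):
    # Reveal, for each bit of H(msg), the matching secret preimage.
    return [sk[b][i] for i, b in enumerate(bits_of(msg))]

def verify_ots(vk, msg: bytes, sig) -> bool:
    return all(H(s) == vk[b][i]
               for i, (b, s) in enumerate(zip(bits_of(msg), sig)))
```

Revealing preimages for a second message would leak material for forgeries, which is why the construction pairs a fresh (ovk, osk) with every signature.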
We now present a much simpler construction of PBS by relying on a more advanced cryptographic primitive: simulation-extractable (SE) NIZK proofs [30]
(see [7] for the definition). Let Sig = (KeyGensig, Signsig, Verifysig) be a signature
scheme and for a policy checker PC let NIZK = (Setupnizk, Prove, Verifynizk,
SimSetupnizk, SimProve, Extrnizk) be a SE-NIZK for the following NP-relation,
whose statements are of the form X = (vk, m) with witnesses W = (p, c, w) and
((vk, m), (p, c, w)) ∈ RNP ⇐⇒ Verifysig(vk, p, c) = 1 ∧ PC((p, m), w) = 1
Then the scheme in Figure 4 is a PBS for PC which satisfies SIM+EXT. In [7]
we prove this for a more general scheme allowing delegation.
**4.2** **Efficient Construction via Groth-Sahai Proofs**
Our efficient construction of PBS will be defined over a bilinear group. This is
a tuple (p, G, H, T, G, H), where G, H and T are groups of prime order p, with
generators G of G and H of H, and e : G × H → T is a bilinear map such that
e(G, H) generates T. We denote the group operation multiplicatively and let
1_G, 1_H and 1_T denote the neutral elements of G, H and T. Groth-Sahai
proofs [31] let us prove that there exists a set of elements
(X, Y) = (X_1, ..., X_n, Y_1, ..., Y_ℓ) ∈ G^n × H^ℓ
which satisfy equations E(X, Y) of the form

  ∏_{i=1}^{k} e(P_i, Q_i) · ∏_{j=1}^{ℓ} e(A_j, Y_j) · ∏_{i=1}^{n} e(X_i, B_i) · ∏_{i=1}^{n} ∏_{j=1}^{ℓ} e(X_i, Y_j)^{γ_ij} = 1_T        (2)
-----
532 M. Bellare and G. Fuchsbauer
Such an equation E is called a pairing-product equation [2] (PPE) and is uniquely
defined by its constants P, Q, A, B and Γ := (γij )i∈[n],j∈[ℓ]. These equations
have already found many uses in cryptography, of which the following two are
relevant here: they can define the verification predicate of a digital signature
(see [1]), or witness the fact that a ciphertext encrypts a certain value (see [7]).
Our aim is to construct policy-based signatures where policies define (sets of)
PPEs, which must be satisfied by the message and the witness.
Groth and Sahai define a setup algorithm which on input a bilinear group
outputs a common reference string crs and an extraction key xk. On input crs, an
equation E and a satisfying witness (X, Y ), algorithm Provegs outputs a proof π.
Proofs are verified by Verifygs(crs, E(·, ·), π). Under the SXDH assumption (see
[31]), proofs are witness-indistinguishable [27], that is, proofs for an equation
using different witnesses are computationally indistinguishable. Moreover, they
are extractable and thus proofs of knowledge [24]: From every valid proof π,
Extrgs(xk, E(·, ·), π) extracts a witness (X, Y ) such that E(X, Y ) = 1.
In our Groth-Sahai-based construction of PBS, messages and witnesses will
be group elements and a policy defines a set of equations as in (2) that have to
be satisfied. The policy checker is thus defined as follows: the policy p defines
a set of equations (E_1, ..., E_n) and PC((p, m), w) = 1 iff E_i(m, w) = 1 for all
i ∈ [n], where m ∈ G^{n_m} × H^{ℓ_m} and w ∈ G^{n_w} × H^{ℓ_w}.
GS proofs only allow us to extract group elements; however, an equation—
and thus a policy—is defined by a set of group elements and exponents γij. In
order to hide a policy, we need to swap the roles of constants and variables in
an equation, as this will enable us to hide the policy defined by the constants.
We first transform equations as in (2) into a set of equivalent equations without
exponents. To do so, we introduce auxiliary variables Ỹ_ij, add n · ℓ new
equations and define the set E^(no-exp) as follows:

  ( ∏_{i=1}^{k} e(P_i, Q_i) · ∏_{j=1}^{ℓ} e(A_j, Y_j) · ∏_{i=1}^{n} e(X_i, B_i) · ∏_{i,j} e(X_i, Ỹ_ij) = 1_T )
  ∧  ( ⋀_{i,j} e(G, Ỹ_ij) = e(G^{γ_ij}, Y_j) )                                        (3)

A witness (X, Y) satisfies E in (2) iff (X, Y, (Ỹ_ij := Y_j^{γ_ij})_{i,j}) satisfies the set
of equations E^(no-exp) in (3). Now we can show that a (clear) message (M, N)
satisfies a "hidden" policy defined by equation E, witnessed by elements (V, W),
since we can express policies as sets of group elements.
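To see the bookkeeping behind this rewrite concretely, the following Python sketch uses a mock bilinear group in which each group element is represented by its discrete logarithm, so the "pairing" just multiplies exponents and a product in T adds them. This is of course insecure and only checks the algebra of the (2)→(3) transformation; all names and parameters are our illustrative choices:

```python
import random

p = 2**31 - 1          # mock prime group order
rnd = random.Random(0)

def e(a, b):           # pairing in exponent form: e(G^a, H^b) = T^(a*b)
    return (a * b) % p

def tprod(terms):      # product in T = addition of exponents
    return sum(terms) % p

def tpow(t, g):        # exponentiation in T = multiplication of exponents
    return (t * g) % p

n, l = 3, 2
X = [rnd.randrange(1, p) for _ in range(n)]
Y = [rnd.randrange(1, p) for _ in range(l)]
gamma = [[rnd.randrange(p) for _ in range(l)] for _ in range(n)]

# exponent-bearing part of equation (2): prod_{i,j} e(X_i, Y_j)^{gamma_ij}
orig = tprod([tpow(e(X[i], Y[j]), gamma[i][j])
              for i in range(n) for j in range(l)])

# rewrite as in (3): auxiliary Ytil_ij := Y_j^{gamma_ij}, then plain pairings
Ytil = [[(Y[j] * gamma[i][j]) % p for j in range(l)] for i in range(n)]
new = tprod([e(X[i], Ytil[i][j]) for i in range(n) for j in range(l)])

assert orig == new     # same element of T
# the n*l auxiliary equations e(G, Ytil_ij) = e(G^{gamma_ij}, Y_j); G has exponent 1
assert all(e(1, Ytil[i][j]) == e(gamma[i][j], Y[j])
           for i in range(n) for j in range(l))
```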
Our second building block are structure-preserving signatures [1], which were
designed to be combined with GS proofs: their keys, messages and signatures
consist of elements from G and H and signatures are verified by evaluating
PPEs. GS proofs let us prove knowledge of keys, messages, and/or signatures
which satisfy verification, without revealing anything beyond this fact.
Our construction now follows the blueprint of the generic scheme in Figure 3.
The setup creates a CRS for GS proofs and a key pair (mvk, msk) for a
structure-preserving scheme Sigsp. (Note that here we need not encrypt any
witnesses like in the generic construction, since GS proofs are extractable.)
We transform every PPE E contained in a policy to a set of equations
E^(no-exp) without exponents. The policies can thus be expressed as sets of
group elements describing the equations E^(no-exp), which can be signed by Sigsp.

² This is a simulatable pairing-product equation, that is, one for which
Groth-Sahai proofs can be made zero-knowledge.
A signing key is a signature on the policy under msk and signing is done by
choosing a one-time signature key pair (ovk, osk), proving a statement analogous
to (1) and signing the proof and the message with osk. A further technical
obstacle is that we need to express the disjunction in the statement to be proven
as (a conjunction of) sets of PPEs. We achieve this by following Groth’s approach
in [30]. The details of the construction can be found in [7].
A simple use case. Messages that are elements of bilinear groups and policies
demanding that they satisfy PPEs will prove useful to construct other cryptographic schemes like group signatures. Yet, our pairing-based construction might
seem too abstract for deploying PBS to manage signing rights in a company—one
of the motivations given in the introduction.
However, consider the following simple example: A company issues keys to
their employees which should allow them to sign only messages h∥m that start
with a particular header h. (E.g. h could be "Contract with company X", so
employees are limited to signing contracts with X.) This can be implemented by
mapping messages h∥m to (F(h), F(m)) via a collision-resistant hash function
F : {0, 1}^* → G. (E.g. first hash to Z_p via some f and then set F(x) = G^{f(x)}.)
The policy p* requiring messages to start with h* can then be expressed as
PC((p*, h∥m)) = 1 ⇔ e(F(h*), H) · e(F(h), H^{-1}) = 1.
Another option would be to additionally demand that an employee hold a
credential (verified via PPEs), which she must use as a witness when signing.
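A toy version of this header check can be scripted with the same discrete-log trick as before (elements kept in exponent form, pairing = multiplication, product in T = addition of exponents). The 0-byte separator used to encode h∥m and all function names are our own illustrative choices, and the mock is insecure:

```python
import hashlib

q = 2**31 - 1                      # mock prime group order

def f(x: bytes) -> int:            # hash {0,1}* -> Z_q
    return int.from_bytes(hashlib.sha256(x).digest(), "big") % q

def F(x: bytes) -> int:            # F(x) = G^{f(x)}, kept in exponent form
    return f(x)

def e(a: int, b: int) -> int:      # mock pairing on exponents
    return (a * b) % q

H_gen = 1                          # generator H of the second group

def pc(header_star: bytes, msg: bytes) -> bool:
    h, _sep, _m = msg.partition(b"\x00")      # parse h || m (0-byte separator)
    # e(F(h*), H) * e(F(h), H^{-1}) = 1_T  <=>  f(h*) - f(h) = 0 (mod q)
    return (e(F(header_star), H_gen) + e(F(h), -H_gen)) % q == 0
```

The check passes exactly when the message header hashes to the same group element as the mandated header h*, which for a collision-resistant F means h = h*.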
## 5 Applications and Implications
Here we illustrate how PBS can provide a unifying framework for work on advanced forms of signatures and beyond, capturing some primitives as special
cases and allowing others to be derived in simple and natural ways. Below we
show how PBS allows one to easily obtain group signatures [10]. In [7] we show
that PBS also implies signatures of knowledge [21] and attribute-based signatures [34].
These applications are illustrative rather than exhaustive.
Section 4.1 shows which primitives are sufficient for policy-based signatures.
We now ask the converse question, namely which primitives are necessary, that
is, which fundamental cryptographic primitives are implied by PBS? In [7] we
show that PBSs imply simulation-extractable NIZKs and IND-CPA encryption.
By a result of [39] they thus imply IND-CCA public-key encryption. The sufficient
assumptions we make in our constructions of Section 4.1 are thus also necessary.
CCA-Secure Group Signatures from PBS. Group signatures [22] let members sign anonymously on behalf of a group. To deter misuse, the group manager
holds a secret key which can open signatures, that is, reveal the member that
made the signature. As defined in [10], a group-signature scheme GS is a 4-tuple
of PT algorithms. On input 1^λ and the group size 1^n, key generation algorithm
GKg returns the group public key gpk, the manager's secret key gmsk and a
vector of member secret keys gsk. On input gsk[i] and a message m ∈ {0, 1}^*,
signing algorithm GSig returns a group signature γ by member i on m. On input
gpk, m and γ, verification algorithm GVf outputs a bit. On input gmsk, m and γ,
the opening algorithm Open returns an identity i ∈ [n] or ⊥.
Full anonymity requires that an adversary cannot decide which of two group
members of its choice produced a group signature, even when given an oracle
that opens any other signature. Traceability means that an adversary, which is
allowed to corrupt users, cannot produce a group signature which opens to a
user that was not corrupted. (We give a formal definition in [7].)
We now construct group signatures from CCA-secure public-key encryption
and PBS. Since the former can be constructed from PBS (as we show in [7]),
this means that PBS implies group signatures. The main idea is to define a
group signature as a ciphertext plus a PBS. When making a group signature
on a message m, a member is required to encrypt her identity as c and then
sign (c, m). This is enforced by issuing to the member a PBS key whose policy
ensures that c must be an encryption of the member's identity. Let PKE =
(KeyGenpke, Enc, Dec) be a public-key encryption scheme satisfying IND-CCA
and let PBS = (Setup, KeyGenpbs, Sign, Verify) be a PBS for the following
NP-relation:

  PC(((ek, i), (c, m)), r) ⇐⇒ c = Enc(ek, i; r).        (4)
(See [7] for an encryption scheme such that (4) lies in the language of our efficient
PBS from Section 4.2.) In [7] we show that the following group-signature scheme
satisfies full anonymity and traceability as formalized by [10].
GKg(1^λ, 1^n)
  (pp, msk) ←$ Setup(1^λ)
  (ek, dk) ←$ KeyGenpke(1^λ)
  For i = 1, ..., n do
    ski ←$ KeyGenpbs(msk, (ek, i))
    gsk[i] ← (pp, ek, i, ski)
  Return (gpk ← (pp, ek), gmsk ← dk, gsk)

GSig((pp, ek, i, ski), m)
  r ←$ {0, 1}^λ
  c ← Enc(ek, i; r)
  σ ←$ Sign(ski, (c, m), r)
  Return (c, σ)

GVf((pp, ek), m, (c, σ))
  Return Verify(pp, (c, m), σ)

Open(gmsk, m, (c, σ))
  If Verify(pp, (c, m), σ) = 0 then return ⊥
  Return Dec(gmsk, c)
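The control flow of this scheme can be mocked up in a few lines of Python. Real security needs the actual PBS and an IND-CCA PKE; in the sketch below the PBS is replaced by an HMAC under a key that verifiers also hold and the PKE by a hash-pad cipher, so it only demonstrates the shape of sign/verify/open. In particular it enforces no policy and provides no anonymity or unforgeability, and all names are ours:

```python
import hmac, hashlib, os

def keygen(n):
    K = os.urandom(32)            # stands in for the PBS public parameters/keys
    ek = dk = os.urandom(32)      # mock PKE: symmetric, NOT public-key
    gpk, gmsk = (K, ek), dk
    gsk = [(K, ek, i) for i in range(n)]   # member i's "PBS key" for policy (ek, i)
    return gpk, gmsk, gsk

def enc(ek, i, r):                # mock Enc(ek, i; r): XOR identity (< 256) with a pad
    pad = hashlib.sha256(ek + r).digest()[0]
    return bytes([i ^ pad]) + r

def dec(dk, c):
    pad = hashlib.sha256(dk + c[1:]).digest()[0]
    return c[0] ^ pad

def gsig(gsk_i, m):
    K, ek, i = gsk_i
    r = os.urandom(16)
    c = enc(ek, i, r)                                    # encrypt the identity
    sigma = hmac.new(K, c + m, hashlib.sha256).digest()  # "PBS" on (c, m)
    return c, sigma

def gvf(gpk, m, gs):
    K, _ek = gpk
    c, sigma = gs
    return hmac.compare_digest(sigma, hmac.new(K, c + m, hashlib.sha256).digest())

def open_sig(gpk, gmsk, m, gs):
    if not gvf(gpk, m, gs):
        return None
    return dec(gmsk, gs[0])       # the manager decrypts to reveal the signer
```

Only the holder of gmsk can open, while anyone holding gpk can verify; in the real construction it is the PBS policy (4) that forces c to encrypt the signer's true identity.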
## 6 Delegatable Policy-Based Signatures
In an organization, policies may be hierarchical, reflecting the organization structure. Thus, a president may declare a high-level policy to vice presidents and
issue keys to them. Each of the vice presidents augments the policy with their
own sub-policies for managers below them, and so on. To support this, we extend
PBS to allow delegation. We define and achieve delegatable policy-based signatures, where a user holding a key for some policy can delegate her key to another
user and possibly restrict the associated policy. We formalize this by associating
keys to vectors of policies and require that keys can (only) sign messages which
are allowed under all policies associated to the key. In order to restrict the policy
at delegation, users can add policies to the associated vector.
Consider the following simple use case: A company issues a key to a manager
Alice which enables her to sign contracts with companies X, Y and Z. Now Bob
is negotiating a contract with Z on behalf of Alice, so she gives Bob a key that
only lets him sign contracts with Z.
In [7] we provide a syntax and definitions of UF and IND, as well as SIM and
EXT, which are straightforward generalizations of those for PBS. However, we
strengthen IND by letting the adversary (who obtains msk) construct the keys
under one of which the experiment makes a signature. This ensures that when
Alice delegates different keys to Bob and Carol, she will not be able to tell by
whom a message was signed. Analogously, we let the adversary choose the key
in SIM.
With regard to a construction, we note that in the PBS schemes in Figures 3
and 4, a signing key skp is simply a signature from the authority on the associated
policy p. We add delegation to PBS by replacing the signature with an append_only signature [33]. These signatures allow anyone holding a signature on a_
message p to create a signature on p∥p[′] for any p[′]. One can thus append a new
part to a signed message, but this is the only transformation allowed. Appendonly signatures can be constructed from any signature scheme. Holding a key,
which is a signature on a vector of policies p, a user can delegate the key after
(possibly) appending a new policy.
Due to space constraints, the definitions as well as the constructions are deferred to the full version [7].
**Acknowledgments. Mihir Bellare was supported in part by NSF grants CNS-**
1228890, CNS-1116800, CNS-0904380 and CCF-0915675. Georg Fuchsbauer was
supported by the European Research Council, ERC Starting Grant (259668-PSPC); part of his work was done while at Bristol University, supported by
EPSRC grant EP/H043454/1.
## References
1. Abe, M., Fuchsbauer, G., Groth, J., Haralambiev, K., Ohkubo, M.: Structure-preserving signatures and commitments to group elements. In: Rabin, T. (ed.)
CRYPTO 2010. LNCS, vol. 6223, pp. 209–236. Springer, Heidelberg (2010)
2. Agrawal, S., Gorbunov, S., Vaikuntanathan, V., Wee, H.: Functional encryption:
New perspectives and lower bounds. In: Canetti, R., Garay, J.A. (eds.) CRYPTO
2013, Part II. LNCS, vol. 8043, pp. 500–518. Springer, Heidelberg (2013)
3. Backes, M., Meiser, S., Schr¨oder, D.: Delegatable functional signatures. Cryptology
ePrint Archive, Report 2013/408 (2013)
4. Bagga, W., Molva, R.: Policy-based cryptography and applications. In: S. Patrick,
A., Yung, M. (eds.) FC 2005. LNCS, vol. 3570, pp. 72–87. Springer, Heidelberg
(2005)
5. Barbosa, M., Farshim, P.: On the semantic security of functional encryption
schemes. In: Kurosawa, K., Hanaoka, G. (eds.) PKC 2013. LNCS, vol. 7778, pp.
143–161. Springer, Heidelberg (2013)
6. Belenkiy, M., Chase, M., Kohlweiss, M., Lysyanskaya, A.: P-signatures and non-interactive anonymous credentials. In: Canetti, R. (ed.) TCC 2008. LNCS, vol. 4948,
pp. 356–374. Springer, Heidelberg (2008)
7. Bellare, M., Fuchsbauer, G.: Policy-based signatures. Cryptology ePrint Archive,
Report 2013/413 (2013)
8. Bellare, M., Goldwasser, S.: New paradigms for digital signatures and message
authentication based on non-interactive zero knowledge proofs. In: Brassard, G.
(ed.) CRYPTO 1989. LNCS, vol. 435, pp. 194–211. Springer, Heidelberg (1990)
9. Bellare, M., Meiklejohn, S., Thomson, S.: Key-versatile signatures and applications: RKA, KDM and Joint Enc/Sig. Cryptology ePrint Archive, Report 2013/326
(2013)
10. Bellare, M., Micciancio, D., Warinschi, B.: Foundations of group signatures: Formal
definitions, simplified requirements, and a construction based on general assumptions. In: Biham, E. (ed.) EUROCRYPT 2003. LNCS, vol. 2656, pp. 614–629.
Springer, Heidelberg (2003)
11. Bellare, M., O’Neill, A.: Semantically-secure functional encryption: Possibility results, impossibility results and the quest for a general definition. In: Abdalla, M.,
Nita-Rotaru, C., Dahab, R. (eds.) CANS 2013. LNCS, vol. 8257, pp. 218–234.
Springer, Heidelberg (2013)
12. Bellare, M., Rogaway, P.: Random oracles are practical: A paradigm for designing
efficient protocols. In: Ashby, V. (ed.) ACM CCS 1993, pp. 62–73. ACM Press
(November 1993)
13. Bellare, M., Rogaway, P.: The security of triple encryption and a framework for code-based game-playing proofs. In: Vaudenay, S. (ed.) EUROCRYPT
2006. LNCS, vol. 4004, pp. 409–426. Springer, Heidelberg (2006)
14. Bender, A., Katz, J., Morselli, R.: Ring signatures: Stronger definitions, and constructions without random oracles. In: Halevi, S., Rabin, T. (eds.) TCC 2006.
LNCS, vol. 3876, pp. 60–79. Springer, Heidelberg (2006)
15. Boneh, D., Sahai, A., Waters, B.: Functional encryption: Definitions and challenges.
In: Ishai, Y. (ed.) TCC 2011. LNCS, vol. 6597, pp. 253–273. Springer, Heidelberg
(2011)
16. Boneh, D., Waters, B.: Constrained pseudorandom functions and their applications.
Cryptology ePrint Archive, Report 2013/352 (2013)
17. Boyen, X.: Mesh signatures. In: Naor, M. (ed.) EUROCRYPT 2007. LNCS,
vol. 4515, pp. 210–227. Springer, Heidelberg (2007)
18. Boyle, E., Goldwasser, S., Ivan, I.: Functional signatures and pseudorandom functions. Cryptology ePrint Archive, Report 2013/401 (2013)
19. Camenisch, J., Lysyanskaya, A.: An efficient system for non-transferable anonymous credentials with optional anonymity revocation. In: Pfitzmann, B. (ed.) EUROCRYPT 2001. LNCS, vol. 2045, pp. 93–118. Springer, Heidelberg (2001)
20. Chase, M., Kohlweiss, M., Lysyanskaya, A., Meiklejohn, S.: Malleable signatures:
Complex unary transformations and delegatable anonymous credentials. Cryptology ePrint Archive, Report 2013/179 (2013)
21. Chase, M., Lysyanskaya, A.: On signatures of knowledge. In: Dwork, C. (ed.)
CRYPTO 2006. LNCS, vol. 4117, pp. 78–96. Springer, Heidelberg (2006)
22. Chaum, D., van Heyst, E.: Group signatures. In: Davies, D.W. (ed.) EUROCRYPT
1991. LNCS, vol. 547, pp. 257–265. Springer, Heidelberg (1991)
23. De Caro, A., Iovino, V., Jain, A., O’Neill, A., Paneth, O., Persiano, G.: On the
achievability of simulation-based security for functional encryption. In: Canetti, R.,
Garay, J.A. (eds.) CRYPTO 2013, Part II. LNCS, vol. 8043, pp. 519–535. Springer,
Heidelberg (2013)
24. De Santis, A., Micali, S., Persiano, G.: Non-interactive zero-knowledge proof systems. In: Pomerance, C. (ed.) CRYPTO 1987. LNCS, vol. 293, pp. 52–72. Springer,
Heidelberg (1988)
25. Diffie, W., Hellman, M.E.: New directions in cryptography. IEEE Transactions on
Information Theory 22(6), 644–654 (1976)
26. Dodis, Y., Haralambiev, K., López-Alt, A., Wichs, D.: Efficient public-key cryptography in the presence of key leakage. In: Abe, M. (ed.) ASIACRYPT 2010. LNCS,
vol. 6477, pp. 613–631. Springer, Heidelberg (2010)
27. Feige, U., Shamir, A.: Witness indistinguishable and witness hiding protocols. In:
22nd ACM STOC, pp. 416–426. ACM Press (May 1990)
28. Fuchsbauer, G., Pointcheval, D.: Anonymous proxy signatures. In: Ostrovsky, R.,
De Prisco, R., Visconti, I. (eds.) SCN 2008. LNCS, vol. 5229, pp. 201–217. Springer,
Heidelberg (2008)
29. Goldwasser, S., Micali, S., Rivest, R.L.: A digital signature scheme secure against
adaptive chosen-message attacks. SIAM Journal on Computing 17(2), 281–308
(1988)
30. Groth, J.: Simulation-sound NIZK proofs for a practical language and constant size
group signatures. In: Lai, X., Chen, K. (eds.) ASIACRYPT 2006. LNCS, vol. 4284,
pp. 444–459. Springer, Heidelberg (2006)
31. Groth, J., Sahai, A.: Efficient non-interactive proof systems for bilinear groups. In:
Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 415–432. Springer,
Heidelberg (2008)
32. Kiayias, A., Papadopoulos, S., Triandopoulos, N., Zacharias, T.: Delegatable pseudorandom functions and applications. Cryptology ePrint Archive, Report 2013/379
(2013)
33. Kiltz, E., Mityagin, A., Panjwani, S., Raghavan, B.: Append-only signatures. In:
Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP
2005. LNCS, vol. 3580, pp. 434–445. Springer, Heidelberg (2005)
34. Maji, H.K., Prabhakaran, M., Rosulek, M.: Attribute-based signatures. In: Kiayias,
A. (ed.) CT-RSA 2011. LNCS, vol. 6558, pp. 376–392. Springer, Heidelberg (2011)
35. Mambo, M., Usuda, K., Okamoto, E.: Proxy signatures for delegating signing operation. In: ACM CCS 1996, pp. 48–57. ACM Press (March 1996)
36. Matt, C., Maurer, U.: A constructive approach to functional encryption. Cryptology ePrint Archive, Report 2013/559 (2013)
37. O’Neill, A.: Definitional issues in functional encryption. Cryptology ePrint Archive,
Report 2010/556 (2010)
38. Rivest, R.L., Shamir, A., Tauman, Y.: How to leak a secret. In: Boyd, C. (ed.)
ASIACRYPT 2001. LNCS, vol. 2248, pp. 552–565. Springer, Heidelberg (2001)
39. Sahai, A.: Non-malleable non-interactive zero knowledge and adaptive chosen-ciphertext security. In: 40th FOCS, pp. 543–553. IEEE Computer Society Press
(October 1999)
-----
# THE IMPACT OF DIGITAL MONEY ON MONETARY AND FISCAL POLICY

## Uticaj digitalnog novca na monetarnu i fiskalnu politiku

**Dušan Vujović**
Metropolitan University
FEFA – Faculty of Economics, Finance and Administration
Belgrade

UDK: 336.7:004, 336.02, 338.23:336.74
DOI: 10.5937/EKOPRE2302065V
Date of Receipt: January 24, 2023

### Abstract
Digital money era is in full swing. It has already changed the structure
of the global monetary system. Like industrial revolutions of the past
few centuries, the digital money revolution is based on: (i) new IT and
accounting technology (crypto algorithms, distributed ledger technology,
internet, and deep penetration of smart phones), and (ii) demand for
greater financial inclusion, and for more efficient financial services. The
advent of unregulated private mobile money with more than 4 billion
users and trillions of dollars in financial transaction has awakened fears of
monetary system instability and dwindling traction of the old monetary
and fiscal policy. The response has been a relentless effort by more than
100 central banks around the world to develop a public digital currency.
Retail CBDCs issued by central banks will be available to everybody to
provide stability and liquidity to the financial system in times of need.
There will be uncertainties and challenges regarding the conduct of
monetary and fiscal policy. Many expected improvements will come with
inevitable tradeoffs in the speed and effectiveness of monetary policy
transmission, and in achieving greater fiscal transparency without violating
individual rights and privacy. Serbia will benefit greatly from improved
fiscal transparency and reduced shadow economy associated with digital
money revolution. At the same time it will be vulnerable to currency
substitution pressures from future digital Euro and reduced traction of
monetary policy in the presence of multiple e-money flows. Timely legal
preparations for bank-led mobile money and central bank digital cash,
and applied research of complex future policy risks is strongly advised.
**Keywords:** crypto-assets, bitcoin, stablecoin, e-money, mobile money, CBDC, monetary policy, fiscal policy

### Sažetak

Era digitalnog novca je u punom zamahu. Već je promenila strukturu
globalnog monetarnog sistema. Kao industrijske revolucije tokom
prošlih nekoliko vekova, i ova digitalna revolucija novca zasnovana je
na: (i) novim IT računovodstvenim tehnologijama (kripto algoritmima,
decentralizovanom računovodstvu, internetu i dubokoj penetraciji
pametnih mobilnih telefona) i (ii) očekivanjima veće finansijske inkluzije
i tražnji za efikasnijim finansijskim uslugama. Pojavljivanje neregulisanog
privatnog mobilnog novca koji danas već ima 4 milijarde korisnika i
trilione dolara u finansijskim transakcijama probudilo je opravdani strah
o mogućoj nestabilnosti monetarnog sistema pri opadajućoj efikasnosti
stare monetarne i fiskalne politike. Odgovor je ogroman napor više od
100 centralnih banaka u svetu da razviju javni digitalni novac. Novac koji
bi izdavale centralne banke, tzv. retail CBDC biće dostupan svima radi
održanja stabilnosti i likvidnosti finansijskog sistema u slučaju potrebe.
Sigurno će biti neizvesnosti i izazova u vođenju monetarne i fiskalne
politike u novim uslovima. Mnoga očekivana poboljšanja doneće sa
sobom i neizbežne teškoće u brzini i efektivnosti mehanizama transmisije
monetarne politike, kao i izazove u dostizanju višeg stepena fiskalne
transparentnosti bez narušavanja ličnih sloboda i privatnosti. Srbiji će
digitalni novac doneti poboljšanu fiskalnu transparentnost i smanjenje
sive ekonomije. Istovremeno, Srbija će biti izložena pritiscima eurizacije
posle pojavljivanja digitalnog evra, kao i dejstvu smanjene efektivnosti
monetarne politike u prisustvu višestrukih egzogenih tokova mobilnog
novca. Zato se preporučuju blagovremene pravne reforme neophodne
za uvođenje CBDC i dobro funkcionisanje mobilnog novca u saradnji sa
bankarskim sistemom, kao i primenjena istraživanja budućih složenih
rizika ekonomske politike.
**Ključne reči:** kripto valute, bitkoin, stabilni koin, e-novac, mobilni novac, CBDC, monetarna politika, fiskalna politika
-----
EKONOMIKA PREDUZEĆA
### Introduction
The digital money era is in full swing. Decades-long efforts
to scale down or eliminate cash – the epitome of money
and legal tender – relied on traditional cashless payment
instruments: checks, payment cards, direct account debits,
wire transfers and the like. This slow but persistent tide
of cashless payments has recently been overpowered by
a true digital money tsunami.
The first wave started with bitcoin and other private
_sui generis cryptocurrencies, and quickly expanded into_
crypto-generated stablecoins backed by major currencies
and/or low-risk bonds to counter the excessive volatility
of bitcoin. Privately and anonymously generated crypto
protection, in tandem with clearance and accounting
mechanisms based on distributed ledger technology
(DLT), challenged two quintessential properties of the
regulated two-tier banking system. These were to print and
distribute fiat money that is almost free of counterfeiting
risks, and to provide an efficient clearing and accounting
mechanism as a basis for payments and normal functioning
of the economy.
Despite providing alternative safety features and
decentralized payment clearance procedures, the impact of
cryptocurrencies and stablecoins on the long held monopoly
of the banking sector and stability of the financial sector
remained relatively limited due to their small size, high
volatility and lack of widespread acceptance.
The second wave brought on mobile money pioneered
by fin-tech companies and internet trading giants relying on
their dominant position in internet-based retail transactions
and widespread penetration and use of smart phones by
people with limited access to banking services. Instead
of algorithm based ex-ante protection, mobile money
provided security through client registration, prepayment
of minimal balances and strict ex-post enforcement of
payment discipline.
The impact of mobile money on the financial sector
is likely to continue to grow exponentially in line with the
number of users in China, India and Africa, and expected
growth trends in middle and higher income countries
based on reputable providers (Apple Pay, Google Pay,
Pay Pal, Samsung Pay, Venmo, Zelle, etc.). As discussed
by Shirono et al. [32], large and growing shares of private
unregulated and uninsured digital mobile money issued by
mobile network operators (in so-called non-bank mobile
money systems) may pose a stability and regulatory risk
in difficult times if adequate access to liquidity reserves
is not secured.
Once these risks were recognized, the response of the
monetary authorities worldwide was to explore the possibility
of adapting and extending the concept of central bank
money to the requirements of digital money revolution.
In other words, to issue Central Bank Digital Currency (or
CBDC), a digital form of physical currency which has been
printed as legal tender during past centuries. Presently,
almost 100 countries around the world (including the EU)
are exploring the possibility of issuing CBDC that would
best respond to the demands of providing liquidity and
securing stability of the monetary system, while enabling
the conduct of monetary policy in line with mandated
objectives of price stability and employment.
This would complete digital transformation on
the instrument side and pave the way to gradually
eliminating cash and reaching cashless economy and
cashless society in the not so distant future. Many
challenges will have to be addressed along the way
including the issues of financial inclusion and privacy. In
many cases good solutions would depend on our ability
to find and sustain the right balance between positive
and negative effects. Positive developments rendered
by the digital revolution include better access to cheaper
financial services, greater fiscal discipline, improved
procurement and public financial management, tracking
of payments enabling elimination of shadow economy
and illegal activities, etc. Key negative effects include
potential loss of privacy, further financial exclusion of
certain social groups due to old age, limited access to
ITC technology and skills, possible abuse of growing
body of information on individual consumption, social
political and other preferences.
This brings us to the conduct of monetary and fiscal
policy in such a changed environment, the main theme of
the paper addressed in section 4. Before that, in section
2, we briefly review the status of the global financial
sector by looking at key lessons learned from the previous
Global Financial Crisis of 2008. In section 3 we define
and discuss the characteristics of key digital financial
instruments brought by the first wave (cryptocurrencies,
and stablecoins), and second wave (mobile money), as well
as the response of central banks through digital form of
official legal tender money. We offer some concluding
remarks on policy issues and themes for further policy
research of relevance for Serbia in section 5.
### Lessons learned from the Global Financial Crisis
In the wake of the 2008 crisis Stiglitz [33] and Rajan [29]
assessed the crisis as a “financial market failure” caused
by the absence of adequate regulatory framework and
proper risk pricing, with contagion that led to the global
financial crisis and previously unthinkable government
bailout in trillions and trillions of Dollars and huge
economic losses worldwide.
The belief in the efficiency of the financial markets
held by the leading neoliberal economic school and
adopted by key policymakers at the time (Greenspan,
Summers, etc.) was so strong that laws were promulgated
which legally prevented the US monetary and financial
authorities from regulating the growing and increasingly
complex derivatives. The usual assumptions of efficient
markets (perfect competition, perfect information, no
externalities) obviously did not hold in the US and the
increasingly connected global financial sector.
Firstly, because the sector was dominated by large
oligopolistic players not only by the size of their balance
sheets (such as the 13 US megabanks), but also by the
overwhelming influence they had in the government and
legislature through campaign financing and important
policy positions held in the administration and academia.
Secondly, due to large and growing presence of overly
complex multilayer financial instruments where true risk
and performance information were not fully known to
issuers themselves, let alone the clients and the policy
makers. The situation became even more complex after
the wholesale increase in the so called sub-prime lending
instruments based on overly optimistic borrower income
and real-estate price projections, as well as interest rate
and credit risks.
Transition Issues
Thirdly, in the absence of clear regulation and tight
on-site and off-site supervision, megabanks started losing
touch with reality. A glaring example is the stark contrast
between the mere one percent share of AAA corporate
securities and the 60 percent share of AAA “asset-backed
securities”. The first is a “real world rating number” earned
by real corporations confirming their income and profits
in the markets. The second is a fake number attached to
packaged mortgage-backed (or similar) securities “gold-plated” by the packaging company, in this case a megabank.
Interestingly enough Rajan shows [29, p.132] that this does
not necessarily have to be a sham. Through the “magic
of combining diversification with tranching” banks can
create securities of different seniority and, thus, turn
average or even mediocre securities into “repackaged
AAA-rated securities”, since under normal circumstances:
(i) mortgage default probabilities tend to be low,
(ii) incidence of defaults is not correlated since people
default for highly personal (health, family, job
loss) reasons,
(iii) real estate prices do not fall substantially and
across many locations at the same time, and
(iv) interest rates do not abruptly increase and
refinancing conditions do not worsen across the
board.
Rajan provides an example[1] which shows that
if these assumptions hold, as they should in normal
times, commercial and investment banks would not face
significant risks. More specifically, the holder of senior
securities would suffer losses only 1 percent of the time
or less if more than two mortgages are packaged together.
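Under the independence assumption, the arithmetic behind this claim can be sketched in a few lines. The snippet below is an illustrative sketch, not from the paper (the function name and interface are mine): it enumerates default outcomes for n mortgages of $1 face value each and computes how often total losses exceed the junior tranche's face value, i.e. how often the senior security is hit.

```python
from itertools import product

def senior_loss_prob(p_default, n, junior_face):
    """Probability that total losses on n mortgages (face value $1 each,
    independent defaults with probability p_default) exceed the junior
    tranche's face value, so that the senior security suffers a loss."""
    prob = 0.0
    for outcome in product([0, 1], repeat=n):  # 1 = default
        losses = sum(outcome)                  # each default wipes out its $1
        if losses > junior_face:
            p = 1.0
            for d in outcome:
                p *= p_default if d else 1 - p_default
            prob += p
    return prob

# Rajan's two-mortgage deal: the senior security loses only if both
# mortgages default, i.e. 0.1 * 0.1 = 1 percent of the time.
print(senior_loss_prob(0.10, n=2, junior_face=1.0))  # ≈ 0.01
```

Note that the 1 percent figure rests entirely on the independence assumption; once defaults become correlated, as they did by 2007, the senior tranche's loss probability rises sharply.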
But the assumptions did not hold. By 2007 defaults
became more frequent than usual and highly correlated
due to general layoffs. Real estate prices collapsed creating
substantial negative net worth for many house owners.
Programmed interest rate increases based on subprime
clauses made things worse. The conditions in the financial
1 Rajan [29, p. 134] shows how packaging two or more low-quality loans
can produce a AAA-rated security. If, on the basis of two mortgages
(assets) with a face value of $1 each and a 10 percent chance of default,
an investment bank structures a deal with one junior security with a
face value of $1 that bears the brunt of losses until they exceed $1, and
one senior security that bears the losses after that, the senior security
suffers losses only when both mortgages default.
EKONOMIKA PREDUZEĆA
market worsened, practically eliminating refinancing
options due to market and liquidity risks.
In short, the financial market faced a perfect storm
caused by regulatory failure, poor management (risk
pricing practices) both at the micro-dealer and corporate
level. The unregulated asset-backed securities and custom
derivatives based thereon were a time bomb. And their
share in the books of major banks in the US and around
the world was way too high.
The questions are: Why did this happen? And how?
The initial departure from the canonic features of the
financial sector was neoliberal drive towards deregulation
of the financial sector during the Reagan administration
in the 1980s. Wages in the financial sector started to grow
relative to other sectors in the economy based on the new
set of wage, bonus and career incentives that favored
performance without properly accounting for risks. Similar
incentive changes happened at the higher management
and corporate levels. Bank mergers in the 1990s created
mega banks that became too influential and ‘too big to
fail’. This further increased appetite for excessive risk
taking at all management and corporate levels as profits
were allowed to be taken out through wages and bonuses,
while losses were hidden in overpriced non-transparent
complex instruments to be picked up by the government
when the inevitable crisis eventually comes.
As Rajan [29, p. 136] notes, it is not surprising that
banks were tempted to create and promote risky mortgage-backed securities in the absence of strict regulatory rules
and supervision practices. But it is truly a puzzle why so
many banks with strong analytical and risk departments
retained those senior securities as the crisis broke out
and the mirage of modeling probabilities crumbled in
the face of reality.
The global financial crisis confirmed that complex
financial markets are neither efficient nor stable without
good nonbiased regulation. Active policies should
moderate (or if needed prevent) the emergence of mega
banks and other financial institutions with ‘too big to fail’
macroeconomic and social consequences. The regulators
must carefully follow the relevant trends and hidden risks
and timely intervene to prevent perfect storm situations
that inevitably lead to massive market failure. Failure to do
so creates huge fiscal cost at the national level and equally
high economic costs and sufferings absorbed at the level
of individuals and vulnerable social and income groups.
Figure 1 shows the cost of the 2008 crisis. During
2007-2008 the financial sector lost more than 1/3 of its
value added. It took more than five years to recover that
loss. Today, financial sector accounts for 8-9 percent of
the US GDP, has the highest wages and excellent key
performance indicators. Despite these successes, it is
**Figure 1: US financial sector value added share (as percent of GDP)**
[Line chart, vertical axis 4.0-10.0 percent; fitted linear trend: y = 0.0002x − 2.6129, R² = 0.42127]
Source: US Bureau of Economic Analysis
important to remember some critical lessons from the
regulatory and policy failures of the previous crises, most
of all, the 2008 global financial crisis.
First, the design of financial sector regulatory
framework and the conduct of monetary and financial
policies are endogenous in their true nature and, hence,
affect the behavior of banks and financial institutions.
Second, the incentive systems and signals may
sometimes lead in the wrong direction or be conflicting,
especially in the presence of risks which have to be properly
factored in while pursuing higher performance in the
presence of complex instruments.
Third, government preference for price stability,
employment and growth, as well as targeted housing
financing must not be interpreted as willingness to be
drawn into expensive bailouts benefiting failed banks
and financial institutions. This is especially relevant at
this time as large fin-tech and other non-bank financial
institutions embark on private digital money creation and
domestic and international payment systems.
Fourth, financial sector reform is inevitable to truly
and consistently implement all lessons learned from
the previous crisis as well as prepare to secure stability
of the new digital forms of money and complement
the system with appropriately designed public digital
currency (presently best known as CBDC or Central
Bank Digital Currency). Aside from new instruments
and payment innovations, the core part of the reformed
financial sector will have to rest on a well-managed
interface between private and public sector regarding
both regulatory and policy issues.
### Digital money instruments
The digital money revolution, also labeled the “New Era of Digital
Money” [1] and “The Rise of Digital Money” [2],
shares many characteristics with the industrial
revolutions we have seen in the past two centuries. Forces
of change for private digital money included [18]:
A. Technology and infrastructure including but not
limited to:
- crypto algorithms to generate and protect privately
(and anonymously) issued digital money;
- distributed ledger technology (DLT) allowing
decentralized clearance and accounting;
- internet and powerful communication systems; and
- deep penetration of smart phones, tablets and
laptops at user level.
B. Demand for efficient and reliable financial services
and modern service providers including
- payments and transfers (domestic and international,
for small and large amounts),
- investment
C. Responsiveness to consumer behavior and
evolving expectations
D. Potential for higher level of financial inclusion for
- SMEs (entrepreneurs),
- previously un-bankable social and economic
groups, and
- general population and businesses in areas with
poor bank penetration.
### Cryptoassets – Bitcoin
Cryptocurrencies, or crypto-assets as the ECB Task Force
officially calls them, are based on the blockchain concept
published in 2008 under the pseudonym Nakamoto,
whose identity has never been confirmed. Bitcoin, the first
and best known crypto-asset out of some 2,000 issued
thus far, accounts for about 2/3 of the market capitalization
of crypto-assets (based on [7]). In the absence of a formal
definition, bitcoin is a crypto-asset with a decentralized
trading and clearing system. It is issued based on strict
cryptographic rules regarding ownership of both existing
and new units.
Crypto-assets are relatively small (about 2 percent
of EU money aggregates), have limited acceptance and
low penetration due to, among other factors, very high
volatility.
As a result, bitcoin and crypto-assets in general
have had a very limited impact on monetary aggregates
and monetary policy thus far. Officially, crypto-assets are
not considered part of broad money as they did not …
“perform the basic functions of money as unit of account,
a medium of exchange and a store of value … prices of
goods and services are not quoted in any cryptocurrency
anywhere … the number of transactions in Bitcoin is
modest. At the same time, the mining process is energy
intensive …” [7, p. 4].
### Stablecoin
By contrast, stablecoins also utilize crypto-algorithms and
DLT but limit volatility by having a credible custodian
and by being fully backed by a major currency (Dollar
or Euro) or low risk securities.
As long as the share of national stablecoins remains
small, and they are backed by stable major currencies, their
impact on monetary policy and transmission channels is
likely to be small and neutral. In the unlikely case of a
strong global stablecoin, which may provide incentives
or otherwise induce commodity exporters and/or energy
importers to fix prices in such stablecoin, this could
impose constraints on the conduct of domestic price
stabilization policies.
### e-Money or mobile money
Based on Shirono et al. [32], large fin-tech companies are
leading the digital money revolution. Mobile money or
e-money is their flagship instrument which can be acquired
through a very simple registration procedure at one of
the local provider shops of Mobile Network Operators (MNOs).
Users must have a simple smart phone and some money
to deposit on the mobile account. It does not require a
banking account. Based on the online database maintained
by GSMA (Global System for Mobile Communications)
and the IMF-held FAS (Financial Access Survey), mobile
money presently offers more access points globally than
the traditional banking sector.
Based on GSMA data, the number of registered mobile
money accounts in the world (excluding China) increased
exponentially from 134 million in 2012 to 1.35 billion
in 2021: a tenfold increase. During the same period, the
number of active mobile accounts increased even faster,
from 62 million to 864 million, almost 14 times.
The value of transactions reached one trillion USD
in 2021, a 31% increase over 2020. By type of transaction,
person-to-person (P2P) transactions were the highest with
USD 387 billion (37%), followed by Cash-In payments
with USD 261 billion (25%) and Cash-Out withdrawals
of USD 178 billion (17%). The fastest growing mobile
money transactions were payments to merchants (94%
increase over 2020) and international remittances (48%)
indicating a diversification into areas that used to be
dominated by payment cards and international wire
transfers, respectively.
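The growth multiples and transaction shares quoted above can be checked with simple arithmetic. The sketch below uses the rounded figures from the text (variable names are mine); small deviations from the quoted percentages reflect the rounding of the headline ~USD 1 trillion total.

```python
# GSMA figures quoted in the text (rounded), excluding China
registered_start, registered_2021 = 134e6, 1.35e9   # registered accounts
active_start, active_2021 = 62e6, 864e6             # active accounts

print(f"registered accounts: x{registered_2021 / registered_start:.1f}")  # tenfold
print(f"active accounts:     x{active_2021 / active_start:.1f}")          # almost 14x

# 2021 transaction mix against the ~USD 1 trillion total value
total = 1.0e12
for name, value in [("P2P", 387e9), ("Cash-In", 261e9), ("Cash-Out", 178e9)]:
    print(f"{name}: {100 * value / total:.1f}% of total value")
```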
Additionally, mobile money is usually only one of
the growing array of expanding digital financial services
offered by Fin-Tech companies (also known as non-banking
financial institutions), telecoms, and other related companies.
The number of mobile money users has been growing
exponentially over the past decade. In addition to Africa
known as the cradle of mobile money (M-Pesa), e-money
has been expanding fast in Asia (China, India) providing
services to billions of people seeking reliable, efficient
(inexpensive) and widely accepted payment services for
literally trillions of small value transactions daily.
Mobile money is a safe, simple and efficient (affordable)
form of digital money that provides all functions of
money: unit of account, stable store of value and medium
of exchange. It provides easy access to most people, and
guarantees simple and inexpensive payments and transfers,
including remittances. From the monetary statistics point
of view, mobile-money outstanding balances are a part
of broad money, and thus affect the value and quality
of monetary aggregates, as well as the characteristics of
so-called transmission channels of monetary policy. The
reporting of changes in mobile-money balances depends
on the dominant business model and the applicable
regulatory framework. Over the last 5-6 years mobile money
balances have increased significantly in all African and
Asian countries where e-money represents a significant
portion of broad money.
It should be stressed that mobile banking is very
different from mobile-money or e-money. In mobile
banking, users access their bank account using custom
application software installed on their smart phones. All
transactions in mobile banking are performed on the client’s
bank account. Smart phones are just used to remotely
access bank account and initiate those transactions. In
mobile money, transactions are done directly peer-to-peer between registered and authenticated users based
on previously deposited balances on the payee side and
legitimate payments (for goods or services) and transfers.
Individual bank accounts are not needed to perform
mobile-money transactions.
So far three major business models have emerged
in the so-called Mobile Money Ecosystem. Shirono et al.
[32] identify two major models:
The original “MNO-led model” was created by
major mobile network operators (MNO) such as M-Pesa
launched by Safaricom in Kenya, Vodafone in Tanzania,
and Globe Telecom in the Philippines. No bank accounts or prior
credit history are needed to become mobile-money client.
“Bank-led model” is initiated by banks but relies on
MNOs to manage the network and financial services based
on mobile phones. Irrespective of bank involvement, no
bank account is needed to become a client.
The third model is a “Fin-Tech-led model” where
providers of financial/payment services initiate the mobile-money operation. These include some of the presently
largest mobile-money providers such as AliPay, WeChat
Pay, Apple Pay, Google Pay, PayPal, etc.
The MNO and Fin-Tech led models share many
common features and can be merged into a “non-bank-led model”.
Five essential functions have been identified in each
of the models:
- Network service provider role is usually carried out
by one or more MNOs;
- Mobile money agents provide direct contact with
present and future customers; The network of agents
is supported by MNOs and payment providers/Fin-Tech companies, as well as banks in the “bank-led
model”;
- Payment service provider is responsible for front
end interface with agents and customers, back-end
processing and, most importantly, for payment
clearance and settlement; Payment services can be
provided by MNOs, FinTech companies, as well as
banks in the “bank-led model”;
- Mobile money issuer who holds the liability for
mobile money and guarantees the conversion of
mobile money balances back to cash/legal tender
when demanded; In the “non-bank led model” the
issuer can be MNO or FinTech company, and in the
“bank-led model” the issuer can only be the bank; and
- Deposit holder (usually a bank in all models) is
responsible for funds deposited/pre-paid by mobile
money customers.
A variant of the “bank-led model”, labeled the “narrow
bank model”, has been created in India. It allows the formation
of so-called “payment banks” under existing banking laws
and the regulatory environment, with a limited set of financial
services. Eligible MNOs or Fin-Techs can obtain a limited
banking license which allows them to accept deposits, issue
ATM and debit cards, offer payments and other financial
services excluding lending. Restrictions also apply on
the placement of deposits requiring that 3/4 of demand
deposits be invested in low risk government securities or
treasury bills with up to one year maturity, and 1/4 held
with commercial banks as minimal operational liquidity.
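The placement rule described above is simply a fixed split of demand deposits. The sketch below is illustrative only (the function name and labels are mine, not from any regulation text):

```python
def payment_bank_placement(demand_deposits: float) -> dict:
    """Split demand deposits per the Indian narrow-bank rule quoted
    above: 3/4 into low-risk government securities or treasury bills
    (up to one year maturity), 1/4 held with commercial banks as
    minimal operational liquidity. Lending is excluded entirely."""
    return {
        "govt_securities_or_tbills": 0.75 * demand_deposits,
        "commercial_bank_liquidity": 0.25 * demand_deposits,
    }

print(payment_bank_placement(1_000_000.0))
```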
Similar rules have evolved in other countries with
significant share of mobile money in monetary aggregates
to preserve financial stability and allow liquidity
interventions in cases of a financial crisis due to external
shocks or “runs”. The remaining concerns that apply at
times of severe liquidity and financial crisis have led to
proposals for the introduction of CBDCs discussed in the
next subsection.
RBI, the central bank of India, has also pioneered
the Unified Payments Interface (UPI) as an enhancement to the
mobile money system, allowing some 400 million users in
rural areas with older telephones (without smart phone
features) to join mobile money and access modern payment
services. To further increase financial inclusion, RBI has
also sponsored Unstructured Supplementary Service
Data (USSD) as another cashless option for those who
do not own or carry any phone or tablet, and do not have
access to internet. On the higher end, RBI supported the
development of Immediate Payment Service for users with
mobile money accounts also registered for mobile banking.
### Central bank digital money – CBDC
Unprecedented growth of mobile money in Africa, South
and East Asia generated 1.35 billion users worldwide in 2021.
This number more than doubles when supplemented
by the missing numbers for China (1.3 billion for AliPay
and 900 million for WeChat Pay), and corrected for under-reported users in Europe and North America (as suggested
by data of major mobile money operators such as Apple
Pay, Google Pay, PayPal, Samsung Pay and Venmo). With
fast increasing value of e-money transactions and growing
balances, mobile money proved to be very convenient and
a reliable unit of account for billions of users.
Adrian et al. [1] ask a critical question: How stable
is e-money compared to other competing forms of money
(crypto-assets, stablecoins, commercial bank deposit
money, cash or CBDC)?
First, e-money is exposed to liquidity risk which
depends directly on the market liquidity of the asset mix
held by the issuer of mobile money. In normal times this
may not be an issue. In times of financial crisis, however,
the issuer may not be able to convert less liquid assets to
cash fast enough to prevent the “run” in the absence of
central bank liquidity backstop.
Second, e-money is also subject to default risk of
the issuing entity due to losses (bankruptcy) or inability
to meet short-term obligations. In that case, pre-paid funds
in mobile-money accounts could be frozen or seized by
creditors which represents a serious risk with potential
spillovers and damaged reputation.
Third, market risk can affect assets held by an
e-money provider if its net worth becomes negative (i.e.
if losses exceed equity).
Fourth, e-money can also be subject to foreign
exchange risk if some claims are denominated in foreign
currency or a basket of currencies.
With these risks and the high potential for further growth
and widespread adoption, mobile money represents a
major potential challenge for the stability of the monetary
system in case of crisis unless adequate liquidity backstop
solutions can be implemented seamlessly. These could either
be based on limited inclusion of MNO and/or Fin-Tech
companies into the banking system following the “narrow
banking model” introduced in India, or the introduction
of a public digital money issued by the central bank to
which we devote the remainder of this section.
### CBDC research and objectives
Central banks around the world are exploring the possibility
of issuing retail central bank (public) digital money. Based
on January 2023 online tracker data covering 119 countries
around the world, CBDCs have already been launched in
11 countries and piloted in 17. In addition, 39 countries
are at the research stage and 33 at the development stage.
In 15 countries work on CBDCs is inactive at present, and
in 2 countries CBDC work has been cancelled.[2]
A wide range of CBDC objectives is quoted in the ample
literature on the subject. Panetta et al. [27] emphasize that
the primary objective of issuing CBDCs is a necessity to
secure access to public money in an economy increasingly
dominated by private digital money.
In a survey of pragmatic CBDC issues, the US Federal
Reserve [1, pp. 1-2] states that policymakers and staff are
guided by an understanding that CBDCs should:
- provide positive net benefits to the economy (adjusted
for risks and time distribution of effects);
- be more efficient and effective in achieving desired
objectives than alternative instruments;
- complement, rather than abruptly replace, existing
forms of money and methods of financial services;
- protect consumer privacy;
- safeguard against criminal activity; and
- enjoy broad support from a broad range of key
stakeholders.
As recognized early in the debate by Bordo and
Levine [11], CBDCs can be either
- wholesale digital money instrument made available
only to commercial banks, much like the present
central bank reserves, or
- retail digital money instrument available to all
economic agents in an economy, much like central
bank FIAT money (cash or legal tender). Retail
CBDCs can be
- account based or
- token based digital monies.
Both wholesale and retail CBDCs can be interest-bearing,
like deposit money, or non-interest-bearing. This is
presently a heavily debated issue with possible significance
for the conduct of monetary policy, currency substitution,
and the crowding out of commercial bank deposits, with possible
far-reaching consequences for the volume and cost of lending.
2 CBDC Stage of Research and Development, by Country as of January
2023 can be accessed at the Central Bank Digital Currency (CBDC) Tracker
(cbdctracker.org), as well as at the specialized site sponsored by the
Atlantic Council (Central Bank Digital Currency Tracker - Atlantic Council).
Recent research suggests that these effects could be
managed through the design of CBDCs and targeted policy
measures that could limit the size of CBDC holdings, provide
multi-tier remuneration (interest payments) depending on
share of CBDCs in bank portfolios, use of CBDC caps etc.
CBDCs have a positive impact on the stability of
the financial system based on sovereign digital money,
faster and more efficient (cheaper) payments and financial
transactions in general.
One issue that attracted a lot of attention is the
potential impact of CBDC during times of financial crisis
and a potential loss of confidence in commercial banks. The
fact that retail CBDCs can be held with zero financial and
handling cost (unlike cash) may exacerbate runs on banks
if no restrictions are put in place beforehand. Panetta et
al. [27] quote recent research results which indicate that
increased risks of bank runs in the presence of CBDC
can be effectively contained by design features of the
instrument itself, as well as through properly calibrated
safeguards and information on deposit flows enabled by the
tracking properties of digital instruments.
It should be noted that design features and safeguards
also help in sustaining the monetary policy transmission
channels. More research is needed to resolve the dilemma
of CBDC remuneration and constraints on CBDC holdings
in the realistic context of real-life policy choices. The zero lower
bound on interest rates is one such issue; another is the attractiveness
of CBDC as an efficient payment instrument, a form of
investment in times of crisis, and an anchor of price and
financial stability. As Schilling et al. [31] put it, the objectives
of payment efficiency, financial system stability and price
stability cannot all be achieved at the same time.
### Impact on monetary and fiscal policy
Without repeating policy issues already discussed in the
introduction, the section devoted to policy lessons from
the Global financial crisis, and in the context of individual
digital money instruments, this section aims to highlight
some of the key remaining policy issues with high impact
on the effectiveness of monetary and fiscal policy.
The effect of crypto assets on money aggregates is
small primarily because bitcoin and similar crypto assets
do not satisfy the definition of money and are normally not
recorded as addition to broad money. Stablecoins backed
by major currencies may add to the value of monetary
aggregates, but their size remains marginal at present.
Mobile money is officially considered as money which
adds to the size of broad money. The reporting depends
on the business model followed: In “bank-based e-money
models” outstanding balances should automatically be
reported as additions to M2. In “non-bank-based models”
the reporting depends on the specific legal and regulatory
arrangements. The responsibility for reporting can be
placed on banks holding e-money deposits, or MNOs or
Fin-Tech companies issuing e-money. CBDCs are part of
CB money issued in digital form and thus gets reported
in a standard way.
As discussed above, private digital money is a
convenient and efficient way to provide payment and transfer
services. In all respects it is as efficient as or more efficient than
the traditional payment instruments. The effect on the
stability of the monetary system and transmission channels
depends on the inherent financial characteristics of mobile
money issuers. As discussed in the previous section, both
mobile money and CBDCs bring some stability and policy
effectiveness issues. Current research has already identified
a number of design features and safeguards that can help
address main risks in normal times, as well as prevent
“runs” and widespread costs during crisis.
The ongoing research on the impact on transmission
channels is limited by the lack of both adequate models
and empirical evidence. Much of modern monetary
policy wisdom is based on empirical relations as a basis of
evaluating and calibrating the policy interest rate channel
and other instruments at central bank disposal.
Much of the policy discussion surrounding the
development of CBDC instrument is focused on the
challenges that could potentially be caused by currency
substitution. The advent of strong major digital central
bank currencies, such as digital US Dollar or digital Euro
may create incentives for currency substitution in countries
with weaker currencies and macroeconomic fundamentals.
This could trigger a process of digital dollarization or
digital euroization that is faster and deeper than similar
processes observed in the past, based on traditional major
currencies. Excessive currency substitution may adversely
affect domestic monetary policy due to limited control
over domestic liquidity and, hence, less efficient impact
on price stability and real performance.
Currency substitution in the presence of digital
CBDC is not very different from present dual currency
situations faced by many small economies with large
remittances and share of shadow economy. Methods of
dealing with the currency substitution problem may have
to be adapted to much faster financial flows associated
with the dominance of digital currencies. The fact that
most digital monies would leave a trace which could help
fight the shadow economy and illegal economic activity may
actually diminish one of the main drivers of dual currency.
Digital revolution is expected to have a profound
impact on the ease and transaction cost of cross border
payments. This will create considerable savings for
workers’ remittances, SME transactions, trade flows and
international transfers. At the same time, digitalization
of international payments will remove most barriers to
capital flows and make standard policies of “capital account
restrictions” more difficult if not impossible without stark
violations of the spirit of public and private digital monies.
Furthermore, the presence of public CB digital currency
with practically unlimited capital mobility will require
adequate choices regarding foreign exchange rate regime,
and the independence of monetary policy.
On the fiscal side, digital money revolution will
bring a possibility of a major reduction in the shadow
economy based on digital tracking left behind every
transaction (payment or transfer) and much higher level
of transparency of accounting and fiscal/tax reporting.
Carefully drafted laws should increase fiscal transparency
and revenues without violating privacy and personal
information. Challenges in protecting privacy and data
integrity are very serious and merit utmost attention of
the government, the legislature and the broad public.
Digital transactions would also help improve the
efficiency of public spending through transparent and
truly competitive procurement procedures, and monitoring
of public spending effects on the achievement of stated
budget objectives in health, education, social assistance,
and infrastructure investment.
As a result, there will be an improved base for better
public expenditure management based on a multi-year
expenditure framework and program-based budgeting
aligned with development objectives.
Finally, the digital monetary revolution will accelerate
all flows and processes, and pose new challenges in the
areas of monetary and fiscal policy coordination.
Serbia will benefit greatly from improved fiscal
transparency and reduced shadow economy associated
with digital money revolution. Despite significant variation
in the estimates, the shadow economy remains a serious
concern strongly linked to the share of cash transactions
(in both local currency and Euros). All other factors being
equal, a declining share of cash and the growing use of digital
monies with tracking capabilities are likely to bring many
shadow activities in the open, reduce or eliminate underreporting of taxable income and transactions in otherwise
registered businesses, and increase fiscal transparency
on both the revenue and expenditure side of the budget.
To internalize these benefits, Serbia will have to revisit
its tax, budget and procurement laws, and modernize
tax administration to target likely pockets of tax evasion
among large tax payers, and in unregistered and illegal
activities, instead of putting undue pressure on small
and medium size businesses with poorly disguised urge
to collect revenues ignoring social and long-term growth
consequences.
At the same time Serbia will be vulnerable to currency
substitution pressures from future digital Euro due to high
dependence on remittances coming mostly from Euro
area, and the possibly large stock of dual currency in the
country. Furthermore, reduced effectiveness and traction
of monetary policy caused by currency substitution will
be stressed further by: (a) the presence of likely multiple
exogenous e-money flows spreading like wildfire in
many EU and other countries with significant trade and
remittance flows, and (b) inability to fine tune capital flows.
To effectively respond to these challenges Serbia is
best advised to engage in timely legal preparations for
the anticipated needs of a possible (or likely) increase
in “bank-led mobile money” and central bank digital
currency. In parallel, mirroring the initiatives of the ECB,
the BIS and the Fed, Serbia should initiate applied research into
complex future policy risks and seek effective institutional
and policy responses.
### Conclusion
The era of digital money started slowly, on the outskirts
of finance, with privately generated crypto-currencies associated with
extreme volatility. In slightly over a decade, digital money
has spread like wildfire to now include more than 4
billion users of mobile money and to force a quantum change
in central bank money. Paper money, bank notes and
legal tender are on the way out. CBDC will be a digital
reincarnation of central bank money, available at the retail level to
all banks, companies and individuals, providing liquidity
and a public sector backbone to the monetary system.
We will soon live in a brave new world of digital
money. Phrases like “Show me the money” from Jerry
Maguire, “Cash is king” and “Money makes the world
go round” will no longer make sense. Our life will be easier.
Transactions will be faster and cheaper.
There will be uncertainties and challenges regarding the
conduct of monetary and fiscal policy. Many improvements
will come with necessary tradeoffs in the speed and
effectiveness of monetary policy transmission, and the
challenges of achieving greater fiscal transparency without
violating individual rights and privacy.
Serbia will benefit greatly from improved fiscal
transparency and reduced shadow economy associated
with digital money revolution. At the same time it will
be vulnerable to currency substitution pressures from
future digital Euro and reduced traction of monetary
policy in the presence of multiple e-money flows. Timely
legal preparations for bank-led mobile money and central
bank digital cash, together with applied research into complex future
policy risks, are strongly advised.
EKONOMIKA PREDUZEĆA
**Dušan Vujović**
is a Professor of Economics at FEFA (Faculty of Economics, Finance and Administration), Belgrade, and a World
Bank consultant in the areas of macroeconomic policy, fiscal and governance reform, and innovation for
growth. Dr Vujović is a member of WAAS (World Academy of Arts and Sciences). He chairs NALED (National
Alliance for Local Economic Development) Research Council and provides consulting services to various
Serbian and international research and policy institutes. From April 27, 2014 – May 16, 2018 Dr. Vujović held
three ministerial positions in the Government of Serbia: Economy April 2014 - September 2014, Finance
August 2014 - May 2018, and Defence February - March 2016. He received the best Minister of Finance in
Eastern and Central Europe award for 2017. He was a USAID consultant on budget and fiscal reform issues,
and a research fellow at the CASE Institute, Warsaw. Dr Vujović's past career includes various positions at the
World Bank, such as Country Manager for Ukraine, and Co-Director of the Joint Vienna Comprehensive
program for government officials from the transition economies, Lead Economist in the World Bank ECA
region and in the Independent Evaluation Group. He authored and co-authored a number of publications on
macroeconomic policy, development, and institutional reform and transition issues, published as papers in
domestic and international journals, and as chapters in books published by The World Bank, Oxford University
Press, North Holland, Edward Elgar, etc.
**_[big data and cognitive computing](http://www.mdpi.com/journal/bdcc)_**
_Article_
# Large Scale Product Recommendation of Supermarket Ware Based on Customer Behaviour Analysis
**Andreas Kanavos** **[1,2,3,]*, Stavros Anastasios Iakovou** **[1,4], Spyros Sioutas** **[2]** **and Vassilis Tampakas** **[3]**
1 Computer Engineering and Informatics Department, University of Patras, Patras 26504, Greece;
sai1u17@soton.ac.uk
2 Department of Informatics, Ionian University, Corfu 49132, Greece; sioutas@ionio.gr
3 Computer & Informatics Engineering Department, Technological Educational Institute of Western Greece,
Antirrion 12210, Greece; vtampakas@teimes.gr
4 Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, UK
***** Correspondence: kanavos@ceid.upatras.gr
Received: 13 January 2018; Accepted: 3 May 2018; Published: 9 May 2018
**Abstract: In this manuscript, we present a prediction model based on the behaviour of each customer**
using data mining techniques. The proposed model utilizes a supermarket database and an additional
database from Amazon, both containing information about customers’ purchases. Subsequently,
our model analyzes these data in order to classify customers as well as products, being trained and
validated with real data. This model is targeted towards classifying customers according to their
consuming behaviour and consequently proposes new products more likely to be purchased by
them. The corresponding prediction model is intended to be utilized as a tool for marketers so as to
provide an analytically targeted and specified consumer behavior. Our algorithmic framework
and the subsequent implementation employ the cloud infrastructure and use the MapReduce
Programming Environment, a model for processing large data-sets in a parallel manner with a
distributed algorithm on computer clusters, as well as Apache Spark, which is a newer framework
built on the same principles as Hadoop. Through a MapReduce model application on each step of the
proposed method, text processing speed and scalability are enhanced in reference to other traditional
methods. Our results show that the proposed method predicts with high accuracy the purchases of a
supermarket.
**Keywords: Apache Spark; big data; cloud computing; customer behaviour; data analytics; knowledge**
extraction; Hadoop; MapReduce; personalization; recommendation system; supervised learning;
text mining
**1. Introduction**
During the last several years, the vast explosion of Internet data has fueled the development of
Big Data management systems and technologies by companies, such as Google and Yahoo. The rapid
evolution of technology and the Internet has created a huge volume of data at a very high rate [1], deriving
from commercial transactions, social networks, scientific research, etc. The mining and analysis of this
volume of data may be beneficial for humans in crucial areas such as health, economy, national
security and justice [2], leading to more qualitative results. At the same time, the computational and
management costs are quite high due to the ever-increasing volume of data.
Because customer analysis is the cornerstone of all organizations, it is essential for more and more
companies to store their data in large data centers, aiming initially to analyze them and further to
understand how their consumers behave. Concretely, a large amount of information is accessed and
then processed by companies so as to get a deeper knowledge about their products’ sales as well as
_Big Data Cogn. Comput. 2018, 2, 11_ 2 of 19
consumers’ purchases. Owners, from those who run small shops to large organizations, try to record
information that is likely to contain useful data about consumers.
In addition to the abundance of Internet data, the rapid development of technology provides ever
higher quality network services. More to this point, the Internet is utilized by a large number
of users seeking information in every field, such as health, economy, etc. As a result, companies
concentrate on the information users seek and on their personal transactions in order to give their
customers personalized promotions. Moreover, businesses provide their customers with cards that
record every buying detail; this practice has led to a huge amount of data and to search
methods for data processing.
Historically, the collection and processing of data involved several human analysts.
Nowadays, the data volume requires the use of specific methods so as to enable analysts to draw
correct conclusions despite its heavy size. In addition, the increased volume drives these methods to
use complex tools in order to perform automatic data analysis; data collection itself, by contrast,
can now be regarded as a simple process.
The analysis of a dataset can be considered a key aspect in understanding the way that
customers think and behave during a specific period of the year. There are many classification
and clustering methods that can greatly help analysts probe
consumers’ minds. More specifically, supervised machine learning techniques are utilized in the
present manuscript in the process of mass marketing, and more in detail in the specific field of
supermarket ware.
On the other hand, despite all this investment and technology, organizations still continue to
struggle with personalizing customer and employee interactions. It is simply impractical as well as
unsustainable to drive many analytic applications manually, because it takes a long time to produce usable
results in the majority of use cases. In particular, applications cannot generate and then automatically
test the large number of hypotheses that is necessary to fully interpret the volume of data being captured.
To address these issues, new and innovative approaches, which use Artificial Intelligence and
Machine Learning methodologies, now enable accelerated personalization with fewer resources. The
result is more practical and actionable customer insights, as will be shown in the
present manuscript.
According to viral marketing [3], clients influence each other by commenting on specific fields of
e-shops. In other words, one can state that e-shops work like virtual sensors producing a huge amount
of data, i.e., Big Data. In practice, this effect is visible nowadays, as people communicate in real
time and influence each other regarding the products they buy. The aim of our proposed model is the analysis of
every purchase and the proposal of new products for each customer.
This article introduces an extended version of [4], while some of its techniques were also utilized
in [5]. Concretely, a work on modeling and predicting customer behavior using information concerning
supermarket ware is discussed. More specifically, a new method for product recommendation by
analyzing the purchases of each customer is presented; with the use of specific dataset categories,
we were able to classify the aforementioned data and subsequently create clusters.
More to the point, the following steps are performed: initially, the sales
rate is analyzed as a Big Data analytics task with the use of the MapReduce and Spark implementations.
Then, the distances of each customer from the corresponding supermarket are clustered, and accordingly
the prediction of the new products that each customer is more likely to purchase is
implemented separately. Furthermore, a novel framework is introduced with the use of three
well-known techniques, i.e., the Vector Space Model, Term Frequency-Inverse Document Frequency
(Tf-Idf) and Cosine Similarity. Concretely, opinions, reviews and advertisements, as well as other
texts that concern a customer’s connection to supermarkets, are taken into account in order to
measure the customer’s behavior. The proposed framework is based on the measurement of text
similarity, applied on a cloud computing infrastructure.
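To make the text-similarity step concrete, the following minimal sketch (plain Python rather than the distributed implementation described later; the toy token lists are invented for illustration) builds Tf-Idf weight vectors in the Vector Space Model and compares them with Cosine Similarity:

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build one sparse Tf-Idf weight vector (term -> weight) per tokenized document."""
    n = len(docs)
    # document frequency: in how many documents each term appears
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({term: (count / len(doc)) * math.log(n / df[term])
                        for term, count in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0
```

Documents with no terms in common score 0 and identical documents score 1; in the proposed framework such scores would be computed over customer-related texts such as reviews and opinions.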
-----
_Big Data Cogn. Comput. 2018, 2, 11_ 3 of 19
In our approach, we handle the scalability bottleneck of the existing body of research by employing
cloud programming techniques. Existing works that deal with customers’ buying habits as well as
purchases analysis do not address the scalability problem at hand, so we follow a slightly different
approach. In our previous work [4], we followed similar algorithmic approaches, but, in main memory,
while here we extend and adapt our approach in the cloud environment addressing the need for Big
Data processing.
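As a toy illustration of the cloud programming model (not the authors' Hadoop/Spark code; the transaction tuples and their field layout are invented), the sketch below imitates the map, shuffle and reduce phases of MapReduce to aggregate per-product sales:

```python
from collections import defaultdict
from functools import reduce

# Toy transactions: (customer_id, product, quantity) -- invented for illustration.
transactions = [("c1", "milk", 2), ("c2", "milk", 1), ("c1", "bread", 1),
                ("c3", "beer", 6), ("c2", "bread", 2)]

def map_phase(record):
    """Map: emit intermediate (key, value) pairs; here, product -> quantity."""
    _, product, quantity = record
    yield (product, quantity)

def shuffle(pairs):
    """Shuffle: group all intermediate values by key, as the framework does."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(values):
    """Reduce: fold the grouped values into a single result; here, a sum."""
    return reduce(lambda acc, v: acc + v, values, 0)

pairs = (kv for record in transactions for kv in map_phase(record))
totals = {key: reduce_phase(values) for key, values in shuffle(pairs).items()}
# totals now maps each product to its total units sold.
```

On a real cluster the map and reduce phases would run in parallel across nodes, with the framework performing the shuffle; the sequential sketch only shows the data flow.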
Furthermore, the proposed work can be successfully applied to other scenarios as well.
Data mining undoubtedly plays a significant role in the process of mass marketing where a product
is promoted indiscriminately to all potential customers. This can be implemented by allowing the
construction of models that predict a customer’s response given their past buying behavior as well as
any available demographic information [6]. In addition, one key aspect in our proposed work is that it
treats each customer in an independent way; that is, a customer can make a buying decision without
knowing what all the other customers usually buy. In this case, we do not consider the influence that
is usually taken into account in such settings, where friends, business partners and
celebrities often affect customers’ buying habit patterns [7].
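As a deliberately minimal example of such a response model (our own illustrative choice of a Laplace-smoothed frequency estimate, not the classifier used in the paper; the catalogue size parameter is assumed known), a customer's propensity to buy a product can be scored from that customer's own basket history alone, keeping each customer independent:

```python
from collections import Counter

def response_score(history, product, catalogue_size, alpha=1.0):
    """Score a customer's propensity to buy `product` from that customer's own
    past baskets only, via a Laplace-smoothed purchase frequency.
    `catalogue_size` is the number of distinct products on offer."""
    counts = Counter(item for basket in history for item in basket)
    total = sum(counts.values())
    return (counts[product] + alpha) / (total + alpha * catalogue_size)
```

With history `[["milk", "bread"], ["milk"]]` and a four-product catalogue, "milk" scores 3/7 while an unseen product scores 1/7, and the scores over the full catalogue sum to one.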
The remainder of the paper is structured as follows: Section 2 discusses the related work.
Section 3 presents in detail the techniques that have been chosen, while, in Section 4, our proposed
method is described. Additionally, in Section 5, the distributed architecture along with the algorithm
paradigms in pseudocode and further analysis of each step are presented. We utilize our experiments
in Section 6, where Section 7 presents the evaluation experiments conducted and the results gathered.
Ultimately, Section 8 describes conclusions and draws directions for future work.
**2. Related Work**
Driven by real-world applications, managing and mining Big Data have shown to be a challenging
yet very compelling task. While the term “Big Data” literally concerns data volumes, on the other
hand, the HACE (heterogeneous, autonomous, complex and evolving) theorem [8] suggests that Big
Data consist of the following three principal characteristics: huge with diverse and heterogeneous data
sources, autonomous with decentralized and distributed control, and finally complex and evolving
regarding data and knowledge associations. Another definition for Big Data is given in [9], mainly
concerning people and their interactions; more specifically, authors regard the Big Data nature of digital
social networks as the key characteristic. The abundance of information on users’ message interchanges
among peers is not taken lightly; this will aid us in extracting users’ personality information for inferring
social influence behaviour in networks.
The creation as well as the accumulation of Big Data is a fact for a plethora of scenarios nowadays.
Sources like the increasing diversity of sensors, along with the content created by humans, have contributed
to the enormous size and unique characteristics of Big Data. Making sense of this information and
these data has primarily rested upon Big Data analysis algorithms. Moreover, the capability of creating
and storing information nowadays has become unparalleled. Thus, in [10], the gargantuan plethora of
sources that leads to information of varying type, quality, consistency, large volume, as well as the
creation rate per time unit has been identified. The management and analysis of such data (i.e., Big
Data) are unsurprisingly a prominent research direction as pointed out in [11].
In recent years, regarding sales transaction systems, a large percentage of companies maintain
such electronic systems, thus aiming at creating a convenient and reliable environment for their
customers. In this way, retailers are enabled to gather significant information about their customers.
As stated below, since the amount of data is significantly increasing, more and more researchers have
developed efficient methods and rule algorithms for smart market basket analysis [12]. The “Profset
model” is an application that researchers have developed for optimal product selection on supermarket
data. By using cross-selling potential, this model has the ability to select the most interesting products
from a variety of ware. Additionally, in [13], the authors have analyzed and in following designed an
e-supermarket shopping recommender. Researchers have also invented a new recommendation system
where supermarket customers were able to get new products [14]; in this system, matching products
and clustering methods are used in order to provide less frequent customers with new products.
Moreover, new advertising methods based on alternative strategies have been utilized from
companies in order to achieve increasing purchases. Such a model was introduced in [3] having an
analysis of a person-to-person recommendation network with 4 million people along with 16 million
recommendations. The effectiveness of the recommendation network was illustrated by its increasing
purchases. A model regarding a grocery shop for analyzing how customers respond to price and
other point-of-purchase information was created in [15]. Another interesting model is presented
in [16], where authors analyzed the product range effect in purchase data. Specifically, since market
society is affected by two factors, e.g., diversity and rationality in the price system, consumers try
to minimize their spending and in parallel maximize the number of products they purchase.
Thus, researchers invented an analytic framework based on customers’ transaction data where they
found out that customers did not always choose the closest supermarket.
Some supermarkets are too big for consumers to search through and find the desired product.
Hence, in [17], a recommendation system targeted towards these supermarkets has been created;
using RFID technology with mobile agents, authors constructed a mobile-purchasing system.
Furthermore, in [18], the authors presented another recommendation system based on the past actions
of individuals, where they provided their system to an Internet shopping mall in Korea. In point of
fact, in [19], a new method for personalized recommendation is considered, aiming at further effectiveness and
quality, since collaborative methods present limitations such as sparsity. Regarding
Amazon.com, many attributes were used for each customer, including item views and subject
interests, in order to create an effective recommendation system. This view is echoed
throughout [20], where authors analyzed and compared traditional collaborative filtering, cluster
models and search-based methods. In addition, Weng and Liu [21] analyzed customers’ purchases
according to product features and as a result managed to recommend products that are more likely to
fit with customers’ preferences.
Alongside the development of web technologies, the richness of social networks has generated
a huge number of reviews of products and services, as well as opinions on events and individuals.
Consumers have thus become accustomed to consulting other users' reviews before purchasing
a product or service. A further consequence is that businesses are keenly interested
in being aware of the opinions and reviews concerning all of their products or services, so that they
can adjust their promotion and further development accordingly. As previous work on
clustering the opinions emerging in reviews, one can consider the setup presented in [22]. Furthermore,
the emotional attachment of customers to a brand name has been a topic of interest in the
marketing literature in recent years; it is defined as the degree of passion that a customer feels for the brand [23].
One of the main reasons for examining emotional brand attachment is that an emotionally attached
person is highly likely to be loyal and to pay for a product or service [24]. In [25], the authors infer details
of the love bond between users and a brand name from their tweets, and this bond is treated as
a dynamic, ever-evolving relationship. The aim is thus to find those users that are engaged and to rank
their emotional terms accordingly.
Efficient analysis methods in the era of Big Data are a research direction receiving great attention [26].
The perpetual interest in efficient knowledge discovery methods is mainly driven by the nature
of Big Data and the fact that, in most instances, Big Data cannot be handled and processed by current
information systems to extract knowledge. Current experience with Cloud Computing applied
to Big Data usually revolves around the following sequence, as in [27]: preparing a processing
job, submitting the job, waiting an unknown amount of time for results, receiving
feedback on internal processing events and, finally, receiving the results.
Large-scale data such as graph datasets from social or biological networks are commonplace
in applications and require different handling for storing, indexing and mining. One well-known
method to facilitate large-scale distributed applications is MapReduce [28], proposed by Dean
and Ghemawat.
Many research works on measuring similarity among texts on cloud infrastructure have been
proposed over the last several years. Initially, in [29], a method focusing on a MapReduce algorithm
for computing pairwise document similarity in large document collections was introduced. Also, in [30],
a method using the MapReduce model to improve the efficiency of the traditional Tf-Idf algorithm
was created. Along the same line of reasoning, the authors in [31] propose using the Jaccard similarity
coefficient between users of Wikipedia, based on co-occurrence of page edits, with the MapReduce
framework. Further research on large-scale datasets is presented in [32],
where a parallel K-Means algorithm based on the MapReduce framework is proposed.
**3. Preliminaries**
In the current work, three techniques, described in detail below, have been chosen in order to
better illustrate the output of the methodology: the Vector Space Model, Tf-Idf and Cosine Similarity.
These techniques were also utilized in the previous work [5].
_3.1. Vector Space Model_
The Vector Space Model [33] is an algebraic model for the representation of text documents as vectors.
Each term of a document, together with its number of occurrences in the document, can be
represented [34] with this model. For instance, based on a vocabulary V(t), the document d1 = “This is a
vector, this is algebra” could be represented as follows:

V(t) = { 1, t = “this”; 2, t = “is”; 3, t = “a”; 4, t = “vector”; 5, t = “algebra” }

where d1 is the document and tf(t, d_i) is the term frequency of term t in the i-th document.
We consider d1 = (tf(1, d1), tf(2, d1), tf(3, d1), tf(4, d1), tf(5, d1)) = (2, 2, 1, 1, 1).
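The representation above can be sketched in a few lines of Python; the whitespace tokenizer and the function name are our own simplifications:

```python
# Build the term-frequency vector of the example document over a fixed vocabulary.
from collections import Counter

def tf_vector(document, vocabulary):
    """Represent a document as a vector of term frequencies over the vocabulary."""
    counts = Counter(document.lower().replace(",", "").split())
    return [counts[term] for term in vocabulary]

vocabulary = ["this", "is", "a", "vector", "algebra"]
d1 = "This is a vector, this is algebra"
print(tf_vector(d1, vocabulary))  # [2, 2, 1, 1, 1]
```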
_3.2. Tf-Idf_
Tf-Idf (Term Frequency-Inverse Document Frequency) [35] is a numerical statistic that reflects the
significance of a term in a document given a certain corpus. The importance increases proportionally
to the number of times a word appears in the document, but is offset by the frequency of the
word in the corpus. The Tf-Idf algorithm is typically used in search engines, text similarity computation,
web data mining, and other applications [2,36] that often face massive data processing.
According to Li [30], the Tf-Idf measure of a term is calculated using the following Equation (1):
Tf-Idf = (n_{i,j} / |{t ∈ d_j}|) × log(|D| / |{d ∈ D : t ∈ d}|)    (1)
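A minimal Python sketch of Equation (1): the term frequency is the term's count normalized by the document length, and the inverse document frequency is log(|D| / document frequency). The toy corpus of pre-tokenized documents and the function name are illustrative:

```python
import math

def tf_idf(term, doc, corpus):
    """Equation (1): tf = n_ij / |d_j|, idf = log(|D| / |{d in D : t in d}|)."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)  # documents containing the term
    return tf * math.log(len(corpus) / df)

corpus = [["big", "data", "mining"], ["data", "analytics"], ["spark", "mllib"]]
score = tf_idf("data", corpus[0], corpus)  # appears in 2 of 3 documents
```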
_3.3. Cosine Similarity_
Cosine Similarity [37] is a measure of similarity between two vectors of an inner product space that
measures the cosine of the angle between them. Regarding document clustering, different similarity
measures are available, with Cosine Similarity being the most commonly used.
Specifically, for two documents A and B, the similarity between them is calculated by the use of
the following Equation (2):
cos(A, B) = (A · B) / (∥A∥∥B∥) = (∑_{i=1}^{n} A_i × B_i) / (√(∑_{i=1}^{n} (A_i)²) × √(∑_{i=1}^{n} (B_i)²))    (2)
When this measure takes values close to 1, the two documents are nearly identical, while values
close to 0 indicate that there is nothing in common between them (i.e., their document vectors are
orthogonal to each other). Note that the attribute vectors A and B are usually the term frequency
vectors of the documents.
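Equation (2) translates directly into Python; the example vectors below are illustrative and reuse the term-frequency vector of Section 3.1:

```python
import math

def cosine_similarity(a, b):
    """Equation (2): dot product over the product of the vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([2, 2, 1, 1, 1], [2, 2, 1, 1, 1]))  # close to 1.0 (identical)
print(cosine_similarity([1, 0], [0, 1]))                    # 0.0 (orthogonal)
```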
_3.4. MapReduce Model_
MapReduce is a programming model that enables the processing of large datasets on a cluster using
a distributed and parallel algorithm [28]. The MapReduce paradigm is derived from the Map and
Reduce functions of the Functional Programming model [38]. Data processing in MapReduce
is based on partitioning the input data; the partitioned data are processed by a number of tasks on many
distributed nodes. A MapReduce program consists of two main procedures, Map() and Reduce(),
respectively, and is executed in three steps: Map, Shuffle and Reduce. In the Map Phase, the input data
are partitioned and each partition is given as input to a worker that executes the map function.
Each worker processes the data and outputs key-value pairs. In the Shuffle Phase, key-value pairs are
grouped by key and each group is sent to the corresponding Reducer.
Users can define their own Map and Reduce functions depending on the purpose of their
application. The input and output formats of these functions are simply key-value pairs. Using this
generic interface, the user can focus solely on their own problem; they do not have to care about how the
program is executed over the distributed nodes, fault tolerance, memory management,
and so forth. The architecture of the MapReduce model is depicted in Figure 1a. Apache Hadoop [39] is
a popular open-source implementation of the MapReduce model. It allows storage and large-scale
data processing across clusters of commodity servers [40]. The innovative aspect of Hadoop is that
expensive, high-end hardware is not strictly necessary; instead, it enables distributed parallel
processing of massive amounts of data [41] on industry-standard servers, with high scalability for both
data storage and processing.
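The Map, Shuffle and Reduce phases described above can be simulated in a single process; the following word-count sketch (with hypothetical function names) mirrors the three phases:

```python
from collections import defaultdict

def map_phase(doc_id, document):
    """Map: emit a ((term, docId), 1) pair for every term occurrence."""
    return [((term, doc_id), 1) for term in document.split()]

def shuffle(pairs):
    """Shuffle: group emitted values by key, as the framework would."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the grouped values for each key."""
    return key, sum(values)

docs = {"d1": "big data big ideas", "d2": "big clusters"}
pairs = [p for doc_id, text in docs.items() for p in map_phase(doc_id, text)]
result = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
# result[("big", "d1")] == 2
```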
_3.5. Spark Framework_
The Apache Spark Framework [42,43] is a newer framework built on the same principles as Hadoop.
While Hadoop is ideal for large batch processes, its performance drops in certain scenarios, such as
iterative or graph-based algorithms. Another problem of Hadoop is that it does not cache intermediate
data for faster performance; instead, it flushes the data to disk between steps. In contrast,
Spark maintains the data in the workers' memory and, as a result, outperforms Hadoop in algorithms
that require many operations. Spark is a unified stack of multiple closely integrated components and
overcomes the issues of Hadoop. In addition, it has a Directed Acyclic Graph (DAG) execution engine
that supports cyclic data flow and in-memory computing. As a result, it can run programs up to 100x
faster than Hadoop in memory, or 10x faster on disk. Spark offers APIs in Scala, Java, Python and R and
can operate on Hadoop or standalone while using HDFS (the Hadoop Distributed File System), Cassandra
or HBase. The architecture of the Apache Spark Framework is depicted in Figure 1b.
_3.6. MLlib_
Spark’s ability to perform well on iterative algorithms makes it ideal for implementing Machine
Learning Techniques as, in their vast majority, Machine Learning algorithms are based on iterative
jobs. MLlib [44] is Apache Spark’s scalable machine learning library and is developed as part of the
Apache Spark Project. MLlib contains implementations of many algorithms and utilities for common
Machine Learning techniques such as Clustering, Classification, and Regression.
**Figure 1. Distributed frameworks. (a) Architecture of MapReduce model; (b) The Spark stack.**
**4. Proposed Method**
In our model, given a supermarket ware dataset, our ultimate goal is to predict whether or not
a customer will purchase a product, using data analytics and machine learning algorithms.
This problem can be treated as a classification one, since the opinion class consists of specific
options. Furthermore, we have gathered the reviews of Amazon (Seattle, Washington, USA), and in
particular the reviews of each customer, in order to analyze the effect of person-to-person influence on
each product's market.

The overall architecture of the proposed system is depicted in Figure 2, while the proposed
modules and sub-modules of our model are described in the following steps.
[Figure 2 depicts the pipeline: customer and product input feed a Customer Product Behaviour Analysis stage (product clustering and product sampling based on purchases), followed by an analysis of the distance from supermarkets (distance vector and distance clustering) and, finally, a Choice Classifier that outputs the prediction.]
**Figure 2. Supermarket model.**
As a next step, we have developed two systems, the first in MapReduce and the second in
the Apache Spark framework for programming with Big Data. Specifically, an innovative method for
measuring text similarity with the use of common techniques and metrics is proposed. In particular,
the prospect of applying Tf-Idf [35] and Cosine Similarity [45] measurements to distributed text
processing is further analyzed, including the component of pairwise document similarity calculation.
This method performs pairwise text similarity with the use of a parallel and
distributed algorithm that scales up regardless of the massive input size.

The proposed method consists of two main components, Tf-Idf and Cosine Similarity, both
designed following the concept of the MapReduce programming model.
Initially, the terms of each document are counted and the texts are then normalized with the use of Tf-Idf.
Finally, the Cosine Similarity of each document pair is calculated and the results are obtained. One major
characteristic of the proposed method is that it is faster and more efficient compared to traditional
methods; this is due to the MapReduce model implementation in each algorithmic step, which tends to
enhance the efficiency of the method, as well as the aforementioned innovative blend of techniques.
_4.1. Customer Metrics Calculation_
From the supermarket dataset, four datasets with a varying number of records (e.g., 10,000, 100,000,
500,000 and 3,000,000) regarding customers' purchases, containing information about sales over a
vast period of four years, have been randomly sampled. More specifically, the implementation of
our method can be divided into the following steps: initially, the customers along with the products
are sampled and, subsequently, the products are clustered based on their sales rate.
Then, the customers are clustered according to the distance of their houses from the supermarket, and
a recommendation model is utilized, with new products proposed separately to each customer based on their
consumer behavior. Finally, we sampled the customers of Amazon and, using the
ratings of the reviews, derived the fraction of satisfied customers.
The training set of the supermarket data consists of eight features, as presented in the following
Table 1 (where we have added a brief description), including customer ID, category of the product,
product ID, shop, number of purchased items, distance from each supermarket, price of the product as
well as the choice.
**Table 1. Training set features.**
**Features** **Description**
Customer ID The ID of the customer
Product Category The category of the product
Product ID The ID of the product
Shop The shop where the customer makes the purchase
Number of items How many products the customer purchased
Distance Cluster The cluster of the distance
Product Price The price of the product
Choice Whether the customer purchases the product or not
_4.2. Decision Analysis_
Here, we describe the choice analysis based on classification and clustering tools. This method
gathers eight basic features from the given supermarket database and uses eleven different classification
methods in order to further analyze our dataset. In [14], researchers considered clustering with
the aim of identifying customers with similar spending history. Furthermore, as the authors in [46] indicate,
the loyalty of customers to a certain supermarket is measured in different ways; that is, a person is
considered loyal to a specific supermarket if they purchase specific products and also visit
the store on a regular basis. Although loyal customers appear to make up less than 30% of all customers,
they purchase more than 50% of the total amount of products.
Since the supermarket dataset included only numerical values for each category, we have created
our own clusters in terms of customers as well as products. More concretely, we have measured the
sales of each product and the distances, and we have then created three clusters for products as well as
two classes for distances.
More to the point, an organization, in order to measure the impact of various marketing channels,
such as social media, outbound campaigns and digital advertising response, might have the
following inputs:

• Customer demographics, like gender, age, wealth, education and geolocation,
• Customer satisfaction scores,
• Recommendation likeliness,
• Performance measures, such as email click-through and website visits, including sales transaction metrics across categories.
**5. Distributed Architecture**
The proposed method for measuring text similarity on a cloud computing infrastructure consists
of four stages. Initially, the occurrences of each term in the given documents are counted (Word Count)
and the frequency of every term in each document is measured (Term Frequency). Thereafter, the Tf-Idf
metric of each term is computed (Tf-Idf ) and, finally, the cosine similarities of the document pairs are
calculated in order to estimate the similarity among them (Cosine Similarity). The MapReduce model has
been used for designing each of the above-mentioned steps. The algorithms in pseudocode, along with
further analysis of each step, are presented in the following subsections in detail.
_MapReduce Stages_
In the first implementation stage, the occurrences of each word in every document are counted.
The algorithm applied is presented in Algorithm 1.
**Algorithm 1 Word Count.**
1: function Mapper
2:   Method Map(document)
3:   for each term ∈ document do
4:     write ((term, docId), 1)
5:   end for
6: end function
7:
8: function Reducer
9:   Method Reduce((term, docId), ones[1, 1, . . ., n])
10:  sum = 0
11:  for each one ∈ ones do
12:    sum = sum + one
13:  end for
14:  return ((term, docId), sum)    ▷ sum ∈ N : number of occurrences
15: end function
Initially, each document is divided into key-value pairs, where the tuple (term, docId) is the
key and the number one is the value; this is denoted as ((term, docId), 1). This phase is known
as the Map Phase. In the Reduce Phase, each pair is taken and the sum of the list of ones for the
term is computed. Finally, the key is set as the tuple (term, docId) and the value as the number of
occurrences, respectively.
Furthermore, regarding the second implementation phase, the overall number of terms of each
document is computed as described in Algorithm 2.
Regarding the Map Phase of this algorithm implementation, the input is divided into key-value
pairs, whereas the docId is considered as the key and the tuple (term, oc) is considered as the value.
In the Reduce Phase of the algorithm, the total number of terms in each document is counted and the
key-value pairs are returned; that is, the (DocId, N) as the key and the tuples (term, oc) as the value,
where N is the total number of terms in the document.
**Algorithm 2 Term Frequency.**
1: function Mapper
2:   Method Map((term, docId), oc)
3:   for each element ∈ (term, docId) do
4:     write (docId, (term, oc))
5:   end for
6: end function
7:
8: function Reducer
9:   Method Reduce(docId, (term, oc))
10:  N = 0
11:  for each tuple ∈ (term, oc) do
12:    N = N + oc
13:  end for
14:  return ((docId, N), (term, oc))
15: end function
In the third implementation stage, the Tf-Idf metric of each term in a document is computed with
the use of the following Equation (3), as presented in Algorithm 3:

Tf-Idf = (n/N) × log(|D| / |{d ∈ D : t ∈ d}|)    (3)

where n is the number of occurrences of the term in the document, N is the total number of terms in
the document, |D| is the number of documents in the corpus and |{d ∈ D : t ∈ d}| is the number of
documents where the t-term appears.
**Algorithm 3 Tf-Idf.**
1: function Mapper
2:   Method Map((docId, N), (term, oc))
3:   for each element ∈ (term, oc) do
4:     write (term, (docId, oc, N))
5:   end for
6: end function
7:
8: function Reducer
9:   Method Reduce(term, (docId, oc, N))
10:  n = 0
11:  for each element ∈ (docId, oc, N) do
12:    n = n + 1
13:    Tf = oc / N
14:    Idf = log(|D| / (1 + n))    ▷ |D| : number of documents in corpus
15:  end for
16:  return (docId, (term, Tf × Idf))
17: end function
By applying Algorithm 3, during the Map Phase the term is set as the key and the tuple
(docId, oc, N) as the value. During the Reduce Phase, the number of documents in which the term
appears is calculated and stored in the variable n. The term frequency is subsequently calculated,
as is the inverse document frequency of each term. Finally, key-value pairs with the docId as the key
and the tuple (term, Tf × Idf) as the value are obtained as a result.
In the fourth and final implementation phase, all of the possible combinations of two document
pairs are provided and then the cosine similarity for each of these pairs is computed as presented in
Algorithm 4. Assuming that there are n documents in the corpus, a similarity matrix is generated as in
the following Equation (4):

C(n, 2) = n! / (2!(n − 2)!)    (4)
**Algorithm 4 Cosine Similarity.**
1: function Mapper
2:   Method Map(docs)
3:   n = docs.length
4:   for i = 0 to docs.length do
5:     for j = i + 1 to docs.length do
6:       write ((docs[i].id, docs[j].id), (docs[i].tfidf, docs[j].tfidf))
7:     end for
8:   end for
9: end function
10:
11: function Reducer
12:   Method Reduce((docId_A, docId_B), (docA.tfidf, docB.tfidf))
13:   A = docA.tfidf
14:   B = docB.tfidf
15:   cosine = sum(A × B) / (√sum(A²) × √sum(B²))
16:   return ((docId_A, docId_B), cosine)
17: end function
In the Map Phase of Algorithm 4, every potential combination of the input documents is generated,
with the document IDs set as the key and the Tf-Idf vectors as the value. In the Reduce
Phase, the Cosine Similarity for each document pair is calculated and the similarity matrix is
provided. This algorithm is visualized in Figure 3.
[Figure 3 shows document pairs, e.g., (Doc1, Doc2), flowing from the Map Phase with their Tf-Idf vectors to the Reducers, which output CosineSim(Doc1, Doc2) for each pair.]
**Figure 3. Cosine similarity algorithm visualization.**
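The pair generation of the Map Phase and the per-pair cosine computation of the Reduce Phase can be sketched in a single process as follows; the sparse Tf-Idf dictionaries are made-up values, not the paper's data:

```python
import math
from itertools import combinations

def cosine(a, b):
    """Cosine similarity of two sparse Tf-Idf vectors stored as dicts."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

tfidf = {"doc1": {"big": 0.4, "data": 0.1},
         "doc2": {"big": 0.4, "spark": 0.3},
         "doc3": {"mining": 0.5}}

# Emit every C(n, 2) document pair, as in the Map Phase of Algorithm 4.
matrix = {(a, b): cosine(tfidf[a], tfidf[b])
          for a, b in combinations(sorted(tfidf), 2)}
assert len(matrix) == math.comb(3, 2)  # n!/(2!(n-2)!) = 3 pairs, per Equation (4)
```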
**6. Implementation**
The first stage of the implementation process was data cleaning. The dataset utilized contained
a small number of missing values. In general, there are several methods for data imputation,
depending on the features of the dataset. In our case, the missing values were imputed with the mean
value of each feature. The main reason this method was chosen is that the
number of missing values was very small (less than 0.1%), and other methods, such as linear regression,
would be time-consuming for the whole process.
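Mean-value imputation of this kind can be sketched as follows; the feature column is illustrative, not taken from the supermarket dataset:

```python
def impute_mean(column):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

prices = [10, None, 30, 20]
print(impute_mean(prices))  # [10, 20.0, 30, 20]
```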
After finishing the data cleaning process, consumers were categorized according to the
amount of money they spend at the supermarket. More specifically, we created three consumer
categories, A, B and C, which correspond to the average amount they pay on a regular basis. In addition,
the same process with three categories was implemented for the distance of each consumer's house
from the supermarket. The overall implementation is presented in terms of the MapReduce model in
Algorithm 5.
**Algorithm 5 Distributed Implementation.**
1: function Mapper
2:   Method Map(purchases)
3:   for each purchase ∈ dataset do
4:     cluster consumers into three categories based on budget spent
5:   end for
6:   for each consumer ∈ category do
7:     consumer's details are processed by the same reducer
8:   end for
9: end function
10:
11: function Reducer
12:   Method Reduce(consumers_list, category)
      ▷ Recommend the not-yet-purchased products with the highest similarity score for every consumer
13:   consumers_details = []
14:   for each consumer ∈ consumers_list do
15:     product = {}
16:     product['consumer'] = consumer
17:     product['product'] = product
18:     consumers_details.append(product)
19:   end for
20:   for each consumer ∈ consumers_list do
21:     compute cosine_similarity(consumer, consumers_list)
22:   end for
23:   for each consumer ∈ consumers_list do
24:     recommend the consumer's products with the highest similarity
25:   end for
26: end function
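The budget-based categorization performed by the Mapper of Algorithm 5 can be sketched as follows; the spending thresholds are hypothetical, since the paper does not report them:

```python
# Assign each consumer to category A, B or C by average spend (thresholds are made up).
def budget_category(avg_spend, high=100.0, low=30.0):
    if avg_spend >= high:
        return "A"
    if avg_spend >= low:
        return "B"
    return "C"

consumers = {"c1": 140.0, "c2": 55.0, "c3": 12.5}
categories = {cid: budget_category(spend) for cid, spend in consumers.items()}
print(categories)  # {'c1': 'A', 'c2': 'B', 'c3': 'C'}
```

In a real MapReduce job, the category would serve as the intermediate key, so that all consumers of one category reach the same reducer.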
_6.1. Datasets_
The present manuscript utilizes two datasets, i.e., a supermarket database [16] and a
database from Amazon [3], which contain information about the purchases of customers.
Initially, we started our experiments with the supermarket database [16] and extracted the
data using the C# language in order to calculate customer metrics. We implemented an application
in which we measured all the purchases of the customers, and several samples of the customers were
then collected so as to further analyze the corresponding dataset. The five final datasets consist of 10,000,
100,000, 500,000, 1,000,000 and 3,000,000 randomly selected purchases, with all the information from
the supermarket dataset as previously mentioned.
The prediction of any new purchase is based on the assumption that every customer can be
affected by any other one, due to the fact that consumers communicate every day and exchange reviews
of products. On the other hand, being budget conscious, they are constrained to select products that
correspond better to their needs. Therefore, a model that recommends new products to every customer
from their most preferred supermarket is proposed.
By analyzing the prediction model, information about consumers' behavior can be extracted.
We measured the total number of products that customers purchased and then categorized them
accordingly. Several classifiers were trained using the dataset of vectors. We separated the dataset and
used 10-Fold Cross-Validation to evaluate the training and test sets. The chosen classifiers were
evaluated using the TP (True Positive) rate, FP (False Positive) rate, Precision, Recall and F-Measure
metrics. We chose classifiers from five major categories of the Weka library [47], including “bayes”,
“functions”, “lazy”, “rules” and “trees”.
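The evaluation metrics listed above can be computed from binary predictions as follows; the paper uses Weka's implementations, so this pure-Python version is only a sketch for a single positive class:

```python
def binary_metrics(y_true, y_pred):
    """TP rate (recall), FP rate, Precision and F-Measure for labels in {0, 1}."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    tp_rate = tp / (tp + fn)   # identical to Recall
    fp_rate = fp / (fp + tn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * tp_rate / (precision + tp_rate)
    return tp_rate, fp_rate, precision, f_measure

metrics = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
```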
Additionally, we evaluated a model using the results of our experiments on Amazon [3], since
we wanted to measure the number of customers in each of five product categories, namely book,
DVD, music, toy and video. In Table 2, we present the numbers of satisfied and unsatisfied customers.
**Table 2. Measurement of satisfaction of customers.**
**Product** **Satisfied** **Not Satisfied**
**Category** **Customers** **Customers**
Book 235,680 68,152
DVD 41,597 16,264
Music 80,149 15,377
Toy 1 1
Video 38,903 13,718
Figure 4 illustrates the number of customers satisfied with products of each of the aforementioned
categories. We can observe that the number of satisfied customers is much larger than that of
unsatisfied ones in four out of five categories (for the category entitled toy, the number is equal to 1 for both
groups of customers). From these results, one can easily infer that Amazon customers are loyal to
the company and prefer purchasing products from the abovementioned categories.
**Figure 4. Customer reviews.**
_6.2. Cloud Computing Infrastructure_
A series of experiments for evaluating the performance of our proposed method from many
different perspectives was conducted. More precisely, we implemented the algorithmic framework
by employing the cloud infrastructure. We used the MapReduce programming environment as well
as Apache Spark, a newer framework built on the same principles as Hadoop.

Our cluster includes four computing nodes (VMs), each of which has four 2.6 GHz CPU
processors, 11.5 GB of memory and a 45 GB hard disk, and the nodes are connected by 1 Gigabit Ethernet.
On each node, Ubuntu 16.04 as the operating system, Java 1.8.0_66 with a 64-bit Server VM, as well as
Hadoop 1.2.1 and Spark 1.4.1, were installed.

One of the VMs serves as the master node and the other three VMs as the slave nodes.
Furthermore, the following changes to the default Hadoop and Spark configurations were applied:
12 total executor cores are used (four for each slave machine), the executor memory is set to 8 GB
and the driver memory is set to 4 GB.
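Assuming Spark's standalone cluster manager, the configuration above would correspond to a spark-submit invocation along these lines; the master URL and job script name are hypothetical:

```shell
spark-submit \
  --master spark://master:7077 \
  --total-executor-cores 12 \
  --executor-cores 4 \
  --executor-memory 8G \
  --driver-memory 4G \
  similarity_job.py
```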
**7. Evaluation**
_7.1. Classification Performance_
The classification models evaluated are AdaBoost, J48, JRip, Multilayer Perceptron, PART, REPTree,
RotationForest and Sequential Minimal Optimization (SMO), as presented in Tables 3–6 and Figure 5.
The results of each classifier are illustrated in Table 3, with the best classifier for each metric depicted
in bold. Furthermore, Figure 5 depicts the F-Measure values of each classifier for the five different
numbers of randomly selected purchases.
**Table 3. Classification for predicting new purchases for different number of randomly selected purchases.**
**Classifiers** **TP Rate** **FP Rate** **Precision** **Recall** **F-Measure**
**Purchases = 10,000**
AdaBoost 0.605 0.484 0.598 0.605 0.564
J48 **0.922** 0.095 **0.924** **0.922** **0.921**
JRip 0.847 0.187 0.854 0.847 0.843
Multilayer Perceptron 0.596 0.539 0.66 0.596 0.482
PART 0.748 0.325 0.783 0.748 0.728
REPTree 0.892 0.119 0.892 0.892 0.891
RotationForest 0.785 0.285 0.83 0.785 0.768
SMO 0.574 **0.574** 0.33 0.574 0.419
**Purchases = 100,000**
AdaBoost 0.615 0.48 0.601 0.615 0.559
J48 **0.927** 0.092 **0.929** **0.927** **0.924**
JRip 0.852 0.183 0.856 0.852 0.849
Multilayer Perceptron 0.598 0.536 0.671 0.598 0.487
PART 0.756 0.322 0.788 0.756 0.732
REPTree 0.896 0.116 0.895 0.896 0.893
RotationForest 0.79 0.282 0.863 0.79 0.778
SMO 0.577 **0.577** 0.36 0.577 0.424
**Purchases = 500,000**
AdaBoost 0.635 0.474 0.621 0.635 0.586
J48 **0.947** 0.084 **0.934** **0.947** **0.943**
JRip 0.866 0.172 0.873 0.866 0.851
Multilayer Perceptron 0.622 0.516 0.682 0.622 0.503
PART 0.766 0.314 0.797 0.766 0.738
REPTree 0.912 0.111 0.915 0.912 0.896
RotationForest 0.811 0.226 0.889 0.811 0.783
SMO 0.617 **0.567** 0.388 0.617 0.439
**Purchases = 1,000,000**
AdaBoost 0.65 0.462 0.637 0.65 0.602
J48 **0.962** 0.076 **0.959** **0.962** **0.96**
JRip 0.875 0.161 0.891 0.875 0.864
Multilayer Perceptron 0.65 0.498 0.683 0.65 0.517
PART 0.784 0.302 0.811 0.784 0.742
REPTree 0.921 0.101 0.921 0.921 0.903
RotationForest 0.851 0.214 0.896 0.851 0.79
SMO 0.627 **0.573** 0.401 0.627 0.444
**Purchases = 3,000,000**
AdaBoost 0.713 0.433 0.711 0.713 0.648
J48 **0.977** 0.067 **0.964** **0.977** **0.972**
JRip 0.898 0.146 0.912 0.898 0.876
Multilayer Perceptron 0.691 0.411 0.712 0.691 0.548
PART 0.81 0.297 0.824 0.81 0.746
REPTree 0.933 0.087 0.929 0.933 0.912
RotationForest 0.876 0.206 0.912 0.876 0.797
SMO 0.647 **0.598** 0.417 0.647 0.453
Regarding the dataset size of 10,000 randomly selected purchases, we can observe that J48
achieves the highest score in every category except the FP rate. REPTree follows, with
almost 89% TP rate and F-Measure, whereas JRip has an F-Measure value of 84%. In addition,
concerning the F-Measure metric, the other algorithms range from 42% for SMO to 77%
for Rotation Forest. Moreover, we see that almost all classifiers achieve a TP rate above 60%,
while the percentages for the FP rate are relatively smaller. The Precision and Recall metrics have almost
the same values for each classifier, ranging between 60% and 92%.
**Figure 5. F-Measure of each classifier for predicting new purchases regarding the four selected datasets.**
In Tables 4–6, the TP rate, FP rate, Precision, Recall and F-Measure metrics of three concrete
algorithms (i.e., AdaBoost, J48 and Multilayer Perceptron) for different dataset sizes are presented.
It is obvious that the dataset size plays a rather significant role for these classifiers. Specifically,
regarding AdaBoost, the F-Measure value rises from almost 56% for the dataset of 10,000 purchases
to almost 65% for the dataset of 3,000,000 purchases, a rise of about 8% to 9%. In contrast,
the performance of J48 and Multilayer Perceptron is not as heavily affected by the number of purchases
in the dataset, as the corresponding increases are 5% and 6%, respectively.
**Table 4. Classification of AdaBoost for different dataset sizes.**
| Dataset Size | TP Rate | FP Rate | Precision | Recall | F-Measure |
| --- | --- | --- | --- | --- | --- |
| 10,000 | 0.605 | 0.484 | 0.598 | 0.605 | 0.564 |
| 100,000 | 0.615 | 0.48 | 0.601 | 0.615 | 0.572 |
| 250,000 | 0.622 | 0.476 | 0.617 | 0.622 | 0.578 |
| 500,000 | 0.635 | 0.474 | 0.621 | 0.635 | 0.586 |
| 750,000 | 0.641 | 0.469 | 0.629 | 0.641 | 0.593 |
| 1,000,000 | 0.65 | 0.462 | 0.637 | 0.65 | 0.602 |
| 2,000,000 | 0.703 | 0.451 | 0.688 | 0.703 | 0.63 |
| 3,000,000 | 0.713 | 0.433 | 0.711 | 0.713 | 0.648 |
**Table 5. Classification of J48 for different dataset sizes.**
| Dataset Size | TP Rate | FP Rate | Precision | Recall | F-Measure |
| --- | --- | --- | --- | --- | --- |
| 10,000 | 0.922 | 0.095 | 0.924 | 0.922 | 0.921 |
| 100,000 | 0.927 | 0.092 | 0.929 | 0.927 | 0.924 |
| 250,000 | 0.93 | 0.089 | 0.932 | 0.93 | 0.933 |
| 500,000 | 0.947 | 0.084 | 0.934 | 0.947 | 0.943 |
| 750,000 | 0.958 | 0.079 | 0.946 | 0.958 | 0.952 |
| 1,000,000 | 0.962 | 0.076 | 0.959 | 0.962 | 0.96 |
| 2,000,000 | 0.97 | 0.072 | 0.969 | 0.97 | 0.967 |
| 3,000,000 | 0.977 | 0.067 | 0.964 | 0.977 | 0.972 |
**Table 6. Classification of Multilayer Perceptron for different dataset sizes.**
| Dataset Size | TP Rate | FP Rate | Precision | Recall | F-Measure |
| --- | --- | --- | --- | --- | --- |
| 10,000 | 0.596 | 0.539 | 0.66 | 0.596 | 0.482 |
| 100,000 | 0.598 | 0.536 | 0.671 | 0.598 | 0.487 |
| 250,000 | 0.611 | 0.525 | 0.679 | 0.611 | 0.492 |
| 500,000 | 0.622 | 0.516 | 0.682 | 0.622 | 0.503 |
| 750,000 | 0.639 | 0.507 | 0.687 | 0.639 | 0.511 |
| 1,000,000 | 0.65 | 0.498 | 0.683 | 0.65 | 0.517 |
| 2,000,000 | 0.682 | 0.463 | 0.69 | 0.682 | 0.531 |
| 3,000,000 | 0.691 | 0.411 | 0.712 | 0.691 | 0.548 |
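The metrics reported above appear to be weighted averages over the classes, in the style that the Weka toolkit (used in this work) reports. As an illustrative sketch only, the following Python function computes such Weka-style weighted TP rate, FP rate, Precision, Recall and F-Measure from a confusion matrix; the confusion matrix used in the example is made up and not from the paper.

```python
import numpy as np

def weighted_metrics(cm):
    """Weka-style weighted-average metrics from a confusion matrix
    (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    support = cm.sum(axis=1)             # true instances per class
    total = cm.sum()
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp             # other classes predicted as this class
    fn = support - tp
    tn = total - tp - fp - fn
    tp_rate = tp / support               # per-class recall (= TP rate)
    fp_rate = fp / (fp + tn)
    precision = tp / (tp + fp)           # assumes each class is predicted at least once
    f1 = 2 * precision * tp_rate / (precision + tp_rate)
    w = support / total                  # weight each class by its support
    avg = lambda v: float((v * w).sum())
    return {"tp_rate": avg(tp_rate), "fp_rate": avg(fp_rate),
            "precision": avg(precision), "recall": avg(tp_rate),
            "f_measure": avg(f1)}

# Hypothetical two-class confusion matrix, for illustration only.
m = weighted_metrics([[50, 10],
                      [5, 35]])
```

Note that because the averages are support-weighted over per-class values, the reported F-Measure is generally not the harmonic mean of the reported (weighted) Precision and Recall.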
_7.2. Running Time_
In this subsection, we present the running-time results for multi-class and binary
classification, measuring the scalability of our proposed model (Table 7). We compare
the running time of our MapReduce and Spark implementations against an implementation
on a single regular machine. In the classification process, we experiment not only with
binary features, but also with class and binary features combined; the latter extends the
execution time, since more calculations are necessary in order to create the classes.
As expected, the MapReduce implementation improves the execution time, and Spark improves
it even further. Compared with the single-machine implementation, MapReduce reduces the
running time by about 70% in the binary case and by about 50% in the class-and-binary
case. Spark, in turn, reduces the running time of MapReduce by about 55% in the binary
case and about 60% in the class-and-binary case. Although more preprocessing time is
needed to route the input to the appropriate reducers, this pays off in the end, since
the computational cost at every node is smaller.
**Table 7. Running time per implementation (in seconds).**
| Features | Stable Implementation | MapReduce | Spark |
| --- | --- | --- | --- |
| Binary Features | 1425 | 421 | 187 |
| Class and Binary Features | 1902 | 962 | 397 |
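The reduction percentages quoted in the text follow directly from the timings in Table 7. A small illustrative Python check (our sketch, with the table's values hard-coded):

```python
# Running times in seconds, copied from Table 7.
times = {
    "binary":           {"stable": 1425, "mapreduce": 421, "spark": 187},
    "class_and_binary": {"stable": 1902, "mapreduce": 962, "spark": 397},
}

def reduction(old, new):
    """Relative running-time reduction of `new` with respect to `old`."""
    return 1 - new / old

for feats, t in times.items():
    mr = reduction(t["stable"], t["mapreduce"])   # MapReduce vs. single machine
    sp = reduction(t["mapreduce"], t["spark"])    # Spark vs. MapReduce
    print(f"{feats}: MapReduce {mr:.1%}, Spark {sp:.1%}")
```

This reproduces the roughly 70%/50% reductions of MapReduce over the single-machine run and the roughly 55%/60% reductions of Spark over MapReduce.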
_7.3. Speedup_
In this subsection, we estimate the effect of the number of computing nodes for both the
MapReduce and the Spark implementation. Three different cluster configurations are tested,
in which the cluster consists of N ∈ {1, 2, 3} slave nodes. Tables 8 and 9 present the
running time per number of slave nodes. As in the previous subsection, we experiment not
only with binary features, but also with class and binary features.
We observe that the total running time of our solution tends to decrease as we add more
nodes to the cluster for both implementations. As the number of computing nodes increases,
the intermediate data are split into more partitions that are processed in parallel.
As a result, the amount of computation that each node undertakes decreases accordingly.

In summary, Tables 8 and 9 show that the proposed method (in both its MapReduce and Spark
variants) is efficient as well as scalable, and therefore appropriate for big data analysis.
**Table 8. Running time per slave nodes (in seconds) for MapReduce implementation.**
| Number of Slave Nodes | 1 | 2 | 3 |
| --- | --- | --- | --- |
| Binary Features | 945 | 523 | 421 |
| Class and Binary Features | 1707 | 1095 | 962 |
**Table 9. Running time per slave nodes (in seconds) for Spark implementation.**
| Number of Slave Nodes | 1 | 2 | 3 |
| --- | --- | --- | --- |
| Binary Features | 767 | 345 | 187 |
| Class and Binary Features | 1234 | 613 | 397 |
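Speedup is conventionally computed as S(N) = T(1)/T(N). As an illustration (our sketch, using the timings from Tables 8 and 9), the speedup per cluster size can be computed as follows:

```python
# Running times (seconds) per number of slave nodes, from Tables 8 and 9.
mapreduce = {"binary": {1: 945, 2: 523, 3: 421},
             "class_and_binary": {1: 1707, 2: 1095, 3: 962}}
spark = {"binary": {1: 767, 2: 345, 3: 187},
         "class_and_binary": {1: 1234, 2: 613, 3: 397}}

def speedup(times):
    """Speedup S(N) = T(1) / T(N) for each cluster size N."""
    return {n: times[1] / t for n, t in times.items()}

print(speedup(mapreduce["binary"]))
print(speedup(spark["binary"]))
```

The computed values confirm that both implementations scale with the number of slave nodes, with Spark showing the steeper improvement.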
**8. Conclusions**
In the proposed work, we have presented a methodology for modelling and predicting the
purchases of a supermarket using machine learning techniques. More specifically, two
datasets are utilized: a supermarket database and a database from Amazon that contains
information about the purchases of customers. Based on the analysis of the Amazon dataset,
a model is created that predicts new products for every customer according to the
categories and the supermarket that the customer prefers. We have also examined the
influence of person-to-person communication, finding that customers are greatly influenced
by other customers' reviews.
Moreover, we address the scalability bottleneck of existing works by employing cloud
programming techniques. Concretely, the analysis of the sales rate is carried out as a
Big Data analytics task using MapReduce and Spark implementations. Furthermore, a novel
framework is introduced that builds on three well-known techniques, namely the Vector
Space Model, Tf-Idf, and Cosine Similarity. Opinions, reviews, advertisements, and other
texts that reflect a customer's connection to supermarkets are taken into account in order
to measure customer behavior. The proposed framework is based on measuring text similarity
on a cloud computing infrastructure.
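The text-similarity core of the framework (Vector Space Model with Tf-Idf weighting and Cosine Similarity) can be sketched in a few lines of plain Python. This is a minimal single-machine illustration with made-up documents, not the MapReduce/Spark implementation used in the paper:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Tf-Idf weighted term vectors for whitespace-tokenised documents."""
    tfs = [Counter(doc.lower().split()) for doc in docs]
    n = len(docs)
    df = Counter(term for tf in tfs for term in tf)       # document frequency
    idf = {term: math.log(n / df[term]) for term in df}
    return [{term: freq * idf[term] for term, freq in tf.items()} for tf in tfs]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(term, 0.0) for term, w in u.items())
    norm = lambda vec: math.sqrt(sum(w * w for w in vec.values())) or 1.0
    return dot / (norm(u) * norm(v))

docs = ["cheap milk and fresh bread",
        "fresh milk with bread",
        "car engine oil"]
v = tfidf_vectors(docs)
```

As expected, the two grocery-related documents score a higher cosine similarity with each other than either does with the unrelated third document.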
As future work, we plan to create a platform using the recommendation network, in which
customers will have the opportunity to choose among many options for new products at lower
prices. We also plan to extend and improve our framework by taking into consideration
additional features of the supermarket datasets, which may be added to the feature vector
and improve the classification accuracy. Another direction is experimentation with
different cluster configurations, so as to better evaluate Hadoop's and Spark's
performance in terms of time and scalability. Finally, a user survey could provide further
insights and an alternative verification of user engagement.
**Author Contributions: A.K., S.A.I., S.S. and V.T. conceived of the idea, designed and performed the experiments,**
analyzed the results, drafted the initial manuscript and revised the final manuscript.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. The World’s Technological Capacity to Store, Communicate, and Compute Information. Available online:
[http://www.martinhilbert.net/worldinfocapacity-html/ (accessed on 3 May 2018).](http://www.martinhilbert.net/worldinfocapacity-html/)
2. Han, J.; Pei, J.; Kamber, M. Data Mining: Concepts and Techniques; Elsevier: New York, NY, USA, 2011.
3. Leskovec, J.; Adamic, L.A.; Huberman, B.A. The Dynamics of Viral Marketing. ACM Trans. Web 2007, 1.
[[CrossRef]](http://dx.doi.org/10.1145/1232722.1232727)
4. Iakovou, S.A.; Kanavos, A.; Tsakalidis, A.K. Customer Behaviour Analysis for Recommendation of
Supermarket Ware. In Proceedings of the 12th IFIP International Conference and Workshops (AIAI),
Thessaloniki, Greece, 16–18 September 2016; pp. 471–480.
5. Victor, G.S.; Antonia, P.; Spyros, S. CSMR: A Scalable Algorithm for Text Clustering with Cosine
Similarity and MapReduce. In Proceedings of the Artificial Intelligence Applications and Innovations
(AIAI), Rhodes, Greece, 19–21 September 2014; pp. 211–220.
6. Ling, C.X.; Li, C. Data Mining for Direct Marketing: Problems and Solutions. In Proceedings of the
4th International Conference on Knowledge Discovery and Data Mining (KDD), New York, NY, USA,
27–31 August 1998; pp. 73–79.
7. Domingos, P.M.; Richardson, M. Mining the network value of customers. In Proceedings of the 7th ACM
SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), San Francisco, CA, USA,
26–29 August 2001; pp. 57–66.
8. Wu, X.; Zhu, X.; Wu, G.; Ding, W. Data Mining with Big Data. IEEE Trans. Knowl. Data Eng. 2014, 26, 97–107.
9. Boyd, D. Privacy and Publicity in the Context of Big Data. In Proceedings of the 19th International Conference
on World Wide Web (WWW), Raleigh, NC, USA, 26–30 April 2010.
10. Laney, D. 3D Data Management: Controlling Data Volume, Velocity, and Variety. META Group Res. Note
**2001, 6, 70.**
11. Hashem, I.A.T.; Yaqoob, I.; Anuar, N.B.; Mokhtar, S.; Gani, A.; Khan, S.U. The Rise of “Big Data” on Cloud
[Computing: Review and Open Research Issues. Inf. Syst. 2015, 47, 98–115. [CrossRef]](http://dx.doi.org/10.1016/j.is.2014.07.006)
12. Brijs, T.; Goethals, B.; Swinnen, G.; Vanhoof, K.; Wets, G. A Data Mining Framework for Optimal Product
Selection in Retail Supermarket Data: The Generalized PROFSET Model. In Proceedings of the Sixth
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Boston, MA, USA,
20–23 August 2000; pp. 300–304.
13. Li, Y.; Zuo, M.; Yang, B. Analysis and design of e-supermarket shopping recommender system. In Proceedings
of the 7th International Conference on Electronic Commerce (ICEC), Xi’an, China, 15–17 August 2005;
pp. 777–779.
14. Lawrence, R.D.; Almasi, G.S.; Kotlyar, V.; Viveros, M.S.; Duri, S. Personalization of Supermarket Product
[Recommendations. Data Min. Knowl. Discov. 2001, 5, 11–32. [CrossRef]](http://dx.doi.org/10.1023/A:1009835726774)
15. Dickson, P.R.; Sawyer, A.G. The price Knowledge and Search of Supermarket Shoppers. J. Mark. 1990, 54,
[42–53. [CrossRef]](http://dx.doi.org/10.2307/1251815)
16. Pennacchioli, D.; Coscia, M.; Rinzivillo, S.; Pedreschi, D.; Giannotti, F. Explaining the Product Range Effect in
Purchase Data. In Proceedings of the 2013 IEEE International Conference on Big Data, Silicon Valley, CA, USA,
6–9 October 2013; pp. 648–656.
17. Yao, C.B.; Tsui, H.D.; Lee, C.Y. Intelligent Product Recommendation Mechanism based on Mobile Agents.
In Proceedings of the 4th International Conference on New Trends in Information Science and Service
Science, Gyeongju, Korea, 11–13 May 2010; pp. 323–328.
18. Kim, J.K.; Cho, Y.H.; Kim, W.; Kim, J.R.; Suh, J.H. A personalized recommendation procedure for Internet
[shopping support. Electron. Commer. Res. Appl. 2002, 1, 301–313. [CrossRef]](http://dx.doi.org/10.1016/S1567-4223(02)00022-4)
19. Cho, Y.H.; Kim, J.K.; Kim, S.H. A personalized recommender system based on web usage mining and
[decision tree induction. Expert Syst. Appl. 2002, 23, 329–342. [CrossRef]](http://dx.doi.org/10.1016/S0957-4174(02)00052-0)
20. Linden, G.; Smith, B.; York, J. Amazon.com Recommendations: Item-to-Item Collaborative Filtering.
_[IEEE Internet Comput. 2003, 7, 76–80. [CrossRef]](http://dx.doi.org/10.1109/MIC.2003.1167344)_
21. Weng, S.; Liu, M. Feature-based Recommendations for one-to-one Marketing. Expert Syst. Appl. 2004, 26,
[493–508. [CrossRef]](http://dx.doi.org/10.1016/j.eswa.2003.10.008)
22. Gourgaris, P.; Kanavos, A.; Makris, C.; Perrakis, G. Review-based Entity-Ranking Refinement. In Proceedings
of the 11th International Conference on Web Information Systems and Technologies (WEBIST), Lisbon, Portugal,
20–22 May 2015; pp. 402–410.
23. Malär, L.; Krohmer, H.; Hoyer, W.D.; Nyffenegger, B. Emotional Brand Attachment and Brand Personality:
[The Relative Importance of the Actual and the Ideal Self. J. Mark. 2011, 75, 35–52. [CrossRef]](http://dx.doi.org/10.1509/jmkg.75.4.35)
24. Thomson, M.; MacInnis, D.J.; Park, C.W. The Ties That Bind: Measuring the Strength of Consumers’
[Emotional Attachments to Brands. J. Consum. Psychol. 2005, 15, 77–91. [CrossRef]](http://dx.doi.org/10.1207/s15327663jcp1501_10)
25. Kanavos, A.; Kafeza, E.; Makris, C. Can we Rank Emotions? A Brand Love Ranking System for Emotional
Terms. In Proceedings of the 2015 IEEE International Congress on Big Data, New York, NY, USA,
27 June–2 July 2015; pp. 71–78.
26. Tsai, C.W.; Lai, C.F.; Chao, H.C.; Vasilakos, A.V. Big Data Analytics: A Survey. J. Big Data 2015, 2, 1–32.
[[CrossRef]](http://dx.doi.org/10.1186/s40537-015-0030-3)
27. Fisher, D.; DeLine, R.; Czerwinski, M.; Drucker, S.M. Interactions with Big Data Analytics. Interactions
**[2012, 19, 50–59. [CrossRef]](http://dx.doi.org/10.1145/2168931.2168943)**
28. Dean, J.; Ghemawat, S. MapReduce: Simplified Data Processing on Large Clusters. Commun. ACM 2008, 51,
[107–113. [CrossRef]](http://dx.doi.org/10.1145/1327452.1327492)
29. Elsayed, T.; Lin, J.J.; Oard, D.W. Pairwise Document Similarity in Large Collections with MapReduce.
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL),
Columbus, OH, USA, 16–17 June 2008; pp. 265–268.
30. Li, B.; Guoyong, Y. Improvement of TF-IDF Algorithm based on Hadoop Framework. In Proceedings
of the 2nd International Conference on Computer Application and System Modeling, Taiyuan, China,
27–29 July 2012; pp. 391–393.
31. Bank, J.; Cole, B. Calculating the Jaccard Similarity Coefficient with Map Reduce for Entity Pairs in Wikipedia.
Available online: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.168.5695&rep=rep1&type=pdf (accessed on 3 May 2018).
32. Zhou, P.; Lei, J.; Ye, W. Large-Scale Data Sets Clustering based on MapReduce and Hadoop. J. Comput. Inf. Syst.
**2011, 7, 5956–5963.**
33. Turney, P.D.; Pantel, P. From Frequency to Meaning: Vector Space Models of Semantics. J. Artif. Intell. Res.
**2010, 37, 141–188.**
34. Raghavan, V.V.; Wong, S.K.M. A Critical Analysis of Vector Space Model for Information Retrieval. J. Assoc.
_[Inf. Sci. Technol. 1986, 37, 279–287. [CrossRef]](http://dx.doi.org/10.1002/(SICI)1097-4571(198609)37:5<279::AID-ASI1>3.0.CO;2-Q)_
35. Ramos, J. Using TF-IDF to Determine Word Relevance in Document Queries. In Proceedings of the First
Instructional Conference on Machine Learning, Piscataway, NJ, USA, 2–8 December 2003.
36. Turney, P.D. Coherent Keyphrase Extraction via Web Mining. In Proceedings of the 18th International Joint
Conference on Artificial Intelligence (IJCAI), Acapulco, Mexico, 9–15 August 2003; pp. 434–442.
37. Kalaivendhan, K.; Sumathi, P. An Efficient Clustering Method To Find Similarity Between The Documents.
_Int. J. Innov. Res. Comput. Commun. Eng. 2014, 1._
38. Lämmel, R. Google’s MapReduce Programming Model-Revisited. Sci. Comput. Program. 2008, 70, 1–30.
[[CrossRef]](http://dx.doi.org/10.1016/j.scico.2007.07.001)
39. Shvachko, K.; Kuang, H.; Radia, S.; Chansler, R. The Hadoop Distributed File System. In Proceedings of
the IEEE 26th Symposium on Mass Storage Systems and Technologies (MSST), Incline Village, NV, USA,
3–7 May 2010; pp. 1–10.
40. Lin, X.; Meng, Z.; Xu, C.; Wang, M. A Practical Performance Model for Hadoop MapReduce. In Proceedings of
the IEEE International Conference on Cluster Computing Workshops (CLUSTER) Workshops, Beijing, China,
24–28 September 2012; pp. 231–239.
41. Ekanayake, J.; Pallickara, S.; Fox, G.C. MapReduce for Data Intensive Scientific Analyses. In Proceedings of
the 4th International Conference on e-Science, Indianapolis, IN, USA, 7–12 December 2008; pp. 277–284.
42. [Apache Spark. Available online: https://spark.apache.org/ (accessed on 3 May 2018).](https://spark.apache.org/)
43. Karau, H.; Konwinski, A.; Wendell, P.; Zaharia, M. _Learning Spark: Lightning-Fast Big Data Analysis;_
O’Reilly Media: Sebastopol, CA, USA, 2015.
44. [Apache Spark MLlib. Available online: http://spark.apache.org/mllib/ (accessed on 3 May 2018).](http://spark.apache.org/mllib/)
45. Tata, S.; Patel, J.M. Estimating the Selectivity of Tf-Idf based Cosine Similarity Predicates. SIGMOD Rec.
**[2007, 36, 7–12. [CrossRef]](http://dx.doi.org/10.1145/1328854.1328855)**
46. West, C.; MacDonald, S.; Lingras, P.; Adams, G. Relationship between Product Based Loyalty and Clustering
based on Supermarket Visit and Spending Patterns. Int. J. Comput. Sci. Appl. 2005, 2, 85–100.
47. [Weka toolkit. Available online: https://www.cs.waikato.ac.nz/ml/weka/ (accessed on 3 May 2018).](https://www.cs.waikato.ac.nz/ml/weka/)
_⃝c_ 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
[(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
# Formalising Reconciliation in Partitionable Networks with Distributed Services
Mikael Asplund and Simin Nadjm-Tehrani
Department of Computer and Information Science,
Linköping University, SE-581 83 Linköping, Sweden
{mikas,simin}@ida.liu.se
## 1 Introduction
Modern command and control systems are characterised by computing services
provided to several actors at different geographical locations. The actors operate
on a common state that is modularly updated at distributed nodes using local
data services and global integrity constraints for validity of data in the value and
time domains. Dependability in such networked applications is measured through
availability of the distributed services as well as the correctness of the state
updates that should satisfy integrity constraints at all times. Providing support
in middleware is seen as one way of achieving a high level of service availability
and well-defined performance guarantees. However, most recent works [1, 2]
that address fault-aware middleware cover crash faults and provision of timely
services, and assume network connectivity as a basic tenet.
In this paper we study the provision of services in distributed object systems, with network partitions as the primary fault model. The problem appears
in a variety of scenarios [3], including distributed flight control systems. The
scenarios combine provision of critical services with data-intensive operations.
Clients can approach any node in the system to update a given object, copies
of which are present across different nodes in the system. A correct update of
the object state is dependent on validity of integrity constraints, potentially involving other distributed objects. Replicated objects provide efficient access at
distributed nodes (leading to lower service latency). Middleware is employed for
systematic upholding of common view on the object states and consistency in
write operations. However, problems arise if the network partitions. That is, if
there are broken/overloaded links such that some nodes become unreachable,
and the nodes in the network form disjoint partitions. Then, if services are delivered to clients approaching different partitions, the upholding of consistency
has to be considered explicitly. Moreover, there should be mechanisms to deal
with system mode changes, with service differentiation during degraded mode.
Current solutions to this problem typically uphold full consistency at the
cost of availability. When the network is partitioned, the services that require
integrity constraints over objects that are no longer reachable are suspended
until the network is physically unified. Alternatively, a majority partition is
assumed to continue delivering services based on the latest replica states. When
the network is reunified the minority partition(s) nodes rejoin; but during the
partition clients approaching the minority partition receive no service. The goal
of our work is to investigate middleware support that enables distributed services
to be provided at all partitions, at the expense of temporarily trading off some
consistency. To gain higher availability we need to act optimistically, and allow
one primary per partition to provisionally service clients that invoke operations
in that partition.
The contributions of the paper are twofold. First, we present a protocol that
after reunification of a network partition takes a number of partition states
and generates a new partition state that includes a unique state per object. In
parallel with creating this new state the protocol continues servicing incoming
requests. Since the state of the (reconciled) post-reunification objects are not
yet finalised, the protocol has to maintain virtual partitions until all operations
that have arrived after the partition fault and provisionally serviced are dealt
with.
Second, we show that the protocol results in a stable partition state, from
which onwards the need for virtual partitions is no longer necessary. The proof
states the assumptions under which the stable state is reached. Intuitively, the
system will leave the reconciliation mode when the rate of incoming requests
is lower than the rate of handling the provisionally accepted operations during reconciliation. The resulting partition state is further shown to have desired
properties. A notion of correctness is introduced that builds on satisfaction of
integrity constraints as well as respecting an intended order of performed operations seen from clients’ point of view.
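The termination intuition can be illustrated with a simple back-of-the-envelope model (ours, not the paper's formal proof): if provisionally accepted operations are replayed at rate μ while new requests keep arriving at rate λ, a backlog of B operations drains only when λ < μ, after roughly B/(μ − λ) time units.

```python
def drain_time(backlog, arrival_rate, replay_rate):
    """Approximate time for the reconciliation backlog to empty.

    backlog       -- provisionally accepted operations left to replay
    arrival_rate  -- new requests per time unit (lambda)
    replay_rate   -- operations replayed per time unit (mu)
    """
    if arrival_rate >= replay_rate:
        return float("inf")          # reconciliation would never terminate
    return backlog / (replay_rate - arrival_rate)
```

For example, a backlog of 1,000 operations with λ = 50 and μ = 150 operations per second drains in about 10 seconds, whereas λ ≥ μ means the reconciling mode is never left.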
The structure of the paper is as follows. Section 2 provides an informal
overview of the formalised protocols in the paper. Section 3 introduces the basic
formal notions that are used in the models. Section 4 describes the intuitive reasoning behind the choice of ordering that is imposed on the performed operations
in the system and relates the application (client) expectations to the support that
can reasonably be provided by automatic mechanisms in middleware. Section 5
presents the reconciliation protocol in terms of distributed algorithms running
at replicas and in a reconciliation manager. Section 6 is devoted to the proofs
of termination and correctness for the protocol. Related work is described in
Sect. 7, and Sect. 8 concludes the paper.
## 2 Overview
We begin by assuming that middleware services for replication of objects are
in place. This implies that the middleware has mechanisms for creating replica
objects, and protocols that propagate a write operation at a primary copy to all
the object replicas transparently. Moreover, the mechanisms for detecting link
failures and partition faults are present in the middleware. The latter is typically
implemented by maintaining a membership service that keeps an up to date view
of which replicas for an object are running and reachable. The middleware also
includes naming/location services, whereby the physical node can be identified
given a logical address.
In normal mode, the system services read operations in a distributed manner; but for write operations there are protocols that check integrity constraints
before propagating the update to all copies of the object at remote nodes. Both
in normal and degraded mode, each partition is assumed to include a designated
primary replica for each object in the system.
The integrity constraints in the system are assumed to fall in two classes:
critical and non-critical. For operations with non-critical constraints different
primary servers continue to service client requests, and provisionally accept the
operations that satisfy integrity constraints. When the partition fault is repaired,
the state of the main partition is formed by reconciling the operations carried
out in the earlier disjoint partitions. The middleware supports this reconciliation
process and guarantees the consistency of the new partition state. The state is
formed by replaying some provisional operations that are accepted, and rejecting
some provisional operations that should be notified to clients as "undone". It is
obviously desirable to keep as many of the provisionally accepted operations as
possible.
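As a rough illustration of this replay-and-reject idea (our simplification, not the protocol formalised later in the paper), the sketch below merges the provisional logs of all partitions in timestamp order and rejects any operation whose replay would violate the integrity constraint:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Op:
    ts: float      # timestamp assigned when provisionally accepted
    obj: str       # object identifier
    delta: int     # effect of the write on the object's value

def reconcile(partition_logs, state, constraint):
    """Replay provisionally accepted operations from all partitions in
    timestamp order, rejecting those that would break the constraint."""
    rejected = []
    for op in sorted((op for log in partition_logs for op in log),
                     key=lambda op: op.ts):
        candidate = dict(state, **{op.obj: state.get(op.obj, 0) + op.delta})
        if constraint(candidate):
            state = candidate
        else:
            rejected.append(op)    # must be reported to the client as undone
    return state, rejected
```

For instance, with an initial state `{"x": 5}`, a non-negativity constraint, and logs `[Op(1, "x", -3), Op(3, "x", -4)]` and `[Op(2, "x", -2)]` from two partitions, replay leaves `x = 0` and the last operation is rejected.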
The goal of the paper is to formally define mechanisms that support the
above continuous service in presence of (multiple) partitions, and satisfactorily
create a new partition upon recovery from the fault. For a system that has a considerable portion of its integrity constraints classified as non-critical this should
intuitively increase availability despite partitions. Also, the average latency for
servicing clients should decrease as some client requests that would otherwise
be suspended or considerably delayed if the system were to halt upon partitions
are now serviced in a degraded mode.
Figure 1 presents the modes of a system in presence of partition faults. The
system is available in degraded mode except for operations for which the integrity
constraints are critical so that they cannot accept the risk of being inconsistent
during partitions (these are not performed at all in degraded mode). The system
is also partially available during reconciling mode; but there is a last short stage
within reconciliation (installing state) during which the system is unavailable.
[Figure 1 depicts the system modes and their transitions: Normal mode (fully available) moves to Degraded mode (partially available) on a Partition event; Reunify takes the system to Reconciling mode (partially available); Install enters the short Installing state (unavailable); and Stop returns the system to Normal mode.]

Fig. 1. System modes
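The mode changes of Fig. 1 can be written down as a small transition table. The sketch below is our reading of the figure (mode and event names come from the figure; the self-loop on degraded mode reflects the cascading partition faults allowed by the fault model):

```python
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    DEGRADED = auto()
    RECONCILING = auto()
    INSTALLING = auto()

AVAILABILITY = {Mode.NORMAL: "fully available",
                Mode.DEGRADED: "partially available",
                Mode.RECONCILING: "partially available",
                Mode.INSTALLING: "unavailable"}

TRANSITIONS = {
    (Mode.NORMAL,      "partition"): Mode.DEGRADED,
    (Mode.DEGRADED,    "partition"): Mode.DEGRADED,   # cascading partitions
    (Mode.DEGRADED,    "reunify"):   Mode.RECONCILING,
    (Mode.RECONCILING, "install"):   Mode.INSTALLING,
    (Mode.INSTALLING,  "stop"):      Mode.NORMAL,
}

def step(mode, event):
    """Apply one event; events undefined in a mode leave it unchanged."""
    return TRANSITIONS.get((mode, event), mode)
```

Walking the full cycle partition → reunify → install → stop from normal mode returns the system to normal mode, matching the figure.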
In earlier work we have formalised the reconciliation process in a simple
model and experimentally studied three reconciliation algorithms in terms of
their influence on service outage duration [4]. A major assumption in that work
was that no service was provided during the whole reconciliation process. Simulations showed that the drawback of the ‘non-availability’ assumption can be
severe in some scenarios; namely, the time taken to reconcile could be long enough
that the non-availability of services during this interval would be almost as
bad as having no degraded service at all (yielding no gain in overall availability).
In this paper we investigate the implications of continued service delivery
during the reconciliation process. This implies that we need to formalise a more
refined protocol that keeps providing service to clients in parallel with reconciliation (and the potential replaying of some operations). The algorithms are modelled
in timed I/O automata, which naturally model multiple partition faults occurring
in a sequence (so-called cascading effects). More specifically, the fault model
allows multiple partition faults in a sequence before a network is reunified, but
no partitions occur during reconciliation. We also exclude crash faults during
reconciliation in order to keep the models and proofs easier to convey. Crash
faults can be accommodated using existing checkpointing approaches [5] with
no known effects on the main results of the paper. Furthermore, we investigate correctness and termination properties of this more refined reconciliation protocol.
The proofs use admissible timed traces of timed I/O automata.
## 3 Preliminaries
This section introduces the concepts needed to describe the reconciliation protocol and its properties. We define the necessary terms, such as object, partition,
and replica, as well as consistency criteria for partitions.
3.1 Objects
For the purpose of formalisation we associate data with objects. Implementation-wise, data can be maintained in databases and accessed via database managers.
Definition 1. An object o is a triple o = (S, O, T ) where S is the set of possible
states, O is the set of operations that can be applied to the object state and
T ⊆ S × O × S is a transition relation on states and operations.
We assume all operation sets to be disjoint, so that every operation is associated with exactly one object.
Transitions from a state s to a state s′ will be denoted by s ⇝α s′, where
α = ⟨op, k⟩ is an operation instance with op ∈ O, and k ∈ IN denotes the unique
invocation of operation op at some client.
Definition 2. An integrity constraint c is a predicate over multiple object states.
Thus, c ⊆ S1 × S2 × . . . × Sn where n is the number of objects in the system.
Intuitively, object operations should only be performed if they do not violate
integrity constraints.
A distributed system with replication has multiple replicas for every object
located on different nodes in the network. As long as no failures occur, the
existence of replicas has no effect on the functional behaviour of the system.
Therefore, the state of the system in the normal mode can be modelled as a set
of replicas, one for each object.
Definition 3. A replica r for object o = (S, O, T ) is a triple r = (L, s[0], s[m])
where the log L = ⟨α1 . . . αm⟩ is a sequence of operation instances defined over
O. The initial state is s[0] ∈ S, and s[m] ∈ S is a state such that s[0] ⇝α1 · · · ⇝αm s[m].
The log can be considered as the record of operations since the last checkpoint,
which also recorded the (initial) state s[0].
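Definitions 1 and 3 can be mirrored in code: a replica carries a checkpointed initial state and a log of operation instances, and its current state is obtained by replaying the log. A minimal sketch under our own simplified encoding (class and field names are ours, not from the formal model):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class OpInstance:
    """An operation instance ⟨op, k⟩: op transforms a state and k is the
    unique invocation number (our own rendering of Definition 3)."""
    op: object   # a callable S -> S, drawn from the transition relation T
    k: int

@dataclass
class Replica:
    """A replica (L, s0, sm): a checkpointed initial state plus a log;
    the current state sm is obtained by replaying the log."""
    initial_state: object
    log: list = field(default_factory=list)   # ⟨α1 ... αm⟩

    def current_state(self):
        s = self.initial_state
        for alpha in self.log:                # s0 ⇝α1 ... ⇝αm sm
            s = alpha.op(s)
        return s

# Example object: an account balance with deposit/withdraw operations.
r = Replica(initial_state=100)
r.log.append(OpInstance(op=lambda s: s + 50, k=1))   # deposit 50
r.log.append(OpInstance(op=lambda s: s - 30, k=2))   # withdraw 30
```

Replaying the log yields 120 here; an empty log leaves the replica at its checkpointed state.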
We consider partitions that have been operating independently, and we assume the nodes in each partition to agree on one primary replica for each object.
This agreement is typically provided by the middleware. Moreover, we assume that
important aspect of a partition is not how the actual nodes in the network are
connected but the replicas whose states have been updated separately and need
to be reconciled. Thus, the state of each partition can be modelled as a set of
replicas where each object is uniquely represented.
Definition 4. A partition p is a set of replicas r such that if ri, rj ∈ p are both
replicas for object o then ri = rj.
The state of a partition p = {(L1, s1[0], s1), . . ., (Ln, sn[0], sn)} consists of the
states of the replicas ⟨s1, . . ., sn⟩. Transitions over object states can now be naturally extended to transitions over partition states.
Definition 5. Let α = ⟨op, k⟩ be an operation instance for some invocation k
of operation op. Then s[j] ⇝α s[j+1] is a partition transition iff there is an object
oi such that si ⇝α si′ is a transition for oi, s[j] = ⟨s1, . . ., si, . . ., sn⟩ and s[j+1] =
⟨s1, . . ., si′, . . ., sn⟩.
We denote by Apply(α, P ) the result of applying operation instance α at
some replica in partition P, giving a new partition state and a new log for the
affected replica.
3.2 Order
So far we have not introduced any concept of order, except that a state is always
the result of operations performed in some order. When we later consider
the problem of creating new states from operations that have been performed
in different partitions, we must be able to determine in what order (if any) the
operations must be replayed.
At this point we will merely define the existence of a strict partial order
relation over operation instances. Later, in Sect. 4.2 we explain the philosophy
behind choosing this relation.
Definition 6. The relation → is an irreflexive, transitive relation over the operation instances obtained from the operations O1 ∪ . . . ∪ On.
In Definition 8 we will use this ordering to define correctness of a partition
state. Note that the ordering relation induces an ordering on states along the
time line whereas the consistency constraints relate the states of various objects
at a given “time point” (a cut of the distributed system).
3.3 Consistency
Our reconciliation protocol will take a set of partitions and produce a new partition. As there are integrity constraints on the system state and order dependencies on operations, a reconciliation protocol must make sure that the resulting
partition is correct with respect to both of these requirements. This section defines consistency properties for partitions.
Definition 7. A partition state s = ⟨s1, . . ., sn⟩ for a partition
P = {(L1, s1[0], s1), . . ., (Ln, sn[0], sn)} is constraint consistent, denoted cc(P ), iff
for all integrity constraints c it holds that s ∈ c.
Next we define a consistency criterion for partitions that also takes into account the order requirements on operations in logs. Intuitively we require that
there is some way to construct the current partition state from the initial state
using all the operations in the logs. Moreover, all the intermediate states should
be constraint consistent, and the operation ordering must follow the ordering restrictions. We will use this correctness criterion in the evaluation of our reconciliation
protocol.
Definition 8. Let P = {(L1, s1[0], s1), . . ., (Ln, sn[0], sn)} be a partition, and let s[k]
be the partition state. The initial partition state is s[0] = ⟨s1[0], . . ., sn[0]⟩. We say that
the partition P is consistent if there exists a sequence of operation instances
L = ⟨α1, . . ., αk⟩ such that:
1. α ∈ Li ⇒ α ∈ L
2. s[0] ⇝α1 · · · ⇝αk s[k]
3. Every s[j] ∈ {s[0], . . ., s[k]} is constraint consistent
4. αi → αj ⇒ i < j
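For small logs, Definition 8 can be checked by brute force: search for an interleaving of the logged operations that replays to the current state, respects →, and keeps every intermediate state constraint consistent. A sketch under our own simplified encoding (the partition state is a tuple, one entry per object; names are ours):

```python
from itertools import permutations

def is_consistent(init, ops, order, constraint, final):
    """Brute-force check of Definition 8 for small logs.
    ops: list of (name, object_index, fn) operation instances.
    order: set of (a, b) pairs meaning a -> b (a must precede b).
    constraint: predicate over the whole partition state tuple."""
    for seq in permutations(ops):
        names = [name for name, _, _ in seq]
        # condition 4: the replay must respect the partial order ->
        if any(names.index(a) > names.index(b)
               for a, b in order if a in names and b in names):
            continue
        # conditions 2 and 3: replay, checking every intermediate state
        state = list(init)
        ok = constraint(tuple(state))
        for _, i, fn in seq:
            if not ok:
                break
            state[i] = fn(state[i])
            ok = constraint(tuple(state))
        if ok and tuple(state) == final:
            return True
    return False
```

With two objects and the constraint that the states sum to a non-negative value, an ordering restriction can rule out the only constraint-consistent replay, making the partition inconsistent.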
## 4 Application-Middleware Dependencies
In Sect. 3 we introduced integrity constraints and an order relation between
operations. These concepts are used to ensure that the execution of operations
is performed according to the clients’ expectations. In this section we will further
elaborate on these two concepts, and briefly explain why they are important for
reconciliation.
Because the system continues to provisionally serve requests in
degraded mode, the middleware has to start a reconciliation process when the
system recovers from link failures (i.e. when the network is physically reunified).
At that point in time there may be several conflicting states for each object
since write requests have been serviced in all partitions. In order to merge these
states into one common state for the system we will have to replay the performed
operations (that are stored in the logs of each replica). Some operations may not
satisfy integrity constraints when multiple partitions are considered, and they
may have to be rejected (seen from a client perspective, undone). The replay
starts from the last common state (i.e. from before the partition fault occurred)
and iteratively builds up a new state. Note that the replay of an operation
instance may potentially take place in a different state compared to that where
the operation was originally applied in the degraded mode.
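The replay-from-the-last-common-state idea can be illustrated with a toy merge of two partition logs (the scenario and names are our own; the real protocol additionally respects the order → and works over full partition states):

```python
def reconcile(initial, logs, constraint):
    """Toy replay from the last common (pre-partition) state.
    logs: list of per-partition logs of (name, fn) pairs.
    Operations violating the integrity constraint are rejected,
    i.e. reported to clients as "undone"."""
    state = initial
    accepted, rejected = [], []
    for log in logs:
        for name, fn in log:
            candidate = fn(state)
            if constraint(candidate):
                state = candidate
                accepted.append(name)
            else:
                rejected.append(name)
    return state, accepted, rejected

# Account balance 100; each of two partitions provisionally accepted a
# withdrawal of 80 during degraded mode.
state, acc, rej = reconcile(
    100,
    [[("w1", lambda s: s - 80)], [("w2", lambda s: s - 80)]],
    lambda s: s >= 0,   # non-critical constraint: no overdraft
)
```

Each withdrawal was locally consistent, but replaying both against the common state violates the constraint, so the second is rejected and its client is notified that the operation was undone.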
4.1 Integrity Constraints
Since some operations will have to be replayed, we need to consider the conditions
required so that replaying an operation in a different state than the one it was
originally executed in does not cause any discrepancies. We assume that such
conditions are indeed captured by integrity constraints.
In other words, the middleware expects that an application writer has created the needed integrity constraints such that replaying an operation during
reconciliation is harmless as long as the constraint is satisfied, even if the state
on which it is replayed is different from the state in which it was first executed.
That is, there should not be any implicit conditions that are checked by the
client at the invocation of the operation. In such a case it would not be possible
for the middleware to recheck these constraints upon reconciliation.
As an example, consider withdrawal from a credit account. It is acceptable to
allow a withdrawal as long as the account balance provides coverage; it
is not essential that the balance have a particular value when the withdrawal is allowed. Recall that an operation for which a later rejection is not acceptable
from an application point of view should be associated with a critical constraint
(and thereby not applied during a partition at all). An example of such an operation
would be the termination of a credit account.
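The critical/non-critical distinction amounts to a guard in the degraded-mode request handler. A hypothetical sketch (the operation names and classification table are ours):

```python
# Hypothetical classification of operations: in degraded mode, operations
# with critical constraints are refused outright, since a later "undone"
# notification would be unacceptable for them.
CRITICALITY = {
    "withdraw": "non-critical",        # may be provisionally accepted
    "terminate_account": "critical",   # must wait for normal mode
}

def degraded_mode_accepts(op_kind):
    """Return whether an operation may be provisionally accepted while
    the system is partitioned."""
    return CRITICALITY[op_kind] != "critical"
```

A withdrawal is provisionally accepted (and may later be undone), while terminating the account is simply not performed during a partition.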
4.2 Expected Order
To explain the notion of expected order we will first consider a system in normal
mode and see what kind of execution order is expected by the client. Then
we will require the same expected order to be guaranteed by the system when
performing reconciliation. In our scenarios we will assume that a client who
invokes two operations α and β in sequence without receiving a reply between
them does not have any ordering requirements on the invocations. Then the
system need not guarantee that the operations are executed in any particular
order. This is true even if the operations were invoked on the same object.
Now assume that the client first invokes α and does not invoke β until it has
received a reply for α confirming that α has been executed. Then the client knows
that α is executed before β. The client process therefore assumes an ordering
between the execution of α and β, because the event of receiving a
reply for α precedes the event of invoking β. This is the order that we want to
capture with the relation → from Definition 6. When the reconciliation process
replays the operations it must make sure that this expected order is respected.
This induced order need not be specified at the application level. It can be
captured by a client-side front end within the middleware, and reflected in a
tag for the invoked operations. Thus, every operation is piggybacked with information about which other operations must precede it when it is later replayed.
This information is derived from the requests that are sent by the client and the
received replies. Note that it is only necessary to attach the IDs of the immediate
predecessors so the overhead will be small.
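The client-side front end described above can be sketched as follows (class and method names are ours; this is a simplified rendering, not the paper's implementation): each invocation is tagged with the IDs of its immediate predecessors, derived purely from the requests sent and the replies received.

```python
class FrontEnd:
    """Sketch of a client-side middleware front end that piggybacks, on
    every invoked operation, the IDs of its immediate predecessors in the
    expected order ->."""
    def __init__(self):
        self._frontier = set()   # replied ops not yet dominated by a later op
        self._next_id = 0

    def invoke(self, op):
        self._next_id += 1
        # the new operation must be replayed after every op in the frontier
        return {"id": self._next_id, "op": op,
                "preds": frozenset(self._frontier)}

    def reply_received(self, msg):
        # msg transitively dominates its own predecessors
        self._frontier -= msg["preds"]
        self._frontier.add(msg["id"])
```

Two back-to-back invocations get no ordering between them; waiting for a reply before the next invocation induces one, and only the IDs of immediate predecessors are attached, keeping the overhead small.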
## 5 The Reconciliation Protocol
In this section we will describe the reconciliation protocol in detail using timed
I/O automata. However, before going into details we provide a short overview of
the idea behind the protocol. The protocol is composed of two types of processes:
a number of replicas and one reconciliation manager.
The replicas are responsible for accepting invocations from clients and sending logs to the reconciliation manager during reconciliation. The reconciliation
manager is responsible for merging replica logs that are sent during reconciling
mode. It is activated when the system is reunified and eventually sends an install message with the new partition state to all replicas. The new partition state
includes empty logs for each replica.
The reconciliation protocol starts with one state per partition and is faced with
the task of merging a number of operations that have been performed in different
partitions, while preserving constraint consistency and respecting the expected
ordering of operations. In parallel with this process, the protocol should take care
of operations that arrive during the reconciliation phase. Note that there may be
unreconciled operations in the logs that should be executed before the incoming
operations that arrive during reconciliation.
The state that is being constructed in the reconciliation manager may not yet
reflect all the operations that are before (→) the incoming operations. Therefore
the only state to which an incoming operation can be applied is one of the
partition states from the degraded mode. In other words, we need to execute
the new operations as if the system were still in degraded mode. In order to do
this, we maintain virtual partitions while the reconciliation phase lasts.
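Request handling against a virtual partition can be sketched as follows (our own simplified rendering of the replica behaviour described in Sect. 5.2; names are ours): an incoming operation is applied to the virtual partition state inherited from degraded mode, logged for the manager, and its reply is withheld until the manager acknowledges the log entry.

```python
class ReconcilingReplica:
    """Sketch of request handling in reconciling mode: operations are
    applied to the virtual partition state VP and replied to only once
    the reconciliation manager acknowledges the log entry."""
    def __init__(self, degraded_state, constraint):
        self.vp = degraded_state   # virtual partition state, from degraded mode
        self.constraint = constraint
        self.outbox = []           # log entries scheduled for the manager
        self.pending = set()       # applied ops awaiting a logAck
        self.to_reply = set()      # ops that may now be replied to

    def invoke(self, op_id, fn):
        candidate = fn(self.vp)
        if self.constraint(candidate):   # execute as if still in degraded mode
            self.vp = candidate
            self.outbox.append(op_id)    # immediately schedule a log message
            self.pending.add(op_id)      # reply withheld until logAck

    def log_ack(self, acked):
        replies = self.pending & set(acked)
        self.to_reply |= replies
        self.pending -= replies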
5.1 Reconciliation Manager
In Algorithm 1, the variable mode represents the mode of the reconciliation
process and is basically the same as the system modes described in Fig. 1, except
that the normal and degraded modes are collapsed into an idle mode for the
reconciliation manager, which is its initial mode of operation.
When a reunify action is activated the reconciliation manager goes to reconciling mode. Moreover, the variable P, which represents the partition state, is
initialised with the pre-partition state, and the variable opset that will contain
all the operations to replay is set to empty. Now the reconciliation process starts
waiting for the replicas to send their logs and the variable awaitedLogs is set to
contain all replicas that have not yet sent their logs.
Next, we consider the action receive(⟨“log”, L⟩)iM, which will be activated
when some replica ri sends its operation log. This action will add logged operations to opset and to ackset[i], where the latter is used to store acknowledgement
messages that should be sent back to replica ri. The acknowledgement messages are
sent by the action send(⟨“logAck”, ackset[i]⟩)Mi. When logs have been received
from all replicas (i.e. awaitedLogs is empty) then the manager can proceed and
start replaying operations. A deadline will be set on when the next handle action
must be activated (this is done by setting last(handle)).
The action handle(α) is an internal action of the reconciliation process that
will replay the operation α (which is minimal according to → in opset) in the
reconciled state that is being constructed. The operation is applied if it results
in a constraint consistent state.
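The repeated handle(α) action can be paraphrased in code (our own rendering of Algorithm 1; the function and parameter names are ours): pick an operation that is minimal with respect to →, apply it only if the result is constraint consistent, and otherwise drop it as rejected.

```python
def replay_all(opset, preds, state, apply_op, cc):
    """Sketch of repeated handle(α) actions: replay each ->-minimal
    operation if the result is constraint consistent, reject otherwise.
    preds maps each op to the set of ops that must precede it (->)."""
    opset = set(opset)
    replayed = []
    while opset:
        # some α with no β ∈ opset such that β -> α
        alpha = next(op for op in opset
                     if not (preds.get(op, set()) & opset))
        candidate = apply_op(alpha, state)
        if cc(candidate):
            state = candidate
            replayed.append(alpha)
        opset.remove(alpha)   # rejected operations are reported as "undone"
    return state, replayed
```

Since → is a strict partial order, a minimal element always exists in a non-empty opset, so the loop makes progress on every iteration.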
As we will show in Sect. 6.2, there will eventually be a time when opset is
empty, at which point M will enable broadcast(“stop”)M. This will tell all replicas to
stop accepting new invocations. Moreover, M will set the mode to installingState
and wait for all replicas to acknowledge the stop message. This is done to guarantee that no messages remain untreated in the reconciliation process. Finally,
when the manager has received acknowledgements from all replicas it will broadcast an install message with the reconciled partition state and enter idle mode.
5.2 Replica Process
A replica process (see Algorithm 2) is responsible for receiving invocations from
clients and for sending logs to M. We will proceed by describing the states and
actions of a replica process. First note that a replica process can be in four different modes (normal, degraded, reconciling, and unavailable), which correspond
to the system modes of Fig. 1.
In this paper we do not explicitly model how updates are replicated from
primary replicas to secondary replicas. Instead, we introduce two global shared
variables that are accessed by all replicas, provided that they are part of the
same group. The first shared variable P [i] represents the partition for the group
with ID i and it is used by all replicas in that group during normal and degraded
mode. The group ID is assumed to be delivered by the membership service.
During reconciling mode the group-ID will be 1 for all replicas, since there
is only one partition. However, as we explained at the
beginning of Sect. 5, the replicas must maintain virtual partitions to service
requests during reconciliation. The shared variable VP[j] is used to represent
the virtual partition for group j which is based on the partition that was used
during degraded mode.
-----
Algorithm 1 Reconciliation manager M
States
mode ∈{idle, reconciling, installingState} ← idle
P ← {(⟨⟩, s1[0], s1[0]), . . ., (⟨⟩, sn[0], sn[0])} /* Output of protocol: constructed partition */
opset /* Set of operations to reconcile */
awaitedLogs /* Replicas to wait for sending a first log message */
stopAcks /* Number of received stop “acks”*/
ackset[i] ←∅ /* Log items from replica i to acknowledge*/
now ∈ IR[0+]
last(handle) ←∞ /* Deadline for executing handle */
last(stop) ←∞ /* Deadline for sending stop */
last(install) ←∞ /* Deadline for sending install */
Actions
Input reunify(g)M
Eff: mode ← reconciling
P ← {(⟨⟩, s1[0], s1[0]), . . ., (⟨⟩, sn[0], sn[0])}
opset ←∅
awaitedLogs ←{All replicas}
Input receive(⟨“log”, L⟩)iM
Eff: opset ← opset ∪ L
ackset[i] ← ackset[i] ∪ L
if awaitedLogs ≠ ∅
awaitedLogs ← awaitedLogs \ {i}
else
last(handle) ←
min(last(handle), now + dhan)
Output send(⟨“logAck”, ackset[i]⟩)Mi
Eff: ackset[i] ← ∅

Internal handle(α)
Pre: awaitedLogs = ∅
mode = reconciling
α ∈ opset
∄β ∈ opset : β → α
Eff: if cc(Apply(α, P ))
P ← Apply(α, P )
last(handle) ← now + dhan
opset ← opset \ {α}
if opset = ∅
last(stop) ← now + dact
Output broadcast(“stop”)M
Pre: opset = ∅
awaitedLogs = ∅
Eff: stopAcks ← 0
mode ← installingState
last(handle) ←∞
last(stop) ←∞
Output broadcast(⟨“install”, P ⟩)M
Pre: mode = installingState
stopAcks = m · n
Eff: mode ← idle
last(install) ← ∞

Input receive(⟨“stopAck”⟩)iM
Eff: stopAcks ← stopAcks + 1
if stopAcks = m · n
last(install) ← now + dact
Timepassage v(t)
Pre: now + t ≤ last(handle)
now + t ≤ last(stop)
now + t ≤ last(install)
Eff: now ← now + t
During normal mode replicas apply operations that are invoked through the
receive(⟨“invoke”, α⟩)cr action if they result in a constraint consistent partition.
The set toReply is extended with every applied operation that should be replied
to by the action send(⟨“reply”, α⟩)rc.
A replica leaves normal mode and enters degraded mode when the group
membership service sends a partition message with a new group-ID. The replica
will then copy the contents of the previous partition representation to one that
will be used during degraded mode. Implicit in this assignment is the determination of one primary per partition for each object in the system (as provided
by a combined name service and group membership service). The replica will
continue accepting invocations and replying to them during degraded mode.
When a replica receives a reunify message it will take the log of operations
served during degraded mode (the set L) and send it to the reconciliation manager M by the action send(⟨“log”, L⟩)rM. In addition, the replica will enter
reconciling mode and copy the partition representation to a virtual partition
representation. The latter will be indexed using virtual group-ID vg which will
be the same as the group-ID used during degraded mode. Finally, a deadline will
be set for sending the logs to M .
The replica will continue to accept invocations during reconciliation mode
with some differences in handling. First of all, the operations are applied to a
virtual partition state. Secondly, a log message containing an applied operation is
immediately scheduled to be sent to M . Finally, the replica will not immediately
reply to the operations. Instead it will wait until the log message has been
acknowledged by the reconciliation manager and receive(⟨“logAck”, L⟩)Mr is
activated. At that point, any operation whose reply was pending and for which a logAck
has been received can be replied to (i.e., added to the set toReply).
At some point the manager M will send a stop message, which will make the
replica go into unavailable mode and send a stopAck message. During this
mode no invocations will be accepted until an install message is received. Upon
receiving such a message the replica will install the new partition representation
and once again go into normal mode.
## 6 Properties of the Protocol
The goal of the protocol is to restore consistency in the system. This is achieved
by merging the results from several different partitions into one partition state.
The clients have no control over the reconciliation process and in order to guarantee that the final result does not violate the expectations of the clients we need
to assert correctness properties of the protocol. Moreover, as there is a growing
set of unreconciled operations we need to show that the protocol does not get
stuck in reconciliation mode for ever.
In this section we will show that (1) the protocol terminates in the sense that
the reconciliation mode eventually ends and the system proceeds to normal mode
(2) the resulting partition state which is installed in the system is consistent in
the sense of Definition 8.
Algorithm 2 Replica r
Shared vars
P [i] ← {(⟨⟩, s1[0], s1[0]), . . ., (⟨⟩, sn[0], sn[0])}, for i = 1 . . . N /* Representation for partition i,
before reunification */
VP[i], for i = 1 . . . N /* Representation for virtual partition i, after reunification */
States
mode ∈ {normal, degraded, reconciling, unavailable} ← normal
g ∈{1 . . . N } ← 1 /* Group identity (supplied by group membership service) */
vg ∈{1 . . . N } ← 1 /* Virtual group identity, used between reunification and install */
L ←∅ /* Set of log messages to send to reconciliation manager M*/
toReply ←∅ /* Set of operations to reply to */
pending ←∅ /* Set of operations to reply to when logged */
enableStopAck /* Boolean to signal that a stopAck should be sent */
last(log) ← ∞ /* Deadline for next send(⟨“log”, . . .⟩) action */
last(stopAck) ← ∞ /* Deadline for next send(⟨“stopAck”, . . .⟩) action */
now ∈ IR[0+]
Actions
Input partition(g′)r
Eff: mode ← degraded
P [g′] ← P [g]
g ← g′
Input receive(⟨“invoke”, α⟩)cr
Eff: switch(mode)
normal | degraded ⇒
if Apply(α, P [g]) is consistent
P [g] ← Apply(α, P [g])
toReply ← toReply ∪ {⟨α, c⟩}
reconciling ⇒
if Apply(α, VP[vg]) is consistent
VP[vg] ← Apply(α, VP[vg])
L ← L ∪ {α}
last(log) ← min(last(log), now + dact)
pending ← pending ∪ {⟨α, c⟩}
Input receive(⟨“logAck”, L⟩)Mr
Eff: replies ←{⟨α, c⟩∈ pending | α ∈ L}
toReply ← toReply ∪ replies
pending ← pending \ replies
Input receive(“stop”)Mr
Eff: mode ← unavailable
enableStopAck ← true
last(stopAck) ← now + dact
Input receive(⟨“install”, P ′⟩)Mr
Eff: P [g] ← P ′ /* g = 1 */
mode ← normal
Input reunify(g′)r
Eff: L ← Lr where ⟨Lr, sr[0], sr⟩ ∈ P
mode ← reconciling
vg ← g
VP[vg] ← P [g]
g ← g′
last(log) ← now + dact
Output send(⟨“log”, L⟩)rM
Pre: mode ∈ {reconciling, unavailable}
L ≠ ∅
Eff: L ← ∅
last(log) ← ∞
Output send(⟨“reply”, α⟩)rc
Pre: ⟨α, c⟩∈ toReply
Eff: toReply ← toReply \ {⟨α, c⟩}
Output send(⟨“stopAck”⟩)rM
Pre: enableStopAck = true
L = ∅
Eff: enableStopAck ← false
last(stopAck) ← ∞
Timepassage v(t)
Pre: now + t ≤ last(log)
now + t ≤ last(stopAck)
Eff: now ← now + t
6.1 Assumptions
The results rely on a number of assumptions on the system. We assume a partially synchronous system with reliable broadcast. Moreover, we assume that
there are bounds on the duration and rate of partition faults in the network. Finally,
we need to assume some restrictions on the behaviour of the clients, such as the
rate at which invocations are made and the expected order of operations. The
rest of the section describes these assumptions in more detail.
Network Assumptions. We assume that there are two time bounds on the
appearance of faults in the network. TD is the maximal time that the network can
be partitioned. TF is needed to capture the minimum time between two faults.
The relationship between these bounds is important, as operations pile up
during the degraded mode and the reconciliation has to be able to handle them
in the time before the next fault occurs.
We will not explicitly describe all the actions of the network but we will
give a description of the required actions as well as a list of requirements that
the network must meet. The network assumptions are summarised in N1-N6,
where N1, N2, and N3 characterise reliable broadcast which can be supplied by
a system such as Spread[6]. Assumption N4 relates to partial synchrony which
is a basic assumption for fault-tolerant distributed systems. Finally we assume
that faults are limited in frequency and duration (N5,N6) which is reasonable,
as otherwise the system could never heal itself.
N1 A receive action is preceded by a send (or broadcast) action.
N2 A sent message is not lost unless a partition occurs.
N3 A sent broadcast message is either received by all in the group or a partition
occurs and no process receives it.
N4 Messages arrive within a delay of dmsg (including broadcast messages).
N5 After a reunification, a partition occurs after an interval of at least TF.
N6 Partitions do not last for more than TD.
Client Assumptions. In order to prove termination and correctness of the
reconciliation protocol we need some restrictions on the behaviour of clients.
C1 The minimum time between two invoke actions from one client is dinv.
C2 If there is an application-specific ordering between two operations, then the
first must have been replied to before the second was invoked. Formally,
admissible timed system traces must be a subset of ttraces(C2). ttraces(C2)
is defined as the set of sequences such that for all sequences σ in ttraces(C2):
α → β and (send(⟨“invoke”, β⟩)cr, t1) ∈ σ ⇒
∃ (receive(⟨“reply”, α⟩)r′c, t0) ∈ σ for some r′ and t0 < t1.
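Assumption C2 can be checked mechanically on a timed trace: whenever α → β, some reply for α must strictly precede β's invocation. A sketch under our own event encoding (events as (kind, op, time) triples; names are ours):

```python
def satisfies_c2(trace, order):
    """Check assumption C2 on a timed trace.
    trace: list of (kind, op, time) with kind 'invoke' or 'reply'.
    order: set of (alpha, beta) pairs with alpha -> beta."""
    for alpha, beta in order:
        t_reply = min((t for kind, op, t in trace
                       if kind == "reply" and op == alpha), default=None)
        t_invoke = min((t for kind, op, t in trace
                        if kind == "invoke" and op == beta), default=None)
        # C2: a reply for alpha must strictly precede beta's invocation
        if t_invoke is not None and (t_reply is None or t_reply >= t_invoke):
            return False
    return True
```

A trace in which β is invoked before α's reply arrives violates C2, matching the intuition of Sect. 4.2 that only replied-to operations induce expected order.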
In Table 1 we summarise all the system parameters relating to time intervals
that we have introduced so far.
Table 1. Parameter summary
TF Minimal time before a partition fault after a reunify
TD Maximal duration of a partition
dmsg Maximal message transmission time
dinv Minimal time between two invocations from one client
dhan Maximal time between two handle actions within reconciliation manager
dact Deadline for actions
Server Assumptions. As we are concerned with reconciliation and do not
want to go into detail on other responsibilities of the servers or middleware (such
as checkpointing), we will make two assumptions on the system behaviour that
we do not explicitly model. First, in order to prove that the reconciliation phase
ends with the installment of a consistent partition state, we need to assume that
the state from which the reconciliation started is consistent. This is a reasonable
assumption since normal and degraded mode operations always respect integrity
constraints. Second, we assume that the replica logs are empty at the time when
a partition occurs. This is required to limit the length of the reconciliation, as we
do not want to consider logs from the whole lifetime of a system. In practice,
this has to be enforced by implementing checkpointing during normal operation.
A1 The initial state s[0] is constraint consistent (see Definition 7).
A2 All replica logs are empty when a partition occurs.
We will now proceed to prove correctness of the protocol. First we give a
termination proof and then a partial correctness proof.
6.2 Termination
In this section we will prove that the reconciliation protocol terminates, in
the sense that after the network is physically healed (reunified) the reconciliation protocol eventually sends an install message to the replicas with the
reconciled state. As stated in the theorem, it is necessary that the system be able
to replay operations at a higher rate than new operations arrive (reflected in the
ratio q).
Theorem 1. Let the system consist of the model of replicas, and the model
of the reconciliation manager. Assume the conditions described in Sect. 6.1. Assume further that the ratio q between the minimum handling rate 1/dhan and
the maximum interarrival rate for client invocations C · 1/dinv, where C is the
maximum number of clients, is greater than one. Then, all admissible system
traces are in the set ttraces(Installing) of action sequences such that for every
(reunify(g)M, t) there is a (broadcast(⟨“install”, P ⟩)M, t′) in the sequence, with
t < t′, provided that TF > (TD + 7d)/(q − 1) + 9d, where d exceeds dmsg and dact.
[Figure 2 depicts the reconciliation timeline: a partition at t[p] lasts at most TD; reunification reaches M at t[reun]M; logs are collected until t[log]; handling ends at t[e]; the new state is installed at t[inst]; and the next partition occurs no earlier than TF after reunification.]

Fig. 2. Reconciliation timeline
Proof. Consider an arbitrary admissible timed trace γ such that
(reunify(g)M, t[reun]M ) appears in γ. Let all time points t[i] below refer to points in
γ. The goal of the proof is to show that there exists a point t[inst] after t[reun]M, at
which there is an install message appearing in γ.
The timing relation between two partitions and the time line for manager M
can be visualised in Fig. 2 (see N5 and N6). Let t[reun]i denote the time point at
which the reunification message arrives at process i. The reconciliation activity is
performed over three intervals: initialising (TI), handling (TH), and ending (TE).
The proof strategy is to show that the reconciliation activity ends before the
next partition occurs, considering that it takes one message transmission for the
manager to learn about reunification. That is, dmsg + TI + TH + TE < TF.
Let t[log] be the last time point at which a log message containing a pre-reunification log is received from some replica. This is the time point at which
handling (replaying) operations can begin. The handling interval (TH) ends when
the set of operations to replay (opset) is empty. Let this time point be denoted
by t[e].
Initialising:
TI = t[log] − t[reun]M
The latest estimate for t[log] is obtained from the latest time point at which a
replica may receive this reunification message (t[reun]r ) plus the maximum time for
it to react (dact) plus the maximum transmission time (dmsg).
TI ≤ maxr (t[reun]r) + dact + dmsg − t[reun]M
By N4 all reunification messages are received within dmsg.
TI ≤ t[reun]M + dmsg + dact + dmsg − t[reun]M ≤ 2dmsg + dact (1)
Handling: The maximum handling time is characterised by the maximum number of invoked client requests times the maximum handling time for each operation (dhan, see Algorithm 1), times the maximum number of clients C. We
divide client invocations in two categories, those that arrive at the reconciliation
manager before t[log] and those that arrive after t[log].
TH ≤ ([pre-t[log] messages] + [post-t[log] messages]) · C · dhan
The maximum time that it takes for a client invocation to be logged at M is
equal to 2dmsg + dact, consisting of the transmission time from client to replica
and the transmission time from replica to manager as well as the reaction time
for the replica. The worst estimate of the number of post-t^log messages includes
all invocations that were initiated at a client prior to t^log and logged at M after
t^log. Thus the interval of 2dmsg + dact must be added to the interval over which
client invocations are counted.

TH ≤ ( (TD + dmsg + TI)/dinv + (TH + 2dmsg + dact)/dinv ) · C · dhan    (2)
using the earlier constraint for TI in (1). Finally, together with the assumption in
the theorem we can simplify the expression as follows:

TH ≤ (TD + 5dmsg + 2dact)/(q − 1)    (3)
Ending: According to the model of the reconciliation manager M, an empty opset
results in the sending of a stop message within dact. Upon receiving the message at every replica (within dmsg), the replica acknowledges the stop message
within dact. Then the new partition can be installed as soon as all acknowledge
messages are received (within dmsg), but at the latest within dact. Hence TE can
be constrained as follows:

TE = t^inst − t^e ≤ 3dact + 2dmsg    (4)
Final step: Now we need to show that dmsg + TI + TH + TE is less than TF (the time
to the next partition according to N5). From (1), (3), and (4) we have that:

TI + TH + TE ≤ 2dmsg + dact + (TD + 5dmsg + 2dact)/(q − 1) + 3dact + 2dmsg

Given a bound d on the delays dact and dmsg we have:

dmsg + TI + TH + TE ≤ (TD + 7d)/(q − 1) + 9d

which concludes the proof according to the theorem assumptions. ⊓⊔
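To make the bound concrete, the final inequality can be evaluated numerically. The sketch below is ours: the function name, the parameter values, and the assumed TF are illustrative, not taken from the paper.

```python
# Worst-case reconciliation time from Theorem 1, under a common bound d on
# both the message delay (dmsg) and the reaction time (dact).
# All numeric values below are illustrative assumptions, not from the paper.
def reconciliation_bound(T_D, d, q):
    """Bound on dmsg + TI + TH + TE: (T_D + 7d)/(q - 1) + 9d."""
    assert q > 1, "need q > 1, otherwise the handling interval never closes"
    return (T_D + 7 * d) / (q - 1) + 9 * d

# Example: 60 s of degraded mode, a 100 ms delay bound, and q = 4; the result
# must stay below the assumed minimum inter-partition time TF for the
# reconciliation to terminate before the next partition.
bound = reconciliation_bound(T_D=60.0, d=0.1, q=4)
```

With these assumed values the bound is roughly 21 s, comfortably below an assumed TF of 30 s; shrinking q toward 1 (clients invoking nearly as fast as operations can be handled) makes the bound blow up, matching the intuition behind the theorem's assumption.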
6.3 Correctness
As mentioned in Sect. 3.3, the main requirement on the reconciliation protocol
is to preserve consistency. The model of the replicas obviously keeps the partition state consistent (see the action under receive(⟨"invoke", α⟩)cr). The proof
of correctness is therefore about the manager M upholding this consistency
during reconciliation, and especially when replaying actions. Before we go on to
the main theorem on correctness, we present a theorem showing that the ordering
requirements of the application (induced by client actions) are respected by our
models.
-----
Theorem 2. Let the system consist of the model of replicas, and the model of
the reconciliation manager. Assume the conditions described in Sect. 6.1. Define the
set ttraces(Order) as the set of all action sequences with monotonically increasing times with the following property: for any sequence σ ∈ ttraces(Order), if
(handle(α), t) and (handle(β), t′) are in σ, α → β, and there is no
(partition(g), t′′) between the two handle actions, then t < t′. All admissible
timed traces of the system are in the set ttraces(Order).
Proof. We assume α → β, and take an arbitrary timed trace γ belonging to the
admissible timed traces of the system such that (handle(α), t) and (handle(β), t′)
appear in γ and no partition occurs in between them. We are going to show that
t < t′, thus γ belongs to ttraces(Order). The proof strategy is to assume t′ < t
and derive a contradiction.

By the precondition of (handle(β), t′) we know that α cannot be in the opset
at time t′ (see the Internal action in M). Moreover, we know that α must be
in opset at time t because (handle(α), t) requires it. Thus, α must be added to
opset between these two time points, and the only action that can add operations
to this set is receive(⟨"log", . . .⟩)rM. Hence there is a time point t^l at which
(receive(⟨"log", ⟨. . ., α, . . .⟩⟩)rM, t^l) appears in γ and

t′ < t^l < t    (5)
Next consider a sequence of actions that must all be in γ with
t0 < t1 < . . . < t8 < t′.
1. (handle(β), t′)
2. (receive(⟨"log", ⟨. . ., β, . . .⟩⟩, t8)r1M for some r1
3. (send(⟨"log", ⟨. . ., β, . . .⟩⟩, t7)r1M
4. (receive(⟨"invoke", β⟩, t6)cr1 for some c
5. (send(⟨"invoke", β⟩, t5)cr1
6. (receive(⟨"reply", α⟩, t4)cr2 for some r2
7. (send(⟨"reply", α⟩, t3)r2c
8. (receive(⟨"logAck", ⟨. . ., α, . . .⟩⟩, t2)Mr2
9. (send(⟨"logAck", ⟨. . ., α, . . .⟩⟩, t1)Mr2
10. (receive(⟨"log", ⟨. . ., α, . . .⟩⟩, t0)r2M
We show that the presence of each of these actions requires the presence of
the next action in the list above (which is preceding in time).
– (1⇒2) is given by the fact that β must be in opset and that
(receive(⟨"log", ⟨. . ., β, . . .⟩⟩, t8)r1M is the only action that adds operations
to opset.
– (2⇒3), (4⇒5), (6⇒7) and (8⇒9) are guaranteed by the network (N1).
– (3⇒4) is guaranteed since β being in L = ⟨. . ., β, . . .⟩ at r1 implies that some
earlier action has added β to L, and (receive(⟨"invoke", β⟩, t6)cr1 is the only
action that adds elements to L at r1.
– (5⇒6) is guaranteed by C3 together with the fact that α → β.
-----
– (7⇒8) Due to 7, α must be in toReply at r2 at time t3. There are two
actions that set toReply: one under the normal/degraded mode, and one
upon receiving a logAck message from the manager M.
First, we show that α cannot be added to toReply as a result of
receive(⟨"invoke", α⟩)cr2 in normal mode. Since α is being replayed by the
manager ((handle(α), t) appears in γ), there must be a partition between applying α and replaying α. However, no operation that is applied in
normal mode will reach the reconciliation process M, as we have assumed
(A2) that the replica logs are empty at the time of a partition. And since α
belongs to opset in M at time t, it cannot have been applied during normal
mode.
Second, we show that α cannot be added to toReply as a result of
receive(⟨"invoke", α⟩)cr2 in degraded mode. If α was added to toReply in
degraded mode then the log in the partition to which r2 belongs would be
received by M shortly after reunification (which precedes handle operations).
But we have earlier established that α ∉ opset at t′, and hence α cannot
have been applied in degraded mode. Thus α is added to toReply as a result
of a logAck action, and (7⇒8).
– (9⇒10) is guaranteed since α must be in ackset[r2] and it can only be put
there by (receive(⟨"log", ⟨. . ., α, . . .⟩⟩, t0)r2M
We have in (5) established that the received log message that includes α
appeared in γ at time point t^l, with t′ < t^l. This contradicts t0 = t^l < t′, and
concludes the proof. ⊓⊔
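The ordering property defining ttraces(Order) can also be checked mechanically on a finite trace. The following toy sketch is our own simplification (plain tuples instead of the paper's timed-automata formalism): a trace is a time-ordered list of (kind, payload, time) events, and `before` encodes the → relation.

```python
def respects_order(trace, before):
    """Check the ttraces(Order) property of Theorem 2 on a finite trace.
    trace: list of (kind, payload, time) triples with strictly increasing
    times, so index order coincides with time order.
    before: set of pairs (a, b) meaning a -> b (a must be handled first
    unless a partition separates the two handle actions)."""
    for i, (ki, a, ta) in enumerate(trace):
        for j, (kj, b, tb) in enumerate(trace):
            if ki == "handle" and kj == "handle" and (a, b) in before:
                lo, hi = sorted((i, j))
                partition_between = any(
                    trace[k][0] == "partition" for k in range(lo + 1, hi))
                # The property requires t < t' only when no partition
                # occurs between the two handle actions.
                if not partition_between and not ta < tb:
                    return False
    return True
```

For instance, a trace that handles β before α with α → β violates the property, while inserting a partition event between the two handle actions makes the same trace admissible again.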
Theorem 3. Let the set ttraces(Correct) be the set of action sequences with
monotonically increasing times such that if (broadcast(⟨"install", P⟩)M, t^inst) is
in the sequence, then P is consistent according to Definition 8. All admissible
timed executions of the system are in the set ttraces(Correct).
Proof. Consider an arbitrary element σ in the set of admissible timed system
traces. We will show that σ is a member of the set ttraces(Correct). The strategy
of the proof is to analyse the subtraces of σ that correspond to actions of each
component of the system. In particular, the sequence corresponding to actions
in the reconciliation manager M will be of interest.
Let γ be the sequence that contains all actions of σ that are also actions
of the reconciliation manager M (γ = σ|M). It is trivial that for all processes
C ≠ M it holds that σ|C ∈ ttraces(Correct), as there are no install messages
broadcast by any other process. Therefore, if we show that γ is a member of
ttraces(Correct) then σ will also be a member of ttraces(Correct).
We will proceed to show that γ is a member of ttraces(Correct) by performing induction on the number of actions in γ.
Base case: Let P be the partition state before the first action in γ. The model
of the reconciliation manager M initialises P to {(⟨⟩, s^0_1, s^0_1), . . ., (⟨⟩, s^0_n, s^0_n)}.
Therefore, requirements 1, 2 and 4 of Definition 8 are vacuously true and 3 is
given by A1.
-----
Inductive step: Assume that the partition state resulting from action i in γ is
consistent. We will then show that the partition state resulting from action i +1
in γ is consistent. It is clear that the model of the reconciliation manager M does
not affect the partition state except when actions reunify(g)M and handle(α)
are taken. Thus, no other actions need to be considered. We show that reunify
and handle preserve consistency of the partition state.
The action (reunify(g)M, t) sets P to the initial value of P which has been
shown to be consistent in the base case.
The action (handle(α), t) is the interesting action in terms of consistency
for P. We will consider two cases based on whether applying α results in an
inconsistent state or not. Let P^i be the partition state after action i has been
taken.
(1) If Apply(α, P^i) is not constraint consistent then the if-statement in the
action handle is false and the partition state will remain unchanged, and thus
consistent after action i + 1 according to the inductive assumption.
(2) If Apply(α, P^i) is constraint consistent then the partition state P^{i+1} will
be set to Apply(α, P^i). By the inductive assumption there exists a sequence
L leading to P^i. We will show that the sequence L′ = L + ⟨α⟩ satisfies the
requirements for P^{i+1} to be consistent.
Consider the conditions 1–4 in the definition of a consistent partition (Def. 8).
1. By the definition of Apply we know that all replicas in P remain unchanged
except one, which we denote r. So for all replicas ⟨Lj, s^0_j, s_j⟩ ≠ r we know
that β ∈ Lj ⇒ β ∈ L ⇒ β ∈ L′. Moreover, the new log of replica r will
be the same as the old log with the addition of operation α. And since all
elements of the old log for r are in L, they are also in L′. Finally, since α is
in L′, all operations in the log of r leading to P^{i+1} are in L′.
2. Consider the last state s^k = ⟨s1, . . ., sj, . . ., sn⟩ where sj is the state of the
replica that will be changed by applying α. Let s′_j be the state of this replica
in P^{i+1}, which is the result of the transition sj ⇝^α s′_j. By the inductive
assumption we have that s^0 ⇝^{α1} . . . ⇝^{αk} s^k. Then s^0 ⇝^{α1} . . . ⇝^{αk} s^k ⇝^α s^{k+1}, where
s^{k+1} = ⟨s1, . . ., s′_j, . . ., sn⟩ is a partition transition according to Definition 5.
3. By the inductive assumption we know that P^i is consistent and therefore
s^j is constraint consistent for all j ≤ k. Further, since Apply(α, P^i) is constraint
consistent according to (2), s^{k+1} is constraint consistent.
4. The order holds for L according to the inductive assumption. Let t be the
time point for handle(α) in γ. For the order to hold for L′ we need to show that
α ↛ β for all operations β in L. Since β appears in L there must exist a
handle(β) at some time point t′ < t in γ. If α → β, then Theorem 2 would
require t < t′, a contradiction; hence α ↛ β. ⊓⊔
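The handle action whose consistency we have just analysed amounts to a replay loop that applies an operation only when the result stays constraint consistent. The sketch below is our own reading of the model; the names (replay, apply_op, constraint_consistent) are hypothetical, not the paper's pseudocode, and the state is abstracted to any value the supplied callbacks understand.

```python
def replay(opset, P, apply_op, constraint_consistent):
    """Replay logged operations on the merged partition state P.
    Each operation is applied only if the resulting state remains
    constraint consistent; otherwise it is dropped, leaving P unchanged,
    mirroring the if-statement in the handle action."""
    rejected = []
    for op in opset:  # the replay order is assumed to respect ->
        candidate = apply_op(op, P)
        if constraint_consistent(candidate):
            P = candidate
        else:
            rejected.append(op)
    return P, rejected
```

As a toy example with an integer state, an additive apply function, and the constraint "state ≤ 5", replaying ⟨2, 4, 1⟩ from state 0 accepts 2 and 1 but rejects 4, since applying it would violate the constraint.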
## 7 Related Work
In this section we will discuss how the problem of reconciliation after network
partitions has been dealt with in the literature. For more references on related
-----
topics there is an excellent survey on optimistic replication by Saito and Shapiro
[7]. There is also an earlier survey discussing consistency in partitioned networks
by Davidson et al. [8].
Gray et al. [9] address the problem of update everywhere and propose a
solution based on a two-tier architecture and tentative operations. However, they
do not target full network partitions but individual nodes that join and leave
the system (which is a special case of partition). Bayou [10] is a distributed
storage system that is adapted for mobile environments. It allows updates to
occur in a partitioned system. However, the system does not supply automatic
reconciliation in case of conflicts but relies on an application handler to do this.
This is a common strategy for sorting out conflicts, but then the application
writer has to figure out how to solve them. Our approach is fully automatic and
does not require application interaction during the reconciliation process.
Some work has been done on partitionable systems where integrity constraints are not considered, which simplifies reconciliation. Babaoğlu et al. [11]
present a method for dealing with network partitions. They propose a solution
that provides primitives for dealing with shared state. They do not elaborate
on dealing with writes in all partitions except suggesting tentative writes that
can be undone if conflicts occur. Moser et al. [12] have designed a fault-tolerant
CORBA extension that is able to deal with node crashes as well as network
partitions. There is also a reconciliation scheme described in [13]. The idea is
to keep a primary for each object. The state of these primaries are transferred
to the secondaries on reunification. In addition, operations which are performed
on the secondaries during degraded mode are reapplied during the reconciliation
phase. This approach is not directly applicable with integrity constraints.
Most works on reconciliation algorithms dealing with constraints after network partition focus on achieving a schedule that satisfies order constraints.
Fekete et al. [14] provide a formal specification of a replication scheme where
the client can specify explicit requirements on the order in which operations are
to be executed. This allows for a stronger requirement than the well-established
causal ordering [15]. Our concept of ordering is weaker than causal ordering, as
it is limited to one client’s notion of an expected order of execution based on the
replies that the client has received. Lippe et al. [16] try to order operation logs to
avoid conflicts with respect to a before relation. However, their algorithm requires
a large set of operation sequences to be enumerated and then compared. The
IceCube system [17, 18] also tries to order operations to achieve a consistent final
state. However, they do not fully address the problem of integrity constraints
that involve several objects. Phatak et al. [19] propose an algorithm that provides reconciliation by either using multiversioning to achieve snapshot isolation
[20] or using a reconciliation function given by the client. Snapshot isolation is
more pessimistic than our approach and would require a lot of operations to be
undone.
-----
## 8 Conclusions and Future Work
We have investigated a reconciliation mechanism designed to bring a system
that is inconsistent due to a network partition back to a consistent state. As the
reconciliation process might take a considerable amount of time it is desirable
to accept invocations during this period.
We have introduced an order relation that forces the reconciliation protocol
to uphold virtual partitions in which incoming operations can be executed. The
incoming operations cannot be executed on the state that is being constructed,
since the protocol would then have to discard all the operations that the client
expects to have been performed. However, maintaining virtual partitions during
reconciliation makes the set of operations to reconcile larger. Thus, there is
a risk that the reconciliation process never ends.
We have proved that the proposed protocol will indeed result in a stable
partition state given certain timing assumptions. In particular, we need time
bounds for message delays and execution time as well as an upper bound on client
invocation rate. Moreover, we have proved that the result of the reconciliation
is correct based on a correctness property that covers integrity consistency and
ordering of operations.
The current work has not treated the use of network resources by the protocol and has not characterised the middleware overheads. These are interesting
directions for future work. Performing simulation studies would show how much
higher availability is dependent on various system parameters, including the
mix of critical and non-critical operations. Another interesting study would be
to compare the performance with a simulation of a majority partition implementation.
An ongoing project involves implementation of replication and our reconciliation services on top of a number of well-known middlewares, including CORBA
[3]. This will allow evaluation of middleware overhead in this context, and a
measure of enhanced availability compared to the scenario where no service is
available during partitions.
## References
1. Szentivanyi, D., Nadjm-Tehrani, S.: Middleware Support for Fault Tolerance. In:
Middleware for Communications. John Wiley & Sons (2004)
2. Felber, P., Narasimhan, P.: Experiences, strategies, and challenges in building
fault-tolerant corba systems. IEEE Trans. Comput. 53(5) (2004) 497–511
3. DeDiSys: European IST FP6 DeDiSys Project. http://www.dedisys.org (2006)
4. Asplund, M., Nadjm-Tehrani, S.: Post-partition reconciliation protocols for maintaining consistency. In: Proceedings of the 21st ACM/SIGAPP symposium on
Applied computing. (2006)
5. Szentivanyi, D., Nadjm-Tehrani, S., Noble, J.M.: Optimal choice of checkpointing
interval for high availability. In: Proceedings of the 11th Pacific Rim Dependable
Computing Conference, IEEE Computer Society (2005)
6. Spread: The Spread Toolkit. http://www.spread.org (2006)
-----
7. Saito, Y., Shapiro, M.: Optimistic replication. ACM Comput. Surv. 37(1) (2005)
42–81
8. Davidson, S.B., Garcia-Molina, H., Skeen, D.: Consistency in a partitioned network: a survey. ACM Comput. Surv. 17(3) (1985) 341–370
9. Gray, J., Helland, P., O’Neil, P., Shasha, D.: The dangers of replication and a
solution. In: SIGMOD ’96: Proceedings of the 1996 ACM SIGMOD international
conference on Management of data, New York, NY, USA, ACM Press (1996) 173–
182
10. Terry, D.B., Theimer, M.M., Petersen, K., Demers, A.J., Spreitzer, M.J., Hauser,
C.H.: Managing update conflicts in bayou, a weakly connected replicated storage
system. In: SOSP ’95: Proceedings of the fifteenth ACM symposium on Operating
systems principles, New York, NY, USA, ACM Press (1995) 172–182
11. Babaoğlu, Ö., Bartoli, A., Dini, G.: Enriched view synchrony: A programming
paradigm for partitionable asynchronous distributed systems. IEEE Trans. Comput. 46(6) (1997) 642–658
12. Moser, L.E., Melliar-Smith, P.M., Narasimhan, P.: Consistent object replication
in the eternal system. Theor. Pract. Object Syst. 4(2) (1998) 81–92
13. Narasimhan, P., Moser, L.E., Melliar-Smith, P.M.: Replica consistency of corba
objects in partitionable distributed systems. Distributed Systems Engineering 4(3)
(1997) 139–150
14. Fekete, A., Gupta, D., Luchangco, V., Lynch, N., Shvartsman, A.: Eventually-serializable data services. In: PODC ’96: Proceedings of the fifteenth annual ACM
symposium on Principles of distributed computing, New York, NY, USA, ACM
Press (1996) 300–309
15. Lamport, L.: Time, clocks, and the ordering of events in a distributed system.
Commun. ACM 21(7) (1978) 558–565
16. Lippe, E., van Oosterom, N.: Operation-based merging. In: SDE 5: Proceedings
of the fifth ACM SIGSOFT symposium on Software development environments,
New York, NY, USA, ACM Press (1992) 78–87
17. Kermarrec, A.M., Rowstron, A., Shapiro, M., Druschel, P.: The icecube approach
to the reconciliation of divergent replicas. In: PODC ’01: Proceedings of the twentieth annual ACM symposium on Principles of distributed computing, New York,
NY, USA, ACM Press (2001) 210–218
18. Preguica, N., Shapiro, M., Matheson, C.: Semantics-based reconciliation for collaborative and mobile environments. Lecture Notes in Computer Science 2888
(2003) 38–55
19. Phatak, S.H., Nath, B.: Transaction-centric reconciliation in disconnected clientserver databases. Mob. Netw. Appl. 9(5) (2004) 459–471
20. Berenson, H., Bernstein, P., Gray, J., Melton, J., O’Neil, E., O’Neil, P.: A critique of
ansi sql isolation levels. In: SIGMOD ’95: Proceedings of the 1995 ACM SIGMOD
international conference on Management of data, New York, NY, USA, ACM Press
(1995) 1–10
-----
# OCEAN DATA PORTAL: A STANDARDS APPROACH TO DATA ACCESS AND
DISSEMINATION
**Greg Reed[(1)], Robert Keeley[(2)], Sergey Belov[(3)], Nikolay Mikhailov[(3)]**
_(1) Australian Ocean Data Centre Joint Facility, Level 2, Building 89, Garden Island, Potts Point NSW 2011, Australia._
_[Email: greg@metoc.gov.au](mailto:greg@metoc.gov.au)_
_(2) ISDM (Integrated Science Data Management), 1202-200 Kent Street, Ottawa, Ontario K1A 0E6, Canada._
_[Email: Robert.Keeley@dfo-mpo.gc.ca](mailto:Robert.Keeley@dfo-mpo.gc.ca)_
_(3) All-Russian Research Institute of Hydrometeorological Information – World Data Center (RIHMI-WDC),_
_[6, Koroleva Street, Kaluga District, OBNINSK 249035, Russian Federation. Email: nodc@meteo.ru](mailto:nodc@meteo.ru)_
**ABSTRACT**
Timely access to quality data is essential for the
understanding of marine processes. The International
Oceanographic Data and Information Exchange (IODE)
programme, through its distributed network of National
Oceanographic Data Centres (NODCs), is developing
the Ocean Data Portal (ODP) to facilitate seamless
access to oceanographic data and to promote the
exchange and dissemination of marine data and services.
The ODP provides the full range of processes including
data discovery, evaluation and access, and delivers a
standards-based infrastructure that provides integration
of marine data and information across the NODC
network.
The key principle behind the ODP is its
interoperability with existing systems and resources and
the IODE is working closely with the Joint WMO-IOC
(World Meteorological Organization-International
Oceanographic Commission) Technical Commission for
Oceanography and Marine Meteorology (JCOMM) to
ensure the ODP is interoperable with the WMO
Information System (WIS) that will provide access to
marine meteorological and oceanographic data and
information to serve a number of applications, including
climate. The ODP supports the data access requirements
of all IOC programmes areas, including GOOS (Global
Ocean Observing System), HAB (Harmful Algal
Blooms) and the Tsunami warning system as well as
JCOMM.
The diverse data standards and formats that have
evolved within the oceanographic community make data
exchange complex and the IODE community has
recognized standards are critical in defining how and
what data is exchanged. To ensure the interoperability
of data exchanged between the NODCs and the ODP,
the IODE, together with JCOMM, has initiated a
standards process that will support the accreditation and
adoption of core standards by the marine meteorological
and oceanographic communities.
**1.** **DATA SHARING PRINCIPLES**
The Earth's oceans form part of an integrated global
system and to address global issues, such as climate
change, it is essential for scientists to have access to
relevant data, information, and products. The full and
open sharing of datasets is fundamental to ensure the
rapid dissemination of data and information is available
to researchers. International policies for the full and
open exchange of scientific data and information are
advocated by a number of international organizations.
The Intergovernmental Oceanographic Commission
(IOC) of UNESCO (United Nations Educational
Scientific and Cultural Organization) has adopted a
resolution entitled _IOC Oceanographic Data Exchange_
_Policy (Resolution IOC-XXII-6). This policy recognizes_
that the timely, free and unrestricted international
exchange of oceanographic data is essential for the
efficient acquisition, integration and use of ocean
observations. These data are gathered for a wide variety
of purposes including the prediction of weather and
climate, the operational forecasting of the marine
environment, the preservation of life, and the mitigation
of human-induced changes in the marine and coastal
environment. Under this policy, IOC member states
agree to provide timely, free and unrestricted access to
all data, associated metadata and products generated
under the auspices of IOC programmes. In addition,
IOC member states are encouraged to provide free and
unrestricted access to relevant data and associated
metadata from non-IOC programmes that are essential
for application to the preservation of life, beneficial
public use and protection of the ocean environment, the
forecasting of weather, the operational forecasting of the
marine environment, the monitoring and modelling of
climate and sustainable development in the marine
environment [1].
Other international organizations have also adopted
similar policies to encourage the sharing of data. The
World Meteorological Organization (WMO) has
-----
adopted a policy for the international exchange of
meteorological and related data and products. WMO
Resolution 40 provides for the free and unrestricted
sharing of data [2].
The Group on Earth Observations (GEO), which is
coordinating efforts to build a Global Earth Observation
System of Systems, or GEOSS, explicitly acknowledges
the importance of data sharing in achieving the GEOSS
vision and anticipated societal benefits. GEO is
developing a set of high level Data Sharing Principles
which call for the "full and open exchange of data,
metadata, and products shared within GEOSS,
recognizing relevant international instruments and
national policies and legislation." These Principles also
note that "All shared data, metadata, and products will
be made available with minimum time delay and at
minimum cost" [3].
**2.** **INTERNATIONAL OCEANOGRAPHIC DATA**
**AND** **INFORMATION** **EXCHANGE** **(IODE)**
**PROGRAMME**
The Intergovernmental Oceanographic Commission’s
IODE programme was established in 1961 “to enhance
_marine research, exploitation and development by_
_facilitating the exchange of oceanographic data and_
_information between participating Member States and_
_by meeting the needs of users for data and information_
_products”. The objectives of the IODE are:_
(i) to facilitate and promote the exchange of all
marine data and information including metadata,
products and information in real-time, near real
time and delayed mode;
(ii) to ensure the long term archival, management and
services of all marine data and information;
(iii) to promote the use of international standards, and
develop or help in the development of standards
and methods for the global exchange of marine
data and information, using the most appropriate
information management and information
technology;
(iv) to assist Member States to acquire the necessary
capacity to manage marine data and information
and become partners in the IODE network; and
(v) to support international scientific and operational
marine programmes of IOC and WMO and their
sponsor organisations with advice and data
management services.
For nearly 50 years, IOC Member States have been
building and contributing to a network of National
Oceanographic Data Centres (NODCs) through the
IODE Programme. Over this period, IOC Member
States have established 80 oceanographic data centres in
IOC Member States (Fig. 1). The IODE network is
responsible for the collection, quality control, and
archive of many millions of ocean observations, and
these data are made available to the Member States. The
IODE programme encourages free and open access to,
and exchange of, marine scientific and oceanographic
data and information among the relevant institutions and
agencies in the member states and focuses on all ocean
related data including physical, chemical, and biological.
_Figure 1. The IODE network of National_
_Oceanographic Data Centres_
Many NODCs provide web interfaces to allow users to
query and retrieve datasets from localized databases.
However, there is currently no focal point where users
can go to identify and gain access to all available ocean
data and products. To facilitate the timely, consistent
and integrated access to the oceanographic data
holdings of the NODC network, the IODE has initiated
the Ocean Data Portal project which will establish a
single point of access to data collections and inventories
of marine data to support data discovery and access. The
objective of the IODE Ocean Data Portal is to build a
distributed network of oceanographic data centres
enabling the searching and retrieving of datasets.
**3.** **DATA STANDARDS**
One of the objectives of the IODE is “to promote the
_use of international standards, and develop or help in_
_the development of standards and methods for the_
_global exchange of marine data and information, using_
_the most appropriate information management and_
_information technology”. Lack of agreement on_
common standards, formats and practices means that
there is a significant amount of human intervention
needed before data downloaded from one site can be
used by another. In the past, with slow communications
methods and relatively small data volumes, this was not
such a significant issue. With the growth of the internet
for the exchange of data and information there has been
an increased capacity for sending much larger volumes
and almost instantly. This has highlighted the need for
community wide standards for management and
exchange of ocean data simply to improve the efficiency
of data exchange. Interoperability between systems can
-----
only be achieved by the use of agreed standards and
there are a number of internationally accepted standards
which are applicable to oceanographic data. These
include those developed by the International
Organization for Standardization (ISO), the World Wide
Web Consortium (W3C) and the Open Geospatial
Consortium (OGC).
In 2008, the IODE, jointly with JCOMM, sponsored a
meeting to examine the potential for the development
and acceptance of community wide standards for marine
data and information management and exchange [4].
The objective of this meeting was to gain general
agreement and commitment to adopt standards related
to key ocean data management thereby facilitating
exchange between oceanographic institutions. The
meeting developed a process to accept, evaluate and
recommend proposals for community wide standards.
Community participation in the process so far has been
slow and if this standards development process is to be
successful, it is essential for the marine data
management community to play an active role.
**4.** **OCEAN DATA PORTAL**
The objective of the Ocean Data Portal is to facilitate
and promote the exchange and dissemination of marine
data and services through the provision of seamless
access to collections and inventories of marine data
from the NODCs in the IODE network and to allow for
discovery, evaluation and access to data via web
services. This is achieved through a standards-based
infrastructure that provides the integration of marine
data and information from a network of distributed
IODE NODCs as well as the resources from other
participating systems [5].
The key principle of the Ocean Data Portal is
interoperability with existing systems and resources.
Participating IODE data centres agree to accept and
implement a set of interoperable arrangements including
the technical specifications and web services for the
integration and shared use of the metadata, data and
products. This interoperability is achieved through the
use of internationally endorsed standards and best
practice (such as ISO and OGC) and does not require
data centres to change their internal data management
systems.
The ODP is currently deployed in five data centres from
the Black Sea region. Further datasets will be
contributed to ODP by data centres in the USA, Canada,
Australia, UK and the East Asian region by early 2010.
**4.1 ODP Architecture**
The architecture of the Ocean Data Portal consists of
three basic components (Fig. 2):
(i) Data Provider. Provides access to data and
metadata of the local data systems. When the
wrapper is installed in the local data system, the
latter becomes a data source for the distributed
data system. A recent addition to the Data Provider
software is the Light Data Provider function which
offers remote registration of local datasets and
allows deployment of the ODP system without the
need to install software by the data provider.
(ii) Integration Server. Provides registration and
operation status monitoring of the distributed data
sources, harvesting of the discovery metadata in
coordination with Data Provider, management of
the common codes/dictionaries and access to
distributed data sources by ODP services. The
Integration Server also interacts with other systems
(portals) by means of discovery metadata exchange.
(iii) ODP Services. Provides administration, discovery,
viewing, analysis and download. The Data Portal
includes a GIS-based user interface, metadata and
data search, data download and visualisation
components. The ODP services include a number
of W3C and OGC web-services.
_Figure 2. Architecture of the Ocean Data Portal_
**4.2 Functional and technical requirements**
The ODP provides the following functionality:
- distributed marine data infrastructure generation
and operation;
- data discovery and access;
- data provision to end-users;
-----
- user management; and
- system monitoring and reporting
The technical aspects include the provision of metadata
and services for a distributed marine data infrastructure
enabling the interactions among data providers, service
providers and the end-users.
The system allows data interaction whilst avoiding data
reformatting and delocalization so the data always
remains within the data provider’s infrastructure. ODP
allows adjustment of services invoking online
request/response processes and other operations and
also allows chaining of services into more complex ones.
The system supports “subscription” type services and
standing orders (e.g. oil spill monitoring and alerting)
and allows the easy identification of, and access to,
requested services and data, with progress follow-up
until completion. The integration of data and services
from multiple domains is possible to facilitate
exploitation of main synergies and service and data
providers can register, provide and promote their
products to other thematic or regional portals.
ODP is built using open standards which provides the
capability to interact with other data portals which
minimizes the investment required by data and service
providers to build on open standards.
Discussions are currently underway between the ODP
developers and SeaDataNet, a pan-European
infrastructure for marine and ocean data management
(http://www.seadatanet.org), to ensure interoperability
between the two systems, ensuring that data requests
and data delivery are coordinated.
**4.3 Communication requirements**
The Ocean Data Portal supports geographically
distributed marine data infrastructure operations that can
publish and disseminate technical guides and reports to
IODE data centres and other participating centres.
Dissemination of information about the status of all
ODP partners will ensure coordination and cross-communication, and will provide remote software
installations and documentation access. Communication
will also allow ODP partner groups to discuss and
resolve ongoing development issues.
Communication between Integration Server and Data
Provider was improved by web-service creation which
provides existing request-response communication both
with fault-tolerant processing and error catch,
recognition and logging.
Communication across the ODP consists of two
processes:
- Metadata harvesting by the Integration Server from
the Data Providers. These resource descriptions are
exposed to a harvester, which is part of the
Integration Server. This software will regularly (at
any set frequency) check all data centres for new
resource descriptions and download these as
necessary. These descriptions are added to a central
repository that covers all data centres connected to
the distributed system.
- Request on data. Requests are transmitted by HTTP
in encoded form. The HTTP connection between the
Integration Server and the Data Provider remains active
during processing, while the Data Provider performs
request acceptance, validation, execution and response
return. Communication based on web services
provides transactional and fault-tolerant mode. If
errors occur, the Integration Server will immediately
receive a message with an error code and description,
and all errors will be published by the specific web
service by means of the mostError method. If a request
for data was executed successfully, the Data Provider
will invoke the postResult method with a response
message in the transport protocol structure.
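The metadata-harvesting process described above can be sketched roughly as follows. This is a toy, in-memory sketch with hypothetical class and identifier names; a real harvester would poll the Data Providers over HTTP and receive ISO 19115 XML descriptions.

```python
from dataclasses import dataclass, field

@dataclass
class CentralRepository:
    """Stand-in for the central metadata repository kept by the Integration Server."""
    records: dict = field(default_factory=dict)  # resource id -> XML description

def harvest(repository, data_providers):
    """One harvesting pass: poll every Data Provider for resource
    descriptions and download only those not yet in the repository."""
    downloaded = []
    for provider in data_providers:
        for resource_id, description in provider.list_resources():
            if resource_id not in repository.records:
                repository.records[resource_id] = description
                downloaded.append(resource_id)
    return downloaded

class FakeProvider:
    """Stand-in Data Provider; a real one exposes descriptions over HTTP."""
    def __init__(self, resources):
        self._resources = resources
    def list_resources(self):
        return list(self._resources.items())

repo = CentralRepository()
provider = FakeProvider({"sst-blacksea-2009": "<MD_Metadata>...</MD_Metadata>"})
print(harvest(repo, [provider]))  # first pass downloads the new record
print(harvest(repo, [provider]))  # a second pass finds nothing new
```

Running the pass at "any set frequency", as the text puts it, then amounts to calling `harvest` from a scheduler.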
**4.4 Access to distributed marine data**
The Ocean Data Portal provides data and product
dissemination services, which are divided into discovery,
viewing, analysis and download services.
_Discovery service disseminates a data source catalogue_
with descriptions of resources in the form of XML files.
The metadata record is based on ISO 19115. The ODP
service provides user interfaces for data and product
search supported by the catalogue (Fig. 3). The data
source catalogue can be accessed from external systems
directly or alternatively by reformatting into other
metadata structures.
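As an illustration of how a discovery service can filter catalogue records against a spatial search region, here is a toy sketch. The XML layout is a deliberately simplified, namespace-free stand-in for a real ISO 19115 record (which uses the gmd/gco namespaces and a much deeper structure), and the identifier shown is invented.

```python
import xml.etree.ElementTree as ET

# Simplified stand-in for an ISO 19115 metadata record.
RECORD = """
<metadata>
  <identifier>ro-noaa-sst</identifier>
  <title>Sea surface temperature, Black Sea</title>
  <extent west="27.0" east="42.0" south="40.5" north="47.5"/>
</metadata>
"""

def matches_region(record_xml, west, east, south, north):
    """Return the record identifier if its bounding box intersects
    the requested region, else None (a toy discovery-service filter)."""
    root = ET.fromstring(record_xml)
    e = root.find("extent")
    rw, re_, rs, rn = (float(e.get(k)) for k in ("west", "east", "south", "north"))
    intersects = not (re_ < west or rw > east or rn < south or rs > north)
    return root.findtext("identifier") if intersects else None

print(matches_region(RECORD, 30.0, 35.0, 41.0, 46.0))  # -> ro-noaa-sst
print(matches_region(RECORD, -10.0, 0.0, 50.0, 60.0))  # -> None
```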
_Figure 3. Discovery service user interface provides the_
_ability to search the metadata catalogue._
_Viewing service is based on web-based applications_
accessible via the web browser. The services provided
include:
- data search that defines the sampling criteria using a
spatial region, time period, phenomena, platform, etc.;
- access to remote data sources via the Integration
Server, including request status monitoring; and
- processing of transport data files and tabular-graphic
and map visualization of data using standard forms.
_Analysis service has been developed to provide near_
real-time GIS-layer generation from distributed datasets,
with both interactive and fast presentation of
multidisciplinary data and products on a map. It also
includes Web Map Services (WMS) as a viewing
service for data representation on a map. The user can
adjust the composition of the map layers, the number of
maps for viewing and other specifications. The mapping
service enables a joint analysis of data to provide a view
of the spatial variability of marine processes. ODP
-----
renders maps generated by the analysis service using
OpenLayers (Fig. 4) and MapServer (Fig. 5).
_Download service allows the user to download selected_
data to the local computer after viewing. If time
scheduling is required to download data, the user can
register the site for downloading, the list of required
datasets and the sampling criteria (Fig. 6). The transport
data file formats that are available are:
- NetCDF - E2E convention [6]
- ASCII
- XML
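The WMS-based map viewing described in this section boils down to issuing OGC GetMap requests. A minimal sketch of building one follows; the endpoint URL and layer name are illustrative, not actual ODP identifiers.

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layers, bbox, size=(800, 600)):
    """Build a WMS 1.1.1 GetMap request of the kind a viewing
    service issues to render thematic layers on a map."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": ",".join(layers),
        "STYLES": "",
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": str(size[0]),
        "HEIGHT": str(size[1]),
        "FORMAT": "image/png",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical endpoint and layer name, Black Sea bounding box.
url = wms_getmap_url("http://example.org/odp/wms",
                     ["sea_surface_temperature"],
                     (27.0, 40.5, 42.0, 47.5))
print(url)
```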
_Figure 4. OpenLayers is used to display thematic layers_
_generated by the ODP WMS service from near real-time_
_Data Provider datasets._
Selected data can be either downloaded in a single zip-file or viewed using the ODP result viewing service.
_Figure 5. MapServer is used to display thematic layers_
_(shape files) generated by the ODP WMS service from_
_near real-time Data Provider datasets._
_Figure 6. The user is able to download selected data in_
_NetCDF or ASCII format._
**5.** **INTEROPERABILITY WITH OTHER OCEAN**
**DATA SYSTEMS**
Interoperability arrangements for the Ocean Data Portal
are being developed in close cooperation with existing
and developing systems such as the WIS and the
GEOSS.
The WIS provides a single coordinated global
infrastructure for the collection and sharing of
information in support of all WMO and related
international programmes. The WIS consists of three
major components: National Centres (NC), Data
Collection or Product Centres (DCPC), and Global
Information System Centres (GISC). The Ocean Data
Portal will contribute to the WIS as a DCPC. The WIS
will also be the core component of the GEOSS for
weather, water, climate and disaster societal benefit
areas.
-----
GEOSS will be based on existing observing, data
processing, data exchange and dissemination systems
and the implementation of GEOSS will facilitate the
development and availability of shared data, metadata,
and products. GEOSS interoperability will be based on
non-proprietary standards, with preference to formal
international standards. Interoperability will be focused
on interfaces, defining only how system components
interface with each other and thereby minimizing any
impact on affected systems other than where such
affected systems have interfaces to the shared
architecture.
**6.** **SUMMARY**
The Ocean Data Portal will deliver a standards-based
infrastructure to provide integration of marine data and
information from a network of distributed IODE
NODCs as well as the resources from other participating
systems. It will serve to coordinate the view of ocean
data resources with other developing systems such as
WIS and GEOSS. This interoperability will be achieved
through the use of internationally endorsed standards
and it will not be a requirement for data centres to
change their internal data management systems.
**7.** **REFERENCES**
1. Intergovernmental Oceanographic Commission. 2003. IOC
Oceanographic Data Exchange Policy.
http://www.iode.org/index.php?option=com_content&task=view&id=51&Itemid=95.
Accessed: 2009-04-19.
2. World Meteorological Organization. 1995. Resolution 40
(Cg-XII). http://www.wmo.int/pages/about/Resolution40_en.html.
Accessed: 2009-04-19.
3. Group on Earth Observations. 2009. GEO Data Sharing
Principles Implementation.
http://www.earthobservations.org/geoss_dsp.shtml.
Accessed: 2009-04-19.
4. Intergovernmental Oceanographic Commission. 2008.
IODE/JCOMM Forum on Oceanographic Data
Management and Exchange Standards, IOC Project
Office for IODE, Oostende, Belgium, 21-25 January
2008. Oostende, Belgium: IOC/IODE Project Office,
45pp. (IOC Workshop Report No. 206) (English)
5. International Oceanographic Data and Information
Exchange. 2009. Ocean Data Portal.
http://www.oceandataportal.org/. Accessed: 2009-04-19.
6. JCOMM (Joint WMO-IOC Technical Commission on
Oceanography and Marine Meteorology). 2008.
Overview of End-to-End Data Management Technology.
http://data.meteo.ru/e2edm/files/resourcesmodule/@random4240448bb69c9/1237217263_1___E2EDM_overview_bk_gr_mn.pdf
-----
## An Innovative Solution for Cloud Computing Authentication: Grids of EAP-TLS Smart Cards
### Pascal Urien, Estelle Marie, Christophe Kiennert
To cite this version:
#### Pascal Urien, Estelle Marie, Christophe Kiennert. An Innovative Solution for Cloud Com- puting Authentication: Grids of EAP-TLS Smart Cards. ICDT, 2010, Greece. pp.22-27, 10.1109/ICDT.2010.12. hal-00673665
### HAL Id: hal-00673665
https://hal.science/hal-00673665
#### Submitted on 24 Feb 2012
#### HAL is a multi-disciplinary open access archive for the deposit and dissemination of sci- entific research documents, whether they are pub- lished or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
-----
# An Innovative Solution for Cloud Computing Authentication: Grids of EAP-TLS Smart Cards
#### Pascal Urien Estelle Marie Christophe Kiennert
Telecom ParisTech EtherTrust Telecom ParisTech
Pascal.Urien@telecom-paristech.fr Estelle.Marie@EtherTrust.com Christophe.Kiennert@telecom-paristech.fr
**_Abstract_ - The increase of authenticating solutions based on RADIUS servers questions the complexity of their administration, whose security and confidentiality are often at fault, especially within Cloud Computing architectures. More specifically, it raises the concern of server administration in a secure environment for both the access-granting company and its clients. This paper aims to solve this issue by proposing an innovative paradigm based on a grid of smart cards built on a context of SSL smart cards. We believe that EAP-TLS server smart cards offer the security and the simplicity required for an administration based on distributed servers. We specify the design of a RADIUS server in which EAP messages are fully processed by SSL smart cards. We present the scalability of this server linked to smart card grids whose distributed computation manages the concurrence of numerous authenticating sessions. Lastly, we relate the details of the first experimental results obtained with the RADIUS server and an array composed of 32 Java cards, and demonstrate the feasibility and prospective scalability of this architecture.**

**_Keywords- Security, Smart card, EAP, TLS, AAA, RADIUS, Cloud Computing_**

I. INTRODUCTION

Nowadays the RADIUS (Remote Authentication Dial In User Service) protocol [16] is widely used by Internet Service Providers (ISPs) or companies to grant access to network services. While it is very difficult to attack or threaten a RADIUS server if it has been properly implemented and secured, it is rather impossible to tell to which extent those who manage this server can be trusted, especially since the server private key is stored on the server machine and can easily be stolen, or the exchanges eavesdropped, by those who hold the proper rights. This concern is raised by today's Cloud Computing technology, where the distribution of servers can be critical when third parties are in charge of server management and when no one knows for sure who is in charge of the server's credentials, or to which point the persons in charge can be trusted, especially in terms of industrial espionage. The SSL smart card grid has been designed to cope with the inherent security issues which are naturally associated with a distributed architecture such as Cloud Computing, since a unique X509 certificate and the SSL stack are securely embedded in each SSL smart card. In fact, smart cards are reputed for the physical security they provide, considering it is infeasible to easily access their content or their computation. In addition, the fact that the SSL stack is embedded in the smartcard offers an interesting practicality regarding TLS: would a new TLS flaw be discovered and patched, the smartcards can be conveniently loaded with a more secure and up-to-date TLS stack. Lastly, in terms of management, certificate handling is totally independent from RADIUS management. As such, the server certificates can be updated at will, and the scalability of the server can easily be modulated with a cluster of grids whose number depends on the estimated number of clients.

This makes this EAP-TLS smart card grid a convenient paradigm, albeit its performances are reasonable yet far from those of traditional computers.

This paper is organized as follows. Section 2 presents a brief state of the art of related works. Section 3 introduces the TLS smart card concept. In Section 4, we describe the smartcard-enabled RADIUS server. In Section 5, we give an overview of the platform design and of its performance based on a grid of 32 cards remotely accessed. Finally, Section 6 concludes this paper.

978-0-7695-4071-9/10 $26.00 © 2010 IEEE
II. STATE OF ART
Most of RADIUS [16] servers support the EAP-TLS [13]
[14], i.e. a transparent encapsulation of the TLS protocol
[12], working with mutual authentication. Client and server
are equipped with X509 certificates and their associated
RSA private keys. Our framework merges two innovative
technologies: EAP-TLS server smart cards and clusters of
such devices, via cloud services.
A smart card [1] is a tamper resistant device, including
CPU, RAM and non volatile memory. Packets exchanged
to/from this device are named APDUs and are detailed by
the ISO7816 standard. Security is enforced by multiple
physical and logical countermeasures. Most of these
electronic chips support a Java Virtual machine (JVM) and
execute software written in this programming language [2]..
The use of smart cards in TLS authentication has now a
rather long history and has been largely developed according
to different models.
Classical frameworks deal with pkcs#15 [3] tokens that
store certificates, and compute RSA procedures.
In 2000, a first smart card performing SSL operations
was proposed in [4]. However, the weak computing
resources of the Java Cards of that time rendered infeasible a
full implementation.
Later on, a patent [5] described smart card computing
facilities performing functions for TLS exchange, such as
certificates checking or signature with private key.
-----
EAP-TLS smart cards, i.e., trusted computing platform
running EAP procedures were proposed in 2004 [6], and are
detailed later on.
The first grid was designed in [7] and was working with a
cluster of java cards. A Mandelbrot set was generated thanks
to the combined calculation of smart cards.
Lastly, the use of java cards, processing EAP messages in
RADIUS architecture, was previously discussed in [8] [9].
Figure 1 presents the first prototype structure, organized
around an USB hub. The RADIUS code is stored in a
FLASH disk, and EAP server smart cards are inserted in
USB tokens. This component works according to a plug and
play paradigm.
Figure 1. RADIUS server [8], based on EAP-TLS smart cards
In this paper, we propose an architecture that splits the
RADIUS server in two parts (see figure 2). First a pure
software bloc processes the RADIUS protocol. Second a
smart card grid, supports up to four hundred EAP-TLS smart
cards, and comprises a mother board and slave extensions,
each of them supporting up to 32 smart cards. This electronic
rack is usually employed by mobile phone manufacturers
who wish to check their compatibility with SIM cards issued
by multiple operators. Every smart card is associated with a
TCP socket, EAP-TLS procedure is fully managed by a
tamper resistant device, and each socket acts as a virtual link
used to exchange data with the RADIUS server.
Figure 2. The smart card grids architecture: the RADIUS server communicates over the Internet with a grid server hosting the EAP-TLS smart card array
The main idea behind this architecture is to deploy trust
as a service for cloud infrastructures. Each EAP-TLS smart
card autonomously processes the SSL protocol. Although the
authentication service is distributed over the WEB, critical
operations such as mutual authentication (either in full or
resume modes) are confined in a trusted computing platform.
III. ABOUT EAP-TLS SMART CARDS
EAP [15] is a universal and flexible authentication
framework. Because it can transport almost any authentication
protocol, it solves the interoperability concerns that their
number and their disparity had raised. Formally, the EAP
protocol is built on three kinds of messages: requests
delivered by servers; responses returned by clients; and
notifications issued by servers in order to indicate success or
failure of authentication procedures.
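The three message kinds share the fixed EAP header defined by RFC 3748 (Code, Identifier, Length); a Success or Failure notification is nothing more than this 4-byte header. A minimal decoding sketch:

```python
import struct

# EAP codes per RFC 3748.
EAP_CODES = {1: "Request", 2: "Response", 3: "Success", 4: "Failure"}

def parse_eap_header(packet):
    """Decode the 4-byte EAP header: Code (1 byte), Identifier (1 byte),
    Length (2 bytes, covering the whole packet)."""
    code, identifier, length = struct.unpack("!BBH", packet[:4])
    if length != len(packet):
        raise ValueError("EAP Length field does not match packet size")
    return EAP_CODES[code], identifier, length

# An EAP-Success notification is a bare 4-byte header
# (compare the 4-byte EAP-Success shown in Figure 3).
success = struct.pack("!BBH", 3, 7, 4)
print(parse_eap_header(success))  # -> ('Success', 7, 4)
```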
The EAP smart cards functionality and binary encoding
interface are detailed in [11]. These devices process EAP
methods [15] and act as server or client entity. They
communicate via a serial link, whose throughput ranges
between 9600 and 230,000 bauds. There are two classes of
operations, sending data (writing to smart card) and
receiving data (reading from smart card); information is
segmented in small blocks (up to 256 bytes) named APDUs,
described by the ISO 7816 standard.
The choreography between the NAS, the RADIUS server and the EAP-TLS smart card server, together with the observed timings, is summarized below:

- T1: Access-Request carrying EAP-Identity.resp (25 bytes)
- T2: Access-Challenge carrying EAP-TLS.req/Start (10 bytes): 35 ms
- T3: Access-Request carrying EAP-TLS.resp/Client-Hello (60 bytes)
- T4: Access-Challenge carrying EAP-TLS.req/Server-Hello fragment #1 (1310 bytes, 6 APDUs): 440 ms
- T5: Access-Request carrying EAP-TLS.req/Ack
- T6: Access-Challenge carrying EAP-TLS.req/Server-Hello fragment #2 (130 bytes): 10 ms
- T7: Access-Request carrying EAP-TLS.resp/Client-Finished (990 bytes, 4 APDUs)
- T8: Access-Challenge carrying EAP-TLS.resp/Server-Finished (53 bytes): 4300 ms
- T9: Access-Request carrying EAP-TLS.resp/ACK (6 bytes): 285 ms
- T10: Access-Accept carrying EAP-Success (4 bytes)
- T11: GET-MSK-Key (64 bytes): 10 ms
- Total: 5080 ms

Figure 3. Choreography and timings observed with an EAP-TLS smart card server
In this paper we focus on arrays of EAP-TLS smart cards
deployed in grids. These devices run the OpenEapSmartcard
JAVA open stack, introduced in [10], and which comprises
four logical components (see figure 1):
1. The Engine Object is mostly in charge of the IO
management (i.e., APDU exchange). It is also responsible for
EAP message segmentation and reassembly. In fact, the
APDU payloads maximum length is 255 bytes for input data
and 256 bytes for output data, while EAP packets maximum
length is about 1300 bytes of data. Consequently, EAP
packets are split into several ISO 7816 units, and the Engine
entity parses them in order to rebuild the proper EAP packet.
-----
2. The Credential Object holds all the credentials
required by EAP-TLS method, that is to say: the
Certification Authority certificate, the server Certificate and
its associated private key. The EAP-TLS State Machine is
reset and its according method is initialized with appropriate
credentials, each time an EAP Identity.Response message is
received. This object also works as an Identity module. For
now, it only holds a unique server Identity but one could
possibly load different server Identities issued by several
companies which, upon success, would grant different kind
of access or services depending of the Client’s Identity and
its subscription to one or several companies’ Network.
3- The Authentication Interface object implements all
services fulfilled by EAP-TLS methods, whose main
procedures are initialization, packet processing or MSK key
downloading.
4- Lastly, the EAP-TLS object is in charge of packets
processing, as specified in EAP standard [15]. Since TLS
packets size may exceed the Ethernet frame capacity, EAPTLS supports internal segmentation and reassembly
mechanisms.
Example of computing performances are illustrated in
figure 3, the EAP-TLS server runs in the _GX40 java card_
manufactured by the Gemalto Company. For convenience,
the EAP authentication process is divided in eleven steps; the
total procedure costs 5s (with a RSA key size of 1024 bits)
and about 2,5 Kbytes of data are exchanged.
IV. ABOUT RADIUS
RADIUS technology was developed in the nineties as an
access server authentication and accounting protocol,
massively deployed in order to solve authentication concerns
raised by the increasing number of users who aimed to reach
their Internet Service Provider by mean of modems based on
PPP protocols. It was then again largely exploited when
IEEE 802.1x architecture was introduced, for RADIUS is the
key protocol of AAA architecture (Authentication,
Authorization and Accounting) and it supports access control
mechanisms for wired and wireless infrastructures.
_A._ _Classical Architecture_
RADIUS protocol is built on two entities: the NAS or
_Network Access Server which can be a Point of Presence_
(POP) or an Access Point (AP), and the AS (Authentication
_Server)._
In our platform we deal with Wi-Fi infrastructure,
compatible with the IEEE 802.1x standard. A wireless client
is called a supplicant. Before this supplicant is authenticated
and given an IP address, the NAS rejects all frames which do
not belong to an authenticated supplicant. For this purpose,
EAP authentication messages are exchanged between the
NAS and the AS; those messages are transported by LAN or
PPP frames and are encapsulated into RADIUS datagrams
routed over an UDP/IP stack. To each type of EAP message
corresponds a RADIUS datagram (Access-Challenge,
Access-Request and Access-Accept / Access-Reject)
according to the following scenario:
- The client, or supplicant, tries to access to a network
through the affiliated NAS and issues its user’s Identity to
start the authentication procedure. This Identity is sent by the
client terminal thanks to an EAP-Identity message which is
then encapsulated by the NAS in a RADIUS Access-Request
packet and forwarded to the AS. In the case of an EAP-TLS
scenario, there is a mutual authentication therefore the user’s
identity is the subject field of its X509 certificate.
- The AS extracts and analyses the EAP message from
the RADIUS datagram and depending on the user’s Identity,
it will then process the appropriate authentication method.
Typically, user’s account information and parameters are
stored in an LDAP file accessed by the AS, and this
information determines which procedure, in our case EAP-TLS, should be initiated to authenticate the user.
- The whole EAP session is then supervised by RADIUS
Access-Challenge packets transporting EAP requests, and
RADIUS Access-Request packets transporting EAP
responses.
- Finally, once the authentication procedure has been
finished, the EAP server delivers a notification message,
either failure or success, which is respectively encapsulated
in a RADIUS Access-Accept or Access-Reject. Upon success,
the EAP server computes a Master Session Key (MSK)
which is delivered to the AS through the _Access-Accept_
packet. This MSK is both shared by the client terminal and
the NAS, and is handled to calculate the session keys needed
to encrypt the exchanges between the NAS and the client.
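A rough sketch of the framing involved in the scenario above: an EAP message carried inside RADIUS EAP-Message attributes (attribute type 79) within an Access-Request (code 1). This only illustrates the byte layout; a real server would also add NAS attributes and compute the Message-Authenticator and shared-secret digests.

```python
import os
import struct

EAP_MESSAGE = 79  # RADIUS attribute type carrying an EAP payload

def radius_access_request(identifier, eap_payload):
    """Wrap an EAP message in a minimal RADIUS Access-Request:
    Code (1), Identifier, Length, 16-byte Request Authenticator,
    then one EAP-Message attribute per 253-byte chunk."""
    authenticator = os.urandom(16)
    attrs = b""
    for i in range(0, len(eap_payload), 253):  # 253-byte value limit per attribute
        chunk = eap_payload[i:i + 253]
        attrs += struct.pack("!BB", EAP_MESSAGE, len(chunk) + 2) + chunk
    length = 20 + len(attrs)
    return struct.pack("!BBH", 1, identifier, length) + authenticator + attrs

# A 15-byte EAP Response/Identity ("user@realm" is a made-up identity).
eap = struct.pack("!BBH", 2, 1, 15) + b"\x01user@realm"
pkt = radius_access_request(42, eap)
print(len(pkt))  # 20-byte header + one 17-byte EAP-Message attribute -> 37
```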
As stated previously, the EAP server is merged within
the whole RADIUS module of AS. Most of RADIUS
software implementations use the well known _OpenSSL_
library in order to support the EAP-TLS authentication
procedure, which is a quite transparent encapsulation of the
TLS protocol. In our proposal though, EAP server runs in the
smart cards and EAP messages are computed by the smart
card and forwarded to the AS which then dispatches them to
the NAS.
_B._ _Distributed Architecture_
We have concluded that the benefits of implementing
EAP servers into smart cards are the following:
- The server private key is secretly stored and used by the
smart card.
- The client certificate is autonomously checked by the
EAP server.
- The SSL stack processed by the smart card is
transparent to the RADIUS and the OS in which it has been
implemented; it can be easily updated in case of major
patches of SSL.
- If the EAP client also runs in a smart card, the TLS
stack is channelled from card to card and the EAP session is
then fully processed by a couple of tamper resistant devices,
working as _Secure Access Module (SAM), a classical_
paradigm deployed in highly trusted financial architectures.
-----
Figure 4. Structure and choreography of the test platform
However, our experimental results demonstrate so far
that the performance of our server is much slower than
classical RADIUS servers, even if it assures the
simultaneous connection of a predetermined number of
users.
Our proposed Smartcard enabled RADIUS server is
typically a classical RADIUS server which has been split
into two main components: a RADIUS authentication server
and distributed EAP servers.
The RADIUS authentication server is located on a distant
host and is in charge of the following tasks:
- It sends and receives RADIUS datagrams from and to
the NAS, thanks to UDP sockets.
- It builds or analyses RADIUS messages and more
specifically encapsulates EAP messages from the smartcard
into RADIUS datagrams forwarded to the NAS, and
reciprocally extracts RADIUS datagrams from the NAS into
EAP messages forwarded to the appropriate server
smartcard.
- It parses and builds APDUs which are communication
units used to interact with the smartcards as explained below.
- It handles the RADIUS secret and computes or checks
the associated authentication digest and attributes.
- It opens stream sockets with the smartcards grid and
associates an incoming session with a single smartcard and
its related connection.
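The session-to-smartcard association described in the last two tasks can be sketched as a simple pool: one card is bound to a RADIUS session for the whole EAP-TLS exchange and released once the MSK has been delivered. The class below is our illustration, not code from the server.

```python
class SmartcardPool:
    """Associates RADIUS session IDs with grid smartcards: the same
    card serves a session throughout its TLS authentication phase."""

    def __init__(self, card_ids):
        self.free = list(card_ids)
        self.sessions = {}  # session id -> card id

    def acquire(self, session_id):
        if session_id in self.sessions:      # sticky association
            return self.sessions[session_id]
        if not self.free:                    # no card left: reject the request
            raise RuntimeError("no smartcard available")
        card = self.free.pop()
        self.sessions[session_id] = card
        return card

    def release(self, session_id):
        """Called once authentication has completed (or failed)."""
        self.free.append(self.sessions.pop(session_id))

pool = SmartcardPool(range(32))          # one slave extension of 32 cards
card = pool.acquire("sess-1")
assert pool.acquire("sess-1") == card    # same card for the whole session
pool.release("sess-1")                   # card returns to the free list
```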
V. PLATFORM DESIGN AND EXPERIMENTAL RESULTS
The platform we have designed for experimental purposes
and performances evaluation is represented in figure 4, and
is built on three main components: a TLS proxy based on
_OpenSSL and used to simulate a reasonable amount of_
802.1x clients as well as an artificial NAS dialoguing with
our AS, a RADIUS _Access Server, and a java card array_
remotely located and managed by a specific dedicated
proxy, which we will call card proxy for comprehension
purposes.
The TLS proxy can run up to 30-35 connections, at
which point the computer’s computation power is not strong
enough to assure a decent simulation. It is possible to run this
proxy on two different hosts in order to distribute the
connections to our RADIUS server, but we have determined
that it did not change significantly our results. The SSL
proxy accesses our RADIUS server, either remotely, or on an
internal bus if the server and the proxy are located on the
-----
same host. Our SSL proxy creates a predetermined amount
of SSL connections whose TLS messages are encapsulated
into EAP packets and then into RADIUS datagrams, which
are forwarded to the RADIUS server thanks to datagram
sockets directed on port 1812.
Once launched, our RADIUS AS spawns a thread on
port 1812, waiting for socket connections. Each time it
receives a connection on this port, the server creates a new
thread which initiates a connection with one of the server
smartcards according to the following procedure (see
figure 4):
- It checks the incoming datagram, parses it and verifies
it is a proper RADIUS datagram.
- It checks the attribute 79 of the RADIUS message
which corresponds to the encapsulated EAP message.
- It splits the EAP message into the appropriate number
of APDUs; the EAP message is transported in APDUs via an
EAP-Process command created for that purpose.
- It generates an appropriate context for the APDUs so
that they are recognized by the card proxy, whose syntax is
specific and which redirects the incoming connection to the
proper Java card.
- It associates a RADIUS session-ID with a specific
smartcard so that each incoming TLS session is associated
with the same smartcard throughout the whole TLS
authentication phase. Once the authentication has succeeded
(or failed), and once the key blocks and MSK have been
generated, the smartcard associated with the RADIUS
session is released and free to be used by a new incoming
session.
- It generates stream sockets connected with the distant
terminal which hosts the card proxy and the java cards array.
Those sockets are used to send the APDUs to the remote
card proxy and to receive the smartcard response.
- Upon answer from the smartcard, it parses and
reassembles the EAP packets, in case they have been split by
the smartcard into several APDUs, and waits until all the
EAP-Request packets have been transmitted.
- It encapsulates the incoming EAP packet into a
RADIUS datagram, and forwards it to the TLS proxy located
on the client terminal and lastly closes the thread.
This procedure is repeated as often as necessary until all
sessions have been processed and all clients authenticated. If
an internal error occurs, or if no Java card is available, the
client's incoming RADIUS request is silently discarded.
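To make the datagram-to-APDU path above concrete, the following Python sketch fragments an EAP message (as extracted from RADIUS attribute 79) into EAP-Process APDUs and binds each RADIUS session to one smartcard for the whole TLS phase. This is illustrative only: the command bytes (CLA/INS/P1), function and class names are our assumptions, not the platform's actual code.

```python
# Illustrative sketch only: EAP-Process command bytes and names are assumed.
MAX_APDU_DATA = 255  # data-field limit of a short APDU

def split_eap_into_apdus(eap_message, cla=0x80, ins=0xA0):
    """Fragment an EAP payload into chained EAP-Process APDUs."""
    chunks = [eap_message[i:i + MAX_APDU_DATA]
              for i in range(0, len(eap_message), MAX_APDU_DATA)]
    apdus = []
    for idx, chunk in enumerate(chunks):
        more = 0x01 if idx < len(chunks) - 1 else 0x00  # P1: more fragments?
        apdus.append(bytes([cla, ins, more, 0x00, len(chunk)]) + chunk)
    return apdus

class CardAllocator:
    """Bind each RADIUS session-ID to one smartcard for the whole TLS phase."""
    def __init__(self, n_cards):
        self.free = set(range(n_cards))
        self.bound = {}

    def card_for(self, session_id):
        if session_id in self.bound:      # same card for the whole session
            return self.bound[session_id]
        if not self.free:                 # no card left: request is dropped
            return None
        self.bound[session_id] = self.free.pop()
        return self.bound[session_id]

    def release(self, session_id):
        """Called once the authentication has succeeded (or failed)."""
        self.free.add(self.bound.pop(session_id))
```

A `card_for` call returning `None` corresponds to the silent-discard case above.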
The performance of a single EAP server card linked with
a single client was measured previously [8][9][10]. A
different type of Java card was used in our current
architecture, but the results we obtained are similar, albeit
slightly less efficient. At best, the total cost of an
authentication previously measured with a single EAP card
directly docked in the server host was about 5000 ms,
whereas the authentication cost with the cards we now use
approximates 6000 ms. If we initiate the same authentication
with the same card located remotely, the authentication time
almost doubles. The transfer time has risen drastically;
however, since the ping to this remote card proxy is about
30 ms, there is an issue here which needs to be further
investigated and fixed in order to obtain reasonable
authentication times.
The following measurements were determined from the
APDU stream. Indeed, the EAP messages are fragmented if
necessary, and each EAP-Response packet matches one or
several EAP-Process APDUs, coupled with the appropriate
status words answered by the server card. Those status words
indicate either that the packets have been properly
transmitted, or that the server card needs to emit a specific
answer which must be fetched. Reciprocally, each
EAP-Request packet matches one or several APDUs coupled
with the proper EAP-Response APDU answered by the
client card.
Figure 5. Method for the measurement of transmission time
The transmission times of EAP-Request and EAP-Response
packets were measured according to the method illustrated in
figure 5. The time measured for an EAP-Response may be
approximated as the time elapsed between the sending of the
APDUs by the RADIUS server and the reception of the
status words sent by the server card. From this point begins
the time measurement for the next EAP-Request packet.
| | USB Card | Distant Card |
|---|---|---|
| T1 - Rx: EAP-Identity.response | 30 ms | 100 ms |
| T2 - Tx: Start | 5 ms | 60 ms |
| T3 - Rx: Client Hello | 430 ms | 580 ms |
| T4 - Tx: Server Hello fragment #1 | 220 ms | 1850 ms |
| T5 - Rx: EAP-TLS-ACK | 40 ms | 100 ms |
| T6 - Tx: Server Hello fragment #2 | 10 ms | 200 ms |
| T7 - Rx: Client-Finished | 270 ms | 2100 ms |
| T8 - Tx: Server-Finished | 4320 ms | 4500 ms |
| T9 - Rx: EAP-TLS-ACK | 290 ms | 350 ms |
| T10 - Tx: EAP-Success | 20 ms | 60 ms |
| T11 - Rx: Get-PMK | 20 ms | 190 ms |
| Total | 5655 ms | 10090 ms |
_Tx: EAP-TLS.Request, Rx: EAP-TLS.Response_
Figure 6. Timing differences of two EAP sessions established with a USB
docked server card or a distant server card.
We now compare the timings obtained with a server card
directly docked to the AS terminal against those obtained
with a distant server card. While a TCP exchange with the
distant server can be roughly approximated to 30 ms, we
observe that the time elapsed to perform a full authentication
with the distant server card is greater than expected. In fact,
26 TCP packets are exchanged during a full session and
about 2500 bytes are transferred, a volume we disregard
considering today's broadband data rates. Thus, with Tt
being the estimated transfer time, we get:
Tt = 26 * 30 ms = 780 ms
In short, and at worst, the total authentication time with a
distant card should be about one second longer than with a
card directly docked to the server terminal; the results
obtained (see figure 6) are far from this expectation.
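The estimate above can be reproduced in a one-line calculation; the 26-packet count and 30 ms round-trip figure come from the measurements, and everything else (payload transfer time) is deliberately neglected:

```python
def estimated_transfer_ms(packets=26, rtt_ms=30):
    """Expected network overhead: per-packet round trips dominate,
    payload transfer time is neglected."""
    return packets * rtt_ms  # -> 780 ms for the measured session
```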
The total authentication time of a session performed with
a USB-docked card reaches an average of 5700 ms, while a
session performed with a distant server card takes about
10000 ms, which is roughly 3000 ms more than expected.
Upon investigation of this issue, we noticed that the biggest
delays were induced by the largest packets (usually a few
hundred bytes). For instance, T7 is very short when the
session is performed with a USB server card; however, since
it matches the sending of the client certificate, T7 is ten
times longer when the session is performed with a distant
server card. In fact, the card array exchanges data with the
smart cards at a 9600 baud rate. Since about 2500 bytes are
required by an EAP-TLS session, this hardware constraint
alone costs about 2500 ms. There must also be a delay
induced by the data processing of the distant proxy in charge
of redirecting the stream to the appropriate server cards.
Roughly speaking, an extra delay of about 1000 ms
(10000 - 5700 - 800 - 2500) is added by the board operating
system.
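Under the stated assumptions (10 bits per byte on the serial link, measured totals from figure 6), the delay budget can be rebuilt as follows; the residual is the share attributed to the board operating system. The computed figures land close to, though not exactly on, the rounded values quoted in the text:

```python
def delay_budget_ms(total_remote=10000, total_usb=5700,
                    tcp_packets=26, rtt_ms=30,
                    payload_bytes=2500, baud=9600):
    """Decompose the extra cost of a remote authentication (values in ms)."""
    tcp_ms = tcp_packets * rtt_ms                  # ~780 ms of round trips
    serial_ms = payload_bytes * 10 * 1000 // baud  # ~2600 ms at 9600 baud
    residual = total_remote - total_usb - tcp_ms - serial_ms
    return tcp_ms, serial_ms, residual             # residual ~= 1000 ms

tcp_ms, serial_ms, residual = delay_budget_ms()
```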
To confirm this assumption, we tested the scalability and
the parallel performance of our distant SIM array with
concurrent connections.
| | 1 Card out of 5 | 1 Card out of 20 |
|---|---|---|
| T1 - Rx: EAP-Identity.response | 220 ms | 580 ms |
| T2 - Tx: Start | 100 ms | 390 ms |
| T3 - Rx: Client Hello | 580 ms | 1300 ms |
| T4 - Tx: Server Hello fragment #1 | 2000 ms | 6300 ms |
| T5 - Rx: EAP-TLS-ACK | 550 ms | 2200 ms |
| T6 - Tx: Server Hello fragment #2 | 400 ms | 1750 ms |
| T7 - Rx: Client-Finished | 6500 ms | 21500 ms |
| T8 - Tx: Server-Finished | 5000 ms | 6600 ms |
| T9 - Rx: EAP-TLS-ACK | 350 ms | 350 ms |
| T10 - Tx: EAP-Success | 60 ms | 60 ms |
| T11 - Rx: Get-PMK | 190 ms | 200 ms |
| Total | 15950 ms | 41230 ms |
_Tx: EAP-TLS.Request, Rx: EAP-TLS.Response_
Figure 7. Average times of EAP-TLS sessions established with distant
cards within a 5 or 20 cards concurrency.
When we start five or twenty simultaneous connections
to the remote SIM array, figure 7 shows the average results
for one authentication.
Strikingly enough, we note that the unvarying times such
as T9 or T10 match the shortest APDUs, which also shows
that the parallelisation does work. In addition, as suspected,
the certificate exchanges induce an increasing delay (T4 and
T7) which prevents reasonable scalability of this platform. In
summary, we observe a processing time of 10 s for one card,
about 3 s per card for five devices, and about 2 s per card for
twenty devices.
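The per-card figures quoted above follow directly from the measured session totals (10090 ms alone, from figure 6; 15950 ms and 41230 ms under concurrency, from figure 7):

```python
# Effective per-card authentication time: the average session total divided
# by the number of cards working in parallel (values taken from figs. 6-7).
totals_ms = {1: 10090, 5: 15950, 20: 41230}
per_card_ms = {n: total // n for n, total in totals_ms.items()}
# per_card_ms == {1: 10090, 5: 3190, 20: 2061}, i.e. ~10 s, ~3 s and ~2 s
```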
For now, we can only establish that there is an obvious
issue with the queuing management of the remote server
card proxy, which must be corrected in order to significantly
improve the performance of our SIM array.
VI. CONCLUSION
In conclusion, although the experimental results of our
platform demonstrate that its scalability is not yet compatible
with today's network constraints, we are confident that in the
near future we will be able to achieve a platform whose
authentication time is reasonable enough for massive
deployment. Furthermore, the security and practicality it
provides will be a great addition to the 802.1x architecture in
general, as well as a key asset for securing cloud computing
infrastructures.
REFERENCES
[1] Jurgensen, T.M. et al., "Smart Cards: The Developer's Toolkit",
Prentice Hall PTR, ISBN 0130937304, 2002.
[2] Chen, Z., "Java Card Technology for Smart Cards: Architecture and
Programmer's Guide (The Java Series)", Addison-Wesley Pub Co 2002,
ISBN 020170329.
[3] RSA Laboratories, "PKCS #15 v1.1: Cryptographic Token
Information Syntax Standard", 2000.
[4] Urien, P., Saleh, H., Tizraoui, A., "SSL in smart card", in proceedings
of Journees Doctorales Informatique et Reseaux - JDIR’2000,
(Networking and Computer Science PHD days), 6-8 november 2000.
[5] "A Personal token and a method for controlled authentication",
Patent# WO 2006/021865.
[6] Urien, P.; Badra, M.; Dandjinou, M., "EAP-TLS smartcards, from
dream to reality", in proceedings of Applications and Services in
Wireless Networks (ASWN 2004), 2004.
[7] Chaumette, S. et al., "Secure distributed computing on a Java Card
grid", 19th IEEE International Parallel and Distributed Processing
Symposium (IPDPS'05), 2005.
[8] Urien, P., Dandjinou, M., "Introducing Smartcard Enabled RADIUS
Server", The 2006 International Symposium on Collaborative
Technologies and Systems (CTS 2006), 2006.
[9] Urien, P., "Open two-factor authentication tokens, for emerging
wireless LANs.", Fifth Annual IEEE Consumer Communications &
Networking Conference (CCNC’08), 2008.
[10] Urien, P., Pujolle, G., "Security and Privacy for the next Wireless
Generation", International Journal of Network Management, IJNM,
Volume 18 Issue 2 (March/April 2008), WILEY.
[11] IETF draft, "EAP-Support in Smartcard", draft-urien-eap-smartcard-18.txt, February 2010.
[12] RFC 2246, "The TLS Protocol Version 1.0", January 1999.
[13] RFC 2716, "PPP EAP TLS Authentication Protocol". October 1999.
[14] RFC 5216, "The EAP-TLS Authentication Protocol", March 2008.
[15] RFC 3748, "Extensible Authentication Protocol, (EAP)", June 2004.
[16] RFC 2865, "Remote Authentication Dial In User Service (RADIUS)", 2000.
|oncurrent connections.|Col2|Col3|
|---|---|---|
||1 Card out of 5|1 Card out of 20|
|T1- Rx: EAP-Identity.response|220ms|580ms|
|T2- Tx: Start|100ms|390ms|
|T3- Rx: Client Hello|580ms|1300ms|
|T4- Tx: Server Hello fragment#1|2000ms|6300ms|
|T5- Rx: EAP-TLS-ACK|550ms|2200ms|
|T6- Tx: Server Hello fragment#2|400ms|1750ms|
|T7- Rx: Client-Finished|6500ms|21500ms|
|T8- Tx: Server-Finished|5000ms|6600ms|
|T9- Rx: EAP-TLS-ACK|350ms|350ms|
|T10- Tx: EAP-Success|60ms|60ms|
|T11- Rx: Get-PMK|190ms|200ms|
|Total|15950ms|41230ms|
-----
## Multi-site Connectivity for Edge Infrastructures DIMINET:DIstributed Module for Inter-site NETworking
### David Espinel Sarmiento, Adrien Lebre, Lucas Nussbaum, Abdelhadi Chari
To cite this version:
#### David Espinel Sarmiento, Adrien Lebre, Lucas Nussbaum, Abdelhadi Chari. Multi-site Connectivity for Edge Infrastructures DIMINET: DIstributed Module for Inter-site NETworking. CCGRID 2020: 20th IEEE/ACM International Symposium on Cluster, Cloud and Internet Computing, IEEE; The University of Melbourne, May 2020, Melbourne, Australia. pp. 1-10, 10.1109/CCGrid49817.2020.00081. hal-02573638
### HAL Id: hal-02573638
https://hal.science/hal-02573638
#### Submitted on 14 May 2020
# Multi-site Connectivity for Edge Infrastructures
### DIMINET: DIstributed Module for Inter-site NETworking
#### David Espinel Sarmiento
_Orange Labs Network, Orange_
Lannion, France
davidfernando.espinelsarmiento@orange.com
#### Adrien Lebre
_IMT-Atlantique, Inria/LS2N_
Nantes, France
adrien.lebre@inria.fr
#### Lucas Nussbaum
_Université de Lorraine, Inria/LORIA_
Nancy, France
lucas.nussbaum@loria.fr
#### Abdelhadi Chari
_Orange Labs Network, Orange_
Lannion, France
abdelhadi.chari@orange.com
**_Abstract_—The deployment of a geo-distributed cloud infrastructure, leveraging for instance Points-of-Presence at the edge of the network, could better fit the requirements of Network Function Virtualization services and Internet of Things applications. The envisioned architecture to operate such a widely distributed infrastructure relies on executing one instance of a Virtual Infrastructure Manager (VIM) per location and implementing appropriate code to enable collaborations between them when needed. However, delivering the mechanisms that allow these collaborations is a complex and error-prone task. This is particularly true for the mechanism in charge of establishing connectivity among VIM instances on demand. Besides the reconfiguration of the network equipment, the main challenge is to design a mechanism that can offer usual network virtualization operations to the users while dealing with the scalability and intermittent-network properties of geo-distributed infrastructures. In this paper, we present how such a challenge can be tackled in the context of OpenStack. More precisely, we introduce DIMINET, a DIstributed Module for Inter-site NETworking services capable of interconnecting independent networking resources in an automated and transparent manner. DIMINET relies on a decentralized architecture where each agent communicates with the others only if needed. Moreover, there is no global view of all networking resources; instead, each agent is in charge of interconnecting resources that have been created locally. This approach enables us to mitigate management traffic and keep each site operational in case of network partitions, and is a promising approach to make other cloud services collaborative on demand.**
**_Index Terms_—IaaS, SDN, virtualization, networking, automation**
I. INTRODUCTION
Internet of Things (IoT) applications, Network Function
Virtualization (NFV) services, and Mobile Computing [1]
have operational constraints that require deploying
computational and storage resources at multiple locations,
closer to the end users. While the deployment of such
distributed cloud infrastructures (DCIs) was initially debated
for economical reasons, the question now is no longer
whether they will be deployed, but rather how we can
operate them.
Among the different approaches that are investigated, the
use of a series of independent Virtual Infrastructure Manager
(VIM) instances looks to be the most promising one [2],
[3]. However, VIMs such as OpenStack [4] have been
designed in a fairly stand-alone way to manage a single
deployment, not to peer with each other in order to establish
inter-site services. Hence, most VIM services must be
extended with additional pieces of code in order to offer
the same functionality over multiple instances. While the
use of distributed databases can, at first sight, help developers
implement such inter-site operations [5], it is more
complicated for a few services, in particular when
scalability and network partitions must be taken into account.
To illustrate this claim, we address in this paper
the challenges related to inter-site networking services.
In particular, we consider the two following points as the
cornerstone:
_• Layer 2 network extension:_ being able to have a Layer 2
Virtual Network (VN) that spans several VIMs, i.e., the
ability to plug Virtual Machines (VMs) deployed in different
VIMs into the same VN.
_• Routing function:_ being able to route traffic between a
VN A on VIM 1 and a VN B on VIM 2.
Obviously, current proposals that leverage centralized
approaches, such as Tricircle [6], are not satisfactory. The
challenge is to establish such inter-site networking services
among several VIMs in a decentralized way. By decentralized,
we mean that the networking service of a VIM needs to be
extended in order to guarantee the following characteristics:
_• Scalability:_ the inter-site service should not be restricted
by design to a certain number of VIMs.
_• Resiliency:_ all parts of a DCI should be able to survive
network partitioning issues. In other words, cloud service
capabilities should remain operational locally when a site is
isolated from the rest of the infrastructure.
_• Locality awareness:_ VIMs should mitigate remote
interactions as much as possible. This implies that locally
created data should remain local as much as possible, and
only be shared with other instances if needed, thus avoiding
the need to maintain a global knowledge base.
_• Abstraction and automation:_ configuration and
instantiation of inter-site services should be kept as simple
as possible to allow the deployment and operation of
complex scenarios. The management of the underlying
implementations must be fully automatic and transparent
for the users.
Our contribution to tackle this challenge is the DIMINET
proposal, a DIstributed Module for Inter-site NETworking
services. To the best of our knowledge, this is the first
inter-site service tool that satisfies all of the aforementioned
properties. The architecture of DIMINET extends concepts
that have been proposed in Software Defined Networking
(SDN) technologies, in particular in the DISCO and
OpenDayLight SDN controllers [7], [8]: on each site, a
module is in charge of managing its local site's networking
services, and is capable of communicating with remote
modules, on demand, in order to provide virtual networking
constructions spanning several VIMs.
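The locality-aware collaboration pattern described above can be sketched as follows. The classes and method names are hypothetical, not DIMINET's actual API: each site's agent records only the networks it participates in, and issues horizontal calls only to the sites concerned by a given inter-site network.

```python
# Hypothetical sketch of on-demand, site-to-site collaboration:
# no global view, and a site is contacted only when it is a member.
class SiteAgent:
    def __init__(self, name):
        self.name = name
        self.local_nets = {}  # net_id -> set of member sites (local view)
        self.peers = {}       # site name -> SiteAgent (stands in for the
                              # horizontal API between modules)

    def create_l2_extension(self, net_id, member_sites):
        """Extend a Layer 2 network over the listed sites only."""
        members = set(member_sites) | {self.name}
        self.local_nets[net_id] = members
        for site in members - {self.name}:     # contact ONLY member sites
            self.peers[site]._accept_extension(net_id, members)

    def _accept_extension(self, net_id, members):
        # Horizontal call: record only what concerns this site.
        self.local_nets[net_id] = members
```

Note that a third site never receives any state for a network it does not host, which is the locality-awareness property listed earlier.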
We implemented a first proof-of-concept of DIMINET as
a module deployed beside the networking service of
OpenStack, Neutron, using a horizontal API to communicate
among modules. This approach enabled us to keep the
collaboration code outside of Neutron's. Although additional
experiments should be done to validate how our PoC behaves
in the presence of untimely network disconnections,
preliminary experiments conducted on top of Grid'5000
demonstrated the correct functioning of our proposal.
It is noteworthy that the contribution of DIMINET goes
beyond the technical contribution to the Neutron OpenStack
service. Actually, we are investigating how the DIMINET
proposal can be generalized to other services to make them
collaborative with minimal development effort.
Indeed, a significant part of the abstractions that have been
implemented can be reused to share information efficiently
between services while mitigating the impact of network
partitions. Such generic pieces of code may represent a major
contribution toward delivering building blocks for
collaboration between independent systems. Such building
blocks are critical to operate and use DCIs such as those
envisioned in Fog and Edge computing platforms.
The rest of this paper is organized as follows. Section II
describes challenges related to inter-site networking services.
Section III presents related work. DIMINET architecture is
given in Section IV. Preliminary evaluations of our PoC are
discussed in Section V. Finally, Section VI concludes and
discusses future work.
II. INTER-SITE NETWORKING CHALLENGES
As mentioned before, programming collaboration
mechanisms between several instances of the same service is
a tedious task, especially in a geo-distributed context where
network disconnections can prevent one instance from
synchronizing with the others for shorter or longer durations.
Considering this point as the norm when designing inter-site
services, in particular cloud networking ones, brings forth
new challenges and questions. In this section, we discuss the
major ones, classified in two categories: those related to the
organization of networking information and those related to
the implementation of inter-site networking services. For the
sake of clarity, we recall that our DCI architecture is
composed of several sites. Each site is managed by a VIM
instance, which is itself composed of several services. An
inter-site operation consists in interacting with at least two
remote instances of the same service.
_A. Organization of networking information’s challenges_
In order to mitigate as much as possible the overhead due
to data exchanges, while being robust w.r.t. network
disconnections and partitioning issues, it is important to
identify (i) what minimal information must be shared and at
which granularity, (ii) how this information should be shared,
and (iii) how previously created inter-site networking
resources behave in the presence of network disconnections.
_1) Identifying which networking information should be
shared:_ A first aspect to consider is the organization of
information related to the cloud network resources. For
instance, provisioning a Layer 2 segment with its respective
IP range between two VIMs requires sharing information
related to the IP addresses that have been allocated at each
VIM, in order to avoid conflicts. On the contrary, other
information related to local router gateways, external
gateways, fixed host routes, etc., may not need to be shared
with remote sites. Consequently, depending on the inter-site
operation, the information that should be shared must be
well specified to avoid conflicts among the networking
management entities. Understanding the different structures
that are manipulated by the operations of the networking
service will enable the definition of efficient and robust
sharding strategies between multiple VIMs.
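One sharding strategy consistent with the IP-range discussion above (an illustration of the idea, not necessarily the strategy DIMINET implements) is to split a shared subnet's address space into disjoint per-site pools, so each VIM can allocate addresses locally without any cross-site synchronization:

```python
import ipaddress

def shard_subnet(cidr, sites):
    """Give each site a disjoint slice of the subnet's host addresses,
    so local allocations can never conflict across VIMs."""
    hosts = list(ipaddress.ip_network(cidr).hosts())
    slice_len = len(hosts) // len(sites)
    return {site: hosts[i * slice_len:(i + 1) * slice_len]
            for i, site in enumerate(sites)}
```

Because the pools are disjoint by construction, a VIM isolated by a network partition can keep assigning IPs from its own slice.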
Fig. 1: Layer 2 extension Request
_2) Defining how networking information should be
shared:_ A second aspect to consider is the scope of each
networking service call. Taking the scope of each request
into account is critical, as sharing information across all
VIMs could lead to heavy synchronization and
communication needs. For instance, network information
such as the MAC/IP addresses of ports and the identifiers of
a network related to one VIM does not need to be shared
with the other VIMs that compose the DCI. Similarly,
information related to a Layer 2 network shared between two
VIMs, as depicted in Figure 2, does not need to be shared
with the third VIM. The extension of this Layer 2 network
can be done later, that is, only when it becomes relevant to
extend this network to VIM 3.
_3) Facing network disconnections:_ Each VIM should be
able to deliver networking services even in case of network
partitions. Two situations must be considered in this context:
(i) the inter-site networking service (for instance a Layer 2
network) was deployed before the network disconnection,
and (ii) a new inter-site networking service is provisioned
while some sites cannot be contacted. In the first case, the
isolation of a VIM (for instance VIM2 in Figure 2) should
not impact the inter-site network elements: VIM2 should still
be able to assign IPs to VMs using the "local" part of the
inter-site Layer 2 network. Meanwhile, VIM1 and VIM3
should continue to manage inter-site traffic from/to the VMs
deployed on this same shared Layer 2 network.
In the second case, because the VIM cannot reach the
other VIMs due to the network partition, the information
required to finalize the provisioning process is impossible to
obtain. The question is whether to completely revoke such a
request, or instead to provide appropriate mechanisms in
charge of partially finalizing the provisioning request.
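The "partial finalization" option discussed above could look like the following sketch (assumed behaviour for illustration, not a documented DIMINET mechanism): the request is applied on reachable sites and queued for the unreachable ones until connectivity returns:

```python
# Hypothetical partial-provisioning logic: apply locally/where reachable,
# queue the rest, and drain the queue when a site reconnects.
class PartialProvisioner:
    def __init__(self, reachable):
        self.reachable = set(reachable)
        self.applied, self.pending = [], []

    def provision(self, net_id, sites):
        """Provision an inter-site network over the given sites."""
        for site in sites:
            target = self.applied if site in self.reachable else self.pending
            target.append((net_id, site))

    def reconnect(self, site):
        """On reconnection, finalize every request queued for this site."""
        self.reachable.add(site)
        done = [(n, s) for n, s in self.pending if s == site]
        self.pending = [(n, s) for n, s in self.pending if s != site]
        self.applied.extend(done)
```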
Fig. 2: Operating in a local-only mode
_B. Implementation challenges_
The networking domain is a large area with multiple
standards as well as different technological solutions. To
avoid heterogeneity issues in the DCI context, the
technological choices need to be coordinated in advance in
order to ease networking service provisioning.
_1) Standard automatized interfaces:_ A first aspect to take
into account is the definition of the vertical and horizontal
interfaces that allow the provisioning of inter-site services
from the end-users' viewpoint, and that make communication
and collaboration possible between the different VIMs. This
means that the interface facing the user (user-side, or
vertical, as traffic flows vertically) and the interface facing
other VIMs (VIM-side, or horizontal, as traffic flows
horizontally) have to be bridged. This integration is needed
in order to provide the necessary user abstraction and to
automate the communication process between VIMs.
Consequently, this necessitates the specification and
development of well-defined vertical and horizontal
interfaces. These interfaces should expose a sufficiently
abstract list of the available inter-site networking services
and constructions.
_2) Support and adaptation of networking technologies:_
Along with the exchange of networking information among
VIMs needed to provide inter-site services, as described in
II-A1, the mechanisms that actually perform the
implementation (i.e., the control plane) and allow VM traffic
to be exchanged (i.e., the data plane) must be identified.
This implementation information is quite important, since it
indicates to each VIM which technologies to use to forward
VM traffic and where that traffic should be sent (e.g.,
consider a VM with a private IP address reachable through a
VXLAN tunnel endpoint on a physical server with a public
IP). Although many existing networking protocols can be
relied upon for the implementation (BGP-EVPN/IPVPN,
VXLAN, GRE, Geneve, IPSec, etc.), they need adaptation
in the DCI case. Since the configuration of the networking
mechanisms needs to be known by all VIMs participating
in a requested inter-site service, additional implementation
information must be exchanged among the sites in an
automated way.
This automation is required because the user should not
be aware of how these networking constructions are
configured at the low-level implementation. Since a DCI
could scale up to hundreds of sites, manual network
stitching techniques such as [9], [10] are simply not enough.
Table I summarizes the challenges explained in this section.
These challenges will be referenced in the following sections
to explain our architectural choices.
III. RELATED WORK
A few solutions have been proposed to deal with inter-site
virtualized networking services in IaaS systems [8], [11],
[12], [6], [13], [14]. In [12], the authors describe a Hybrid
Fog and Cloud interconnection framework that enables a simple
and automated provisioning and configuration of virtual networks
to interconnect multiple sites. It concentrates the entire
management in a single entity called the HFC manager, a
centralized component acting as the networking orchestrator. The
Tricircle project [6] is another solution that leverages a
centralized architecture. In this proposal, an API gateway node
is used as an entry point to a geo-distributed set of OpenStack
deployments. Each instance of the Neutron service is not aware
of the existence of the other Neutron instances, but instead always
communicates with the API gateway, which is also the only
interface exposed to the user. These solutions, which rely
on a centralized entity, are neither scalable nor robust w.r.t.
network partitions.
Among the decentralized approaches that have been described,
we should emphasize the ODL Federation project [8]
and the DISCO SDN controller [7]. The Federation project
for OpenDayLight (ODL) [8] aims to facilitate the exchange
of state information between multiple ODL instances by
leveraging an AMQP communication bus to send and receive
messages among instances. The project relies on a fully
decentralized architecture where each instance maintains its
own view of the system. In that sense, the project might
TABLE I: DCI Challenges summary

_Organization of networking information challenges_
Identifying which networking information should be shared: propose good information sharding strategies.
Defining how network information should be shared: avoid heavy synchronization by contacting only the relevant sites.
Facing network disconnections: continue to operate in cases of network partitioning and be able to recover.

_Implementation challenges_
Standard automated and distributed interface: well-defined and bridged vertical and horizontal interfaces.
Support and adaptation of networking technologies: capacity to configure different networking technologies.
be a good solution for our objectives in terms of scalability
and robustness. However, ODL Federation does not ensure
that information related to the inter-site networking resources
is consistent across the whole DCI. Actually, the inter-site
services are proposed at the controller level while the Neutron
instances of OpenStack remain unaware of the information
shared at the ODL level. During a network failure, every
Neutron instance will continue to provide its local services
without knowing that there are potentially conflicting operations
when executing actions on resources that are shared between
ODLs. Once the connectivity is reestablished, ODL cannot
provide a recovery method, and information such as IP addresses
could be duplicated without coordination among controllers.
This is an important flaw when the controller needs
to recover from network disconnections. In the DISCO
approach [7], the DCI is divided into several logical groups,
each managed by one controller. Each controller peers with the
other ones only when traffic needs to be routed. In other words,
there is no need to maintain a global view among all instances.
However, the design of DISCO is rather simple, as DISCO
is not a cloud-oriented solution (i.e., it mainly delivers
domain-forwarding operations, which involve only conflict-free
exchanges). Offering usual VIM operations such as on-demand
network creation, dynamic IP assignment, security group
creation, etc. is prone to conflicts and thus harder to
implement.
DIMINET goes one step further than these solutions by
delivering an inter-site networking service at the cloud level and
in a decentralized manner.
IV. DIMINET ARCHITECTURE
This section describes the architecture of DIMINET. First,
we give a general overview of the architecture. Second, we
discuss important design choices, in particular by focusing on
how DIMINET instances communicate and how the L3 forwarding
and L2 network services have been implemented. Finally,
for the sake of clarity, we explain how the network traffic
is effectively routed among the different sites.
_A. Overview_
As shown in Figure 3, DIMINET is fully decentralized: each
DIMINET instance is deployed alongside a VIM networking
service.
This architecture guarantees the DCI characteristics as follows.
Fig. 3: DIMINET overview
**Scalability:** New DIMINET instances representing remote
sites can join the deployment without affecting the normal
behaviour of the other instances.
**Resiliency:** Thanks to its fully distributed architecture,
DIMINET does not suffer from the limitations of a centralized
design. In case of network partitions, since every
DIMINET instance and its respective VIM are independent of
the others, they will continue to provide, at least, their cloud
services locally.
**Locality awareness:** Because its horizontal communication
between instances happens only on demand,
DIMINET does not build a global knowledge but instead relies
on the collaboration among instances to share the necessary
inter-site service-related information.
**Abstraction and automation:** Thanks to its rather simple
but powerful APIs, DIMINET creates and configures inter-site
services automatically, without any further action needed from
the user beyond the initial service creation request.
Figure 4 depicts in more detail the internal architecture of
a DIMINET instance. It is composed of the communication
interfaces, which allow collaboration among VIMs and
end-users, and the Logic Core, which implements the necessary
strategies to manage and deploy inter-site services.
Fig. 4: DIMINET architecture
_B. Logic Core_
The core of DIMINET is the Logic Core, which is in charge
of the actual management and coordination of the inter-site
services, including communication when required with other
DIMINET instances and with the VIM’s Network service (in
our case the Neutron service from OpenStack).
In order to effectively address the consistency challenge
detailed in II-A1, the information sharding strategy for each
service is defined in the Logic Core.
The Logic Core stores inter-site service information in a
local database. To relate the copies of the same inter-site
service stored in different locations, the Logic Core generates a
globally unique identifier that identifies the same service in
every site of the service, from Site 1 to Site N. This global
identifier is created at the DIMINET instance that receives
the initial user vertical request and is transmitted to the remote
sites inside the horizontal creation request. In this way, all
sites are capable of referencing the same inter-site service.
Figure 5 shows the schema of the objects used by the Logic
Core to represent an inter-site resource.
_• Service: The main object of DIMINET, which represents the_
inter-site service. A service is composed of some Parameters,
a list of Resources, and a list of local Connections.
_• Param: As already mentioned, since every proposed_
inter-site feature has its own needs, it is necessary to
store different information per service. The Param class
is used to store service-related information to support the
main functionalities of the Logic Core. If, for instance, the
Service is of L3 type, it will not store information in
the All_pool parameter. On the contrary, an L2 service
will store there the IP allocation pool assigned by the master
instance.
_• Resource: A Resource represents a virtual networking_
object belonging to a site. The Service class holds a list
of resources (the local one and a series of remote ones).
This list exists in every DIMINET instance composing a
service.
_• Connection: A Connection represents the mechanism_
enabling the interconnection that allows resources to contact, or
be contacted by, remote VIMs in order to forward/route
VM traffic. Unlike Resource objects, Connections are
only stored locally.
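The objects above can be sketched as plain data classes. This is a minimal illustration following the attribute names of Figure 5; the types and defaults are our guesses, not DIMINET's actual code.

```python
# Minimal sketch of the Logic Core objects, written as Python dataclasses.
# Attribute names follow Figure 5; types and defaults are assumptions.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Param:                           # service-type-specific parameters
    id: str
    service_global_id: str
    all_pool: Optional[str] = None     # only filled for L2 services
    local_cidr: Optional[str] = None
    ipv: int = 4

@dataclass
class Resource:                        # a virtual networking object of one site
    uuid: str
    site: str
    service_global_id: str

@dataclass
class Connection:                      # stored locally only, never shared
    uuid: str
    service_global_id: str

@dataclass
class Service:                         # the inter-site service itself
    global_id: str                     # same value in every participating site
    name: str
    type: str                          # "L2" or "L3"
    params: List[Param] = field(default_factory=list)
    resources: List[Resource] = field(default_factory=list)
    connections: List[Connection] = field(default_factory=list)

svc = Service(global_id="g-1", name="demo", type="L3")
svc.resources.append(Resource(uuid="r-1", site="A", service_global_id="g-1"))
```

The shared global identifier (`global_id` here) is what lets every site relate its local copy of the service to the copies held by the other sites.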
Fig. 5: DIMINET database relationship diagram
We emphasize that for each inter-site resource, there is a
master in charge of maintaining the consistency of the related
information. In our current model, the master is defined as
the VIM that received the initial service request. The use of a
more advanced database engine leveraging CRDTs [15] would
probably be relevant. However, answering this question is left
as future work, as it does not change the key concept of
DIMINET (i.e., for each resource, there is a mechanism used
to maintain the consistency of the information). The use of a
per-resource master enables DIMINET to deal with network
partitions for inter-site resources in a straightforward manner:
when an end-user request cannot be satisfied due to network
issues (either a remote site cannot be reached or, reciprocally,
a remote site cannot interact with the master), the request
is simply revoked and a notification is sent to the end-user,
who is in charge of invoking it once again later.
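The revoke-and-retry policy under partitions can be sketched as follows. The function and exception names are illustrative, not DIMINET's actual code; the sketch only captures the behaviour described above.

```python
# Sketch of the per-resource master policy under network partitions: when a
# required site cannot be reached, the local change is rolled back and the
# end-user is notified to retry later. Names are illustrative.

class SiteUnreachable(Exception):
    """Raised when a remote DIMINET instance cannot be contacted."""

def handle_request(local_apply, remote_calls):
    """Apply a request only if every required remote site answers.

    local_apply:  callable performing the local change, returns an undo callable.
    remote_calls: iterable of callables contacting the remote sites.
    """
    undo = local_apply()
    try:
        for call in remote_calls:
            call()
    except SiteUnreachable:
        undo()  # revoke: the end-user must re-invoke the request later
        return {"status": "revoked", "reason": "remote site unreachable"}
    return {"status": "ok"}
```

The important property is that no partial inter-site state survives a failed request: either all participating sites acknowledge, or the local change is undone.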
_C. Communication Interfaces_
To accept end-user requests and make communication
among VIMs possible, DIMINET relies on a division into two
interfaces (inspired by the DISCO SDN controller): the North
interface and the East-West interface, as depicted in Figure 3.
These two interfaces are coupled with each other and with
the Logic Core in order to automate the inter-site service
provisioning.
Both the North and East-West interfaces are REST APIs using
standard HTTP traffic, exposing Create/Read/Update/Delete
(CRUD) actions to the users and to remote instances.
The implemented vertical and horizontal CRUD actions
are summarized in Table II.
_1) North interface:_ The North, or vertical, interface allows
the user to request the establishment of inter-site networking
services among several sites. This interface exposes an
abstract-enough API to allow the user to execute CRUD
actions on inter-site services. For instance, if the user wants
to create a new inter-site service, it has to provide the list of
resources that will compose the service and the type of service.
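Such a vertical creation request could look as follows. The endpoint name comes from Table II, but the JSON field names are our guesses from the prose, not DIMINET's exact API schema.

```python
# Hypothetical body for a POST to the North interface's /intersite-vertical
# endpoint (endpoint from Table II; field names are inferred, not official).
import json

create_request = {
    "type": "L3",                                # type of inter-site service
    "resources": [                               # resources composing the service
        {"uuid": "net-a-uuid", "site": "VIM-A"},
        {"uuid": "net-b-uuid", "site": "VIM-B"},
    ],
}
body = json.dumps(create_request)
```

The receiving instance would then derive, from the `site` fields, which remote instances to contact over the East-West interface.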
_2) East-West interface:_ Once the networking instance receives
an inter-site networking provisioning request from the
user through the North interface, it communicates with the
appropriate distant instances using the East-West interface.
This interface allows DIMINET instances to communicate
with the relevant neighbor instances to exchange information
about the distant networking objects and to request the creation
of the symmetric remote inter-site service. The exchanged
information comprises both the logical information needed for
the distributed management of the networking constructions and
the necessary low-level implementation information
allowing the communication of the virtualized instances.
This interface is only used on demand and with service-related
instances. In other words, contacting only the relevant
sites for a request mitigates the network communication
overhead as well as the limitations regarding scalability and
network disconnections.
_D. Layer 3 routing_
The inter-site Layer 3 routing feature allows traffic
to be routed among subnetworks of different virtual networks
(VNs). By design, subnetworks should not overlap;
that is, the range of addresses in one subnetwork should be
unique compared to all other subnetworks. If two subnetworks
overlap, when a router needs to send a packet to an IP address
inside the overlapped range, the router may forward the
packet to the wrong subnetwork. In this context,
the organization of the information of the local VN subnetwork
does not need to be coordinated with remote VIMs, but for the
service to be correctly provided, the VN subnetwork Classless
Inter-Domain Routing (CIDR) prefixes must not overlap.
Let {SN1, SN2, SN3, ..., SNn−1, SNn} be a set of independent
subnetworks deployed on n VIM sites which are
requested to have L3 routing among them. The condition
SN(CIDR)i ∩ SN(CIDR)j = ∅ for all i ≠ j
(the subnetwork CIDRs must be pairwise disjoint)
must hold for traffic to be routed.
This verification is done by the first instance that receives
the service request. Once the user provides the identifiers of
the resources to interconnect in a Layer 3 service and the sites
**_North Interface_**
_Operation_   _Prefix_                            _Description_
GET           /intersite-vertical                 Retrieve local information of all services
POST          /intersite-vertical                 Create a new service
DELETE        /intersite-vertical/{global_id}     Delete the service with id _global_id_
GET           /intersite-vertical/{global_id}     Retrieve local information of the service with id _global_id_
PUT           /intersite-vertical/{global_id}     Modify the service with id _global_id_

**_East-West interface_**
_Operation_   _Prefix_                            _Description_
POST          /intersite-horizontal               Horizontal request to create a service
DELETE        /intersite-horizontal/{global_id}   Horizontal request to delete the service with id _global_id_
GET           /intersite-horizontal/{global_id}   Read the distant parameters of the service with id _global_id_
PUT           /intersite-horizontal/{global_id}   Horizontal request to modify the service with id _global_id_

TABLE II: REST API Operations
where they belong, the instance proceeds to query the network
information from every pertinent site to ensure that the IP
ranges do not overlap. Once this condition is verified, the
instances exchange the information that allows the low-level
mechanisms to forward the virtualized traffic.
Fig. 6: DIMINET L3 Routing service sequence diagram
For example, suppose the user requests from the DIMINET instance
of VIM A a Layer 3 routing service among two networks
A and B, belonging to VIMs A and B respectively. This
DIMINET instance will contact the remote site in order to
find the subnetwork CIDR related to the remote network,
and of course, it does the same search locally. Consider
the IPv4 CIDRs 10.1.2.0/23 and 10.1.4.0/23 for networks
A and B respectively. The DIMINET instance performs the
overlap verification with the ranges [10.1.2.0-10.1.3.255]
for 10.1.2.0/23 and [10.1.4.0-10.1.5.255] for 10.1.4.0/23. Thus,
10.1.2.0/23 ∩ 10.1.4.0/23 = ∅ (the two subnetworks do
not overlap). Since the verification is satisfied, DIMINET
instance A sends a horizontal service creation request to
instance B with the information of the two resources and the
type of service. Then, instance A proceeds to send its local
information for the data plane connection to instance B. The
same information is sent back in the answer, so that both sites
have the respective reachability information. Figure 6 shows
the sequence diagram of the communication among the user
and the DIMINET instances when no overlap is detected.
Obviously, when CIDRs overlap, DIMINET does not satisfy
the request and notifies the user that the service cannot be
provided due to overlapping subnetwork CIDRs.
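The pairwise overlap verification can be implemented directly with Python's standard `ipaddress` module. This is a sketch of the check, not DIMINET's actual code.

```python
# Sketch of the pairwise CIDR overlap verification for the L3 routing service,
# using Python's standard ipaddress module.
import ipaddress
from itertools import combinations

def cidrs_disjoint(cidrs):
    """Return True iff no two of the given subnetwork CIDRs overlap."""
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return not any(a.overlaps(b) for a, b in combinations(nets, 2))

# The example above: 10.1.2.0/23 covers 10.1.2.0-10.1.3.255 and
# 10.1.4.0/23 covers 10.1.4.0-10.1.5.255, so they are disjoint.
ok = cidrs_disjoint(["10.1.2.0/23", "10.1.4.0/23"])        # True
bad = cidrs_disjoint(["10.1.2.0/23", "10.1.3.0/24"])       # False: nested range
```

When the check fails, the instance answers the user request with an error instead of issuing horizontal creation requests.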
Finally, DIMINET instances only interact with the
master of the L3 resource each time a site wants to join or
leave this network.
_E. Layer 2 extension_
The inter-site Layer 2 extension feature makes it possible
to plug VMs belonging to different sites into the same virtual
network. To belong to the same virtual network, hosts
must have the same subnetwork prefix (CIDR) and must not have
duplicate MAC or IP addresses. Since every network exists as
an independent network in each site, each of them can have
its own DHCP service for IP assignment. Thus, MAC and
IP assignment have to be coordinated among the requested
sites in order for the service to be correctly provided.
At this point, two operations need to be
considered over VNs: the join and the extension. The join
operation refers to combining multiple independent L2 resources
to create a single L2 resource. This implies that every
independent L2 resource could already have VMs deployed
on it. If the join operation is applied between two resources,
each VIM will potentially have to change the IP
addresses already allocated, thereby interrupting the services
being provided by those VMs, which is not desirable
in operational environments. The extension operation refers to
expanding one of the L2 resources into the others to create
a single L2 resource. This implies that these resources need
to be clean (without deployed VMs) at the time of the initial
request. Since this latter operation does not impact the operation
of every segment, we preferred to use it in our design.
For this reason, we have decided to adopt the following
approach:
_• The instance receiving the initial service request assumes_
the role of master for that particular service.
_• This master instance does a logical split of the range of IP_
addresses within the same CIDR between, for instance,
two VIMs at the creation of the inter-site L2 network.
In this sense, the master instance is in charge of providing
the IP allocation pools to the other instances composing
the service, and thus, of performing the L2 extension. To avoid
exhausting all the IP addresses with the first service request, the
master instance delivers mid-sized allocation pools to the other
participants. If one of these instances needs more
IP addresses, or a new instance arrives to join the service,
querying the master instance for a new allocation pool is
enough.
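The master's lazy hand-out of mid-sized allocation pools can be sketched with the standard `ipaddress` module. This is our illustration of the strategy, not DIMINET's exact algorithm; the pool size (`pool_prefix`) is a parameter we introduce for the sketch.

```python
# Sketch of the master's allocation-pool hand-out: split the shared CIDR into
# mid-sized pools and deliver them lazily as participants ask for them.
import ipaddress

class PoolMaster:
    def __init__(self, cidr, pool_prefix):
        # e.g. cidr="10.0.0.0/24", pool_prefix=26 -> four /26 pools of 64 addrs
        self._pools = ipaddress.ip_network(cidr).subnets(new_prefix=pool_prefix)

    def next_pool(self):
        """Hand out the next unused allocation pool, or None when exhausted."""
        return next(self._pools, None)

master = PoolMaster("10.0.0.0/24", 26)
site_a = master.next_pool()   # 10.0.0.0/26
site_b = master.next_pool()   # 10.0.0.64/26
```

A site that runs out of addresses, or a newly joining site, simply asks the master for another pool, avoiding both per-address synchronization and a static one-shot split.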
With this approach, we avoid the communication overhead
of sharing information between the concerned VIMs
each time an IP address is allocated to a resource. At the
same time, we avoid the static division that would result from
doing the CIDR allocation pool division only at service creation
time. This approach allows the instances to maintain a logical
division into segments while providing a more dynamic sharding
strategy.
With our approach, if the user requests from the DIMINET
instance of VIM A a Layer 2 extension service towards a VIM
B, this DIMINET instance will contact the instance of site
B in order to verify that a corresponding subnetwork CIDR
can be created. If so, the instance of site B creates the
corresponding subnetwork and the instance of site A takes
the role of master of that specific L2 inter-site resource. This
implies that this instance decides how to split the CIDR IP
allocation pool among the participant sites of the request and
manages further requests concerning the modification of the
service.
The identity of the master instance as well as the
allocated IP ranges are sent through the East-West interface
Fig. 7: DIMINET L2 extension service sequence diagram
to the remote instances sharing this L2 inter-site resource. When
receiving the horizontal L2 creation request, remote instances
store the service information in their local database and
use the service-related information dictated by the master
instance to make the appropriate changes in their local networking
constructions (i.e., change the local IP allocation pool). These
changes are also done at the master site to provide the
aforementioned logical division. Once this is done, the instances
proceed to exchange the necessary implementation information
to allow VM traffic to be forwarded among them. Figure 7
shows the sequence diagram of the communication among the
user and the DIMINET instances when the same CIDR is
verified.
_F. Virtualized traffic interconnection_
Since DIMINET is designed to be deployed alongside Neutron,
we do not implement the information exchange for the
virtualized traffic connectivity over the horizontal interface;
instead, we rely on the Interconnection Service Plug-in
[11]. The Interconnection plug-in makes it possible to create an
"interconnection" resource which references a local resource
(e.g., network A in VIM1) and a remote resource (e.g., network
B in VIM2), with the semantics that connectivity is
desired between the two sites. The plug-in then leverages
the use of Border Gateway Protocol based Virtual Private
Networks (BGPVPNs) [9] on both sides to create an overlay
network connecting the two local segments.
The BGPVPN Service Plug-in itself uses the well-known
BGP routing protocol for the establishment of
IPVPNs/EVPNs [16], [17]. In BGP-based VPNs, a set of identifiers
called Route Targets is associated with a VPN. Similarly
to the publish/subscribe pattern, BGP peers use export and
import lists to express their interest in receiving updates about
announced routes. A Route Target export identifier is used
to advertise the local routes of the VPN to the other BGP
peers. Conversely, a Route Target import identifier is
used to import remote routes into the VPN. For instance, two
sites belonging to the same BGP-VPN could exchange their BGP
routes with the following configuration: site A has
_route-target-export 100 and route-target-import 200, while site_
B has route-target-export 200 and route-target-import
_100._
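The symmetry of this configuration, where each site imports what the other exports, can be captured in a small check. This is an illustrative data model for the example above, not the plug-in's actual configuration format.

```python
# Illustrative model of the symmetric Route Target configuration: two sites
# exchange routes only if each one imports what the other exports.

def peers_exchange_routes(site_a, site_b):
    """True iff each site imports what the other exports (both directions)."""
    return (site_a["rt_export"] in site_b["rt_import"]
            and site_b["rt_export"] in site_a["rt_import"])

# The example from the text: A exports 100 / imports 200, B the reverse.
site_a = {"rt_export": "100", "rt_import": {"200"}}
site_b = {"rt_export": "200", "rt_import": {"100"}}
symmetric = peers_exchange_routes(site_a, site_b)   # True
```

A misconfigured pair (e.g., one side importing the wrong identifier) would break route exchange in one direction, so such a verification is worth automating.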
V. PROOF-OF-CONCEPT
In this section, we discuss preliminary evaluations performed
on top of a proof-of-concept (PoC) that we implemented
to experimentally assess the DIMINET architecture.
Since OpenStack already possesses an authentication service
(Keystone) that is used for client authentication, service
discovery and authorization, we rely on this service to discover
distant DIMINET instances, knowing that they are deployed at
the same IP address as Neutron.
_A. Testbed and setup_
Fig. 8: DIMINET testbed setup
Figure 8 shows the experimental platform: each gray box
corresponds to a physical machine of Grid'5000 on which
a Devstack version of the OpenStack Stein release has been
deployed with the following networking services: the ML2 OVS
driver, the neutron-interconnection plug-in, the networking-bgpvpn
plug-in, and the networking-bagpipe driver, plus a DIMINET
instance. This federation of Devstack deployments enabled us to
emulate our DCI infrastructure.
Since the Interconnection Service Plug-in also relies on the
BGPVPN Service Plug-in, it is necessary to either deploy a
BGP peering overlay on top of the IP WAN connectivity or
peer with WAN IP/MPLS BGP-VPN underlay
routing instances. Because Grid'5000 does not allow interacting
with the physical routers (underlay BGP), we deployed the
first scenario, using GoBGP [18] to provide the functionality
of the BGP instances in each site. These BGP instances are
deployed on the same Grid'5000 machines used for the OpenStack
and DIMINET deployments. Moreover, we deployed
some Route Reflector (RR) instances on independent physical
machines to advertise the BGP VPN Route Targets used to
advertise the routes of the virtual networking constructions.
We deployed 14 sites in total; each RR is connected to
3 sites, and the BGP sessions are pre-configured among the RRs
and between each RR and its BGP instance clients in each site.
_B. Evaluation: Inter-site networking services deployment_
The first purpose of this demonstration is to show the
feasibility of using a distributed architecture to create inter-site
networking services. For this, we measured the time needed
to create an inter-site service for both kinds of services. Since
the data plane interconnection depends on the number of
instances booted at every segment, we do not measure this
time, but instead rely on former works on BGP performance
establishing the benefits and disadvantages of BGP VPN route
exchanges [19], [20].
Each test has been executed 100 times, and Table III
summarizes the creation time of services, varying the number of
resources/sites per service up to N=6 sites. Moreover, Figure 9
shows a graphical representation of the mean service creation
time with the standard error.
**_Feature_**        **_# of sites per request_**
                     _2_        _3_        _4_        _5_        _6_
L3 routing           3.5006     3.56017    3.61324    3.8076     4.08032
L2 extension         3.47927    3.66885    3.98471    4.07191    4.40295
TABLE III: Measured service creation time in seconds
_1) Layer 3 routing service:_ For every experiment, a random
instance has been chosen to receive the user request and
start the inter-site Layer 3 routing service creation. These
experiments have been done with services of 2, 3, 4, 5, and 6
resources/sites.
As explained in the last section about the L3 routing
sharding strategy, the time needed to create the L3 service
breaks down into the following steps:
_• The first DIMINET instance queries the pertinent remote_
sites for the network-related information to find out
the subnetworks' identifiers. In our PoC, this is done
in parallel because remote network information can be
provided without dependency among the requests.
_• The first DIMINET instance proceeds to query the CIDR_
information related to each subnetwork. Similarly to the
previous step, this is done in parallel.
_• Once the DIMINET instance has finished querying, it_
performs the overlap verification locally.
_• Since no overlap is detected, the instance proceeds to_
call the neutron-interconnection plug-in to create the
interconnection resources.
_• Next, the instance sends in parallel the horizontal create_
API requests to the remote DIMINET instances. The
instance waits for the remote answers in order to continue.
_• Once all the remote instances have answered the horizontal_
request, the first instance proceeds to answer the original
user request.
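The parallel fan-out used in the steps above can be sketched as follows; the function names are illustrative, and DIMINET's actual implementation may differ.

```python
# Sketch of the parallel fan-out: the first instance queries all pertinent
# remote sites concurrently and waits for every answer before proceeding.
from concurrent.futures import ThreadPoolExecutor

def query_all_sites(sites, query_one):
    """Run query_one(site) for every remote site in parallel; gather answers."""
    with ThreadPoolExecutor(max_workers=len(sites)) as pool:
        # pool.map preserves the input order, so zip pairs answers correctly
        return dict(zip(sites, pool.map(query_one, sites)))

# Example with a stubbed per-site query (a real one would be an HTTP call
# to the remote East-West interface):
answers = query_all_sites(["B", "C", "D"], lambda site: "cidr-of-" + site)
```

Because each fan-out round waits for the slowest site, the expected creation time should be dominated by that site rather than growing linearly with the number of sites, which is why the trend in Table III deserves further investigation.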
_2) Layer 2 extension service:_ Similarly to the L3 service,
for every experiment a random instance has been chosen to
receive the user request and to start the inter-site Layer 2
extension service creation. These experiments have been done
with services of 2, 3, 4, 5, and 6 resources/sites.
Fig. 9: DIMINET mean time service creation and standard
error
As explained in the last section about the L2 extension
sharding strategy, the time needed to create the L2 service
breaks down into the following steps:
_• The first DIMINET instance queries the pertinent remote_
sites for the network-related information to find out whether
the CIDR is available for use. Similarly to the L3 service,
this is done in parallel. This means that this first request
depends on the time spent by the remote sites to answer
the query.
_• Once the DIMINET instance has finished querying, it_
verifies that the CIDR is available for all the requested resources.
_• Since the verification succeeds, the instance creates a special_
Parameter to gather the IP allocation pools and splits
them among the remote sites.
_• Then, the instance proceeds to call the neutron-interconnection_
plug-in to create the interconnection resources.
_• The instance proceeds to change the DHCP_
parameters of its local resource according to the splitting
it has performed.
_• Next, the instance sends the horizontal create API requests_
to the remote DIMINET instances. These requests carry the
additional information about the master identity and the
pools allocated to the remote sites. The instance
waits for the remote answers in order to continue.
_• Once all the remote instances have answered the horizontal_
request, the first instance proceeds to answer the original
user request.
_C. Evaluation: Inter-site networking services resiliency_
Fig. 10: DIMINET resiliency test. (A) Initial deployed service. (B) Inter-site service in presence of network partitioning
The second purpose of this demonstration is to show the
improved resiliency of a distributed architecture against
network partitioning issues. To this end, we deployed an L2
extension service with the IPv4 CIDR 10.0.0.0/24, depicted
in Figure 10 (A), among sites A and B. Once the service had
been deployed, two VMs were also deployed on each site.
Firstly, we checked that traffic was being carried at
the intra-site level, that is, between the VMs deployed in the
same site. We also checked that traffic was being carried
at the inter-site level. At this point, thanks to the different
technologies used (BGP route exchanges, VXLAN tunnels
among sites, ...), traffic was correctly forwarded in both cases.
Secondly, we emulated a network disconnection using Linux
_Traffic Control (TC) to introduce a network fault in the link_
between the sites, as shown in Figure 10 (B). We chose to
impact the network link carrying the BGP route exchanges. We
verified that while intra-site traffic continued to be forwarded,
inter-site traffic kept being forwarded a little longer, until
the local BGP router detected that its distant BGP peer was
no longer reachable. At that point, the local BGP router
withdrew the remote routes from its local deployment, thus
impacting the inter-site data plane traffic.
**_Traffic_**        **_Scenarios_**
                     _Before failure_   _During failure_   _After failure_
Intra-site           forwarded          forwarded          forwarded
Inter-site           forwarded          interrupted        forwarded (after the BGP re-peering delay)
TABLE IV: Traffic being forwarded in the different scenarios
Because of the independence between the deployments and
the logical division done by our DIMINET instances, we
were effectively able to instantiate new VMs during the network
failure. This corresponds to the behaviour we expected,
since the OpenStack deployments are completely independent
of each other.
Finally, when connectivity is reestablished, inter-site traffic
takes some time to be forwarded again between sites. This is
because each BGP peer waits for the configured Keep-Alive time
before querying the distant peer about its availability to
reestablish the BGP peering, thus impacting the time
needed to reestablish the traffic. Table IV summarizes whether
intra-site and inter-site traffic is forwarded in each scenario.
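The delays observed above are bounded by the BGP session timers. As an illustrative calculation (not a measurement from our testbed), with a common configuration where the hold time is three times the keepalive interval, a dead peer's routes persist up to the hold time:

```python
# Illustrative bound on the failure-detection delay observed in the resiliency
# test: a BGP peer declares its neighbor dead only once the hold timer expires
# (no keepalive received), so stale inter-site routes can persist up to
# hold_time after the failure. The hold time = 3 x keepalive convention is a
# common configuration, used here as an assumption.

def worst_case_detection_delay(keepalive_s=60, hold_multiplier=3):
    """Upper bound (in seconds) before a dead peer's routes are withdrawn."""
    return keepalive_s * hold_multiplier

delay = worst_case_detection_delay()   # 180 s with a 60 s keepalive
```

Tuning these timers down would shorten both the route-withdrawal delay during the failure and the re-peering delay after reconnection, at the cost of more control traffic.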
_D. Summary_
Although these experiments enable the validation of our
PoC in terms of behaviour, deeper investigations should be
performed in order to clarify some trends. In particular, we
need to understand why the time to create an inter-site resource
increases w.r.t. the number of sites involved; conceptually,
this is surprising, as all internal requests are
handled in parallel. Moreover, we plan to perform additional
experiments to stress DIMINET by requesting the creation of
several inter-site resources simultaneously and across different
groups of sites. Such experiments should also demonstrate the
good properties of DIMINET, as master roles are distributed
among the different instances of our DCI. Obviously, this can
lead to hotspots where some VIMs will be much more stressed
than others. However, these possible hotspot issues are the price
of the locality awareness and of the resiliency w.r.t. network
partitions we are looking for.
VI. CONCLUSIONS
In this article, we have introduced DIMINET, a DIstributed
Module for Inter-site NETworking services capable of providing
automated management and interconnection for independent
networking resources. DIMINET relies on a decentralized
architecture where each instance is in charge of managing its
local site networking services and is capable of communicating
with remote instances, on demand, in order to provide
virtual networking constructions spanning several VIMs. To
assess the design of our proposal, we implemented a first PoC
that extends the OpenStack Neutron service. We evaluated it
through a set of experiments conducted on top of Grid'5000.
Preliminary results demonstrate that the DIMINET model can
address the challenge of inter-site resources without requiring
intrusive modifications at the VIM level.
We are currently conducting additional experiments in order
to identify the time-consuming steps in the creation of inter-site
resources. We also plan to run additional experiments
to validate how DIMINET behaves in the presence of more complex
service request scenarios, in particular with
simultaneous operations.
In parallel to these experimental studies, we are investigating
how the use of advanced database engines can improve the
robustness of the DIMINET master concept. Through this study,
we identified the opportunity to deliver a more general model
of DIMINET: a model that can deal with more services than
just networking, and that delivers building blocks capable of
handling the life cycle of inter-site resources in a DCI resource
management system.
ACKNOWLEDGEMENT
All developments related to this work have been supported
by Orange Labs and Inria in the context of the Discovery
Open Science initiative. Experiments were carried out using
the Grid’5000 testbed, supported by a scientific interest group
hosted by Inria and including CNRS, RENATER and several
Universities as well as other organizations.
-----
## Safe Distributed Architecture for Image-based Computer Assisted Diagnosis
### Sébastien Varrette, Jean-Louis Roch, Johan Montagnat, Ludwig Seitz, Jean-Marc Pierson, Franck Leprévost
To cite this version:
#### Sébastien Varrette, Jean-Louis Roch, Johan Montagnat, Ludwig Seitz, Jean-Marc Pierson, et al. Safe Distributed Architecture for Image-based Computer Assisted Diagnosis. 1st IEEE International Workshop on Health Pervasive Systems (HPS 2006) in conjunction with ICPS 06, IEEE, Jun 2006, Lyon, France. pp.1-10. hal-00683206
### HAL Id: hal-00683206
https://hal.science/hal-00683206
#### Submitted on 28 Mar 2012
-----
# Safe Distributed Architecture for Image-based Computer Assisted Diagnosis
#### Sébastien Varrette, Jean-Louis Roch, Johan Montagnat, Ludwig Seitz, Jean-Marc Pierson, Franck Leprévost
**_Abstract—_ Existing electronic healthcare systems based on PACS and Hospital IS are designed for clinical practice. Yet, for security, technical, and legacy reasons, they are often weakly connected to computing infrastructures and data networks. In the context of the RAGTIME project, grid infrastructures are studied to propose a cheap and reliable infrastructure enabling computerized medical applications. This raises various concerns, in particular in terms of security and data privacy. This paper presents the results of this study and proposes a complete grid-based architecture able to process medical images for assisted diagnosis in a secure way. Using this infrastructure, care practitioners are able to execute the application from any machine connected to the Internet, therefore improving their mobility. Medical image analysis jobs are certified to be correct using the latest advances in result checking and fault-tolerant algorithms provided in [1], [2]. The architecture has been successfully deployed and validated on the Grid5000 large-scale infrastructure.**

**_Index Terms—_ Medical Expert Systems, Security, Distributed Computing, Image Processing, Parallel Architectures.**
I. INTRODUCTION
The RAGTIME project¹ federates researchers in the grid computing community around a common goal: the management of medical information. For that purpose, grids, and more particularly grids of clusters [3], are studied to provide a cheap distributed computing infrastructure, complementary to clinical PACS², for medical applications. The project aims to demonstrate how grid technologies can improve the cooperation between PACS located in distant hospitals and enable medical image analysis procedures.

A grid of clusters corresponds to an aggregation of clusters through the Internet, with remote access for users. This topology is particularly well adapted to represent the network that would interconnect the PACS.
This work is supported by the following projects: RAGTIME and
SAFESCALE.
S. Varrette is both with the MOAIS team (INRIA-CNRS-UJF-INPG) at
the LIG-IMAG Laboratory (Montbonnot Saint Martin, France) and the LACS
Laboratory of the University of Luxembourg - Phone: +352 46 66 44 6600;
fax: +352 46 66 44 6313; e-mail: Sebastien.Varrette@imag.fr
J.-L. Roch is with the MOAIS Team (INRIA-CNRS-UJF-INPG) at the
LIG-IMAG Laboratory (Montbonnot Saint Martin, France).
J. Montagnat is with the RAINBOW team (CNRS) at the I3S laboratory
(Sophia Antipolis, France).
L. Seitz is with the SICS laboratory (Kista, Sweden).
J.-M. Pierson is with the LIRIS Laboratory (Lyon, France).
F. Leprévost is with the LACS Laboratory of the University of Luxembourg.
¹ Literally "Rhône-Alpes : Grille pour le Traitement d'Informations Médicales", http://liris.univ-lyon2.fr/~miguet/ragtime/
² Picture Archiving and Communication Systems
The experiment described in this article has the ambition to convince care practitioners of the improvement and flexibility provided by such an architecture (flexibility is generally seen by end-users as incompatible with this technology). For that purpose, this paper illustrates an application of breast cancer lesion detection in mammograms using statistical comparison against a database of studied cases (see figure 1). In practice, the database should be located on PACS, which are seen as a secure distributed storage grid for this application. On the other side, a computing grid composed of interconnected clusters (located either in hospitals or in supporting institutions) executes comparison algorithms to evaluate the similarity between a new mammogram submitted by a doctor and those registered in the storage grid.
[Figure 1: a storage grid (DB1, DB2, DB3) stores the studied cases (0); a new mammogram is submitted for analysis (1), similarity scores are computed on the computing grid (2), and the results are returned (3).]

Fig. 1. Application for mammograms comparison
Of course, this application raises various constraints, mainly in terms of security and privacy:

- The system should be accessible from any computer connected to the Internet.
- Only authorized users (typically a doctor) should be allowed to use this application and access the required resources (machines or data).
- Communications between the resources during the process should be encrypted to guarantee their privacy and integrity.
- Medical images sent to the computing grid have to be anonymized to guarantee patient privacy even in case of a resource corruption.
- Data on the storage grid should be securely stored.
- The system should remain operative even in case of resource corruption.
- The jobs should be cleverly scheduled on the grid.
All these constraints are addressed by the proposed architecture. Point 4 should normally be addressed by the PACS. Yet, for obvious security reasons, access to a real PACS in production mode has not been possible. In this article, the
-----
model. It is important to notice that as soon as unsafe resources are used (as in pervasive architectures), it is impossible to completely trust the computed results [4]. That is why some algorithms are considered in §II-B to certify the computed results. The remaining sections are organized as follows: §II expounds and justifies the components of the proposed architecture, §III details the protocol used, and §IV concludes this article and explains the future work we plan in order to improve this experiment.
II. ARCHITECTURAL COMPONENTS
_A. Authentication System for Grid Access_
Designing a robust authentication system in distributed environments has been extensively studied [3], [5], [6]. Efficient solutions depend on the grid topology. For grids of clusters, the authors of [3] demonstrate an adapted and efficient solution based on LDAP servers that broadcast authentication information. In terms of security, LDAP provides various guarantees thanks to the integration of standard cipher and authentication mechanisms (SSL/TLS, SASL) coupled with Access Control Lists. All these mechanisms enable an efficient protection of transactions and of access to the data stored in the LDAP directory. The proposed solution is currently used as the authentication system of Grid5000⁴. It enables access to the Grid5000 grid (and, in the context of this paper, to the execution platform) from any computer connected to the Internet. Yet, LDAP could easily use other authentication technologies such as smartcards; this kind of authentication will be investigated in future work.
_B. Ensuring Computation Resilience_
Resilience in grid execution is a prerequisite that should be embedded in the application: at this scale, component failures, disconnections, or result modifications are part of normal operations, and applications have to deal directly with repeated failures during program runs. This integration can be done in a cross-platform way using a portable representation of the distributed execution: a bipartite Directed Acyclic Graph G = (V, E). The first class of vertices is associated with the tasks (in the sequential scheduling sense) whereas the second one represents the parameters of the tasks (either inputs or outputs, according to the direction of the edge). Such a graph is illustrated in figure 2.
Using this representation, the authors in [7], [2] propose
portable fault-tolerance mechanisms for heterogeneous multithreaded applications. The flexibility of macro dataflow graphs
has been exploited to allow for a platform-independent description of the application state. This description resulted
³ Enabling Grids for E-sciencE: http://public.eu-egee.org/
⁴ The Grid5000 project aims at building a highly reconfigurable, controllable, and monitorable experimental grid platform gathering 9 sites geographically distributed in France and featuring a total of 5000 CPUs: https://www.grid5000.fr
Fig. 2. Instance of a dataflow graph associated with the execution of five tasks {f1, ..., f5}. The input parameters of the program are {e1, ..., e4} whereas the outputs (i.e. the results of the computation) are {s1, s2}.
in flexible and portable recovery strategies with a low overhead that only require the existence of a checkpoint server deployed on a set of safe resources. This server stores the dataflow graph of the execution, provided by the Kernel for Adaptive, Asynchronous Parallel Interface (KAAPI). KAAPI is a C++ library that allows programming and executing multithreaded computations with dataflow synchronization between threads. The same approach can be exploited in this paper to ensure resilience to crash faults of computing resources.
Moreover, any error in the analysis of an image computed on the grid could have dramatic consequences for the resulting diagnosis. We make the "optimistic" assumption that, even if the majority of resources compute correctly, they cannot be fully trusted. It is therefore important to reassure the care practitioner that the computed results are correct and have not been tampered with by a corrupted resource. This requires efficient error-checking algorithms able to certify the correctness of the computation. In this area, dataflow graphs are also used [1], [8] and provide a tunable probabilistic certification. This research also assumes the availability of safe resources gathering a checkpoint server (possibly distributed) together with a controller (or verifier) used to safely re-execute some tasks of the program.
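The re-execution idea can be reduced to a very small sketch. This is an illustrative simplification, not the certification algorithm of [1], [8]: the verifier draws a random sample of recorded tasks from the checkpointed graph and recomputes them on safe resources.

```python
import random

def certify(records, reexecute, sample_size, rng=random):
    """Probabilistically certify a batch of task results.

    records: list of (task_id, inputs, claimed_output) tuples taken
    from the checkpoint server.  `reexecute` recomputes a task on a
    trusted verifier.  Returns (accepted, list of forged task ids).
    """
    sample = rng.sample(records, min(sample_size, len(records)))
    forged = [tid for tid, inputs, claimed in sample
              if reexecute(tid, inputs) != claimed]
    return (not forged, forged)
```

The certification is "tunable" in the sense that re-executing k out of n tasks catches a single forged result with probability k/n; the caller trades verifier work for confidence.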
Both mechanisms, for fault tolerance and for error checking, have to be used in the target medical application. This leads to the infrastructure presented in figure 3, in which the resources have been divided into two classes:

1) A limited number of safe resources host the checkpoint server and the verifiers. As will be seen in §II-E, the farm daemon of the µgrid middleware is also hosted on these resources.
2) The other resources, referred to as "unsafe", constitute the real computing grid and are divided among the different hospitals and involved institutions.
_C. DICOM Image anonymization_
In this experiment, medical images are encoded using the standard DICOM format (Digital Imaging and COmmunications in Medicine). DICOM files contain both image data and metadata headers holding sensitive patient-identifying information such as name, sex, data acquisition site, etc. Before sending any data to the computing grid infrastructure, the DICOM images have to be anonymized to ensure patient privacy. This is simply done by wiping all metadata out of the DICOM files.

-----

Fig. 3. Resources hierarchy and mandatory components for portable fault-tolerance and error-checking algorithms
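The wiping step can be illustrated as follows. Note that this is a deliberately simplified stand-in: a plain dictionary plays the role of a parsed DICOM header, the field names are hypothetical, and a real deployment would use an actual DICOM parsing library.

```python
# Hypothetical identifying fields; a real DICOM header has many more.
IDENTIFYING_FIELDS = {
    "PatientName", "PatientID", "PatientSex", "PatientBirthDate",
    "InstitutionName", "AcquisitionSite",
}

def anonymize(header):
    """Return a copy of a (dict-modelled) DICOM header with every
    identifying field wiped out, keeping only what the comparison
    jobs on the unsafe computing grid actually need."""
    return {k: v for k, v in header.items() if k not in IDENTIFYING_FIELDS}
```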
_D. Secured Storage and Access_
Since the image files and the associated metadata in the DICOM image format must be considered sensitive information, one goal is to protect them against unauthorized disclosure while they are on the storage grid. Archived images cannot be fully anonymized, since we need to keep the person-related metadata that are very important in many medical diagnosis procedures. A convenient solution is to encrypt all sensitive data before it is stored on the grid.

When using data encryption, the problem arises of how to make it possible for authorized users to decrypt the data in order to gain access to its contents. Therefore we need a mechanism that makes decryption keys accessible to authorized users without compromising the security of the data encryption. Furthermore, access to the keys needs to evolve dynamically with the individual access rights of the users, without requiring the external intervention of an administrator. Finally, we want some degree of safety in the key storage, so that the loss of one or more keys does not cause the loss of the data they encrypt.
In order to make the keys available, we store them on key servers that are not necessarily part of the grid itself. These key servers use an access control mechanism to determine who may access which decryption key. This mechanism should mirror exactly the file access permissions of the users on the grid. We have implemented an access control system, called Sygn, that can be used to achieve this. The use of Sygn in the RAGTIME environment is described in an earlier paper [9].
In order to make the key servers more resilient to attacks and breakdowns, we do not store entire keys on a single key server. Instead, we use several key servers and split the keys we want to store into key-shares, using Shamir's secret sharing algorithm [10]. This gives us two considerable advantages. First, a successful attack on one key server does not expose actual keys: attackers need to successfully compromise a number of key servers equal to the chosen threshold of the secret sharing algorithm in order to reconstruct the actual keys. Second, the algorithm allows the creation of redundant key-shares, meaning that only n out of m key-shares (with n < m) are needed to reconstruct a key. We therefore gain some redundancy if a key server becomes unavailable or even loses its key-related data.
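To make the scheme concrete, here is a self-contained sketch of Shamir's (n, m) threshold sharing over a prime field. The prime, the integer encoding of keys, and the function names are illustrative; production code would rely on a vetted implementation.

```python
import random

P = 2**127 - 1  # a Mersenne prime; every key must be an integer below P

def split_key(secret, n, m, rng=random):
    """Split `secret` into m shares so that any n of them suffice.
    The secret is the constant term of a random degree-(n-1) polynomial;
    share i is the point (i, poly(i)) over GF(P)."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(n - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):        # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, poly(x)) for x in range(1, m + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse (Fermat, P prime)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

With a threshold of n = 3 and m = 5 servers, any three shares rebuild the key, any two reveal (essentially) nothing, and up to two servers may fail or be compromised.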
The actual data encryption, and the creation and storage of key-shares on the key servers, is performed by a local tool on the machine of the user that produces the image. We have implemented a prototype of this encrypted storage architecture, called CryptStore, which is described in more detail in our publication Encrypted Storage of Medical Data on a Grid [11].
_E. µgrid_
Grid5000 is an experimental platform for grid computing research that does not make any assumption about the middleware to be used. Instead, Grid5000 users deploy the middleware they need for their research and experiments. We have deployed the µgrid middleware [12] over the Grid5000 infrastructure. µgrid is a lightweight middleware prototype that was developed for research purposes and has already been used to deploy medical image processing applications [13]. The µgrid middleware was designed to use the clusters of PCs available in laboratories or hospitals. It is intended to remain easy to install, use, and maintain. Therefore, it does not make any assumption about the network or the operating system, except that independent hosts with private CPU, memory, and disk resources are connected through an IP network and can exchange messages via communication ports. This matches the Grid5000 platform. The middleware provides the basic functionality needed for batch-oriented applications: it enables transparent access to data for jobs executed from a user interface. The code of µgrid is licensed under the GNU Public License and is freely available from the authors' web page.
In the µgrid middleware, a pool of hosts providing storage space and computing power is transparently managed by a _farm manager_. This manager collects information about the controlled hosts and also serves as the entry point to the grid. The µgrid middleware is packaged as three elements encompassing all the services offered:

1) A host daemon, running on each grid computing host, that manages the local CPU, disk, and memory resources. It is implemented as a multi-process daemon forking a new process to handle each assigned task. It offers the basic services for job execution, data storage, and data retrieval on a farm.
2) A farm daemon, running on each cluster, that manages a pool of hosts. It is implemented as a multithreaded daemon.
-----
3) A user interface that handles the communication with farm daemons and access to the grid resources.
Although logically separated, the three µgrid components
may be executed as different processes on a single host. The
communications between these processes are performed using
secured sockets. Therefore, a set of hosts interconnected via an
IP network can also be used to run these elements separately.
The µgrid middleware offers the following services:
- User authentication through X509 certificates. Certificates
are delivered by a certification authority that can be set
up using the sample commands provided in the openSSL
distribution.
- Data registration and replication. The middleware offers the virtual view of a single file system even though data are actually distributed over multiple hosts. Files on the hosts need to be registered with the farm to be accessible from the grid middleware. Furthermore, data can be transparently replicated by the middleware for efficiency reasons.
- Job execution. Computing tasks are executed on the
grid hosts as independent processes. Each job is a call
to a binary command possibly including command line
arguments such as registered grid files.
These functionalities are handled by the µgrid components as follows. The farm daemon's role is to control a computing farm composed of one or more hosts. It holds a database of host capacities, grid files, and a queue of scheduled jobs. MySQL is used as the database back-end. When started, the farm daemon connects to the database back-end. If it cannot find the µgrid database, it considers that it is being executed for the first time, sets up the database, and creates empty tables. Otherwise, it finds in the database the list of grid files registered during previous executions and the hosts where those files are physically instantiated.
The host manager's role is to manage the resources available on a host. When started, it collects data about the host's CPU power, available memory, and available disk space. It connects to the farm manager indicated on the command line or in a configuration file and sends it the host information. The host manager encompasses both a data storage/retrieval service and a job execution service.
To make use of the system, a user has to know a farm manager to which requests can be directed. For convenience, the latest farm managers addressed are cached in the user's home directory. Through the user interface, a user may request file creation, replication, or destruction, and job execution. These requests are sent to a farm manager, which is responsible for locating the proper host able to handle the user request. To avoid unnecessary network load, the farm manager does not interpose itself between the user interface and the target host: it only provides the user interface with the identity of the target host and then lets the user interface establish a direct connection with that host for completion of the task. The system is fault-tolerant in the sense that if the farm manager becomes unreachable (e.g. due to a network failure or the process being killed), the user interface parses the list of cached farm managers
until it restarts and registers again. The user interface consists
of a C++ API. A single class enables communication with the grid and access to all the implemented functionalities. A command-line interface has also been implemented on top of this API; it offers access to all the functionalities through four UNIX-like commands: ucp, urm, and uls for file management (similar to the UNIX cp, rm, and ls commands, respectively), plus usubmit for starting program execution on the grid.
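The brokering behavior described above can be sketched as follows. The capacity fields and the selection policy are illustrative assumptions, not the actual µgrid schema: the farm manager merely resolves a request to one target host, after which the user interface contacts that host directly.

```python
def select_host(hosts, mem_needed, disk_needed):
    """Resolve a user request to a target host, as the farm manager
    does with its database of host capacities.  Policy here: the
    fastest host satisfying the memory and disk requirements."""
    candidates = [h for h in hosts
                  if h["mem"] >= mem_needed and h["disk"] >= disk_needed]
    if not candidates:
        raise LookupError("no host in the farm can handle this request")
    return max(candidates, key=lambda h: h["cpu"])
```

Once a host is chosen, the farm manager steps out of the data path; only the (host, request) resolution goes through it.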
_F. Analyzing medical images_
Many computerized medical image analysis algorithms are available today. Grids are particularly well suited to tackle very compute-intensive applications such as those requiring the analysis of full image databases. Indeed, massive data parallelism can often be exploited in such applications to distribute the workload over a grid infrastructure [14].

Computer Assisted Diagnosis techniques rely on comparing a target image against annotated reference databases. The image comparison techniques vary greatly depending on the concrete medical objective pursued. Some global image indexing and analysis techniques have been proposed in the literature, based both on global image descriptors (histograms, global filter responses, etc.) and on local area features (local intensity and texture analysis, etc.) [15]. Although some interesting use cases can be implemented with them, extracting a precise medical parameter requires adapting these generic descriptors.
In this paper, we consider an application for breast cancer lesion detection in mammograms. The objective is to provide a computerized double reading of mammograms: a computer program selects images with an identified risk of malignant lesions for expert reading by a trained radiologist. Algorithms that may produce false positives (false alarms) but no false negatives (no malignant cases ignored) can be used for such an application. Image analysis procedures for mammograms based on a large number of local image descriptors have been proposed, e.g. in [16].
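As a concrete, and deliberately simplistic, instance of a global descriptor, the following sketch scores two gray-level images by histogram intersection. It only illustrates the idea of a similarity score; a realistic comparison job would use the richer local descriptors cited above.

```python
def histogram(pixels, bins=16, max_value=256):
    """Normalized gray-level histogram: a global image descriptor."""
    counts = [0] * bins
    for p in pixels:
        counts[p * bins // max_value] += 1
    total = len(pixels)
    return [c / total for c in counts]

def similarity(img_a, img_b, bins=16):
    """Histogram intersection in [0, 1]; 1 means identical
    gray-level distributions, 0 means disjoint ones."""
    ha, hb = histogram(img_a, bins), histogram(img_b, bins)
    return sum(min(a, b) for a, b in zip(ha, hb))
```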
_G. Sorting Algorithm_
Image analysis for assisted diagnosis first consists of comparison jobs whose results have to be sorted. Sorting can be done either on safe resources (typically the verifiers) or on the computing grid. The choice depends on the number of safe resources available. In the first case, O(n log n) comparisons should be checkpointed to ensure the sorting of n scores; this approach should be preferred in general. In the latter case, one should consider an auto-tolerant algorithm to complement the result-checking approach (see §II-B); this approach is required when the safe resources are confined to limited embedded systems and/or have limited computing power. To facilitate the certification, we consider sorting algorithms composed of only one type of task. This leads us to the sorting networks analyzed by Batcher [17] and by Ajtai, Komlós, and Szemerédi [18]. Such a network consists of n registers and a collection of comparators, where
[Figure 4: the Grid5000 front-end and the farm manager run on safe resources; host managers on unsafe resources execute the comparison tasks (C) and then the sorting tasks (S), each stage being followed by a certification process.]

(1) The user authenticates to the front-end server
(2) A new mammogram I is sent for analysis
(3) Using the metadata of I, the indexes of n images are selected on the storage grid
(4) The farm manager submits n comparison jobs to the host managers; input images are anonymized
(5) Scores are certified to be correct using result-checking algorithms
(6) The farm manager submits sorting jobs to the host managers
(7) The sorting process is certified correct using result-checking algorithms; a table T containing the sorted scores with pointers to the corresponding images is produced
(8) The first 10% of the entries of T are sent back to the user

Fig. 4. Detailed protocol for the RAGTIME Demonstration
n is the number of items to be sorted. Each register holds one of the items to be sorted, and each comparator is a 2-input, 2-output device that outputs its two input items in sorted order. The comparators are partitioned into levels so that each register is involved in at most one comparison per level. The depth of the network is defined as the number of levels, and the size of the network as the number of comparators. Extending this model to our application, an algorithm composed only of comparator tasks has been considered. One can show that such an algorithm requires at least Ω(n log n) comparators and Ω(log n) levels, and this bound is reached by the AKS network [18]. As the best sequential algorithm has a time complexity of O(n log n), the best improvement that can be expected using n processors is O(log n), so the AKS network is optimal up to a constant. In practice, the constant hidden in the O notation of AKS makes it less efficient than Batcher's bitonic sort [17] (with size O(n log² n) and depth O(log² n)), which should be preferred.
It remains to make this algorithm auto-tolerant to comparator task failures. The destructive fault model introduced in [19] has been considered: a faulty comparator task with inputs x and y can output f(x, y) and g(x, y), where f and g can be any of the following functions: x, y, min(x, y), or max(x, y). In the case of random faults, and given an n-item sorting network with depth d and size N, Assaf and Upfal showed how to construct a network with O(N log N) comparators and O(d) levels that (with high probability) can sort n items even if a constant fraction of the comparators are faulty. Applied to the AKS network, this leads to a size of O(n log² n), and Leighton and Ma [20] demonstrate that this size is optimal. With the bitonic sort algorithm, an algorithm with size O(n log³ n) (the checkpoint cost) and depth O(log² n) is obtained.
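The bitonic network can be written down explicitly as levels of comparators. The following is a sketch of Batcher's construction for n a power of two; the `(min_pos, max_pos)` encoding of a comparator is our own convention, and a faulty comparator from the destructive fault model is injected simply by swapping the comparator function.

```python
def bitonic_network(n):
    """Comparator levels of Batcher's bitonic sorter for n = 2**k.
    Each comparator (lo, hi) routes the minimum to register lo and
    the maximum to register hi."""
    assert n > 1 and n & (n - 1) == 0, "n must be a power of two"
    levels = []
    k = 2
    while k <= n:              # size of the bitonic blocks being merged
        j = k // 2
        while j >= 1:          # distance between compared registers
            level = []
            for i in range(n):
                partner = i ^ j
                if partner > i:
                    if i & k == 0:
                        level.append((i, partner))   # ascending block
                    else:
                        level.append((partner, i))   # descending block
            levels.append(level)
            j //= 2
        k *= 2
    return levels

def apply_network(levels, items, comparator=None):
    """Run the network; a faulty comparator from the destructive
    fault model (e.g. one that passes its inputs through unchanged)
    can be plugged in via `comparator`."""
    comparator = comparator or (lambda x, y: (min(x, y), max(x, y)))
    regs = list(items)
    for level in levels:
        for lo, hi in level:
            regs[lo], regs[hi] = comparator(regs[lo], regs[hi])
    return regs
```

For n = 8 the network has 1 + 2 + 3 = 6 levels and 24 comparators, matching the O(log² n) depth and O(n log² n) size quoted above.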
III. EXPERIMENTAL PROTOCOL
As mentioned in §II-B, the availability of safe resources is assumed. They host the controllers, the checkpoint server, and the farm manager. In addition, the storage grid, the front-end server, and the key servers are assumed to reside on these resources. Concerning the image database, §II-D demonstrates how to provide secure storage and access. The remaining resources compose the computing grid and are assumed unsafe. They each run the host manager daemon required by the µgrid middleware (see §II-E). The exact protocol of the experiment is summarized in figure 4. It combines the architectural components detailed in §II to provide a complete and secure platform able to perform breast cancer lesion detection in mammograms. The protocol conducted in the experiment is now detailed:
1) The user authenticates to the front-end server. The
authentication system is the one used in the Grid5000
project (see II-A). Communications between the user
machine and the front-end server are encrypted using
SSL to ensure privacy of the request.
2) The user submits a new mammogram I to analyze.
3) The controller submits to the storage grid the meta-data
of the image I to select a set of indexes on n images
{Ii}0≤i<n that match the meta-data of I.
4) The images of the set {I} ∪ {Ii}0≤i<n are anonymized
(see §II-C). Then, the farm manager submits n compari- […]
5) […]larity computations (see §II-B).
6) The farm manager submits sorting jobs to execute a
fault-tolerant extension of the bitonic algorithm (§II-G).
7) The sorting process is certified using the result-checking
algorithms developed in §II-B. This produces a table T
containing the sorted scores together with indexes on the
corresponding images Ii.
8) Only the first results are likely to interest the user. Consequently, only the first 10% of the entries of T are returned.
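The final truncation (step 8) is simple to make precise; a minimal sketch follows, where the function name and the (score, index) layout of the certified table T are our assumptions, not part of the µgrid API.

```python
def top_fraction(table, fraction=0.10):
    """Keep only the first `fraction` of the sorted score table T.
    `table` is a list of (score, image_index) pairs already sorted
    by decreasing similarity; at least one entry is returned."""
    keep = max(1, int(len(table) * fraction))
    return table[:keep]
```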
This complete architecture has been successfully deployed
on Grid5000 where unsafe resources have been simulated to
validate the approach. For this experiment, we only had a
small database of mammograms (for legal reasons, it is
difficult to gain access to medical images). Yet we hope that the
encouraging results presented in this paper will permit
access to a wider set of mammograms: as mentioned in the
introduction, we are negotiating access to a distributed database on
EGEE.
IV. CONCLUSION & FUTURE WORK
This paper presents an illustration of federated research
within the RAGTIME project. The specialties of the respective authors have been combined to provide a robust and
secure architecture, able to process medical images for assisted
diagnosis. The infrastructure is reachable from any machine
connected to the Internet, thereby improving the mobility
of the care practitioners likely to use it: they gain quick
and easy access to results, even from their desks. In the context
of this article, we considered an application of breast cancer
lesion detection in mammograms (even though this infrastructure
can be extended to any kind of medical image processing).
The complete architecture has been successfully deployed and
validated on the Grid5000 large-scale infrastructure, even though
we only had a small database of images. Having access to
a bigger database will make it possible to provide significant
experimental results. Work currently in progress consists in
designing a graphical client to illustrate each step of the application described in §III. Future work includes the integration
of access to an EGEE database of medical images and the
use of smartcards for authentication in step (1) of Figure 4.
REFERENCES
[1] A. Krings, J.-L. Roch, S. Jafar, and S. Varrette, “A Probabilistic
Approach for Task and Result Certification of Large-scale Distributed
Applications in Hostile Environments,” in Proceedings of the European
_Grid Conference (EGC2005), ser. LNCS 3470, S. Verlag, Ed., LNCS._
Amsterdam, Netherlands: Springer Verlag, February 14–16 2005.
[2] S. Jafar, T. Gautier, A. W. Krings, and J.-L. Roch, “A checkpoint/recovery model for heterogeneous dataflow computations using
work-stealing.” in Euro-Par, 2005, pp. 675–684.
[3] S. Varrette, S. Georget, J. Montagnat, J.-L. Roch, and F. Leprevost,
“Distributed Authentication in GRID5000,” in LNCS OnTheMove Fed_erated Conferences - Workshop ”Grid Computing and its Application to_
_Data Analysis (GADA’05)”, ser. LNCS 3762, R. Meersman and al., Eds._
Agia Napa, Cyprus: Springer Verlag, November 1 2005, pp. 314–326.
[4] L. F. G. Sarmenta, “Sabotage-Tolerance Mechanisms for Volunteer
Computing Systems,” in ACM/IEEE International Symposium on Cluster
_Computing and the Grid (CCGrid'01), Brisbane, Australia, May 2001._
[6] I. Foster and C. Kesselman, Globus: A metacomputing infrastructure
toolkit,” International J. of Supercomputer Applications and High Per_formance Computing, vol. 11, no. 2, pp. 115–128, Summer 1997._
[7] S. Jafar, S. Varrette, and J.-L. Roch, “Using Data-Flow Analysis for
Resilience and Result Checking in Peer-to-Peer Computations,” in IEEE
_DEXA’2004 - Workshop GLOBE’04: Grid and Peer-to-Peer Computing_
_Impacts on Large Scale Heterogeneous Distributed Database Systems,_
IEEE, Ed., Zaragoza, Spain, September 2004, pp. 512–516.
[8] A. W. Krings, J.-L. Roch, and S. Jafar, “Certification of large distributed
computations with task dependencies in hostile environments,” in IEEE
_Electro/Information Technology Conference, (EIT 2005), IEEE, Ed.,_
Lincoln, Nebraska, May 2005.
[9] L. Seitz, J. Montagnat, J. M. Pierson, D. Oriol, and D. Lingrand, “Authentication and Authorization Prototype on the µgrid for Medical Data
Management,” in From Grid to Healthgrid, Proceedings of Healthgrid
_2005._ Oxford, UK: IOS Press, April 2005, pp. 222–233.
[10] A. Shamir, “How to Share a Secret,” in Communications of the ACM,
vol. 22, 1979, pp. 612–613.
[11] L. Seitz, J. M. Pierson, and L. Brunie, “Encrypted Storage of Medical
Data on a Grid,” Methods of Information in Medicine, vol. 44, no. 2,
pp. 198–201, February 2005.
[12] L. Seitz, J. Montagnat, J.-M. Pierson, D. Oriol, and D. Lingrand,
“Authentication and autorisation prototype on the microgrid for medical
data management,” in HealthGrid05, Oxford, United Kingdom, Apr.
2005.
[13] J. Montagnat, V. Breton, and I. Magnin, “Partitioning medical image
databases for content-based queries on a grid,” Methods of Information
_in Medicine, vol. 44, no. 2, pp. 154–160, 2005._
[14] J. Montagnat, F. Bellet, H. Benoit-Cattin, V. Breton, L. Brunie,
H. Duque, Y. Legr´e, I. Magnin, L. Maigne, S. Miguet, J.-M. Pierson,
L. Seitz, and T. Tweed, “Medical images simulation, storage, and processing on the european datagrid testbed,” Journal of Grid Computing,
vol. 2, no. 4, pp. 387–400, Dec. 2004.
[15] T. Glatard, J. Montagnat, and I. Magnin, “Texture based medical image
indexing and retrieval: application to cardiac imaging,” in Proceedings of
_ACM Multimedia 2004, workshop on Multimedia Information Retrieval_
_(MIR’04), New York, USA, Oct. 2004._
[16] T. Tweed and S. Miguet, “Automatic detection of regions in interest
in mammographies based on a combined analysis of texture and histogram,” in International Conference on Pattern Recognition, Quebec
City, Canada, Aug. 2002.
[17] K. E. Batcher, “Sorting Networks and their Applications,” in Proc.
_AFIPS Spring Joint Computer Conference, vol. 32, 1968, pp. 307–314,_
http://www.cs.kent.edu/~batcher/sort.ps.
[18] M. Ajtai, J. Komlos, and E. Szemeredi, “An O(n log n) sorting network,” in Proc. of the 15th Annual ACM Symposium on Theory of
_Computing, Boston, April 1983, pp. 1–9._
[19] S. Assaf and E. Upfal, “Fault Tolerant Sorting Networks.” SIAM J.
_Discrete Math., vol. 4, no. 4, pp. 472–480, 1991._
[20] T. Leighton, Y. Ma, and C. G. Plaxton, “Breaking the O(n log² n)
Barrier for Sorting with Faults,” Journal of Computer and System
_Sciences, vol. 54, no. 2, pp. 265–304, 1997._
## Author manuscript
#### J Mater Res. Author manuscript; available in PMC 2019 January 27.
Published in final edited form as:
J Mater Res. 2018 July 27; 33(14): 1948–1959. doi:10.1557/jmr.2018.112.
## Systematic characterization of 3D-printed PCL/β-TCP scaffolds for biomedical devices and bone tissue engineering: influence of composition and porosity
**Arnaud Bruyas[§],**
Department of Orthopaedic Surgery, Stanford University, 300 Pasteur Drive, 94305, Stanford CA
**Frank Lou[§],**
Department of Mechanical Engineering, Stanford University, 440 Escondido Mall, 94305, Stanford
CA
**Alexander M. Stahl,**
Department of Chemistry, Orthopaedic Surgery, Stanford University, 300 Pasteur Drive, 94305,
Stanford CA
**Michael Gardner,**
Department of Orthopaedic Surgery, Stanford University, 300 Pasteur Drive, 94305, Stanford CA
**William Maloney,**
Department of Orthopaedic Surgery, Stanford University, 300 Pasteur Drive, 94305, Stanford CA
**Stuart Goodman, and**
Department of Orthopaedic Surgery, Stanford University, 300 Pasteur Drive, 94305, Stanford CA
**Yunzhi Peter Yang**
Department of Orthopaedic Surgery, Bioengineering, Material Science and Engineering, Stanford
University, 300 Pasteur Drive, 94305, Stanford CA
### Abstract
This work aims at providing guidance through systematic experimental characterization, for the
design of 3D printed scaffolds for potential orthopaedic applications, focusing on fused deposition
modeling (FDM) with a composite of clinically available polycaprolactone (PCL) and β-tricalcium
phosphate (β-TCP). First, we studied the effect of the chemical composition (0% to 60% β-TCP/
PCL) on the scaffold's properties. We showed that surface roughness and contact angle were
respectively proportional and inversely proportional to the amount of β-TCP, and that degradation
rate increased with the amount of ceramic. Biologically, the addition of β-TCP enhanced
proliferation and osteogenic differentiation of C3H10. Secondly, we systematically investigated
the effect of the composition and the porosity on the 3D printed scaffold mechanical properties.
Both an increasing amount of β-TCP and a decreasing porosity augmented the apparent Young's
modulus of the 3D printed scaffolds. Third, as a proof-of-concept, a novel multi-material
biomimetic implant was designed and fabricated for potential disk replacement.
Correspondence to: Yunzhi Peter Yang.
§Both authors contributed equally to this work.
**Keywords**
composite; bone; biomimetic
### Introduction
Design and manufacturing of synthetic scaffolds to stimulate bone repair has been
extensively studied. [1][2][3][4] It is reported that a construct should ideally present the
following properties: biocompatibility, high porosity with interconnected pores to allow cell
ingrowth, sufficient mechanical strength, promotion of cell adhesion (osteoconductive) and
activity (osteoinductive), appropriate degradation, and custom-fit geometry. [3] The design
of a construct is therefore complex, and different manufacturing processes have been
explored in this regard, such as casting, molding, or electrospinning. Recently, additive
manufacturing (AM) has received increasing attention in medical devices and tissue
engineering, because it allows the manufacturing of constructs with a controllable and
accurate material layout, leading to a high potential for complex geometries and better
control over the porosity and pore layout in the construct. [5] [6] [7] In particular, Fused
Deposition Modeling (FDM) is of interest for bone tissue engineering, because of its ease of
use, the variety of biocompatible polymers and composites that can be exploited, and
because porous structures can be accurately generated. Numerous studies have been
published regarding this topic, and a large number of geometries as well as materials have
been explored. [8][9][10][11]
Regarding material composition, medical grade poly(ε-caprolactone) (PCL) and beta
tricalcium phosphate (β-TCP) are two widely used materials for orthopaedic applications.
[1][5][12] PCL is a biocompatible synthetic polymer that is employed for biodegradable
implants, due to its biocompatibility, long-term biodegradability, FDA approval and
relatively low cost. [8][13] In addition, PCL can easily be manufactured and manipulated,
thanks to its low melting point, its ease of mixture with other materials (polymers and
ceramics), and its compatibility with most AM methods, in particular FDM. [14] β-TCP is a
calcium phosphate derivative, similar to the calcium phosphate material that comprises
between 60-70% of natural bone, thus presenting an inherent biomimetic potential. It is
biodegradable and has been demonstrated to have osteoconductive properties in encouraging
new bone growth, thus reducing patient recovery time due to the generation of natural bone
tissue. [15] The combination of these two materials offers unique properties and presents
great interest for regenerative medicine, and many studies have been conducted on this
particular composite for medical devices, implants and constructs for orthopaedic
applications. [16] [17] [18] During the design of the construct, its porosity and its
composition must be carefully defined, since both parameters can be related to the scaffold's
properties (chemical, physical and biological) known to be of influence in tissue
engineering, such as surface composition, degradation, stiffness, surface morphology and
hydrophilicity, cell proliferation and differentiation. Several studies have linked some of
these properties to the porosity and/or the composition of the construct. For instance, Huang
et al. [14] showed that the addition of β-TCP in PCL improved the 3D printed scaffold's
mechanical performance, and Yeo et al. studied in detail the degradation of PCL/ β-TCP
scaffolds (80:20) both in vitro and in vivo. [16] Hollister et al. [19][20] focused on the
relationship between porosity and mechanical stiffness to mimic different types of bone
tissue, and Shin et al. [21] highlighted the influence of calcium phosphate on osteogenic
differentiation of human mesenchymal stem cells, demonstrating an increasing ALP activity
of scaffolds with calcium phosphate compared to PCL only. However, the definition of both
the composition and the porosity for a given application often remains the result of a
trial-and-error approach, which seems largely sub-optimal given the number of properties and
their interdependence.
In this paper, we report on a systematic approach to this problem, by manufacturing and
assessing the properties of 3D printed PCL/ β-TCP scaffolds with different compositions
and porosities. We first focused on the effect of the composition. For that purpose, we
synthesized PCL/β-TCP composites with different proportions of ceramic (0%, 20%, 40%,
60% β-TCP) and 3D printed non-porous scaffolds. For each composition, we examined the
surface morphology, evaluated the hydrophilicity, quantified the degradation speed and
determined the mechanical properties of the material. We then performed in-vitro biological
experiments in order to assess the composition effect on cell proliferation and osteogenic
differentiation. Next, we systematically assessed the effect of both the composition and the
porosity on mechanical properties. We produced 20 groups of scaffolds with different
porosities and composition, measured their volume and characterized their mechanical
performances. In the last part of the paper, as a proof of concept, a 3D printed implant
composed of multiple materials and porosities was designed and fabricated for the
application of disc replacement. The choice of the design parameters was detailed and a
prototype was manufactured.
### Experimental
**A/ Synthesis and composition analysis of PCL/ β-TCP materials**
Polycaprolactone (Sigma-Aldrich, USA) and β-TCP powder with particle size averaging
100nm (Berkeley Advanced Materials Inc., USA) were weighed with respect to the required
PCL/ β-TCP ratio. 10% (wt/v) PCL solution and 5% (wt/v) β-TCP solution in
dimethylformamide were prepared at 80°C and stirred for 3 hours, before being mixed
together and thoroughly stirred for another 1 hour. This solution was then precipitated in a
large volume of water at room temperature to remove the solvent. The material was then
dried for 24h at room temperature, and manually cut into pellets of approximately 5mm in
diameter. This synthesis process was repeated for each ratio of PCL to β-TCP explored in
this study: 100/0, 80/20, 60/40 and 40/60.
Similar to the test performed by Lepoittevin et al. [22], the composition of the synthesized
composites was validated by thermal gravimetric analysis (TGA), using a TA instrument
Q500 TGA (TA instrument, USA). Briefly, the samples were heated up to 550°C with a
constant increase of 20°C per minute, while monitoring their mass over time. Because of the
thermal properties of both materials, the remaining mass at the end of the test was
considered to be pure β-TCP.
**B/ Filament fabrication and scaffold 3D printing**
Using an in-house built screw extruder (see Figure S1), the solid pellets of PCL/ β-TCP
were melted at 90°C and extruded into a filament of constant diameter for FDM 3D printing.
Average filament diameter for each group was measured and the values were used for each
group respectively during the 3D printing process. Scaffolds were manufactured using a
Lulzbot Mini (Aleph Objects Inc, USA) with a nozzle of 500μm. The printing temperature
was set to 160°C so that each ratio could be printed smoothly. The layer thickness was set to
200μm, each layer being constituted of parallel struts with an orientation of 90° relative to
the previous layer. The printing speed was set to 5mm/s, and was calibrated to deposit struts
of width ranging from 350μm to 400μm. Two types of scaffold were manufactured. For
surface characterization, and biological studies, disks of diameter 10mm, thickness 600μm
and strut distance 0.4mm (0% porosity) were manufactured, resulting in a scaffold with a
plain surface. For mechanical testing, the specimens were cylinders of diameter 10mm and
height 5mm. 5 different porosities were explored for each material composition. They
correspond to 5 different strut distances: 0.4mm, 0.5mm, 0.71mm, 1.25mm and 2.5mm.
0.4mm is the theoretical distance for a porosity of 0%, and 2.5mm has been empirically
defined as the maximum value that still ensures scaffold integrity.
**C/ Surface characterization**
Hydrophobicity was evaluated using a contact angle goniometer, Ramé-Hart 290 (Ramé-Hart Instrument Co., USA). Briefly, a droplet (4μL) was deposited at the center of the disc
scaffold, and the contact angle was measured 1 minute after deposition using image
processing. Five samples were tested for each group. Because the contact angle is linked to
surface chemistry as well as morphology, and because 3D printing affects surface
morphology, two assays were carried out. The first assay aimed at studying the impact of the
composition only. To homogenize the surfaces after 3D printing, post-processing was
performed. The scaffolds were placed in an oven at 80°C onto a glass substrate for 10
minutes in order to melt the samples and obtain similar surfaces for all the compositions.
The second assay was performed on the scaffolds directly after printing in order to observe
the influence of both the material composition and the manufacturing process.
The morphology of the surface was evaluated qualitatively using second electron emission
imaging. The samples were first cleaned in ethanol and cut to size using a scalpel. Samples
were sputter-coated with gold (10 nm) (SPI Sputter, SPI Supplier Division of Structure Prob
Inc., West Chester, PA, USA). A scanning electron microscope (SEM, Zeiss Sigma FESEM)
was then used to image the samples at three different magnifications: 150, 1000, and 10000,
with an acceleration voltage of 3 kV.
Surface roughness (arithmetic average roughness Ra) of the disk scaffolds was quantified
using a profilometer (Dektak XT, Bruker, MA, USA). The profiles were measured over a
line of 1mm length following a single strut, with a stylus force of 1mg and a measuring
range of 6.5μm. 5 samples were tested per group.
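The arithmetic average roughness Ra reported here is the mean absolute deviation of the measured height profile from its mean line; a minimal sketch (the function name is ours, and the real instrument applies filtering this sketch omits):

```python
def arithmetic_roughness(profile):
    """Arithmetic average roughness Ra: mean absolute deviation of
    the height profile from its mean line, for heights sampled along
    the 1 mm trace (all values in the same unit, e.g. nm)."""
    mean_line = sum(profile) / len(profile)
    return sum(abs(z - mean_line) for z in profile) / len(profile)
```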
**D/ Degradation**
Because PCL is a slowly biodegradable polymer, accelerated degradation was performed
following the protocol described by Lam et al. [23] Pieces of filaments (length 20mm) were
used as specimens to quantify the degradation of the material itself and not the scaffold after
printing. The specimens were immerged in a 5M NaOH solution (15mL) and incubated at
37°C. At predefined timepoints, they were removed, dried in an oven at 30°C for 45 minutes
to remove any liquid, and weighed. Degradation was then quantified through measuring
mass loss, i.e, the change in mass of the bulk recovered filament sample throughout the
duration of the experiment.
**E/ Biological study**
For all biological studies, multi-potent mouse C3H10T1/2 fibroblasts (ATCC, USA) were
used as a model to study osteogenic differentiation of mesenchymal stem cells. They were
cultured in DMEM (Life Technologies, USA) supplemented with 10% fetal bovine serum
(FBS, Life Technologies, USA) and 1% penicillin–streptomycin (PS). Medium was changed every two days. Cells were
incubated at 37°C, 5% CO2 in a humidified incubator.
Biological experiments were performed using flat non-porous scaffolds in order to assess the
impact of the composition only and for practical reasons of image analysis of non-transparent
samples (details are provided in supplementary material 2). After manufacturing, disc
samples were immersed in a 70% ethanol solution for 20 min, rinsed in PBS 3 times and
dried overnight. Cell seeding was performed in 24 wells culture plates by depositing cells
suspended in media on the surface of the disc with a concentration of 0.8×10⁴ cells/cm²,
incubating them for 20 min before filling the well with 1mL of media. Cell proliferation and
osteogenic differentiation were both assessed at day 1, 7, 11. Proliferation was evaluated by
moving the scaffolds to new wells, detaching the cells from the scaffold using 0.05% trypsin
(Life Technologies, USA), suspending them in media, and counting them using a Z2 particle
counter (Beckman Coulter, USA). For cell differentiation, Alkaline Phosphatase (ALP)
activity of cells was assessed through semi-quantitative staining. Alkaline Phosphatase kit
(Sigma-Aldrich, USA) was used and the staining was performed following the manufacturer's
instructions. At designated time-points, cells were fixed for 1 minute in 3.7% formaldehyde
and samples were incubated for 1 hour with ALP stain. After staining, the scaffolds were
imaged and the ALP levels were quantified using image processing performed with Matlab
R2013 (MathWorks, USA). Briefly, color features of the images of the scaffolds were
extracted, quantified, and the pixel values were averaged over the entire surface of the
scaffold. For each composition, the obtained values at day 7 and 11 were normalized using
the average value at day 1.
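The semi-quantitative readout described above can be sketched as follows (our naming; the study performed this analysis in Matlab on extracted color features, whereas this sketch averages a single stain channel):

```python
def alp_level(image):
    """Semi-quantitative ALP readout: average the stain-channel
    pixel values over the scaffold surface (image as a 2D list)."""
    pixels = [p for row in image for p in row]
    return sum(pixels) / len(pixels)

def normalize_to_day1(day_values, day1_mean):
    """Normalize day-7/day-11 readouts by the day-1 average,
    as done per composition in the study."""
    return [v / day1_mean for v in day_values]
```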
**F/ Porosity measurement**
To assess the actual porosity of each scaffold, they were imaged using a micro-CT imaging
device eXplore CT120 (TriFoil Imaging, USA). Reconstruction was performed using
MicroView software (Parallax Innovations, Canada). Each composition was processed
independently because of their different sensitivities to X-ray imaging. For each, a threshold
value was identified using the automatic tool provided by the software. Using this value, the
volume of each individual sample was computed. To get the porosity p, the construct volume
Vc was compared to the overall volume of the cylinder Vt, using equation 1.
p = 1 − (Vc / Vt)    (1)
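Equation 1 in code form (our naming; Vc comes from the micro-CT reconstruction, Vt from the nominal cylinder dimensions of the printed specimen):

```python
import math

def porosity(construct_volume, total_volume):
    """p = 1 - Vc / Vt (equation 1)."""
    return 1.0 - construct_volume / total_volume

def cylinder_volume(diameter, height):
    """Overall volume Vt of a cylindrical scaffold: pi * r^2 * h."""
    return math.pi * (diameter / 2.0) ** 2 * height
```

For the 10 mm diameter, 5 mm tall specimens used here, Vt = π · 5² · 5 ≈ 392.7 mm³.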
**G/ Mechanical analysis**
First, the mechanical properties of the bulk materials were evaluated in order to examine the
properties of the material independently of the manufacturing process. For this purpose, 5
cm pieces of filament were directly used as specimens. Tensile testing was performed using
an Instron 5944 uniaxial testing system with a 2 kN load-cell (Instron Corporation,
Norwood, MA). A preload of 1N was applied, at a speed of 1% strain/s until 25% strain. The
values of the Young's modulus as well as the tensile strength at zero slope were extracted,
the first one being the value of the initial slope of the stress-strain curve, the second being
the ordinate of the zero slope point of the curve. Five specimens were tested per material
composition.
Then, 3D printed porous scaffolds were mechanically tested in compression using the same
instrument, following guidelines adapted from Huang et al. [14] A preload of 1N was
applied, and tests were performed at a speed of 1% strain/s until 25% strain. Five porosities
were independently tested for each composition in order to study the influence of both the
amount of β-TCP and the porosity on the mechanical properties. Five specimens were tested
for each group, and for each specimen, the apparent Young's modulus and the yield strength
at 1% were measured. The first one was identified as the slope of the initial linear portion of
the stress-strain curve. The second one corresponds to the ordinate of the intersection
between the stress-strain curve and a line with a slope equal to the Young's modulus starting
at an offset of 1% strain.
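The two readouts can be sketched as follows (our naming; a least-squares fit over an assumed initial linear region, here strains below 2%, stands in for the slope identification used in the study):

```python
def apparent_modulus(strain, stress, linear_limit=0.02):
    """Slope of the initial linear portion of the stress-strain curve,
    estimated by least squares over strains below `linear_limit`."""
    pts = [(e, s) for e, s in zip(strain, stress) if e <= linear_limit]
    n = len(pts)
    se = sum(e for e, _ in pts)
    ss = sum(s for _, s in pts)
    see = sum(e * e for e, _ in pts)
    ses = sum(e * s for e, s in pts)
    return (n * ses - se * ss) / (n * see - se * se)

def offset_yield(strain, stress, modulus, offset=0.01):
    """Yield strength at 1% offset: ordinate of the first point where
    the curve falls below the line of slope `modulus` through
    (offset, 0)."""
    for e, s in zip(strain, stress):
        if e > offset and s <= modulus * (e - offset):
            return s
    return None  # curve never meets the offset line
```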
**H/ Manufacturing of a multi-material 3D printed implant**
In order to 3D print a single piece construct consisting of several materials, an algorithm was
developed. Knowing the layout of the construct and the position of each material in the
construct's bulk volume, the corresponding length of filament for each volume required was
computed. The filament pieces were then manually fused in the order in which the printing
would complete each separate section to form a single multi-material construct. The
recomposed filament was then used by the printer to manufacture the multi-material
construct in a single iteration, with each major section of the construct utilizing a different
material. To prove the feasibility of multi-material constructs, a novel implant for disc
reconstruction was proposed. The design as well as a prototype are detailed in the Results
and Discussion sections.
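The length computation relies on conservation of volume between the section to print and the feedstock filament; a minimal sketch (our naming, not the algorithm's actual code):

```python
import math

def filament_length(section_volume_mm3, filament_diameter_mm):
    """Length of feedstock filament consumed by one section of the
    multi-material construct: by conservation of volume,
    L = V / (pi * (d/2)^2)."""
    area = math.pi * (filament_diameter_mm / 2.0) ** 2
    return section_volume_mm3 / area

def splice_plan(sections, filament_diameter_mm):
    """One (material, length) entry per section, in printing order;
    the pieces are then fused into a single filament."""
    return [(material, filament_length(volume, filament_diameter_mm))
            for material, volume in sections]
```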
**I/ Statistical analysis**
Data are presented using mean ± standard deviation. Statistical analyses were performed
using the t-test method when two groups were involved, and one-way analysis of variance
(ANOVA) with post-hoc Tukey's test for three or more groups. Differences were considered
significant for p < 0.05, as labeled in figures by the * symbol. Analyses were performed
using Matlab R2013 software (MathWorks, USA).
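For the two-group case, the comparison reduces to a two-sample t statistic; a self-contained sketch (we use Welch's unequal-variance form for illustration; the paper's Matlab analysis may use the pooled-variance form instead):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic for comparing two groups of
    measurements (significance is then read off at p < 0.05)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased variance
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```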
### Results and discussion
**A/ Material composition**
Using the TGA curves presented in Figure 1 (a), the ratios of β-TCP in the samples were
quantified as the remaining mass at 550°C. The values are 2.62%, 21.06%, 41.55%, 59.31%
for theoretical compositions of 0%, 20%, 40%, and 60% β-TCP, respectively, which
represents deviations of 2.62, 1.06, 1.55 and 0.69 percentage points, respectively. This disparity
can either be the result of impurities in the PCL or of experimental errors. This analysis
validates the material composition after synthesis, and therefore confirms that the solvent
method used in this study efficiently mixes PCL and β-TCP with a ratio up to 60% ceramic.
For future reference, higher amounts of β-TCP were attempted, but the synthesis failed as
the amount of PCL was too low to bind such a large amount of ceramic, resulting in a
collapsing composite after precipitation. Figure 1 (b) presents the differential
thermogravimetric curves, which are the first derivative of the TGA curves, providing
further information on material thermal degradation. The abscissa of the larger peak for each
composition are 378.3°C, 398.3°C, 402.1°C and 409.2°C, respectively for 0%, 20%, 40%
and 60% β-TCP, showing a slight increase in thermal stability due to the addition of β-TCP, which corroborates the results presented by Huang et al. [14]
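The TGA readout above reduces to a ratio and a percentage-point deviation; a minimal sketch (our naming):

```python
def beta_tcp_fraction(initial_mass_mg, residual_mass_mg):
    """TGA readout in percent: PCL burns off below 550 C, so the
    residual mass is taken to be pure beta-TCP."""
    return 100.0 * residual_mass_mg / initial_mass_mg

def composition_error(measured_pct, theoretical_pct):
    """Percentage-point deviation reported in the text
    (e.g. 21.06% measured vs 20% theoretical gives 1.06)."""
    return abs(measured_pct - theoretical_pct)
```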
Filament diameters following extrusion were also measured, averaging 2.75 mm ± 0.10, 2.43
mm ± 0.08, 2.45 mm ± 0.03, and 2.66 mm ± 0.04, respectively for β-TCP ratios of 0%,
20%, 40% and 60%.
**B/ Composition influence**
**1/ Surface characterization—Contact angle tests were first performed on post-processed**
scaffolds in order to quantify the influence of the composition alone. As shown in Figure 2
(a), all values are under 90° (between 70° and 80°), indicating that the materials tend to be
hydrophilic. Moreover, a small decrease in contact angle is observed as the amount of
ceramic increases, with statistical significance between 0%, 20% and 60%.
presents the results of similar tests on scaffolds without post-processing, in order to assess
both the influence of the composition and the manufacturing process on the contact angle.
Contrary to the previous test, no significant difference is noted between the different groups
although their chemical compositions are different. Similar results are demonstrated in [25],
where pure PCL presents a contact angle of 75°, while the addition of β-TCP did not
significantly affect the contact angle. These two tests show that the surface morphology
resulting from FDM 3D printing has a non-negligible impact on surface hydrophilicity. This
morphology is the result of the layer formation strut by strut, and is inherent to FDM 3D
printing technology.
Surface morphology was observed using SEM imaging, presented in Figure 2 (c)-(f). Cross-sectional images are also shown in Figure S3. At every scale, imaging shows a relatively
smooth surface for pure PCL, and an increase in surface roughness for higher β-TCP ratios.
At low magnification (x150), the morphology is the result of two phenomena. First, the
junction between struts of a single layer can be observed, resulting in linear ridges on the
surface, as indicated in Figure 2 (c)-(f) I). This is the result of the strut by strut deposition of
FDM 3D printing processes. Although careful calibration can attenuate it, it is related to the
inherent variability of the process and therefore cannot be entirely removed. Second, the
addition of nanoscopic ceramic powder introduces bumps at the surface. At this scale,
they are the result of aggregates of ceramic particles that assemble together because of the
high surface tension of nanoparticles and the separation of the hydrophilic ceramic particles
from the PCL matrix. Although both components of the composite were put in solution
separately and thorough mixing was carried out, interaction between particles was evidently
stronger. At ×1000 magnification, these aggregates can be better observed (Figure 2 (c)-(f)
II)). Because of the extrusion process, they are usually under a thin layer of polymer,
sometimes even creating holes in the surface of the material. Consequently, the higher the
amount of ceramic in the composite, the rougher the surface will be. At high magnification
(×10000), the ceramic particles can be distinguished, their average size being 100 nm
(Figure 2 (c)-(f) III)). Some of them are apparent, but judging by the texture of the surface,
not all of them seem to be exposed, and similar to aggregates, some of them are covered
with a thin layer of polymer.
Surface roughness measurements Ra are introduced in Figure 2 g), with values of 178.3
± 67.6 nm, 645.7 ± 84.7 nm, 1193.6 ± 97.6 nm, and 1837.6 ± 317.6 nm, respectively, for β-TCP ratios of 0%, 20%, 40%, and 60%. A significant increase of Ra is observed for
increasing amounts of ceramic, confirming the observations from the SEM images.
**2/ Degradation—Significant differences in degradation rates under accelerated conditions**
were observed between the different materials depending on the ratio of ceramic to polymer
utilized (Figure 3). Quantifiable mass loss in the two materials with higher ceramic content
(60% and 40%) commenced within 24 hours, with the 60% filament experiencing over 50%
mass loss within 10 hours. Lower ceramic content filament (20%) showed <5% mass loss
after the total 54 hours period of the trial, and all samples of pure PCL filament experienced
<1% mass loss over the same duration.
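The mass-loss percentages above follow from a simple calculation on the initial and remaining dry masses of each sample. The short sketch below uses a hypothetical helper name and example masses, not values from the study:

```python
def percent_mass_loss(initial_mass_mg: float, remaining_mass_mg: float) -> float:
    """Percent of the initial dry mass lost during degradation."""
    if initial_mass_mg <= 0:
        raise ValueError("initial mass must be positive")
    return 100.0 * (initial_mass_mg - remaining_mass_mg) / initial_mass_mg

# Example: a sample dropping from 120 mg to 55 mg has lost over 50% of its mass.
loss = percent_mass_loss(120.0, 55.0)
print(f"{loss:.1f}% mass loss")  # 54.2% mass loss
```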
These results are consistent with results presented in similar studies. Indeed, polyester
degradation under alkaline conditions is a well-known accelerated degradation protocol. [16]
[23] [24] It is thought that polyester degradation under these conditions mimics typical
degradation under aqueous conditions where ester-ester linkages are hydrolytically severed,
breaking apart the bulk material. The presence of additional -OH ions from alkaline solution
catalyzes this process, speeding up a degradation process that can require 3-4 years in vivo.
[23] Lei et al. suggested that the addition of β-TCP speeds up the degradation because β-TCP particles are only physically mixed in the composite, and submersion in the alkaline
media frees the β-TCP particles to convert into their more thermodynamically favorable form
of apatite in solution. [18] The void left by dissolving β-TCP particles additionally increases
the available surface area for the aforementioned hydrolytic attack on ester-ester linkages,
while also opening up more regions of β-TCP to be freed. Visually, it was noted that
solutions further along in the degradation process possessed a white powder-like substance
that precipitated along the bottom of the vials used for degradation; these are theorized to be
the aforementioned released β-TCP particles. Higher ceramic content filaments thus
experienced accelerated rates of degradation due to the presence of more β-TCP particles,
allowing for greater amounts of the ceramic to be released and quickening the rate at which
the filaments lost structural integrity. As such, it is worth noting that the extremely high
degradation rates of the higher ceramic content filaments (40% and 60%) are a result of disassembly of
the composite, i.e., loss of structural integrity, in combination with our assay method of mass
measurement of the samples, but not the dissolution/disappearance of the ceramic particles
themselves. However, this result does offer an approach via manipulation of the ceramic
ratio to control degradation rates, and further enables the creation of a bioresorbable bone
implant that can ideally be designed to match the natural variations found in bone healing
rates. [26]
**3/ Mechanical properties of bulk material—Tensile tests were performed on filament**
shape material of different composition in order to assess the bulk mechanical properties, i.e.,
the properties of the material before being affected by the manufacturing process (Figure 4
(a)). Both Young's modulus and yield strength are introduced in Figure 4 (b) and 4 (c).
Young's modulus average values were 264 MPa, 355 MPa, 495 MPa, and 1140 MPa,
respectively, for 0%, 20%, 40%, and 60% β-TCP content, and yield strength average values
were 14.2 MPa, 12.4 MPa, 10.74 MPa and 10.29 MPa for the same β-TCP ratios. Young's
modulus quantifies the material stiffness, and increases with the amount of ceramic in the
composite, with statistical significance between all the groups. The increase seems to be
linear up to 40%, and displays a larger increase between 40% and 60%. The yield strength,
which represents the maximum stress that can be applied to the material without permanent
deformation, decreased with increasing amount of ceramic in the composite. As a result,
the elasticity of the PCL/β-TCP composite reduces with higher quantities of ceramic.
Theoretical models have been developed to estimate the Young's modulus of particulate-filled systems. [27] For spherical particles added in a polymeric phase, the simplest model
equation has been identified by Einstein. [27] Under certain hypotheses (low ratio of
particles, perfect adhesion between the two phases of the composite, particles much more
rigid than the matrix) a linear dependency can be highlighted between the Young's modulus
of the composite and the Young's modulus of the polymer, according to the volume ratio of
particles. For large filler concentration, a more complex model has been developed by
Kerner, [27] which under the same hypotheses predicts considerably more stiffening action
of the filler compared to Einstein's model for higher particle concentration. Both these
models support the results presented in Figure 4 (b),(c).
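The two models can be sketched as simple functions of the filler volume fraction φ. The forms below are common statements of Einstein's equation, E_c = E_m(1 + 2.5φ), and of Kerner's equation for rigid spherical fillers in a softer matrix; they are an illustrative approximation rather than the exact fit used here. Note also that the 0–60% ratios reported in this paper are mass ratios, so a density-based conversion to volume fraction would be needed for a direct comparison:

```python
def einstein_modulus(e_matrix: float, phi: float) -> float:
    """Einstein's dilute-suspension estimate: E_c = E_m * (1 + 2.5 * phi)."""
    return e_matrix * (1.0 + 2.5 * phi)

def kerner_modulus(e_matrix: float, phi: float, nu_matrix: float = 0.35) -> float:
    """Kerner's estimate for rigid spherical fillers in a softer matrix:
    E_c = E_m * (1 + A * phi / (1 - phi)), with A = 15(1 - nu) / (8 - 10 nu).
    The Poisson's ratio of 0.35 is an assumed, typical polymer value."""
    a = 15.0 * (1.0 - nu_matrix) / (8.0 - 10.0 * nu_matrix)
    return e_matrix * (1.0 + a * phi / (1.0 - phi))

# Using the measured pure-PCL modulus (264 MPa) as the matrix value:
for phi in (0.0, 0.2, 0.4, 0.6):
    print(phi, round(einstein_modulus(264.0, phi)), round(kerner_modulus(264.0, phi)))
```

Consistent with the text, the Kerner form predicts considerably more stiffening than the Einstein form at high filler fractions, while the two agree closely in the dilute limit.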
**4/ Cell proliferation and differentiation—Figure 5 shows the proliferation results of**
mouse fibroblasts (C3H10 T1/2) at day 1, 7 and 11. At day 1, no significant difference was
noted between groups, highlighting the fact that scaffold composition had no influence on
cell attachment. Over the three time-points, the number of cells increased for all groups.
Starting at day 7, the number of cells on the sample containing β-TCP became significantly
higher than the number of cells on pure PCL scaffolds (about 50% more on average at day
11). Although it may seem that the number of cells slightly increased with the ceramic ratio
at day 11, no statistical difference was shown over the different β-TCP ratios. The
proliferation study indicated that addition of ceramic improves cell proliferation compared
to pure PCL. One possible explanation is the variation in surface morphology reported
previously, since it has been shown that cells proliferate better on rougher surfaces, as
rougher surfaces provide larger surface area for cell growth. [11]
Figure 6 (b) represents the relative ALP activity of the different groups for days 1, 7, and 11,
and examples of scaffolds at day 11 after staining are introduced in Figure 6 (a). For
compositions of 0% and 20% β-TCP, no significant increase was shown over time, while 40%
and 60% ratios indicated a significant increase of activity at day 11 compared to day 1 and 7,
with respective increases of 5% and 10%. Moreover, the ALP activity seems to increase with
the amount of β-TCP in the scaffold, with significant differences between 0% and 40%,60%,
and between 20% and 60% (see Figure 6 (b)). These results highlight the impact of the
addition of β-TCP on C3H10 osteogenic differentiation, and corresponding results can be
found in the literature. Shin et al. demonstrated a higher ALP activity for human
mesenchymal stem cells cultured on PCL/β-TCP compared to PCL only, pointing out a
potential impact of the exposed β-TCP particles at the surface of the scaffold. [21] Similar
results were shown by Polini et al., where the addition of β-TCP in PCL nanofibers improved
stem cell differentiation. [25]
The biological studies highlight the osteoconductive property of β-TCP and are in
accordance with other studies on the impact of calcium phosphate based materials (β-TCP or
hydroxyapatite) for both proliferation and differentiation of stem cells. [15] [25] It is not
clear, however, if this impact is the result of differences in chemical surface properties or of
differences in physical properties of the surface (hydrophobicity, roughness, stiffness).
Although we tried to isolate the influence of the composition only, it is inherently linked to
the physical properties through the manufacturing process. Post-processing techniques could
be considered in order to modify the physical properties and decrease their variability,
enabling more accurate assessment in the future.
**C/ Effect of composition and porosity on mechanical properties**
Characterization of the influence of porosity and composition on the mechanical properties
of 3D constructs was performed in a systematic manner. 20 different groups were
manufactured using FDM (Figure 7 (a)) with n=5 for each group. The average values of
porosity for each group are presented in Figure 7 (d). As expected, porosity was mostly
guided by the distance between each strut in the scaffold, and very low variation is noted
between the β-TCP ratios. After being tested under compression, apparent Young's modulus
and yield strength were computed for each group, as presented in Figure 7 (b) and 7 (c).
These values respectively ranged from 12 MPa (β-TCP ratio: 60%, strut distance: 2.5mm) to
188 MPa (β-TCP ratio: 60%, strut distance: 0.4 mm), and 0.7 MPa (β-TCP ratio: 60%, strut
distance: 2.5mm) to 15.4 MPa (β-TCP ratio: 0%, strut distance: 0.4 mm). In comparison,
Gibson [28] tested cancellous bone from varying regions of the body and demonstrated
Young's modulus values spanning the orders of magnitude from 10⁰ MPa to 10² MPa, a range
similar to the Young's modulus range demonstrated in Figure 7 (b). This
study could therefore be used to design and tailor constructs specific to the type of bone
considered in the application, improving its biomimicry, and hypothetically enhancing bone
regeneration rates.
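To make the comparison with cancellous bone concrete, a Gibson-type open-cell foam scaling law, E* ≈ C·E_s·(ρ*/ρ_s)², is often applied to such structures. The sketch below assumes relative density equal to (1 − porosity) and C = 1; this is an illustrative simplification, not a fit to the data in Figure 7:

```python
def apparent_modulus(e_bulk_mpa: float, porosity: float, c: float = 1.0) -> float:
    """Gibson-type scaling: E_apparent ~ C * E_bulk * (relative density)^2,
    with relative density taken as (1 - porosity)."""
    if not 0.0 <= porosity < 1.0:
        raise ValueError("porosity must be in [0, 1)")
    rel_density = 1.0 - porosity
    return c * e_bulk_mpa * rel_density ** 2

# A 264 MPa bulk material (the measured pure-PCL value) at 45% porosity:
print(round(apparent_modulus(264.0, 0.45), 1))  # 79.9
```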
To assess the extent of the influence of the composition versus the porosity, each mechanical
parameter was graphically related to the porosity in Figures 7 (e) and 7 (f). For the apparent
Young's modulus, a linear trend can be identified, the slope being steeper with the amount of
β-TCP. As a result, the composition of the construct has very little influence on its apparent
Young's modulus for high porosity. For lower porosity, an increasing amount of ceramic
results in higher Young's modulus. Comparing the values of the lowest porosity (close to
0%) to the results of the tensile tests performed on the same materials in a bulk form, a large
discrepancy can be noted, especially for higher amounts of ceramic, which highlights the
important influence of the FDM process and the geometry of the construct. Regarding yield
strength (Figure 7 (f)), the curves of the different compositions overlay, underscoring the fact that β-TCP amount has very little influence and that the yield strength of a 3D construct is guided
by the layout of its layers. Indeed, during compression testing, yield is the result of a
collapse of the construct when loaded, which makes the geometrical layout of the construct
of higher impact compared to the composition.
Interestingly, a similar apparent Young's modulus value can be achieved by constructs with
very distinct design parameters. For instance, scaffolds with 15% porosity/0% β-TCP, and
45% porosity/60% β-TCP will both have an apparent Young's modulus of about 100 MPa.
This overlap results in more freedom regarding construct design, and allows researchers
to take parameters other than solely the mechanical performance into consideration,
since these two vastly different compositions can result in the same mechanical values. For
this purpose, results presented in this section and the rest of the paper can act as guidelines
and assist in the design process.
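As a minimal illustration of using such results as design guidelines, a table of (porosity, composition, apparent modulus) entries can be filtered for a target stiffness. The first two entries below echo the ~100 MPa example in the text; the third is a hypothetical placeholder, not a measured value:

```python
# Illustrative design table: (porosity, beta-TCP mass ratio, apparent modulus).
designs = [
    {"porosity": 0.15, "btcp_ratio": 0.00, "modulus_mpa": 100.0},
    {"porosity": 0.45, "btcp_ratio": 0.60, "modulus_mpa": 100.0},
    {"porosity": 0.70, "btcp_ratio": 0.20, "modulus_mpa": 25.0},
]

def candidates(target_mpa, tol_mpa=10.0):
    """Return all designs whose apparent Young's modulus lies within tol_mpa
    of the target, leaving porosity and composition free to choose."""
    return [d for d in designs if abs(d["modulus_mpa"] - target_mpa) <= tol_mpa]

# Two very different designs satisfy a ~100 MPa stiffness requirement.
for d in candidates(100.0):
    print(d)
```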
**E/ Multi-material implant for disc reconstruction: Proof of concept**
Using the results previously presented, and to prove the feasibility of manufacturing multi-material constructs 3D printed in a single piece, a novel design for disc reconstruction is
presented. Disc implants aim at replacing a collapsing disc between two vertebrae. Typically,
they are composed of four assembled parts: two inner elements sliding between each other to
allow relative movement between the vertebrae, placed in between two metallic endplates
connecting the implant to each vertebra. [29] An alternative design is introduced in Figure
8 (a), using the data collected in this study. For high stiffness and better osteoconduction, the
two endplates consist of 45% porous material with a high amount of β-TCP. They are
connected through a section of highly porous (70-75%) pure PCL designed to mimic
biological cartilage stiffness and elasticity. Being 3D printed in a single piece, no assembly
is required and the construct can be patient specific. A prototype is pictured in Figure 8 (b),
validating the feasibility of single piece 3D printed constructs integrating multiple porosity
and multiple composites.
### Conclusion
In this paper, we systematically studied the influence of the composition and porosity of
FDM 3D printed PCL/β-TCP constructs on properties that are of interest for bone tissue
engineering applications, namely surface morphology and hydrophilicity, degradation,
impact on cell behavior and mechanical performances. We first considered the influence of
the composition alone, and then the influence of both parameters. We demonstrated that the
composition affects surface properties, degradation rates and mechanical performance, and
as a result, impacts cell attachment, proliferation and osteogenic differentiation. When
combined with the porosity for 3D constructs, a synergistic effect of both parameters can be
highlighted on the mechanical performances. However, based on this study, no optimal
composition or porosity can be identified, and both parameters should be selected regarding
the application. In this regard, the results of this study can provide guidance during the
design process. As a proof of concept, we finally designed a construct for intervertebral disc
replacement that integrates multiple compositions and porosities. Because FDM 3D printing
is used, its manufacturing is possible in a single piece, the feasibility being illustrated with a
prototype.
In the end, this study enlarges the scope of FDM 3D printing for bone tissue engineering
applications, by providing guidance for the design of PCL/β-TCP constructs and proving the
feasibility of single piece constructs integrating multiple porosities and composite
compositions. It potentially leads the way towards the development of novel designs, for
instance implants specific not only to the patient but also to the type of bone.
### Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
### Acknowledgments
We would like to acknowledge the financial support of the following agencies and donors: NIH R01AR057837
(NIAMS), NIH 1U01AR069395 (NIAMS/NIBIB), Stanford Coulter Translational Seed Grant, Boswell Foundation,
and Kent Thiry and Denise O'Leary.
### Bibliography
1. Hutmacher DW. Scaffolds in tissue engineering bone and cartilage. Biomaterials. 2000; 21(24):2529–2543.
2. Lichte P, Pape HC, Pufe T, Kobbe P, Fischer H. Scaffolds for bone healing: Concepts, materials and
evidence. Injury. 2011; 42:569–573. [PubMed: 21489531]
3. Stevens B, Yang Y, Mohandas A, Stucker B, Nguyen TK. A review of materials, fabrication
methods, and strategies used to enhance bone regeneration in engineered bone tissues. J Biomed
Mater Res. 2011; 85B:573–582.
4. Bose S, Vahabzadeh S, Bandyopadhyay A. Bone tissue engineering using 3D printing. Materials
Today. 2013; 16(12):496–504.
5. Kawai T, Shanjani Y, Fazeli S, Behn AW, Okuzu Y, Goodman SB, Yang YP. Customized,
degradable, functionally graded scaffold for potential treatment of early stage osteonecrosis of the
femoral head. Journal of Orthopaedic Research. 2017; 35:23673.
6. Singh S, Ramakrishna S. Biomedical applications of additive manufacturing: present and future.
Current Opinion in Biomedical Engineering. 2017; 2:105–115.
7. Elomaa L, Yang YP. Additive manufacturing of vascular grafts and vascularized tissue constructs.
Tissue Eng Part B Rev. 2017; 23(5):436–450. [PubMed: 27981886]
8. Hutmacher DW, Schantz T, Zein I, Ng KW, Teoh SH, Tan KC. Mechanical properties and cell
cultural response of polycaprolactone scaffolds designed and fabricated via fused deposition
modeling. J Biomed Mater Res. 2001; 55:203–216. [PubMed: 11255172]
9. Goyanes A, Buanz ABM, Basit A, Gaisford S. Fused-filament 3D printing (3DP) for fabrication of
tablets. International Journal of Pharmaceutics. 2014; 476:88–92. [PubMed: 25275937]
10. Zein I, Hutmacher DW, Tan KC, Teoh SH. Fused deposition modeling of novel scaffold
architectures for tissue engineering applications. Biomaterials. 2002; 23:1169–1185. [PubMed:
11791921]
11. Marrella A, Lee TY, Lee DH, Karuthedom S, Syla D, Chawla A, Khademhosseini A, Jang HL.
Engineering vascularized and innervated bone biomaterials for improved skeletal tissue
regeneration. Materials Today. 2017.
12. Shanjani Y, Kang Y, Zarnescu L, Ellerbee Bowden AK, Koh JT, Ker DFE, Yang Y. Endothelial
pattern formation in hybrid constructs of additive manufactured porous rigid scaffolds and cell-laden hydrogels for orthopedic applications. Journal of the Mechanical Behavior of Biomedical
Materials. 2017; 65:356–372. [PubMed: 27631173]
13. Woodruff MA, Hutmacher DW. The return of a forgotten polymer—Polycaprolactone in the 21st
century. Progress in Polymer Science. 2010; 35:1217–1256.
14. Huang B, Caetano G, Vyas C, Blaker JJ, Diver C, Bártolo P. Polymer-Ceramic Composite
Scaffolds: The Effect of Hydroxyapatite and β-tri-Calcium Phosphate. Materials. 2018; 11(1):129.
15. Legeros RZ. Calcium Phosphate-Based Osteoinductive Materials. Chemical Reviews. 2008;
108(11):4742–4753. [PubMed: 19006399]
16. Yeo A, Rai B, Sju E, Cheong JJ, Teoh SH. The degradation profile of novel, bioresorbable PCL–
TCP scaffolds: An in vitro and in vivo study. J Biomed Mater Res. 2008; 84A:208–218.
17. Lee H, Kim GH. Three-dimensional plotted PCL/β-TCP scaffolds coated with a collagen layer:
preparation, physical properties and in vitro evaluation for bone tissue regeneration. Journal of
Materials Chemistry. 2011; 21.17:6305–6312.
18. Lei Y, Rai B, Ho KH, Teoh SH. In vitro degradation of novel bioactive polycaprolactone—20%
tricalcium phosphate composite scaffolds for bone engineering. Materials Science and
Engineering: C. 2007; 27:293–298.
19. Hollister SJ. Porous scaffold design for tissue engineering. Nature Materials. 2005; 4:518–524.
[PubMed: 16003400]
20. Hollister SJ, Maddox RD, Taboas JM. Optimal design and fabrication of scaffolds to mimic tissue
properties and satisfy biological constraints. Biomaterials. 2002; 23:4095–4103. [PubMed:
12182311]
21. Shin YM, Park JS, Jeong SI, An SJ, Gwon HJ, Lim YM. Promotion of Human Mesenchymal Stem
Cell Differentiation on Bioresorbable Polycaprolactone/Biphasic Calcium Phosphate Composite
Scaffolds for Bone Tissue Engineering. Biotechnol Bioproc E. 2014; 19:341–349.
22. Lepoittevin B, Devalckenaere M, Pantoustier N, Alexandre M, Kubies D, Calberg C, Jerome R,
Dubois P. Poly(ε-caprolactone)/clay nanocomposites prepared by melt intercalation: mechanical,
thermal and rheological properties. Polymer. 2002; 43:4017–4023.
23. Lam CX, Teoh SH, Hutmacher DW. Comparison of the degradation of polycaprolactone and
polycaprolactone–(β-tricalcium phosphate) scaffolds in alkaline medium. Polym Int. 2007;
56:718–728.
24. Li SM, Chen XH, Gross RA, McCarthy SP. Hydrolytic degradation of PCL/PEO copolymers in
alkaline media. Journal of Materials Science: Materials in Medicine. 2000; 11:227–233. [PubMed:
15348037]
25. Polini A, Pisignano D, Parodi M, Quarto R, Scaglione S. Osteoinduction of Human Mesenchymal
Stem Cells by Bioactive Composite Scaffolds without Supplemental Osteogenic Growth Factors.
PLoS One. 2011; 6(10)
26. Mirhadi S, Ashwood N, Karagkevrekis B. Factors influencing fracture healing. Trauma. 2013;
15:140–155.
27. Nielsen LE. Mechanical properties of particulate-filled systems. Journal of Composite Materials.
1967; 1:100–119.
28. Gibson LJ. The mechanical behaviour of cancellous bone. Journal of Biomechanics. 1985; 18:317–
328. [PubMed: 4008502]
29. Serhan H, Mhatre D, Defossez H, Bono CM. Motion-preserving technologies for degenerative
lumbar spine: The past, present, and future horizons. SAS J. 2011; 5(3):75–89. [PubMed:
25802672]
**Fig. 1.**
Thermogravimetric analysis of PCL/β-TCP composites with a theoretical ceramic
composition of 0%, 20%, 40%, 60%. (a) Variation of mass loss according to temperature. (b)
Differential thermogravimetric analysis.
**Fig. 2.**
Surface analysis of 3D printed PCL/β-TCP scaffolds with different ceramic amounts: 0%,
20%, 40%, 60%. (a) Contact angle measurement after post-processing. (b) Contact angle
measurement without post-processing. (c) to (f) SEM imaging of the scaffolds for respective
composition of 0%, 20%, 40%, 60%. (g) Surface roughness measurement Ra for the
different ratios. For SEM, different magnifications are observed: ×150 (I), ×1000 (II) and
×10000 (III). In (I), arrows indicate linear ridges resulting from 3D printing. In (II), arrows
highlight ceramic aggregates, sometimes resulting in holes at the surface (arrows marked
with *). In (III), arrows are pointing towards β-TCP nanoscopic particles.
**Fig. 3.**
Accelerated degradation of PCL/β-TCP material under alkaline conditions for ceramic
composition of 0%, 20%, 40% and 60%.
**Fig. 4.**
Tensile testing of PCL/β-TCP filaments with different ceramic amounts: 0%, 20%, 40%,
60%. (a) Experimental setup. (b) and (c) Evolution of respectively the Young's modulus and
the yield strength according to the amount of β-TCP.
**Fig. 5.**
Proliferation of C3H10 mouse fibroblasts on 3D printed PCL/β-TCP surfaces with different
ceramic amounts: 0%, 20%, 40%, 60%. Cells were counted at Day 1, 7 and 11 for each
composition.
**Fig. 6.**
Semi-quantitative evaluation of the ALP activity of C3H10 mouse fibroblasts on 3D printed
PCL/β-TCP surfaces with different ceramic amounts: 0%, 20%, 40%, 60%. (a) Scaffolds of
different compositions after staining at day 11. (b) Image quantification of the ALP activity
at day 1, 7 and 11.
**Fig. 7.**
Systematic characterization of the influence of porosity and composition of 3D printed
constructs on the mechanical performances. (a) Example of scaffolds for each of the 20
groups studied. (b) and (c) Respective evolution of the apparent Young's modulus and the
yield strength regarding the composition of the scaffold and the strut distance. (d)
Assessment of the porosity of each scaffold using μ-CT images. (e) and (f) Respective
evolution of the apparent Young's modulus and the yield strength according to the porosity
for each composition.
**Fig. 8.**
Novel design of a 3D printed disc implant. (a) Design explanation and CAD model of the
overall volume. (b) Prototype after 3D printing.
# sensors
_Systematic Review_
## Hybrid Blockchain Platforms for the Internet of Things (IoT): A Systematic Literature Review
**Ahmed Alkhateeb** [1], **Cagatay Catal** [2], **Gorkem Kar** [1] **and Alok Mishra** [3,4,*]
1 Department of Computer Engineering, Bahcesehir University, Istanbul 34353, Turkey;
mohamed.alkhateeb@bahcesehir.edu.tr (A.A.); gorkem.kar@eng.bau.edu.tr (G.K.)
2 Department of Computer Science and Engineering, Qatar University, Doha 2713, Qatar; ccatal@qu.edu.qa
3 Informatics and Digitalization Group, Molde University College—Specialized University in Logistics,
6410 Molde, Norway
4 Software Engineering Department, Atilim University, Ankara 06830, Turkey
***** Correspondence: alok.mishra@himolde.no
**Citation:** Alkhateeb, A.; Catal, C.; Kar, G.; Mishra, A. Hybrid Blockchain Platforms for the Internet of Things (IoT): A Systematic Literature Review. Sensors 2022, 22, 1304. https://doi.org/10.3390/s22041304
Academic Editor: François Verdier
Received: 5 January 2022
Accepted: 5 February 2022
Published: 9 February 2022
**Publisher’s Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: In recent years, research into blockchain technology and the Internet of Things (IoT) has**
grown rapidly due to an increase in media coverage. Many different blockchain applications and
platforms have been developed for different purposes, such as food safety monitoring, cryptocurrency
exchange, and secure medical data sharing. However, blockchain platforms cannot store all the
generated data. Therefore, they are supported with data warehouses, which in turn is called a
hybrid blockchain platform. While several systems have been developed based on this idea, a
current state-of-the-art systematic overview on the use of hybrid blockchain platforms is lacking.
Therefore, a systematic literature review (SLR) study has been carried out by us to investigate the
motivations for adopting them, the domains at which they were used, the adopted technologies that
made this integration effective, and, finally, the challenges and possible solutions. This study shows
that security, transparency, and efficiency are the top three motivations for adopting these platforms.
The energy, agriculture, health, construction, manufacturing, and supply chain domains are the top
domains. The most adopted technologies are cloud computing, fog computing, telecommunications,
and edge computing. While there are several benefits of using hybrid blockchains, there are also
several challenges reported in this study.
**Keywords: Internet of Things; blockchain; cloud computing; integration; hybrid blockchains; system-**
atic literature review
**1. Introduction**
For the last few years, the global demand for Internet of Things (IoT) devices has been
increasing rapidly, driven by the market demand for faster and more efficient
manufacturing, required improvements in military capabilities, and the transformation of
things into smart ones, such as smart homes, smart factories, and smart cities.
Although IoT devices have numerous benefits, they also have several weaknesses, such as
generating a huge amount of data, requiring a lot of energy to work, and raising trust
concerns, as they are centralized and controlled by an administrator who can
manipulate the underlying system or even stop it entirely. The IoT system enables the
devices to collect data about themselves and the environment around them, and later share
these collected data with a device, and finally send these data to a central server. Blockchain
technologies allow the IoT devices to exchange collected data with each other or send them
to a cloud server securely and reliably [1]. As a result, blockchain technology has been
introduced to minimize these potential weaknesses and risks.
Nakamoto [2], the pseudonymous creator of Bitcoin, introduced the first
cryptocurrency that uses distributed ledger technology (DLT) (a.k.a. blockchain). Since
then, blockchain technology has penetrated the Internet of Things (IoT) market, allowing
_Sensors 2022, 22, 1304_ 2 of 19
smart devices that can connect to the Internet to use a secure, immutable, and verifiable
network. Blockchain is a decentralized ledger that secures, verifies, and records all peer-to-peer transactions quickly, safely, and transparently. The primary benefit of using blockchain
technology over traditional technologies is that it enables two parties to perform secure
transactions online without the need for a trusted authority. As a result of the lack of this
authority, transaction costs are lower than with other conventional approaches [3].
As the world is becoming more and more dependent on smart devices, the number
of connected IoT devices by the year 2025 is estimated to be 16.44 billion devices, and
25.44 billion by the year 2030 [4]. As such, we expect a dramatic change in the IoT market,
and the contribution of this new blockchain technology is expected to be disruptive. Many
vendors are currently developing new platforms, tools, and techniques. While blockchain
platforms are very useful in terms of security and transparency, all the generated data
cannot be stored in these platforms. In most cases, a separate data warehouse is needed to
store the huge amount of data that cannot be stored directly in the blockchain platform.
This can be a cloud data warehouse or a traditional central database management system;
however, cloud data warehouses are mostly preferred due to their elasticity and other
advanced features. Many blockchain applications and platforms have been developed
recently using the cloud as storage units.
Since blockchain platforms cannot store all the generated data, they are mostly supported with cloud data warehouses; such a combination can be called a hybrid blockchain platform.
Another perspective for hybrid blockchain definitions is the use of both public and private
blockchains in the same project. Ref. [5] described the hybrid blockchain as a street with
many stores, where everyone can access and view the stores, similar to public blockchains,
however, one cannot access the back offices of the stores without permission, which is
similar to the private blockchain. From this point of view, a hybrid blockchain can be considered as a combination of a private and a public blockchain where the private blockchain
can be hosted on the public blockchain. A hybrid blockchain can be entirely customized,
where hybrid blockchain users can decide which transactions are made public or who can
take part within the blockchain.
While several systems have been developed based on the idea of hybrid blockchains, a
systematic overview of the current state of the art on the use of hybrid blockchain platforms
is lacking. Knowing how this integration has been performed would help facilitate future
research on hybrid blockchains. Although there are several relevant papers on this topic,
the field has not been evaluated in detail yet. The objective of this study is to present the
main challenges and possible solutions and, also, different aspects related to this hybrid
blockchain research. As such, we performed a systematic literature review study to collect
and synthesize the required data on the state-of-the-art in this field.
In this paper, we particularly focus on the integration of blockchain and IoT, its
motivations, challenges, and the domains by performing a systematic literature review
(SLR) on research articles collected from different digital databases.
The following research questions are defined in this SLR study:
1. What are the key motivations for adopting hybrid blockchain?
2. What kind of domains has this concept been applied to?
3. What are the adopted technologies in IoT and blockchain integration?
4. What are the blockchain platforms used in the IoT and blockchain integration?
5. What are the key challenges and possible solutions for IoT and blockchain integration?
The contributions of this study are as follows:
_•_ To the best of our knowledge, this is the first systematic review of the hybrid blockchains in the literature.
_•_ We evaluated 38 research papers (see Appendix A) from different dimensions and responded using different categories for each research question.
_•_ Challenges and possible solutions are also discussed in this paper; this might pave the way for further research.
This first SLR study using 38 research articles on hybrid blockchains shows that
efficiency, data integrity, and security are the major motivations for adopting integration
of IoT and blockchains. Researchers mostly focused on health, energy, agriculture, and
manufacturing domains and applied fog computing, edge computing, telecommunications,
and cloud computing technologies. The most preferred blockchain platform is Ethereum,
and several challenges are discussed in this study. The following sections are organized
as follows: Section 2 provides the background and related work. Section 3 describes the
adopted research methodology. Section 4 presents the results of this SLR, and Section 5
presents the discussion. Finally, Section 6 discusses the conclusions and future work.
**2. Background and Related Work**
The blockchain-integrated IoT system (BC-IoT system) can be defined as an IoT system
that contains some blockchain elements to perform its transactions. Therefore, understanding the architecture of the IoT systems and the structures and operations of blockchain
networks is necessary for the analysis of BC-IoT systems. In this section, we provide
an overview of the background information. We also present some related studies in
this section.
_2.1. Background_
2.1.1. Internet of Things
The Internet of Things (IoT) refers to a set of devices that are connected to the Internet
or other communication networks and exchange data among themselves. Any object can be
transformed into an IoT device by adding sensors and processing ability. For instance, very
large and crowded cities can also be covered with thousands of tiny IoT components to
track the traffic, and useful suggestions and proper measures can be provided to eliminate
several problems. It seems possible to turn anything into an IoT device thanks to the
availability of very cheap and tiny computer chips, and the widespread use of wireless
networks. Along with using IoT devices to make daily life easier, IoT can also be used in
different application domains shown as follows:
1. Manufacturing [6]: Due to the increasing population numbers in the last few decades,
the demand for goods is as never before. IoT devices are being adopted in today’s
manufacturing to automate production lines, which highly increase the production
speed and, thereby, reduce the overall costs. Less labor is needed to produce the same
amount of goods and, therefore, manufacturers need to pay less money for the labor.
2. Healthcare [7]: Medical IoT devices are being used as remote patient management
(RPM) tools by physicians to monitor the medical state of a patient, distantly. IoT
devices can be wearable or implantable devices, and they can help medical doctors to
monitor heartbeat, arrhythmia, blood pressure, oxygen level, sugar level, and they
can even be used for collapse detection.
3. Environment [8]: Smart sensors can help to fight against climate change and make
the world greener as IoT devices are also used to measure CO2 levels, oxygen levels,
and ozone concentration in the atmosphere. They can monitor volcanic activities,
extreme weather conditions, water levels, and safety-related events, and help to
predict the timing of occurrences of natural disasters such as earthquakes, tsunamis,
and wildfires.
4. Energy [9]: Energy waste is another problem that IoT is used to prevent. Sensors are
used to sense and transmit real-time data regarding the energy levels being produced
and consumed. They can be used to track the sunlight and direct the solar panels to
the appropriate positions to maximize performance.
5. Agriculture and our food supply [10]: In precision agriculture, IoT sensors are widely
used; for example, in smart greenhouses they are used to monitor and control temperature and humidity to increase yield [11]. In addition, some apps can advise farmers
what time is the best to transplant their crops and harvest them.
IoT systems mainly consist of the following subsystems: perception layer, communication layer, and industrial applications. The perception layer is the physical layer of
the IoT system, where sensors, RFID tags, barcode or QR code readers, and other data-collecting devices are used to collect data. After these data are collected, the communication
layer connects the IoT device with a gateway device such as Wi-Fi access points (APs)
using a communication protocol (e.g., Bluetooth, NFC, and Ethernet). The communication
layer transfers the collected data to the industrial applications layer where data are being
analyzed and stored.
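To make the three-layer flow concrete, the following minimal Python sketch passes sensor readings from the perception layer through a gateway to the application layer. All names and sample readings are invented for illustration; they are not taken from any of the reviewed systems:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    value: float

def perception_layer() -> list[Reading]:
    # Sensors, RFID tags, and similar devices collect raw data.
    return [Reading("temp-1", 21.5), Reading("humidity-1", 0.43)]

def communication_layer(readings: list[Reading]) -> list[dict]:
    # A gateway (e.g., a Wi-Fi AP) serializes and forwards readings upstream.
    return [{"sensor": r.sensor_id, "value": r.value} for r in readings]

def application_layer(messages: list[dict]) -> dict:
    # The industrial applications layer stores and analyzes the data;
    # here we simply build a sensor -> latest-value store.
    return {m["sensor"]: m["value"] for m in messages}

store = application_layer(communication_layer(perception_layer()))
assert store["temp-1"] == 21.5
```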
2.1.2. Blockchain
Blockchain (BC) is a decentralized ledger that securely, verifiably, and transparently
records all transactions made on the blockchain network. The ledger is shared among
distributed computers (a.k.a., nodes) on the network. All users can see the ledger from
its first transaction in the system until its most recent one as it is not controlled or owned
by a central entity, being decentralized. When a user sends a transaction, the data of the
transaction are encrypted using a cryptographic algorithm before being verified by the
miners to check if the transaction is valid. If most of the miners consent to the transaction,
a new block is added to the chain [12]. The primary benefit of blockchain over traditional
technology is that it allows two parties to conduct encrypted transactions over the Internet
without the intervention of a third-party entity.
Blockchain technology was proposed to support transactions between two parties in a
peer-to-peer manner, without the need for a middleman, using a cryptocurrency called Bitcoin.
This initial blockchain technology was then labeled as blockchain 1.0. Later, new blockchain
technology emerged that allows applications to be built on top of the blockchain platform,
and smart contracts were widely used. The use of such smart contracts helped to realize
decentralized applications (Dapps), decentralized autonomous organizations (DAOs), smart
land, smart tokens, and other cryptocurrencies that allowed the capability for automated
financial applications. These applications in the financial sector were developed using smart
contracts, which are now called blockchain 2.0. However, blockchains are not only restricted
to cryptocurrency, which is just one application of the wider definition of DLT.
Distributed ledgers can store arbitrary data that are not always linked to financial
services. All implementations of blockchain technology that include a broader range of
non-cryptocurrency-distributed ledger uses are called blockchain 3.0 [13]. Blockchain technology consists of the following four main components: a smart contract, consensus, ledger,
and cryptography [14]. The smart contract is a kind of program stored on the blockchain
that starts functioning when the terms of the contract are achieved. The consensus is an
agreement that all nodes of the blockchain follow to determine which information is added
to the next block of the ledger and provide validity and authenticity for the transactions on
the blockchain. There are two main categories of consensus listed as follows:
1. PoW (proof of work) [15]: This consensus mechanism is used by Bitcoin [16] and Ethereum
1.0 [17]. All nodes are part of a competition in which each node tries
to construct the appropriate block by solving a mathematical puzzle, which is called
mining. The transaction fees in this consensus are calculated based on the demand
and supply of transactions: miners will choose to verify the transactions with the
highest fees first when the number of waiting transactions exceeds the number that
one block of the blockchain can contain, which is why Eth 1.0's transaction fees are
sometimes so high. However, the problem with PoW for a blockchain is that it is very
expensive, as it requires a huge amount of computational power to mine; therefore, if
the awarded coins drop in price and become cheaper than the energy costs spent,
then miners will have no incentive to mine more blocks of that blockchain.
2. PoS (proof of stake) [15]: Unlike PoW, PoS does not require high computational
power to validate block transactions. The more coins a miner has, the more mining
rewards and power over the network they have. This consensus mechanism is
significantly cheaper than PoW, and its transaction fees are very low. Some examples
of blockchains using PoS are Eth 2.0 [18], Cardano [19], Solana [20], and Polkadot [21].
3. Other consensus mechanisms, such as delegated proof of stake [22], practical Byzantine fault tolerance [23,25], proof of elapsed time [24], proof of weight [19], proof of burn [24], proof of capacity [26], and proof of
space [27], also exist; however, they are not as widely used as PoW and PoS.
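The "mathematical puzzle" solved by PoW miners can be illustrated with a toy Python sketch: find a nonce so that the block hash falls below a difficulty target, here simplified to "starts with a given number of zero hex digits". The difficulty value and the block encoding are simplified assumptions for illustration, not how any real network encodes blocks:

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> tuple[int, str]:
    """Search for a nonce whose SHA-256 digest starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Anyone can cheaply verify the result by re-hashing once,
# even though finding the nonce required many attempts.
nonce, digest = mine("block #1: alice->bob 5", difficulty=4)
assert digest.startswith("0000")
```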
The ledger is a database that contains all the transactions that occurred in the blockchain.
Since the network is decentralized and there is no central authority, the ledger is distributed
across the network. Every transaction added to the ledger can never be deleted, which
makes the ledger immutable. In addition, to make sure that all the information on the
blockchain network is accessed by only authorized users, cryptography is used. Since the
blockchain is a decentralized network and there are no centralized entities that control and
store the transactions of the network, a P2P network is used when a sender wants to make
a transaction. When a sending wallet wants to make a transaction, it uses a public and a
private key. The public key is used as an identifier of the sending wallet in the network and
the private key is used to sign the transactions of the wallet in the network to protect the
authenticity and integrity of the transaction on the network. After the transaction is signed
with the private key, the wallet broadcasts the request to all the nodes on the network
of the blockchain, where all the nodes verify all the transactions of the blockchain and
start to validate the transaction and check that the request has not been tampered with. Once the request
is successfully validated by more than 50% of the nodes on the network, a new block is
added after the last block on the blockchain, where each block contains various such validated
transactions along with a timestamp, its own hash, and the hash of the previous block.
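The validate-then-append flow described above can be sketched as a toy model in Python. This is purely illustrative: signature verification is stubbed out (a real chain would verify a public-key signature), and the validating nodes are simulated as plain functions:

```python
import hashlib, json, time

def block_hash(block: dict) -> str:
    # Hash a block deterministically; each new block stores its
    # predecessor's hash, which is what makes the ledger tamper-evident.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def majority_validates(tx: dict, nodes: list) -> bool:
    # A transaction is accepted when more than 50% of nodes agree.
    votes = sum(1 for validate in nodes if validate(tx))
    return votes > len(nodes) / 2

def append_block(chain: list, txs: list) -> None:
    chain.append({
        "timestamp": time.time(),
        "transactions": txs,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    })

chain: list = []
append_block(chain, [])  # genesis block
tx = {"from": "pubkey-A", "to": "pubkey-B", "amount": 3}
nodes = [lambda t: True, lambda t: True, lambda t: False]  # 2 of 3 agree
if majority_validates(tx, nodes):
    append_block(chain, [tx])
assert chain[1]["prev_hash"] == block_hash(chain[0])
```

Changing any field of the genesis block would change its hash and break the `prev_hash` link, which is the property the text describes as immutability.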
Hybrid blockchain platforms are used to integrate IoT systems with the blockchain;
some projects that use this integration have different architecture types. Ref. [28] proposed
a hybrid–IoT system that uses multiple PoW blockchains as sub-blockchains for IoT, where
hundreds of IoT devices located near each other are contained in a sub-blockchain. A Byzantine fault-tolerant interconnector is used to handle the transactions
between the sub-blockchains.
platform and used a public chain and many private sub-chains. It uses delegated proof
of stake (DPOS) and practical Byzantine fault tolerance (PBFT) consensuses to verify
the transactions.
_2.2. Related Work_
During our search of electronic databases, we found no other SLR paper that focused
on hybrid blockchains. A paper that focused on making the Internet and IoT more secure by
using blockchain smart contracts is the study of Lone and Naaz [30]. Their paper examines
the applicability of blockchain smart contracts to achieve the security goals related to the
Internet and, particularly, IoT. While their paper defined four research questions, our SLR
paper focuses on five research questions. There is one similar question, which is related
to the blockchain platforms. Similar to our results, they also specified that the Ethereum
platform is the most exploited platform in the selected papers. They concluded that access
control, authentication, integrity assurance, data protection, secure key management, and
nonrepudiation are the most common smart contract-driven security services in the Internet
and IoT. Ref. [31] focused on how the blockchain and smart contracts work with IoT. They
reported that the combination of blockchain and IoT can be very powerful.
A smart contract allows the automation of complex multistep processes. They also
concluded that if IoT devices in an IoT ecosystem are combined to work together, they can
automate time-consuming workflows and achieve cryptographic verifiability by reducing
cost and time. Ref. [32] studied the blockchain architectures that governments use in public
services, where they focused on the software architectures and solutions of blockchain
technology applied in public services. Their results conclude that the blockchain
solutions are diversified and were developed only recently, which opens
the road for more research in the future. Ref. [33] studied the maturity and readiness of
digital forensic (DF) investigations in the era of the industrial revolution (IR) 4.0, where
they focused on the challenges that face DF in the IR 4.0, the readiness, the existing
maturity model, and benchmarking the maturity element. They were able to outline five
indicators that need to be considered to support the DF organization's maturity model
related to IR 4.0. They were also able to list out 28 suggested governance and management
objectives that DF organizations can use to guide them concerning IR 4.0.

Tran et al.'s study [2], on the other hand, is the most relevant paper to this SLR.
This paper focused on the ways to integrate blockchain with IoT and how to achieve this
integration. The paper reported that security, integrity, reliability, and performance are the
most common objective reasons for adopting the integration; another interesting reason
for the integration is to add new functionalities to the IoT systems. Problem-wise reasons
for the adoption are to decentralize operations and improve the security of IoT systems.
Most of the reviewed BC-IoT systems are integrated with one blockchain network only,
and the most common blockchain network is Ethereum. The business process orchestrator,
authorization mechanism, and sensor data storage are the top three modules added to the
IoT systems by the blockchain networks. Most of the verified transactions recorded on the
blockchain are resource exchanges and interactions with devices and services data.
**3. Research Methodology**
To achieve the objective of answering the research questions, this SLR paper has been
prepared by following the guidelines provided by [34]. The following three stages are
followed: planning, conducting, and reporting the systematic literature review. In Figure 1,
the process of conducting this SLR is depicted. This process was followed, and results
were gathered.

**Figure 1. SLR process.**
_3.1. Research Questions_

This research's goal is to analyze published studies and their findings on the integration
of the blockchain and IoT. To make the paper more focused, Table 1 shows the five
research questions we developed.
**Table 1. Research questions (RQs).**
**ID** **Research Question (RQ)**
Q1 What are the key motivations for adopting hybrid blockchain?
Q2 What kind of domains has it been applied to?
Q3 What are the adopted technologies in IoT and blockchain integration?
Q4 What are the blockchain platforms used in the IoT and blockchain integration?
Q5 What are the key challenges and possible solutions of IoT and blockchain integration?
_3.2. Primary Study Selection_
To find the primary studies needed for this SLR paper, we used the following digital
databases: ScienceDirect (www.sciencedirect.com, accessed on 5 October 2021), ACM
Digital (dl.acm.org, accessed on 5 October 2021), IEEE Xplore (ieeexplore.ieee.org, accessed
on 5 October 2021), and Wiley (www.wiley.com, accessed on 5 October 2021). This set was
selected because these are the databases that index the most important conferences and
journals in the computer science discipline. Later, a search criterion was set as follows:
(“Blockchain”) AND (“Internet of Things”) AND (“Architecture” OR “Integration” OR
“Cloud”).
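As an illustration only, the boolean search criterion above could be expressed as a simple text filter; the sample records below are invented and are not from the actual result set:

```python
def matches(text: str) -> bool:
    # ("Blockchain") AND ("Internet of Things")
    # AND ("Architecture" OR "Integration" OR "Cloud")
    t = text.lower()
    return ("blockchain" in t
            and "internet of things" in t
            and any(k in t for k in ("architecture", "integration", "cloud")))

records = [
    "A cloud-backed blockchain architecture for the Internet of Things",
    "Deep learning for image classification",
]
hits = [r for r in records if matches(r)]
assert len(hits) == 1
```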
The search resulted in a total number of 985 research articles. A total of 804 of them
were found in the IEEE Xplore database, 118 in ScienceDirect, 38 in ACM Digital, and 25
in the Wiley database. We eliminated any review articles, correspondence articles, and
discussion papers. This filter reduced the studies to 295 articles, where the results found in
IEEE Xplore were reduced to 175, papers in ScienceDirect to 75, papers in ACM Digital to
29, and papers in Wiley to 16. Later, exclusion criteria were applied to exclude irrelevant
publications. The relevant ones were added to a spreadsheet file. The exclusion criteria
(EC) are provided in Table 2.
**Table 2. Exclusion criteria.**
**No.** **Criterion**
EC1 Not related to blockchain and IoT integration
EC2 Non-English publication
EC3 A survey or a review publication
EC4 Duplicated publication
EC5 The publication is older than 2017
The selected publications were then checked using quality assessment questions to
ensure that only high-quality publications were being used. Each question was assessed
with a score of 1 (yes), 0 (no), or 0.5 (partial). Therefore, 0 is the minimum score and 8 is
the maximum score for a paper. A paper with a total score of 4 or lower was excluded.
Eight assessment questions were used from the study of [35] because this set of questions
is widely used in SLR papers. The assessment questions that we used are shown in Table 3.
Figure 2 shows the distribution of the selected papers’ quality scores.
**Table 3. Quality assessment questions [35].**

**No.** **Assessment Questions**
Q1 Are the aims of the study clearly stated?
Q2 Are the scope and context of the study clearly defined?
Q3 Is the proposed solution clearly explained and validated by an empirical study?
Q4 Are the variables used in the study likely to be valid and reliable?
Q5 Is the research process documented adequately?
Q6 Are all study questions answered?
Q7 Are the negative findings presented?
Q8 Are the main findings stated clearly in terms of credibility, validity, and reliability?

**Figure 2. Distribution of the selected papers' quality score.**
After the quality assessment was performed, 38 publications were identified for the
SLR study. Therefore, observations and conclusions presented in this study are based on
these 38 publications. Figure 2 shows that most of the papers achieved high scores,
indicating high quality.
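The scoring rule described above can be sketched in a few lines of Python: eight questions, each scored 1 (yes), 0.5 (partial), or 0 (no), with papers scoring 4 or lower excluded. The example answer vectors are invented:

```python
def quality_score(answers: list) -> float:
    # Eight assessment questions, each scored 0, 0.5, or 1.
    assert len(answers) == 8 and all(a in (0.0, 0.5, 1.0) for a in answers)
    return sum(answers)

def is_included(answers: list) -> bool:
    # Papers with a total score of 4 or lower are excluded.
    return quality_score(answers) > 4

assert is_included([1, 1, 1, 0.5, 0.5, 1, 0, 1])          # score 6.0 -> kept
assert not is_included([1, 0.5, 0.5, 1, 0, 0.5, 0, 0.5])  # score 4.0 -> excluded
```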
_3.3. Data Extraction_
After selecting the papers, data relevant to the research questions were extracted,
stored, and categorized in a spreadsheet. The data extraction form, which contains the
essential data needed for this study, is shown in Table 4. Papers were read in full and
required data were collected. The collected data per question were then categorized into
different groups. In RQ1, the motivations were categorized into the following groups:
security, transparency and trust, efficiency, privacy, and quality of service. In RQ2, the domains were categorized as follows: energy, agriculture, health, construction, manufacturing
and supply chain, automotive and transportation, education, military, and government.
In RQ3, the adopted technologies were categorized into the following categories: cloud
computing, fog computing, telecommunications, edge computing, and extended reality. In
RQ4, the BC platforms were categorized into the following categories: Ethereum, Bitcoin,
Litecoin, EOS, and Ripple. In RQ5, the challenges were categorized into the following
categories: security and privacy, storage and scalability, computational power, bandwidth
and connectivity, and cost. In addition to these essential elements, general data, such as the
title and publication year, were also collected. Table 4 shows the collected elements.
**Table 4. The data extraction form.**
**No.** **Extraction Elements**
1 ID
2 Title
3 Link
4 Year
5 Database
6 Publication channel
7 Type
8 Motivations
9 Domains
10 Adopted technologies
11 Blockchain platforms
12 Challenges and possible solutions
_3.4. Data Synthesis and Reporting_
After the data were extracted and categorized, the aggregated data were synthesized
to answer the research questions.
**4. Results**
In this section, the results of this systematic literature review are presented. The
number of selected papers per year is presented in Figure 3. A clear increase in interest
in recent years can be seen from that figure. In Table 5, the number of papers published
in each database is shown, where ScienceDirect is the primary source and IEEE Xplore
the secondary.
**Figure 3. Number of papers per year.**

**Table 5. Paper distributions per journal.**

**Data Sources** **# of Papers**
ScienceDirect 24
ACM Digital 4
IEEE Xplore 10
Wiley 0
The six research questions presented in Table 1 are addressed one by one in the
following subsections:
The six research questions presented in Table 1 are addressed one by one in the following subsections: 1. RQ-1: What are the key motivations for adopting hybrid blockchain?
The motivations identified from the primary studies are shown in Figure 4. The results
1. RQ-1: What are the key motivations for adopting hybrid blockchain?
show that more than one-third of the primary papers had a motivation to increase security.
The motivations identified from the primary studies are shown in Figure 4. The re
Some of them were needed to ensure the integrity of the data collected by the IoT devices
sults show that more than one-third of the primary papers had a motivation to increase
of the system [36–39] or to protect the confidentiality of the collected data [38,40,41], or to
security. Some of them were needed to ensure the integrity of the data collected by the
ensure the availability of the IoT systems [42] because there are no centralized authorities
IoT devices of the system [36–39] or to protect the confidentiality of the collected data
that can be attacked to stop the systems from functioning. In addition, another use case
[38,40,41], or to ensure the availability of the IoT systems [42] because there are no cen
tralized authorities that can be attacked to stop the systems from functioning. In addition, another use case of blockchain as a security measure was to protect data from plaintext and ciphertext attacks on UAVs [43]. Another motivation was related to the transparency and trust goals, as the platform is resistant to the modification of the blockchain blocks. As a result, the data inside each block are unmodifiable and cannot be edited or deleted, which provides trust in the system. It can also be beneficial to track and trace products and increase the credibility of food safety information [44]. In addition, its distributed nature helps to increase transparency as all the stored data on the blockchain are accessible to everyone [38,43,45–49]. Efficiency is also an important motivation for the integration, as smart contracts can be used to reduce the delay between IoT devices [50], to reduce costs [42,48,49,51], to increase energy efficiency [24], to decrease latency [48,52,53], or to enhance throughput [52]. Another important motivation of the integration was privacy [36,38,50,54]. A sender and a receiver are only known by their public keys, which do not provide any personal data.

_Sensors 2022, 22, 1304_
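To make the pseudonymity point concrete, the following is a minimal Python sketch (our own illustration, not taken from any reviewed paper) of how a ledger address can be derived from a public key so that a transaction records only hash-derived identifiers, never personal data. Real platforms derive the public key from a private key via elliptic-curve cryptography and hash it with Keccak-256 (Ethereum) or SHA-256 plus RIPEMD-160 (Bitcoin); here random bytes and SHA-256 stand in so the sketch stays standard-library only.

```python
import hashlib
import secrets

def make_address() -> str:
    """Derive a pseudonymous hex address from a fresh 'public key'.

    Simplified stand-in: secrets.token_bytes() replaces an EC public
    key, and SHA-256 replaces the chain-specific hash function.
    """
    public_key = secrets.token_bytes(33)  # stand-in for an EC public key
    return hashlib.sha256(public_key).hexdigest()[:40]

# A transaction references only the parties' addresses: no names,
# locations, or other personal data ever appear on the ledger.
sender, receiver = make_address(), make_address()
tx = {"from": sender, "to": receiver, "value": 1}
```

Because the address is a one-way hash of key material, observers of the ledger see only that *some* pair of addresses transacted, which is the privacy property the surveyed papers rely on.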
**Figure 4. Motivations of adopting a hybrid blockchain.**
2. RQ-2: What kind of domains has the hybrid blockchain been applied to?
Figure 5 shows the percentage of domains that adopted hybrid blockchains. As shown in Figure 5, energy is the most mentioned domain in the primary papers, with 17.95% of the papers. Agriculture and health are second and third, each with 15.38%. These results indicate that these three domains are the leading adopters of the integration. Other domains were construction and manufacturing and supply chain, with 12.82%, automotive and transportation (10.26%), and education, military, and government, with 5.13%.
**Figure 5. Hybrid blockchain domains that have been adopted.**
3. RQ-3: What are the adopted technologies in the IoT and blockchain integration?
Figure 6 shows the distribution of technologies used in these selected papers. Cloud computing is the most adopted technology, with 44.4%. It includes cloud storage and cloud servers. Fog computing is the second most adopted technology with 22.2%, followed by telecommunications with 16.7%, edge computing with 11.1%, and extended reality with 5.6%. Extended reality includes both virtual reality (VR) and augmented reality (AR) technologies.
**Figure 6. Adopted technologies in the integration.**
4. RQ-4: What are the blockchain platforms used in the IoT and blockchain integration?
Figure 7 shows the blockchain platforms used in the selected papers. According to this figure, Ethereum is the top-used blockchain platform with 77.8%, as Ethereum is considered a mature blockchain technology for developing smart contracts [37,51]. EOS blockchain is another platform that was also used, as its smart contract platform enables IoT to be integrated with the blockchain [55]. Bitcoin, Litecoin, and Ripple were also used in these papers. Ref. [36] stated that Bitcoin and Litecoin can be used as a medium to store the IoT data. Ripple, on the other hand, was used as a private blockchain to establish private communications between nodes [56]. Bitcoin, Litecoin, EOS, and Ripple were each used in 5.6% of the selected studies.
**Figure 7. The adopted BC platforms in the primary papers.**
5. RQ-5: What are the key challenges and possible solutions of IoT and blockchain integration?
We categorized the challenges into five categories. Table 6 presents these categories and possible solutions. These five categories are described as follows:
1. Portability: It is almost impossible to enable blockchain’s required features in most modern industrial machines because the protocols used in blockchain operations and transactions are very specific while being computationally intense, thread-blocking, and time-consuming. These issues can be solved by designing a system that decouples the operations of the blockchain from the industrial machines’ functionalities and capabilities [37].
2. Resources: Replacing currently functional legacy systems with blockchain will cost time and resources, but this can be resolved by creating a mechanism that enables communication between the blockchain and the legacy systems rather than replacing them with a fully decentralized system [57].
3. Interoperability: Industrial IoT devices are heterogeneous. Old and new devices use different operating systems, some of which are very difficult to modify to add the blockchain features. To solve this issue, an abstraction layer can be added in the software architecture design of the OS to allow the IoT devices to communicate with the smart contracts of different blockchains [37].
4. Computational power: The use of the PoW consensus mechanism requires high computational power to mine new blocks on the blockchain. This requirement costs a lot of money and electrical power. Ref. [47] proposes a gateway node that gathers the blocks of data from a set number of IoT devices and then verifies the blocks as a miner before adding them to the blockchain network.
5. Scalability: Because of technical limitations, traditional blockchains cannot scale well for widespread use in an IoT environment. Ref. [52] proposed the use of “off-chain” protocols, where some of the transactions are temporarily moved to be computed elsewhere and the results are then returned to be added to the main chain.
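The off-chain idea proposed in [52] can be sketched in a few lines of Python. This toy model (the names `MainChain` and `settle_off_chain` are ours, purely illustrative) computes many small IoT transfers off-chain and anchors only the net result plus a digest of the batch on the main chain, so one on-chain commitment replaces thousands of transactions.

```python
import hashlib
import json

class MainChain:
    """Toy on-chain ledger: it stores only compact batch commitments."""
    def __init__(self):
        self.blocks = []

    def commit(self, digest: str, result: dict):
        self.blocks.append({"digest": digest, "result": result})

def settle_off_chain(transactions, chain: MainChain):
    """Compute many small transfers elsewhere, then anchor only the
    net balances and a digest of the raw batch on the main chain."""
    balances = {}
    for tx in transactions:
        balances[tx["from"]] = balances.get(tx["from"], 0) - tx["value"]
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["value"]
    raw = json.dumps(transactions, sort_keys=True).encode()
    chain.commit(hashlib.sha256(raw).hexdigest(), balances)
    return balances

chain = MainChain()
txs = [{"from": "sensor-a", "to": "gateway", "value": 1} for _ in range(1000)]
settle_off_chain(txs, chain)   # 1000 transfers, a single on-chain block
```

The digest makes the batch auditable: anyone holding the raw off-chain transactions can recompute the hash and check it against the on-chain commitment.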
**Table 6. Challenges and possible solutions for BC and IoT integration.**

| Category | Challenge (C1 to C6) | Proposed Solution (S1 to S6) | Reference |
|---|---|---|---|
| Portability | It is almost impossible to modify the industrial apparatus software to add the blockchain protocols. | Design a system that decouples the operations of the blockchain from industrial machines’ functionalities and capabilities. | [37] |
| Resources | Replacing legacy systems with blockchain requires time and resources. | Create a mechanism that enables communication between the blockchain and the legacy systems rather than replacing them with a fully decentralized system. | [57] |
| Interoperability | Some operating systems (OS) of old IoT devices cannot be modified to add the new blockchain features. | Add an abstraction layer in the software architecture design of the OS to allow communication between the IoT device and the smart contracts of different blockchains. | [37] |
| Computational power | High computational power is required by IoT devices that use the PoW consensus mechanism. | A gateway node can gather the blocks of data from a set number of IoT devices and then verify the blocks as a miner before adding them to the blockchain network. | [47] |
| Scalability | Technical limitations of traditional blockchains cannot scale them for widespread use in IoT environments. | An “off-chain” protocol can be used, where some transactions are temporarily moved to be computed elsewhere and the results are then returned to the main chain. | [52] |
| Scalability | The scalability limitations of blockchain networks prevent blockchain applications from handling high-scale IoT data. | A BB-DIS system can be used to overcome the high-scale IoT data issues in cloud storage. | [58] |
The scalability limitations of blockchain networks are a big obstacle for blockchain applications to perform large-scale transactions. Ref. [58] proposed a blockchain and bilinear mapping-based data integrity scheme (BB-DIS) for high-scale IoT data in cloud storage as a solution to this challenge.
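BB-DIS itself relies on bilinear pairings, which are beyond a short sketch, but the underlying idea of on-chain integrity checking can be illustrated with a much simpler hash-based stand-in (our own simplification, not the actual scheme): the device anchors a digest of its data on an immutable ledger, and any later cloud copy can be verified against it.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# On upload: the device stores the data in the cloud and anchors only
# its digest on the blockchain (a dict stands in for the ledger here).
ledger = {}
record_id = "sensor-42/2022-02-01"
payload = b"temperature=21.5C"
ledger[record_id] = digest(payload)

def verify(record_id: str, cloud_copy: bytes) -> bool:
    """Recompute the digest of the cloud copy and compare it with the
    immutable on-chain value; any mismatch reveals tampering."""
    return ledger[record_id] == digest(cloud_copy)
```

Unlike this sketch, BB-DIS lets a third-party auditor check integrity *without* downloading the full data, which is what the bilinear-mapping machinery buys at high IoT data scales.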
**5. Discussion and Threats to Validity**
In Section 5.1, a general discussion addressing research questions is presented. In
Section 5.2, potential threats to validity are explained. In Section 5.3, the specialty of hybrid
blockchains in the IoT environment compared to general hybrid blockchains is discussed.
In Section 5.4, several research directions are suggested.
_5.1. Discussion_
In this paper, we reviewed the literature on the integration of blockchain platforms
and IoT to understand the state-of-the-art and current practices. For this purpose, five
research questions were identified and responded to. RQ1 aimed at understanding the
key motivations for adopting the hybrid blockchain with IoT. Security, transparency, trust,
and privacy were the top motivations. This shows that most of the research groups had
mostly security-related concerns and therefore, adopted this new strategy. RQ2 explored
the domains where the integration has been applied. Energy, agriculture, health, and
construction were the top domains. The energy sector recognized the power of blockchains earlier than the other sectors, and therefore we observed that hybrid blockchains were used mostly in the energy domain. Two domains were not mentioned in the articles: entertainment and business. Both are witnessing major development and adoption of hybrid blockchains, which could change the way people interact at work, play video games, or attend concerts.
focused on the technologies that were used in this integration. Cloud computing, fog
computing, and communications were the top results. Since the IoT devices and sensors
are a major part of the blockchain and IoT integration, they were not considered as a
technology, but rather as a part of the system. As shown in these results, cloud computing
plays a major role in this integration because the generated huge amount of data is mostly
stored in cloud computing platforms. RQ4 addressed which blockchain platforms were
used. During our analysis, Ethereum was used with 77.8%, followed by Bitcoin, Litecoin,
EOS, and Ripple. This indicates that the majority of the projects are relying on Ethereum.
Therefore, any attack or network failure on the Ethereum blockchain can cause operational
failures in these systems. RQ5 identified the key challenges and possible solutions faced by
prior researchers. The collected challenges were mainly the challenges of integrating the
blockchain and IoT systems. Challenges were reported based on the explicit statements
in the articles. There can be more challenges; however, if they were not mentioned in
these papers, we could not identify and include them here. The integration of blockchain technology and IoT is still in its early stages, yet it is being widely adopted in various domains and sectors. We expect new domains and new technologies to emerge soon as a result of this integration.

_5.2. Threats to Validity_

There are several threats to validity in this SLR. Concerning the timeframe,
the primary papers selection process was finalized in October 2020. This SLR selected
the papers that were published until that time. Papers that were published on the digital
databases after this month were not considered in this review. Because of the fast development of the blockchain and IoT space, there may be new papers that have not been covered
in this SLR. Another threat to validity is selecting the articles. Different papers could be
found when different databases were used for the primary paper selection. However, we
did not want to use Google Scholar because it also indexes non-peer-reviewed papers and papers from less reputable journals. Moreover, during the data extraction process, some
data might have been missed, and to reduce this threat, the authors double-checked all
primary papers. In addition, the search for the primary papers was strictly focused on
papers in English; as such, there could be a chance of missing some papers that were
written in other languages that could add value to the research questions in this paper.
Some papers used the term hybrid blockchain, however, their definition was different than
our scope. For example, one of these papers referred to the combination of public and
private blockchains [59]; however, since IoT was not included in this integration, it was
not used in the analysis. In addition, papers that focused on only blockchains were not
included in the SLR analysis [60].
_5.3. Specialty of Hybrid Blockchains in the IoT Environment Compared to General Hybrid Blockchains_
There are specific requirements needed for hybrid blockchains in IoT environments
compared to general hybrid blockchains. One of the most important issues is the resource
limitations of IoT devices [61]. The platform should not cause an extra bottleneck on the
devices. In addition, the scalability of hybrid blockchain platforms in the IoT context is
crucial, and therefore, microservices were applied in one of the studies to address this
requirement [61]. Confidentiality is another quality factor that needs to be addressed for
hybrid platforms in IoT environments because data produced from different devices such
as smart home appliances and wearables are sensitive and confidential [61]. For general
hybrid blockchains, scalability and confidentiality have less impact on the design of the
overall hybrid blockchain architecture. Throughput is another parameter that requires extra design decisions during system design because IoT applications need a huge number of transactions to be executed at a time; however, some blockchain platforms such as Bitcoin cannot satisfy these expectations (e.g., only about seven transactions per second) because of their internal design [61]. Latency can mostly be tolerated in hybrid blockchains in the IoT context, although it is known that latency is high in some blockchain platforms such as Bitcoin (i.e., about 10 min to complete a transaction). Maintaining a hybrid blockchain in an IoT environment is also more costly because the required computational power, energy, and storage are much higher. These different quality aspects make hybrid blockchains in the IoT context more specialized than general hybrid blockchains.
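The throughput mismatch above is easy to quantify. The short Python sketch below (illustrative figures, not measurements from the reviewed papers) estimates how many IoT messages would need to be bundled into a single on-chain transaction for a given workload to fit a platform's throughput, using Bitcoin's often-cited ~7 transactions per second as the example ceiling.

```python
import math

def required_batching(devices: int, msgs_per_device_per_s: float,
                      platform_tps: float) -> int:
    """Minimum number of IoT messages per on-chain transaction so that
    the aggregate workload fits within the platform's throughput."""
    workload_tps = devices * msgs_per_device_per_s
    return max(1, math.ceil(workload_tps / platform_tps))

# 10,000 devices each reporting once per minute against ~7 tx/s:
batch = required_batching(10_000, 1 / 60, 7.0)  # about 24 messages per tx
```

Even this modest deployment exceeds the example ceiling by more than 20x, which is why batching, gateways, and off-chain protocols recur throughout the surveyed solutions.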
_5.4. Research Directions_
As part of this SLR study, we identified the following research directions:
1. Artificial Intelligence (AI)-enabled Hybrid Blockchains: Machine learning algorithms, and more specifically deep learning algorithms, have recently been applied successfully in many different application domains. In the cloud data warehouse, these algorithms can be used effectively, and interesting patterns can be discovered. However,
the learning types (i.e., supervised, unsupervised, semisupervised, reinforcement
learning) and corresponding algorithms (e.g., support vector machines, K-means
clustering, low-density separation, Deep Q Network) must be carefully selected. From
an engineering perspective, the integration of machine learning capabilities into the
hybrid blockchain requires additional research in this field. The isolated development
of these AI components limits their benefits and, therefore, the system engineering
perspective must be followed.
2. Energy-Efficient Hybrid Blockchains: Energy efficiency is one of the most important
concerns of blockchain platforms. Some decentralized consensus mechanisms such as
proof-of-stake (PoS) are more efficient than others, such as the proof-of-work (PoW)
model. However, they are still not considered to be energy-efficient, and more research
is needed to optimize the hybrid blockchains in IoT environments. New consensus
protocols in this context can reduce the required resources. For example, recently a
new blockchain network called Casper demonstrated that it is 47,000% and 136,000%
more energy-efficient than Ethereum and Bitcoin platforms, respectively [62]. Energy
efficiency is not necessarily related to only the consensus mechanism; there are other
aspects that need to be investigated in detail in future research.
3. Interoperable Hybrid Blockchains: Between two or more hybrid blockchains in the IoT context, there should be an effective communication mechanism to obtain more benefits and achieve more transparency and easier processes. While there are some solutions at the blockchain level, more research is needed for complex hybrid blockchains.
4. Ethical and Legal Aspects: Legal boundaries and restrictions as well as ethical aspects must be investigated for hybrid blockchains that are used by a consortium. Ethics and moral issues of hybrid blockchains are also crucial, but they currently receive little attention.
5. Privacy-preserving Hybrid Blockchains: Privacy preservation for hybrid blockchains
in IoT environments is another important issue that needs further research because
sensitive and confidential data are stored on some platforms. Since most of these
systems are public and transactions are visible to other network members, confidential
information might be inferred by adversaries. Therefore, new privacy preservation
strategies are needed.
6. Standardization: In the IoT context, one of the most important challenges is standardization. While there are different initiatives at the national and international levels, there is still no established standard because the IoT standards landscape is too diverse. In the long term, standardization should also be managed for hybrid blockchains in IoT environments.
**6. Conclusions and Future Work**
In this SLR paper, 38 papers were used as primary papers, and five research questions
were addressed. Security, data integrity, and efficiency are the top three motivations
for adopting integration. The energy, agriculture, health, construction, manufacturing,
and supply chain domains are the top domains adopting the integration. The most widely adopted technologies are cloud computing, telecommunications, fog computing, and edge computing. Ethereum was by far the most used blockchain in the reviewed articles. The
reported challenges are related to portability, resources, interoperability, computational
power, and scalability. As future work, we are planning to design and implement a hybrid
blockchain platform that can minimize the reported challenges.
**Author Contributions: Conceptualization: C.C. and A.A.; data curation: A.A.; formal analysis: A.A.,**
C.C. and G.K.; investigation: A.A., C.C., G.K. and A.M.; methodology: A.A., C.C. and G.K.; project
administration: C.C. and G.K.; resources: A.A., C.C., G.K. and A.M.; supervision: C.C. and G.K.;
validation: A.A., C.C. and G.K.; writing—original draft: A.A., C.C. and G.K.; writing—review and
editing: A.A., C.C., G.K. and A.M. All authors have read and agreed to the published version of
the manuscript.
**Funding: This research was funded by Molde University College-Specialized University in Logistics,**
Norway, through its Open Access fund.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Acknowledgments: The authors thank their universities for the scientific database subscriptions and**
infrastructure support that enabled this collaborative research.
**Conflicts of Interest: The authors declare no conflict of interest.**
**Appendix A. Primary Studies (Sources Reviewed in the SLR)**
1. Abou-Nassar, E. M., Iliyasu, A. M., El-Kafrawy, P. M., Song, O. Y., Bashir, A. K., &
Abd El-Latif, A. A. (2020). DITrust chain: towards blockchain-based trust models for
sustainable healthcare IoT systems. IEEE Access, 8, 111223–111238.
2. Ch, R., Srivastava, G., Gadekallu, T. R., Maddikunta, P. K. R., & Bhattacharya, S. (2020).
Security and privacy of UAV data using blockchain technology. Journal of Information
_Security and Applications, 55, 102670._
3. Fan, K., Bao, Z., Liu, M., Vasilakos, A. V., & Shi, W. (2020). Dredas: Decentralized,
reliable and efficient remote outsourced data auditing scheme with blockchain smart
contract for industrial IoT. Future Generation Computer Systems, 110, 665-674.
4. Fernández-Caramés, T. M., & Fraga-Lamas, P. (2018). A Review on the Use of
Blockchain for the Internet of Things. Ieee Access, 6, 32979–33001.
5. Garg, N., Wazid, M., Das, A. K., Singh, D. P., Rodrigues, J. J., & Park, Y. (2020).
BAKMP-IoMT: Design of blockchain enabled authenticated key management protocol
for internet of medical things deployment. IEEE Access, 8, 95956–95977.
6. Ge, C., Liu, Z., & Fang, L. (2020). A blockchain based decentralized data security mechanism for the Internet of Things. Journal of Parallel and Distributed Computing, 141, 1–9.
7. Hang, L., Ullah, I., & Kim, D. H. (2020). A secure fish farm platform based on
blockchain for agriculture data integrity. Computers and Electronics in Agriculture, 170,
105251.
8. He, S., Tang, Q., & Wu, C. Q. (2018, November). Censorship resistant decentralized
IoT management systems. In Proceedings of the 15th EAI International Conference on
_Mobile and Ubiquitous Systems: Computing, Networking and Services (pp. 454–459)._
9. Iqbal, S., Malik, A. W., Rahman, A. U., & Noor, R. M. (2020). Blockchain-based
reputation management for task offloading in micro-level vehicular fog network.
_IEEE Access, 8, 52968–52980._
10. Jain, R., & Dogra, A. (2019, July). Solar Energy Distribution Using Blockchain and IoT
Integration. In Proceedings of the 2019 International Electronics Communication Conference
(pp. 118–123).
11. Jeong, J. W., Kim, B. Y., & Jang, J. W. (2018, April). Security and device control method
for fog computer using blockchain. In Proceedings of the 2018 International Conference
_on Information Science and System (pp. 234–238)._
12. Kochovski, P., Gec, S., Stankovski, V., Bajec, M., & Drobintsev, P. D. (2019). Trust
management in a blockchain based fog computing platform with trustless smart
oracles. Future Generation Computer Systems, 101, 747–759.
13. Kumar, A., Krishnamurthi, R., Nayyar, A., Sharma, K., Grover, V., & Hossain, E. (2020).
A novel smart healthcare design, simulation, and implementation using healthcare
4.0 processes. IEEE Access, 8, 118433–118471.
14. Kumari, A., Gupta, R., Tanwar, S., & Kumar, N. (2020). Blockchain and AI amalgamation for energy cloud management: Challenges, solutions, and future directions.
_Journal of Parallel and Distributed Computing, 143, 148–166._
15. Liu, Y., Lu, Q., Chen, S., Qu, Q., O’Connor, H., Choo, K. K. R., & Zhang, H. (2020).
Capability-based IoT access control using blockchain. Digital Communications and Networks.
16. Lokshina, I. V., Greguš, M., & Thomas, W. L. (2019). Application of integrated building
information modeling, IoT and blockchain technologies in system design of a smart
building. Procedia computer science, 160, 497–502.
17. Ma, M., Shi, G., & Li, F. (2019). Privacy-oriented blockchain-based distributed key
management architecture for hierarchical access control in the IoT scenario. IEEE
_Access, 7, 34045–34059._
18. Mazzei, D., Baldi, G., Fantoni, G., Montelisciani, G., Pitasi, A., Ricci, L., & Rizzello,
L. (2020). A Blockchain Tokenizer for Industrial IOT trustless applications. Future
_Generation Computer Systems, 105, 432–445._
19. Pal, K. (2020). Internet of things and blockchain technology in apparel manufacturing
supply chain data management. Procedia Computer Science, 170, 450–457.
20. Rahman, M. A., Rashid, M. M., Hossain, M. S., Hassanain, E., Alhamid, M. F., &
Guizani, M. (2019). Blockchain and IoT-based cognitive edge framework for sharing
economy services in a smart city. IEEE Access, 7, 18611–18621.
21. Robert, J., Kubler, S., & Ghatpande, S. (2020). Enhanced Lightning Network (off-chain)-based micropayment in IoT ecosystems. Future Generation Computer Systems,
_112, 283–296._
22. Roy, D. G., Das, P., De, D., & Buyya, R. (2019). QoS-aware secure transaction framework for internet of things using blockchain mechanism. Journal of Network and
_Computer Applications, 144, 59–78._
23. Rožman, N., Corn, M., Požrl, T., & Diaci, J. (2019). Distributed logistics platform based
on Blockchain and IoT. Procedia CIRP, 81, 826–831.
24. Saurabh, S., & Dey, K. (2021). Blockchain technology adoption, architecture, and
sustainable agri-food supply chains. Journal of Cleaner Production, 284, 124731.
25. Sharma, P. K., Chen, M. Y., & Park, J. H. (2017). A software defined fog node based
distributed blockchain cloud architecture for IoT. Ieee Access, 6, 115–124.
26. Singh, S. K., Jeong, Y. S., & Park, J. H. (2020). A deep learning-based IoT-oriented
infrastructure for secure smart city. Sustainable Cities and Society, 60, 102252.
27. Singh, S. K., Rathore, S., & Park, J. H. (2020). BlockIoTIntelligence: A blockchain-enabled intelligent IoT architecture with artificial intelligence. Future Generation
_Computer Systems, 110, 721–743._
28. Sittón-Candanedo, I., Alonso, R. S., Corchado, J. M., Rodríguez-González, S., &
Casado-Vara, R. (2019). A review of edge computing reference architectures and a
new global edge proposal. Future Generation Computer Systems, 99, 278–294.
29. Sok, K., Colin, J. N., & Po, K. (2018, December). Blockchain and Internet of Things
Opportunities and Challenges. In Proceedings of the Ninth International Symposium on
_Information and Communication Technology (pp. 150–154)._
30. Tian, Z., Yan, B., Guo, Q., Huang, J., & Du, Q. (2020). Feasibility of identity authentication for IoT based on Blockchain. Procedia Computer Science, 174, 328–332.
31. Torky, M., & Hassanein, A. E. (2020). Integrating blockchain and the internet of
things in precision agriculture: Analysis, opportunities, and challenges. Computers
_and Electronics in Agriculture, 105476._
32. Uddin, M. A., Stranieri, A., Gondal, I., & Balasubramanian, V. (2020). Blockchain
leveraged decentralized IoT eHealth framework. Internet of Things, 9, 100159.
33. Venkatesh, V. G., Kang, K., Wang, B., Zhong, R. Y., & Zhang, A. (2020). System
architecture for blockchain based transparency of supply chain social sustainability.
_Robotics and Computer-Integrated Manufacturing, 63, 101896._
34. Wang, H., & Zhang, J. (2019). Blockchain based data integrity verification for large-scale IoT data. IEEE Access, 7, 164996–165006.
35. Xie, L., Ding, Y., Yang, H., & Wang, X. (2019). Blockchain-based secure and trustworthy
Internet of Things in SDN-enabled 5G-VANETs. IEEE Access, 7, 56656–56666.
36. Xu, H., Klaine, P. V., Onireti, O., Cao, B., Imran, M., & Zhang, L. (2020). Blockchain-enabled resource management and sharing for 6G communications. Digital Communications and Networks, 6(3), 261–269.
37. Zhang, A., Zhong, R. Y., Farooque, M., Kang, K., & Venkatesh, V. G. (2020). Blockchain-based life cycle assessment: An implementation framework and system architecture.
_Resources, Conservation and Recycling, 152, 104512._
38. Zhao, Q., Chen, S., Liu, Z., Baker, T., & Zhang, Y. (2020). Blockchain-based privacy-preserving remote data integrity checking scheme for IoT information systems. Information Processing & Management, 57(6), 102355.
**References**
1. Tran, N.K.; Babar, M.A.; Boan, J. Integrating blockchain and Internet of Things systems: A systematic review on objectives and designs. J. Netw. Comput. Appl. 2021, 173, 102844. https://doi.org/10.1016/j.jnca.2020.102844
2. Nakamoto, S. Bitcoin: A peer-to-peer electronic cash system. Decentralized Bus. Rev. 2008, 4, 21260.
3. Brown, R.G. The Corda platform: An introduction. Retrieved 2018, 27, 2018.
4. [Holst, A. Number of Internet of Things (IoT) Connected Devices Worldwide from 2019 to 2030. Available online: https:](https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/)
[//www.statista.com/statistics/1183457/iot-connected-devices-worldwide/ (accessed on 5 February 2022).](https://www.statista.com/statistics/1183457/iot-connected-devices-worldwide/)
5. [Romero, M.A. Hybrid Blockchain 101. Available online: https://medium.com/kadena-io/hybrid-blockchain-101-714827d0e77b](https://medium.com/kadena-io/hybrid-blockchain-101-714827d0e77b)
(accessed on 5 February 2022).
6. Pal, K. Internet of things and blockchain technology in apparel manufacturing supply chain data management. Procedia Comput.
_[Sci. 2020, 170, 450–457. [CrossRef]](http://doi.org/10.1016/j.procs.2020.03.088)_
-----
_Sensors 2022, 22, 1304_ 18 of 19
7. Zhang, X.; Cao, Z.; Dong, W. Overview of Edge Computing in the Agricultural Internet of Things: Key Technologies, Applications,
[Challenges. IEEE Access 2020, 8, 141748–141761. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.3013005)
8. Gómez, J.E.; Marcillo, F.R.; Triana, F.L.; Gallo, V.T.; Oviedo, B.W.; Hernández, V.L. IoT for environmental variables in urban areas.
_[Procedia Comput. Sci. 2017, 109, 67–74. [CrossRef]](http://doi.org/10.1016/j.procs.2017.05.296)_
9. Jain, R.; Dogra, A. Solar Energy Distribution Using Blockchain and IoT Integration. In Proceedings of the 2019 International
Electronics Communication Conference, Okinawa, Japan, 7–9 July 2019; pp. 118–123.
10. [Internet of Things in Agriculture. Available online: https://agriculture.vic.gov.au/farm-management/digital-agriculture/](https://agriculture.vic.gov.au/farm-management/digital-agriculture/internet-of-things-in-agriculture#:~{}:text=On%20farms%2C%20IOT%20allows%20devices,monitor%20fences%20vehicles%20and%20weather)
[internet-of-things-in-agriculture#:~{}:text=On%20farms%2C%20IOT%20allows%20devices,monitor%20fences%20vehicles%20](https://agriculture.vic.gov.au/farm-management/digital-agriculture/internet-of-things-in-agriculture#:~{}:text=On%20farms%2C%20IOT%20allows%20devices,monitor%20fences%20vehicles%20and%20weather)
[and%20weather (accessed on 5 February 2022).](https://agriculture.vic.gov.au/farm-management/digital-agriculture/internet-of-things-in-agriculture#:~{}:text=On%20farms%2C%20IOT%20allows%20devices,monitor%20fences%20vehicles%20and%20weather)
11. [Pathak, R. 7 Applications of IoT in Agriculture. Available online: https://www.analyticssteps.com/blogs/5-applications-iot-](https://www.analyticssteps.com/blogs/5-applications-iot-agriculture)
[agriculture (accessed on 5 February 2022).](https://www.analyticssteps.com/blogs/5-applications-iot-agriculture)
12. [Underwood, S. Blockchain beyond bitcoin. Commun. ACM 2016, 59, 15–17. [CrossRef]](http://doi.org/10.1145/2994581)
13. [Maesa, D.D.F.; Mori, P. Blockchain 3.0 applications survey. J. Parallel Distrib. Comput. 2020, 138, 99–114. [CrossRef]](http://doi.org/10.1016/j.jpdc.2019.12.019)
14. Mistry, I.; Tanwar, S.; Tyagi, S.; Kumar, N. Blockchain for 5G-enabled IoT for industrial automation: A systematic review, solutions,
[and challenges. Mech. Syst. Signal Processing 2020, 135, 106382. [CrossRef]](http://doi.org/10.1016/j.ymssp.2019.106382)
15. [Zhang, S.; Lee, J.H. Analysis of the main consensus protocols of blockchain. ICT Express 2020, 6, 93–97. [CrossRef]](http://doi.org/10.1016/j.icte.2019.08.001)
16. [What is Bitcoin? Available online: https://www.bitcoin.com/get-started/what-is-bitcoin/ (accessed on 5 February 2022).](https://www.bitcoin.com/get-started/what-is-bitcoin/)
17. [Mining. Available online: https://ethereum.org/en/developers/docs/consensus-mechanisms/pow/mining/ (accessed on 5](https://ethereum.org/en/developers/docs/consensus-mechanisms/pow/mining/)
February 2022).
18. [Upgrading Ethereum to Radical New Heights. Available online: https://ethereum.org/en/eth2/ (accessed on 5 February 2022).](https://ethereum.org/en/eth2/)
19. [Proof of Weight (PoWeight). 2018. Available online: https://tokens-economy.gitbook.io/consensus/chain-based-proof-of-](https://tokens-economy.gitbook.io/consensus/chain-based-proof-of-capacity-space/proof-of-weight-poweight)
[capacity-space/proof-of-weight-poweight (accessed on 5 February 2022).](https://tokens-economy.gitbook.io/consensus/chain-based-proof-of-capacity-space/proof-of-weight-poweight)
20. [Yakovenko, A. Solana: A New Architecture for a High Performance Blockchain v0.8.13. Available online: http://gumhip.com/](http://gumhip.com/wp-content/uploads/2021/05/Solana-Whitepaper.pdf)
[wp-content/uploads/2021/05/Solana-Whitepaper.pdf (accessed on 5 February 2022).](http://gumhip.com/wp-content/uploads/2021/05/Solana-Whitepaper.pdf)
21. [Salman, D. Polkadot Consensus. Available online: https://wiki.polkadot.network/docs/learn-consensus (accessed](https://wiki.polkadot.network/docs/learn-consensus)
on 5 February 2022).
22. Sun, Y.; Yan, B.; Yao, Y.; Yu, J. DT-DPoS: A Delegated Proof of Stake Consensus Algorithm with Dynamic Trust. Procedia Comput.
_[Sci. 2021, 187, 371–376. [CrossRef]](http://doi.org/10.1016/j.procs.2021.04.113)_
23. Chen, P.; Han, D.; Weng, T.H.; Li, K.C.; Castiglione, A. A novel Byzantine fault tolerance consensus for Green IoT with intelligence
[based on reinforcement. J. Inf. Secur. Appl. 2021, 59, 102821. [CrossRef]](http://doi.org/10.1016/j.jisa.2021.102821)
24. [Centieiro, H. What’s Proof of Elapsed Time. Available online: https://medium.com/nerd-for-tech/whats-proof-of-elapsed-time-](https://medium.com/nerd-for-tech/whats-proof-of-elapsed-time-4f67cf3f45b3)
[4f67cf3f45b3 (accessed on 5 February 2022).](https://medium.com/nerd-for-tech/whats-proof-of-elapsed-time-4f67cf3f45b3)
25. Castro, M.; Liskov, B. Practical Byzantine fault tolerance and proactive recovery. ACM Trans. Comput. Syst. 2002, 20, 398–461.
[[CrossRef]](http://doi.org/10.1145/571637.571640)
26. [Kapoor, S. What is PoC(Proof of Capacity)? Available online: https://medium.com/@shivaanshkapoor02/what-is-poc-proof-of-](https://medium.com/@shivaanshkapoor02/what-is-poc-proof-of-capacity-c85febb5d18e)
[capacity-c85febb5d18e (accessed on 5 February 2022).](https://medium.com/@shivaanshkapoor02/what-is-poc-proof-of-capacity-c85febb5d18e)
27. [Apograf. Simple Proofs of Space-Time and Rational Proofs of Storage. Available online: https://medium.com/@Apograf/](https://medium.com/@Apograf/simple-proofs-of-space-time-and-rational-proofs-of-storage-fb14fd5e479e)
[simple-proofs-of-space-time-and-rational-proofs-of-storage-fb14fd5e479e (accessed on 5 February 2022).](https://medium.com/@Apograf/simple-proofs-of-space-time-and-rational-proofs-of-storage-fb14fd5e479e)
28. Sagirlar, G.; Carminati, B.; Ferrari, E.; Sheehan, J.D.; Ragnoli, E. Hybrid-IoT: Hybrid blockchain architecture for internet of
things-pow sub-blockchains. In Proceedings of the 2018 IEEE International Conference on Internet of Things (iThings) and IEEE
Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart
Data (SmartData), Halifax, NS, Canada, 30 July–3 August 2018; pp. 1007–1016.
29. Zhu, S.; Cai, Z.; Hu, H.; Li, Y.; Li, W. zkCrowd: A hybrid blockchain-based crowdsourcing platform. IEEE Trans. Ind. Inform. 2019,
_[16, 4196–4205. [CrossRef]](http://doi.org/10.1109/TII.2019.2941735)_
30. Lone, A.H.; Naaz, R. Applicability of Blockchain smart contracts in securing Internet and IoT: A systematic literature review.
_[Comput. Sci. Rev. 2021, 39, 100360. [CrossRef]](http://doi.org/10.1016/j.cosrev.2020.100360)_
31. Christidis, K.; Devetsikiotis, M. Blockchains and smart contracts for the internet of things. IEEE Access 2016, 4, 2292–2303.
[[CrossRef]](http://doi.org/10.1109/ACCESS.2016.2566339)
32. Franciscon, E.A.; Nascimento, M.P.; Granatyr, J.; Weffort, M.R.; Lessing, O.R.; Scalabrin, E.E. A systematic literature review of
blockchain architectures applied to public services. In Proceedings of the 2019 IEEE 23rd International Conference on Computer
Supported Cooperative Work in Design (CSCWD), Porto, Portugal, 6–8 May 2019; pp. 33–38.
33. Ariffin, K.A.Z.; Ahmad, F.H. Indicators for maturity and readiness for digital forensic investigation in era of industrial revolution
[4.0. Comput. Secur. 2021, 105, 102237. [CrossRef]](http://doi.org/10.1016/j.cose.2021.102237)
34. Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report, Ver. 2.3 EBSE Technical
Report; EBSE: Gyeonggi-do, Korea, 2007; Volume 5.
35. Slob, N.; Catal, C.; Kassahun, A. Application of machine learning to improve dairy farm management: A systematic literature
[review. Prev. Vet. Med. 2020, 187, 105237. [CrossRef]](http://doi.org/10.1016/j.prevetmed.2020.105237)
36. Singh, S.K.; Rathore, S.; Park, J.H. Blockiotintelligence: A blockchain-enabled intelligent IoT architecture with artificial intelligence.
_[Future Gener. Comput. Syst. 2020, 110, 721–743. [CrossRef]](http://doi.org/10.1016/j.future.2019.09.002)_
-----
37. Mazzei, D.; Baldi, G.; Fantoni, G.; Montelisciani, G.; Pitasi, A.; Ricci, L.; Rizzello, L. A Blockchain Tokenizer for Industrial IOT
[trustless applications. Future Gener. Comput. Syst. 2020, 105, 432–445. [CrossRef]](http://doi.org/10.1016/j.future.2019.12.020)
38. Sok, K.; Colin, J.N.; Po, K. Blockchain and Internet of Things Opportunities and Challenges. In Proceedings of the Ninth
International Symposium on Information and Communication Technology, Da Nang, Vietnam, 6–7 December 2018; pp. 150–154.
39. Jeong, J.W.; Kim, B.Y.; Jang, J.W. Security and device control method for fog computer using blockchain. In Proceedings of the
2018 International Conference on Information Science and System, Jeju, Korea, 27–29 April 2018; pp. 234–238.
40. Kumari, A.; Gupta, R.; Tanwar, S.; Kumar, N. Blockchain and AI amalgamation for energy cloud management: Challenges,
[solutions, and future directions. J. Parallel Distrib. Comput. 2020, 143, 148–166. [CrossRef]](http://doi.org/10.1016/j.jpdc.2020.05.004)
41. Garg, N.; Wazid, M.; Das, A.K.; Singh, D.P.; Rodrigues, J.J.; Park, Y. BAKMP-IoMT: Design of blockchain enabled authenticated
[key management protocol for internet of medical things deployment. IEEE Access 2020, 8, 95956–95977. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.2995917)
42. Kumar, A.; Krishnamurthi, R.; Nayyar, A.; Sharma, K.; Grover, V.; Hossain, E. A novel smart healthcare design, simulation, and
[implementation using healthcare 4.0 processes. IEEE Access 2020, 8, 118433–118471. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.3004790)
43. Ch, R.; Srivastava, G.; Gadekallu, T.R.; Maddikunta, P.K.R.; Bhattacharya, S. Security and privacy of UAV data using blockchain
[technology. J. Inf. Secur. Appl. 2020, 55, 102670. [CrossRef]](http://doi.org/10.1016/j.jisa.2020.102670)
44. Saurabh, S.; Dey, K. Blockchain technology adoption, architecture, and sustainable agri-food supply chains. J. Clean. Prod. 2021,
_[284, 124731. [CrossRef]](http://doi.org/10.1016/j.jclepro.2020.124731)_
45. Ge, C.; Liu, Z.; Fang, L. A blockchain based decentralized data security mechanism for the Internet of Things. J. Parallel Distrib.
_[Comput. 2020, 141, 1–9. [CrossRef]](http://doi.org/10.1016/j.jpdc.2020.03.005)_
46. Lokshina, I.V.; Greguš, M.; Thomas, W.L. Application of integrated building information modeling, IoT and blockchain technologies in system design of a smart building. [Procedia Comput. Sci. 2019, 160, 497–502. [CrossRef]](http://doi.org/10.1016/j.procs.2019.11.058)
47. Uddin, M.A.; Stranieri, A.; Gondal, I.; Balasubramanian, V. Blockchain leveraged decentralized IoT eHealth framework. Internet
_[Things 2020, 9, 100159. [CrossRef]](http://doi.org/10.1016/j.iot.2020.100159)_
48. Singh, S.K.; Jeong, Y.S.; Park, J.H. A deep learning-based IoT-oriented infrastructure for secure smart city. Sustain. Cities Soc. 2020,
_[60, 102252. [CrossRef]](http://doi.org/10.1016/j.scs.2020.102252)_
49. He, S.; Tang, Q.; Wu, C.Q. Censorship resistant decentralized IoT management systems. In Proceedings of the 15th EAI
International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, New York, NY, USA, 5–7
November 2018; pp. 454–459.
50. Sharma, P.K.; Chen, M.Y.; Park, J.H. A software defined fog node based distributed blockchain cloud architecture for IoT. IEEE
_[Access 2017, 6, 115–124. [CrossRef]](http://doi.org/10.1109/ACCESS.2017.2757955)_
51. Yue, D.; Li, R.; Zhang, Y.; Tian, W.; Huang, Y. Blockchain-based verification framework for data integrity in edge-cloud storage. J.
_[Parallel Distrib. Comput. 2020, 146, 1–14. [CrossRef]](http://doi.org/10.1016/j.jpdc.2020.06.007)_
52. Robert, J.; Kubler, S.; Ghatpande, S. Enhanced Lightning Network (off-chain)-based micropayment in IoT ecosystems. Future
_[Gener. Comput. Syst. 2020, 112, 283–296. [CrossRef]](http://doi.org/10.1016/j.future.2020.05.033)_
53. Ma, M.; Shi, G.; Li, F. Privacy-oriented blockchain-based distributed key management architecture for hierarchical access control
[in the IoT scenario. IEEE Access 2019, 7, 34045–34059. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2904042)
54. Venkatesh, V.G.; Kang, K.; Wang, B.; Zhong, R.Y.; Zhang, A. System architecture for blockchain based transparency of supply
[chain social sustainability. Robot. Comput.-Integr. Manuf. 2020, 63, 101896. [CrossRef]](http://doi.org/10.1016/j.rcim.2019.101896)
55. Rožman, N.; Corn, M.; Požrl, T.; Diaci, J. Distributed logistics platform based on Blockchain and IoT. Procedia CIRP 2019, 81,
[826–831. [CrossRef]](http://doi.org/10.1016/j.procir.2019.03.207)
56. Abou-Nassar, E.M.; Iliyasu, A.M.; El-Kafrawy, P.M.; Song, O.Y.; Bashir, A.K.; Abd El-Latif, A.A. DITrust chain: Towards
[blockchain-based trust models for sustainable healthcare IoT systems. IEEE Access 2020, 8, 111223–111238. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.2999468)
57. Hang, L.; Ullah, I.; Kim, D.H. A secure fish farm platform based on blockchain for agriculture data integrity. Comput. Electron.
_[Agric. 2020, 170, 105251. [CrossRef]](http://doi.org/10.1016/j.compag.2020.105251)_
58. Wang, H.; Zhang, J. Blockchain based data integrity verification for large-scale IoT data. IEEE Access 2019, 7, 164996–165006.
[[CrossRef]](http://doi.org/10.1109/ACCESS.2019.2952635)
59. Sharmila, A.H.; Jaisankar, N. Edge Intelligent Agent Assisted Hybrid Hierarchical Blockchain for continuous healthcare monitoring & recommendation system in 5G WBAN-IoT. Comput. Netw. 2021, 200, 108508.
60. Dhall, S.; Dwivedi, A.D.; Pal, S.K.; Srivastava, G. Blockchain-based Framework for Reducing Fake or Vicious News Spread on
[Social Media/Messaging Platforms. Trans. Asian Low-Resour. Lang. Inf. Processing 2021, 21, 1–33. [CrossRef]](http://doi.org/10.1145/3467019)
61. Nartey, C.; Tchao, E.T.; Gadze, J.D.; Keelson, E.; Klogo, G.S.; Kommey, B.; Diawuo, K. On blockchain and IoT integration platforms:
[Current implementation challenges and future perspectives. Wirel. Commun. Mob. Comput. 2021, 2021, 6672482. [CrossRef]](http://doi.org/10.1155/2021/6672482)
62. [CasperLabs. Available online: https://blog.casperlabs.io/new-power-usage-report-shows-the-casper-networks-impressive-](https://blog.casperlabs.io/new-power-usage-report-shows-the-casper-networks-impressive-energy-efficiency-relative-to-other-blockchain-protocols)
[energy-efficiency-relative-to-other-blockchain-protocols (accessed on 25 January 2022).](https://blog.casperlabs.io/new-power-usage-report-shows-the-casper-networks-impressive-energy-efficiency-relative-to-other-blockchain-protocols)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8962977, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1424-8220/22/4/1304/pdf?version=1644404598"
}
| 2,022
|
[
"Review",
"JournalArticle"
] | true
| 2022-02-01T00:00:00
|
[
{
"paperId": "a1f6dc02c91284b52b84895143b9d1869d679d9a",
"title": "Blockchain-based Framework for Reducing Fake or Vicious News Spread on Social Media/Messaging Platforms"
},
{
"paperId": "5146e573c1cc9974d6a8a301e3b2fdf7c975adf5",
"title": "Edge Intelligent Agent Assisted Hybrid Hierarchical Blockchain for continuous healthcare monitoring & recommendation system in 5G WBAN-IoT"
},
{
"paperId": "f025643473c5d4312663c74b93fd021f0ba1664f",
"title": "Indicators for maturity and readiness for digital forensic investigation in era of industrial revolution 4.0"
},
{
"paperId": "e9076dcc8bd523f9e8c3df8d1066a4db5b37bcc7",
"title": "On Blockchain and IoT Integration Platforms: Current Implementation Challenges and Future Perspectives"
},
{
"paperId": "559bcef033955ecb32aca43649b44fceb31c59aa",
"title": "Applicability of Blockchain smart contracts in securing Internet and IoT: A systematic literature review"
},
{
"paperId": "2c641b36507e1d40c31bedd9fc3f7ab62ab93e57",
"title": "Integrating blockchain and Internet of Things systems: A systematic review on objectives and designs"
},
{
"paperId": "a976fb022c43e451573671ab3b122c879c91104e",
"title": "Application of machine learning to improve dairy farm management: A systematic literature review."
},
{
"paperId": "b38d4c25b4e7f76e97a60b83546d2b11360b8bec",
"title": "Blockchain-based verification framework for data integrity in edge-cloud storage"
},
{
"paperId": "6ef8a703768f7cc52b2e40edad28be70fb78f9b7",
"title": "Security and privacy of UAV data using blockchain technology"
},
{
"paperId": "449b60ef61e5c01ff7b12f6d59d4ee6b1118097f",
"title": "Enhanced Lightning Network (off-chain)-based micropayment in IoT ecosystems"
},
{
"paperId": "de3629e232226d25e7c854c54a7026483b2332b1",
"title": "Blockchain-based privacy-preserving remote data integrity checking scheme for IoT information systems"
},
{
"paperId": "27b42af5c3684ecd30125ff61dc2eb600f8cba68",
"title": "Integrating blockchain and the internet of things in precision agriculture: Analysis, opportunities, and challenges"
},
{
"paperId": "96d3c60d1ee83f49ae6685e6c6159f52b82b30f3",
"title": "Blockchain technology adoption, architecture, and sustainable agri-food supply chains"
},
{
"paperId": "c3aca32385a9538eefefe9f026288c99ed998540",
"title": "Capability-based IoT access control using blockchain"
},
{
"paperId": "b2e1390f9e1d26bf0df91b9d992774a18fa0ecb7",
"title": "Blockchain and AI amalgamation for energy cloud management: Challenges, solutions, and future directions"
},
{
"paperId": "10ed4a256d051166963970c97e8f8946df19fe82",
"title": "A deep learning-based IoT-oriented infrastructure for secure smart City"
},
{
"paperId": "e8129c016fb48077fedfc23239bb51aa12f70d7f",
"title": "BlockIoTIntelligence: A Blockchain-enabled Intelligent IoT Architecture with Artificial Intelligence"
},
{
"paperId": "8f52aae7997a0db6f452f38ae55a8b0b118c260b",
"title": "Dredas: Decentralized, reliable and efficient remote outsourced data auditing scheme with blockchain smart contract for industrial IoT"
},
{
"paperId": "f096afbd7cda694f452dae078df13a400a8f5b57",
"title": "Overview of Edge Computing in the Agricultural Internet of Things: Key Technologies, Applications, Challenges"
},
{
"paperId": "fc7b9b2f9e1d0c221cbd84659ef0d4c2fc28983c",
"title": "A blockchain based decentralized data security mechanism for the Internet of Things"
},
{
"paperId": "99df24d3f9fa3b524b2cf04a0a05f611dad42c5d",
"title": "A Novel Smart Healthcare Design, Simulation, and Implementation Using Healthcare 4.0 Processes"
},
{
"paperId": "d278ff2fbd4b048c78b0c9a6bc12dcf0a81f9fa2",
"title": "DITrust Chain: Towards Blockchain-Based Trust Models for Sustainable Healthcare IoT Systems"
},
{
"paperId": "00b0db9d82a7e4c34c2fb69138e97a6a74f16e25",
"title": "Analysis of the main consensus protocols of blockchain"
},
{
"paperId": "599a846955e74dedb410e5dab835e3773580612e",
"title": "System architecture for blockchain based transparency of supply chain social sustainability"
},
{
"paperId": "44ff32b8bbfa0b4bdda40162afcbe12af6e5b1b5",
"title": "zkCrowd: A Hybrid Blockchain-Based Crowdsourcing Platform"
},
{
"paperId": "9f0da40f2f72c077902e3f1e88c1d02b2ed548ff",
"title": "BAKMP-IoMT: Design of Blockchain Enabled Authenticated Key Management Protocol for Internet of Medical Things Deployment"
},
{
"paperId": "bb0ed76f0cbc5696f6930d036072cff1b5fb4cdc",
"title": "A Blockchain Tokenizer for Industrial IOT trustless applications"
},
{
"paperId": "317d8798a1a22462b685e9f9b76de4b50c53501e",
"title": "Blockchain 3.0 applications survey"
},
{
"paperId": "ce9b0807acd76582fa32be9d2b255b4e0793640c",
"title": "Blockchain-enabled Resource Management and Sharing for 6G Communications"
},
{
"paperId": "245b31721a5a7b907b8a4b60271f8ba232e251a2",
"title": "Blockchain-Based Reputation Management for Task Offloading in Micro-Level Vehicular Fog Network"
},
{
"paperId": "570ebcf6662b090158049a4876290047091ccac7",
"title": "Blockchain leveraged decentralized IoT eHealth framework"
},
{
"paperId": "17cfa4c5d53735dba4a4ed4239a9b4e2ddb0b57d",
"title": "A secure fish farm platform based on blockchain for agriculture data integrity"
},
{
"paperId": "ef8e9fcadab13b5d20e03d9b56d491da754b1953",
"title": "Blockchain for 5G-enabled IoT for industrial automation: A systematic review, solutions, and challenges"
},
{
"paperId": "e6bd05040571790c7c2a5b96ed79a7d96e81898b",
"title": "Blockchain-based life cycle assessment: An implementation framework and system architecture"
},
{
"paperId": "847a825efdb5aafd0be07c0188941f39f634a0f4",
"title": "Trust management in a blockchain based fog computing platform with trustless smart oracles"
},
{
"paperId": "59233c7e3a8ea3afe6f81e79ca376b21ae97d6ab",
"title": "QoS-aware secure transaction framework for internet of things using blockchain mechanism"
},
{
"paperId": "6aa0394e9676f64b8ddcc084d878d2d1701ee33b",
"title": "A review of edge computing reference architectures and a new global edge proposal"
},
{
"paperId": "3dc36a25d0ef31146ce0e2a70463ed6b3a48db16",
"title": "Solar Energy Distribution Using Blockchain and IoT Integration"
},
{
"paperId": "f810b5430e3e7b0b30d2bed932587506baa9e6e2",
"title": "A Systematic Literature Review of Blockchain Architectures Applied to Public Services"
},
{
"paperId": "df7bf84ee9b9a1aa6940673721acb492be6a41a9",
"title": "Privacy-Oriented Blockchain-Based Distributed Key Management Architecture for Hierarchical Access Control in the IoT Scenario"
},
{
"paperId": "21a6c1f44f4809ec56826a0b2432ce25888be1dc",
"title": "Blockchain and IoT-Based Cognitive Edge Framework for Sharing Economy Services in a Smart City"
},
{
"paperId": "b54440362a40461b988b4b7bf3559f5b4e3bc799",
"title": "Blockchain and Internet of Things Opportunities and Challenges"
},
{
"paperId": "4e4cd44a770d99e7996eae1a1a4e4f26ef3e247c",
"title": "Censorship Resistant Decentralized IoT Management Systems"
},
{
"paperId": "02458904f9bd718bd8c6a1a36e9847ad83b0410b",
"title": "A Review on the Use of Blockchain for the Internet of Things"
},
{
"paperId": "56e1a19fd6eb958ae6be71958fa618a8add90d7e",
"title": "Security and Device Control Method for Fog Computer using Blockchain"
},
{
"paperId": "3573784823ae9f177788f2599b162a50a4127f83",
"title": "Hybrid-IoT: Hybrid Blockchain Architecture for Internet of Things - PoW Sub-Blockchains"
},
{
"paperId": "051a8fae323f26a9bd2ca551940b4ba52b99c1be",
"title": "A Software Defined Fog Node Based Distributed Blockchain Cloud Architecture for IoT"
},
{
"paperId": "007dd9293459aedd117f6dd7baa3e4d2a00ec267",
"title": "Internet of Things in agriculture"
},
{
"paperId": "efe573cbfa7f4de4fd31eda183fefa8a7aa80888",
"title": "Blockchain beyond bitcoin"
},
{
"paperId": "c998aeb12b78122ec4143b608b517aef0aa2c821",
"title": "Blockchains and Smart Contracts for the Internet of Things"
},
{
"paperId": "55bdaa9d27ed595e2ccf34b3a7847020cc9c946c",
"title": "Performing systematic literature reviews in software engineering"
},
{
"paperId": "48326c5da8fd277cc32e1440b544793c397e41d6",
"title": "Practical byzantine fault tolerance and proactive recovery"
},
{
"paperId": "7a3a335b0bae24378be0776d0c3316246e395fe6",
"title": "DT-DPoS: A Delegated Proof of Stake Consensus Algorithm with Dynamic Trust"
},
{
"paperId": "49e3cc508a201161a00708e1318b146e22bfe6e2",
"title": "A novel Byzantine fault tolerance consensus for Green IoT with intelligence based on reinforcement"
},
{
"paperId": "992c2c498f46b02425bf3f1ece57f8dfa0f748ed",
"title": "Internet of Things and Blockchain Technology in Apparel Manufacturing Supply Chain Data Management"
},
{
"paperId": "fb707f27d2bb06c0d15da122bc3edc3909cc98c8",
"title": "Feasibility of Identity Authentication for IoT Based on Blockchain"
},
{
"paperId": "dd68bdee3aaed1094fee533a2348e0d144ca6580",
"title": "Distributed logistics platform based on Blockchain and IoT"
},
{
"paperId": "d72753e4b940e49bb6df90223556b56f99f375f4",
"title": "Application of Integrated Building Information Modeling, IoT and Blockchain Technologies in System Design of a Smart Building"
},
{
"paperId": "f8d78bc2588e394586746cc562f8c18a89737bba",
"title": "Blockchain Based Data Integrity Verification for Large-Scale IoT Data"
},
{
"paperId": "8f0bb6825f4af8a708187959d62cadd6b6e424f2",
"title": "The Corda Platform : An Introduction"
},
{
"paperId": "8328278c5843963e9cfff3f595b52081bb9aacf0",
"title": "Solana : A new architecture for a high performance blockchain v 0 . 8"
},
{
"paperId": "d94cd8565653a2a55f4fe9b7e1de72e669d56086",
"title": "The 8 th International Conference on Ambient Systems , Networks and Technologies ( ANT 2017 ) \" IoT FOR ENVIRONMENTAL VARIABLES IN URBAN AREAS \""
},
{
"paperId": "7aea23d6e14d2cf70c8bcc286a95b520d1bc6437",
"title": "Proofs of Space-Time and Rational Proofs of Storage"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": null,
"title": "A New Architecture for a High Performance Blockchain v0.8.13"
},
{
"paperId": null,
"title": "Polkadot Consensus. Available online: https://wiki.polkadot.network/docs/learn-consensus\\T1\\textbar{} (accessed on 5 February 2022)"
},
{
"paperId": null,
"title": "Upgrading Ethereum to Radical New Heights"
},
{
"paperId": null,
"title": "Number of Internet of Things (IoT) Connected Devices Worldwide from 2019 to 2030"
},
{
"paperId": null,
"title": "Guidelines for Performing Systematic Literature Reviews in Software Engineering; Technical Report, Ver"
},
{
"paperId": null,
"title": "What is Bitcoin ? Mining Upgrading Ethereum to Radical New Heights Proof of Weight ( PoWeight )"
},
{
"paperId": null,
"title": "What’s Proof of Elapsed Time. Available online: https://medium.com/nerd-for-tech/whats-proof-of-elapsed-time4f67cf3f45b3 (accessed on 5 February 2022)"
},
{
"paperId": null,
"title": "Proof of Weight (PoWeight)"
},
{
"paperId": null,
"title": "What ’ s Proof of Elapsed Time"
},
{
"paperId": null,
"title": "Blockchainbased life cycle assessment: An implementation framework and system architecture. Resources, Conservation and Recycling"
},
{
"paperId": null,
"title": "Polkadot Consensus"
},
{
"paperId": null,
"title": "Blockchain - based secure and trustworthy Internet of Things in SDN - enabled 5 G - VANETs"
},
{
"paperId": null,
"title": "7 Applications of IoT in Agriculture"
},
{
"paperId": null,
"title": "What is PoC(Proof of Capacity)? Available online: https://medium.com/@shivaanshkapoor02/what-is-poc-proof-ofcapacity-c85febb5d18e (accessed on 5 February 2022)"
},
{
"paperId": null,
"title": "Hybrid Blockchain 101. Available online: https://medium.com/kadena-io/hybrid-blockchain-101-714827d0e77b (accessed on 5 February 2022)"
},
{
"paperId": null,
"title": "What is PoC(Proof of Capacity)? Available online"
},
{
"paperId": null,
"title": "Hybrid Blockchain 101"
},
{
"paperId": "5b26cca5549303a5b78918a9b26d238779cf5ef2",
"title": "Applications of IoT in Agriculture"
}
] | 21,400
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/02ff5d5e0e8f717e0fbf57bef99d47bf4a42c74c
|
[
"Computer Science"
] | 0.867391
|
A New Security Protocol Based on Elliptic Curve Cryptosystems for Securing Wireless Sensor Networks
|
02ff5d5e0e8f717e0fbf57bef99d47bf4a42c74c
|
EUC Workshops
|
[
{
"authorId": "2585935",
"name": "S. Seo"
},
{
"authorId": "2109608569",
"name": "Hyung Chan Kim"
},
{
"authorId": "1391156834",
"name": "R. S. Ramakrishna"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# A New Security Protocol Based on Elliptic Curve Cryptosystems for Securing Wireless Sensor Networks
Seog Chung Seo, Hyung Chan Kim, and R.S. Ramakrishna
Department of Information and Communications,
Gwangju Institute of Science and Technology (GIST),
1 Oryong-dong, Buk-gu, Gwangju 500-712, Rep. of Korea
_{gegehe, kimhc, rsr}@gist.ac.kr_
**Abstract. In this paper, we describe the design and implementation of**
a new security protocol based on Elliptic Curve Cryptosystems (ECC)
for securing Wireless Sensor Networks (WSNs). Public-key-based protocols such as TinyPK and EccM 2.0 have already been proposed for this purpose. However, they exhibit poor performance and are vulnerable to man-in-the-middle attacks. We propose a cluster-based
Elliptic Curve Diffie-Hellman (ECDH) and Elliptic Curve Digital Signature Algorithm (ECDSA) for efficiency and security during the pairwise
key setup and broadcast authentication phases, respectively. We have implemented our protocol on an 8-bit, 7.3828-MHz MICAz mote. The experimental results indicate the feasibility of our protocol for WSNs.
## 1 Introduction
Wireless sensor networks (WSNs) have been proposed for a wide variety of
applications such as emergency medical care, vehicular tracking, and building monitoring systems. Because these sensor networks are composed of small,
resource-constrained sensor nodes and are deployed in harsh, unattended environments, some combination of authentication, integrity, and confidentiality is required for reliable and lasting network communications. However, achieving security in WSNs is challenging, because the absence of any supervisor makes conventional security protocols infeasible. Furthermore, the limited resources of the sensor nodes make them targets of Denial-of-Service
(DoS) attacks. Therefore, it is essential to build a security protocol taking into
account the inherent characteristics of WSNs such as low computing power, low
bandwidth, high susceptibility to physical capture, and dynamic network topology [1]. In addition, the security protocol should cope with a number of threats
including eavesdropping, injecting malicious messages, and node compromise.
Most existing security protocols are based on symmetric keys. Symmetric-key systems provide efficient encryption and decryption operations. However, they are not well suited to pairwise key setup and broadcast authentication, because they generate heavy traffic and require a complex key-management architecture. Moreover, symmetric-key-based security protocols are vulnerable to
-----
node compromise. Some public-key-based protocols such as TinyPK [8], EccM
2.0 [7] and Blass’ [4] have addressed these issues. However, they exhibit poor performance and are vulnerable to man-in-the-middle attacks.
This paper presents a new security protocol based on ECC. Our protocol consists of two main phases: pairwise key setup and broadcast authentication.
We propose a cluster-based ECDH and ECDSA for security of key agreement
and efficiency of broadcast authentication, respectively. Our contributions are
summarized below:
**– The proposed ECDH provides a generic key agreement mechanism that**
does not require any knowledge of the network topology. The proposed scheme prevents man-in-the-middle attacks by verifying the signature on the public key received from the other node. The established pairwise keys are used to distribute the cluster key, a key shared by all members of a cluster.
**– The cluster-based ECDSA offers efficient broadcast authentication which**
can reduce the overheads on network-wide verification. This is why only the
clusterheads are responsible for verifying broadcast messages in our protocol.
**– We have implemented the proposed protocols on the 8-bit, 7.3828-MHz**
MICAz mote [11] which is one of the most popular sensor motes. The
experimental results testify to the viability of our protocol for WSNs. Furthermore, the proposed protocol outperforms existing ECC-based protocols
over GF (2[p]) for WSNs with the aid of efficient algorithms such as width-w
Mutual-Opposite-Form (wMOF) [14] and shamir’s trick [13].
The remainder of this paper is organized as follows: Section 2 reviews related work. Section 3 describes the proposed security protocols. Security is analyzed in Section 4. In Section 5 we present the implementation and experimental results. Conclusions are presented in Section 6. The details of the main ideas behind the efficient implementation of ECDH and ECDSA can be found in the Appendix.
## 2 Related Work
As WSNs become attractive in ubiquitous computing environments, the security of WSNs is understandably attracting attention, and many security protocols have been proposed to date. They fall into two main categories: symmetric-key-based protocols and public-key-based protocols. Security protocols based on symmetric keys, such as SPINS, assume that public-key systems are completely impractical due to their high computational overhead [2,3]. However, symmetric-key systems are not as versatile as public-key systems, so they complicate the design of the security architecture, such as key distribution and broadcast authentication. The complicated security architecture generates heavy network traffic. Currently, many researchers are attempting to apply public-key cryptosystems to securing WSNs.
Watro et al. presented TinyPK for authentication and key agreement between 8-bit MICA2 motes [8]. TinyPK makes use of RSA-based Diffie-Hellman for key agreement. However, TinyPK takes more than 2 minutes to establish a pairwise key between two sensor nodes. Kumar et al. [6] developed
a communication protocol employing ECDH key exchange. Their work involves optimal extension fields, in which field multiplication is quite efficient. However, it is vulnerable to the Weil descent attack. Wander et al. compared the performance of RSA and ECC on the ATmega128L processor with respect to energy consumption [12]; they also integrated RSA and ECC into the SSL handshake to provide mutual authentication. Gaubatz et al. compared Rabin's scheme, NtruEncrypt, and ECC on a low-power device in [5]. Their experimental results show that ECC is more appropriate for WSNs than Rabin's scheme and NtruEncrypt. EccM 2.0 implements the ECDH key agreement protocol on the MICA2 mote [7]. In EccM 2.0, the pairwise key established between two sensor nodes is used as the symmetric key of TinySec [9], the link-layer security architecture in TinyOS [10]. Blass and Zitterbart have also analyzed the performance of ECDH, ECDSA, and El-Gamal on the MICA2 mote [4].
## 3 Proposed Protocol
**3.1** **Assumptions and Preliminaries**
- The sensor network consists of several clusters, interconnected by gateway nodes that belong to two or more clusters.
- Clusterheads are computationally much more powerful and have larger storage capacity than normal sensor nodes.
- Each sensor node holds one public key and its corresponding signature, signed with the BS's private key. The BS's public key is also stored in every node before deployment. All public keys of the clusterheads are stored in the BS.
- NA is the normal node named A in a cluster; NC and NG represent a clusterhead and a gateway node, respectively. KA is the private key of NA, PA is the public key of NA, and SPA is the signature of the public key of NA. KAB is the pairwise key between NA and NB, and KC refers to the cluster key. EKAB(m) indicates that a message m is encrypted with the pairwise key KAB shared by node A and node B. Concatenation of messages is expressed with the operator ||. G is the base point used in the ECC operations.
**3.2** **Pairwise Key Establishment**
We combine ECDH key agreement with a clustering scheme. The pairwise key setup can be categorized into four types, as illustrated in Fig. 1: keys between a clusterhead and the normal nodes adjacent to it (1), between normal nodes (2), between a gateway node and either a clusterhead or a normal node (3), and between clusterheads (4). In the process of pairwise key setup between normal nodes and a clusterhead, the validity of the normal nodes adjacent to the clusterhead should be verified, because such nodes could modify the data from other normal nodes or generate malicious data directly. Furthermore, the legitimacy of gateway nodes should be examined to prevent attackers from impersonating a valid node; otherwise, an attacker could control where the gathered data goes by impersonating a gateway node.
**Fig. 1. Pairwise key establishment process**
In turn, the legitimacy of the clusterhead should be inspected by the normal nodes, because it plays a pivotal role in gathering and forwarding the data. The normal nodes that are close to the clusterhead can check the identity of the clusterhead via the signature of its public key.

The sensor nodes use the established pairwise key as a symmetric key for TinySec [9], a link-layer security architecture in TinyOS [10]. In this way, our protocol utilizes TinySec to provide node-to-node confidentiality and authentication.
(i) Between clusterhead and normal nodes

1. A normal node (NA) sends its public key (PA = G ∗ KA) and the corresponding signature (SPA), signed with the BS's private key, to the clusterhead (NC).
   NA −→ NC : PA || SPA
2. The clusterhead verifies the validity of the public key using the BS's public key. If the signature is authentic, the clusterhead sends its public key to NA. Otherwise, it registers NA as a malicious node.
3. If the signature proves to be valid, both nodes can calculate the common pairwise key (KAC = KCA = PA ∗ KC = PC ∗ KA = G ∗ KA ∗ KC).
4. After completing the pairwise key setup between the normal nodes and the clusterhead, the latter can distribute the cluster key (KC), the commonly shared key of the cluster. The cluster key is encrypted with the pairwise key between each normal node and the clusterhead and is distributed to each normal node.
   NC −→ NA : EKCA(KC)
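Step 4 above can be sketched concretely. TinySec's actual cipher (Skipjack) is not available in the Python standard library, so the sketch below substitutes a toy XOR keystream derived with SHA-256, and all key and nonce byte strings are hypothetical placeholders, not values from the paper:

```python
# Sketch of step 4: the clusterhead N_C encrypts the cluster key K_C under
# each established pairwise key and unicasts it to each normal node.
# Toy XOR keystream (SHA-256 based) stands in for TinySec's real cipher.
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

K_CA = b"pairwise-key-C-A"   # from the ECDH exchange above (placeholder bytes)
K_C = b"cluster-key-0001"    # cluster key chosen by the clusterhead (placeholder)

ct = encrypt(K_CA, b"msg-0", K_C)          # N_C -> N_A : E_{K_CA}(K_C)
assert decrypt(K_CA, b"msg-0", ct) == K_C  # N_A recovers the cluster key
```

Each normal node recovers the same KC with its own pairwise key, so subsequent intra-cluster broadcasts can be protected with a single shared key.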
(ii) Between two normal nodes

1. NA sends its public key (PA = G ∗ KA) to NB.
2. NB sends its public key (PB = G ∗ KB) to NA.
3. Both nodes can then calculate the common pairwise key (KAB = KBA = PA ∗ KB = PB ∗ KA = G ∗ KA ∗ KB).
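The exchange in steps (i) and (ii) is plain ECDH. The sketch below demonstrates it end-to-end on a tiny textbook curve (y² = x³ + 2x + 2 over GF(17), base point G = (5, 1) of order 19) rather than the paper's sect113r1 parameters; the curve and the private keys are for illustration only:

```python
# Toy ECDH matching step (ii): each node sends P = G*K, then both compute
# K_AB = K_A * P_B = K_B * P_A = G * K_A * K_B.
p, a = 17, 2
G, n = (5, 1), 19
O = None  # point at infinity

def inv(x):                    # modular inverse in GF(p)
    return pow(x, p - 2, p)

def add(P, Q):                 # affine point addition
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O               # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv(2 * y1) % p
    else:
        lam = (y2 - y1) * inv(x2 - x1) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                 # double-and-add scalar multiplication
    R = O
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

kA, kB = 3, 7                      # private keys K_A, K_B (arbitrary)
PA, PB = mul(kA, G), mul(kB, G)    # exchanged public keys
KAB = mul(kA, PB)                  # computed by N_A
KBA = mul(kB, PA)                  # computed by N_B
assert KAB == KBA                  # the shared pairwise key K_AB
```

An eavesdropper who sees PA and PB must solve the ECDLP to recover KAB, which is the security argument made in Section 4.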
**Fig. 2. Broadcast authentication process**
(iii) Between a gateway node and (a clusterhead or a normal node)

1. The gateway node (NG) sends (PG, SPG).
2. If a clusterhead receives the message from the gateway node, it can verify the validity of the public key immediately.
3. If a normal node receives the message, it forwards the pair to the clusterhead for examination. The clusterhead returns the result of the verification.
   NG −→ NA : PG || SPG
   NA −→ NC : EKAC(PG || SPG)
   NC −→ NA : EKCA(Valid or Not)
4. If the signature is authentic, the remaining steps are the same as before.
(iv) Between clusterheads

Assume that two clusterheads NCA and NCB try to set up a pairwise key.

1. The clusterhead (NCA) sends (PCA, SPCA) to the gateway node (NG).
   NCA −→ NG : EKGCA(PCA || SPCA)
2. The gateway node (NG) forwards the pair to the other clusterhead (NCB).
   NG −→ NCB : EKGCB(PCA || SPCA)
3. If the signature is valid, the clusterhead (NCB) likewise sends (PCB, SPCB) to the clusterhead (NCA). The same procedure is followed to verify the validity of NCB.
4. After mutual authentication, the two clusterheads can compute the common pairwise key.
**3.3** **Broadcast Authentication**
In WSNs, the BS broadcasts its commands or queries to the sensor nodes. If a broadcast authentication mechanism is not provided, an attacker can impersonate the BS and execute a kind of DoS attack by generating heavy traffic over the network. Similarly, the clusterheads broadcast the data aggregated from normal nodes to the BS. Unless there is provision for authentication, it is possible for attackers to send malicious or bogus data to the BS. For broadcast authentication in WSNs, µTESLA has been proposed in [2]. However, in µTESLA all the sensor nodes must be time-synchronized with the BS, a constraint that decreases the lifetime of the sensor network. Furthermore, the delayed disclosure of the authentication keys causes a delay in message authentication in µTESLA. We can provide an efficient broadcast authentication mechanism by exploiting ECC, and especially ECDSA, owing to its much smaller key size compared with other digital signature algorithms. However, the overhead of verification in ECDSA is almost twice as large as that of signing. If all the sensor nodes in the network verified the broadcast messages from the BS, considerable energy would be consumed, which is unacceptable in view of the limited resources of the network. Therefore, we propose a cluster-based ECDSA to reduce the verification overhead of broadcast authentication: only the clusterheads are responsible for verifying the broadcast messages in our mechanism. This results in a sharp reduction of resource consumption.
In Section 3.1, we assumed that the BS's public key is stored in each sensor node and that the public keys of the clusterheads are maintained by the BS. The BS's public key is used by the clusterheads to verify the signatures of broadcast messages from the BS; similarly, the clusterheads' public keys are used to verify the messages sent from the clusterheads to the BS.
**Broadcast from Base Station to Clusterheads**

The process is depicted in Fig. 2, in which the BS broadcasts a message to the clusterheads (steps 1 through 4). The message is encrypted with the pairwise keys of the concerned nodes to provide confidentiality. The details are given below.

(i) Signing the broadcast message

1. The BS generates the signature (r, s) from the message (m) and its private key (d). The nonce value (R) is used to prevent replay attacks. The pair (r, s) is computed as follows:
   r = x1 mod n, where kP = (x1, y1), k ∈ [1, n − 1], and P is a point on the curve;
   s = k^(−1){h(m || R) + dr} mod n, where h is SHA-1 and n is a large prime.
2. The BS (NB′) sends the signature (r, s) and the message (m), together with the random nonce (R), to the gateway nodes for delivery to the clusterheads. The gateway nodes forward it to the clusterheads.
   NB′ −→ NG : EKB′G((r, s) || m || R)
   NG −→ NC : EKGC((r, s) || m || R)
(ii) Verifying the broadcast message

1. When a clusterhead receives the signed broadcast message, it verifies the message by comparing v with r. In addition, it ignores duplicate messages by checking the nonce value, which improves energy efficiency. The value v is computed as follows:
   v = x1 mod n, where u1 ∗ G + u2 ∗ PB′ = (x1, y1),
   u1 = {h(m || R) ∗ w} mod n, u2 = r ∗ w mod n, and w = s^(−1) mod n.
2. If the calculated v equals the received r, the clusterhead accepts the message. It then broadcasts a local query, encrypted with the cluster key (KC), to the normal nodes in its cluster.
   NC −→ NA : EKC(m)
3. The normal nodes begin the assigned work and return the results.
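The signing and verification equations above can be exercised end-to-end. The sketch below uses a tiny textbook curve (y² = x³ + 2x + 2 over GF(17), G = (5, 1), group order n = 19) instead of sect113r1, with SHA-1 over m || R as in the paper; reducing the hash mod n and searching for a usable nonce k deterministically are toy simplifications for illustration only:

```python
# Toy ECDSA matching the paper's equations: r = x1 mod n with kP = (x1, y1),
# s = k^(-1){h(m||R) + d*r} mod n; verification recomputes v and checks v == r.
import hashlib

p, a, G, n = 17, 2, (5, 1), 19
O = None  # point at infinity

def inv(x, m):                       # modular inverse (m prime)
    return pow(x, m - 2, m)

def add(P, Q):                       # affine point addition
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O
    lam = ((3 * x1 * x1 + a) * inv(2 * y1, p) if P == Q
           else (y2 - y1) * inv(x2 - x1, p)) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                       # double-and-add scalar multiplication
    R = O
    while k:
        if k & 1: R = add(R, P)
        P, k = add(P, P), k >> 1
    return R

def h(msg, R):                       # h(m || R), reduced mod n (toy shortcut)
    return int.from_bytes(hashlib.sha1(msg + R).digest(), "big") % n

def sign(d, msg, R):
    for k in range(1, n):            # toy deterministic nonce search
        r = mul(k, G)[0] % n
        if r == 0: continue
        s = inv(k, n) * (h(msg, R) + d * r) % n
        if s: return r, s

def verify(Pub, msg, R, sig):
    r, s = sig
    if not (0 < r < n and 0 < s < n): return False
    w = inv(s, n)                    # w = s^(-1) mod n
    u1, u2 = h(msg, R) * w % n, r * w % n
    pt = add(mul(u1, G), mul(u2, Pub))
    return pt is not O and pt[0] % n == r   # accept iff v == r

d = 5                                # base station's private key (placeholder)
Pub = mul(d, G)                      # its public key, preloaded on clusterheads
sig = sign(d, b"query", b"R1")
assert verify(Pub, b"query", b"R1", sig)
assert not verify(Pub, b"query", b"R1", (0, sig[1]))  # out-of-range r rejected
```

The positive check succeeds because u1·G + u2·Pub = w(h + dr)·G = k·G, reproducing the x-coordinate behind r.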
**Broadcast from Clusterheads to Base Station**

This procedure is the reverse of the one shown in Fig. 2.

(i) Signing the broadcast message

1. A clusterhead collects data from the normal nodes in its cluster. It signs the gathered data with its private key (the signing procedure is the same as above). To prevent replay attacks, a nonce value (R′) is used.
2. The clusterhead sends the signature (r′, s′) and the data (m′) to a gateway node.
   NC −→ NG : EKCG((r′, s′) || m′ || R′)
3. The gateway node forwards the pair to the BS through other clusters.
   NG −→ NB′ : EKGB′((r′, s′) || m′ || R′)

(ii) Verifying the broadcast message

1. The BS can verify the messages from the clusterheads because it maintains the clusterheads' public keys (the verification procedure is the same as above). It also achieves high energy efficiency by rejecting duplicate messages through the nonce check.
2. If the signature proves to be valid, the BS accepts the message.
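The duplicate-suppression step used in both directions amounts to remembering already-accepted nonces and dropping replays before paying the cost of ECDSA verification. A minimal sketch (the unbounded set is a simplification; a real mote would use a bounded window or counters):

```python
# Replay rejection by nonce: a receiver (BS or clusterhead) drops duplicate
# messages before running the expensive signature verification.
class ReplayFilter:
    def __init__(self):
        self.seen = set()

    def accept(self, nonce: bytes) -> bool:
        """Return True for a fresh nonce, False for a replay."""
        if nonce in self.seen:
            return False
        self.seen.add(nonce)
        return True

f = ReplayFilter()
assert f.accept(b"R1")        # first delivery: proceed to verify the signature
assert not f.accept(b"R1")    # replayed copy: dropped, no verification cost
assert f.accept(b"R2")        # a fresh nonce is accepted as usual
```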
## 4 Security Analysis
We analyze the proposed key setup protocol and broadcast authentication mechanism with regard to essential security properties: confidentiality, integrity, authentication, and resistance to node compromise.

**Confidentiality and Integrity.** During pairwise key setup, even if an attacker eavesdrops on the exchanged information, such as the nodes' public keys, the secret pairwise key remains secure, because the attacker must solve the ECDLP to obtain it. After completing the setup, the nodes use the pairwise key as a symmetric key in TinySec [9], which provides efficient node-to-node confidentiality and authentication. Furthermore, broadcast messages from the BS are encrypted with these pairwise keys. Therefore, our protocol ensures confidentiality and integrity.
**Authentication.** We categorize the pairwise key setup into four types and require the concerned nodes to verify each other's signature so as to thwart man-in-the-middle attacks. For example, the identities of the clusterheads, the gateway nodes, and the normal nodes close to the clusterhead are examined, because they play principal roles in our protocol. Furthermore, the BS broadcasts messages signed with its private key. In both cases, attackers cannot forge the signature of a public key, because it is signed with the BS's private key. Therefore, the proposed protocols provide an authentication mechanism through signature verification.
**Node Compromise.** Node compromise is a critical attack that can destroy an entire security mechanism: an attacker can examine the secret information and the running code of a compromised node. In our protocol, each node maintains only minimal information, namely its own public/private key pair and the corresponding signature. Therefore, even if an attacker compromises t nodes, the secrets of the (t+1)-th node remain out of reach; the attacker must solve the ECDLP to find the private key of the (t+1)-th node. This is an advantage of our protocol over symmetric-key-based protocols [2,3], which are vulnerable to node compromise attacks.
## 5 Implementation and Performance Evaluation
We have implemented the proposed protocol on an 8-bit, 7.3828-MHz MICAz mote [11]. To emphasize the feasibility of our protocol in WSNs, we concentrate on the efficient implementation of the proposed pairwise key establishment protocol and broadcast authentication mechanism based on ECC, rather than on cluster formation or routing protocols.
**5.1** **Implementation Details**
**Elliptic Domain Parameters and Selection of Key Size.** We use the recommended 113-bit elliptic curve domain parameters (sect113r1 of [15]) over GF(2^p). Although the selected 113-bit key is shorter than NIST's recommended key size (163 bits), it is more in tune with the lifetime of the sensor nodes. In fact, the largest ECC key broken to date is 109 bits, and breaking it took more than seventeen months using ten thousand computers.
**Elliptic Scalar Multiplication.** Both ECDH and ECDSA rely on scalar multiplication, which computes Q = dP for a given point P and a scalar d. The performance of ECDH and ECDSA depends on the number of additions in the scalar multiplication, so this number should be reduced for efficiency. We have developed a scalar multiplication algorithm using wMOF, a kind of signed representation: with the aid of wMOF [14], we can represent an equivalent value with a reduced number of additions. Although the number of additions is reduced, an extended window size requires additional memory for precomputed points. Through experiments, we found that the optimal window size on the MICAz mote is 3 with regard to both memory and efficiency. The verification procedure in ECDSA involves scalar multiplication with multiple points, of the form vP + uQ. If the sensor nodes are required to verify signatures quickly, the term vP + uQ should be computed efficiently. Inspired by Shamir's trick [13], we perform simultaneous elliptic scalar multiplication using wMOF. The details of our algorithm can be found in the Appendix.
**5.2** **Performance Evaluation**
We compare our work with other implementations over GF(2^p) using the same key size. EccM 2.0's key is actually 163 bits long; we lowered the key size of EccM to 113 bits for a fair comparison. Table 1 presents the performance of our ECDH-based pairwise key setup protocol and compares it with other existing implementations in terms of time, energy, and CPU utilization. Thanks to the signed representation of the multiplier, the proposed protocol achieves better performance than the other implementations. Furthermore, by preloading the public key and its signature on each sensor node before deployment, the sensor nodes do not have to compute their public keys, which lowers the overhead of the pairwise key setup process. In fact, it takes only 5.796 sec for two normal nodes to share a pairwise key. Clusterheads can establish pairwise keys even more rapidly because they have higher computational power than normal nodes.

Table 2 presents the performance of broadcast authentication based on ECDSA verification. We reduced the verification overhead by using Shamir's trick combined with wMOF. This overhead is larger than that of computing a pairwise key; however, in our protocol only the clusterheads and the BS verify the signatures of broadcast messages, and they can complete this operation within 7.367 sec.

The experimental results show that our protocol outperforms existing ECC-based protocols such as EccM 2.0 [7] and Blass' [4], and they imply the feasibility of our protocol for WSNs.
**Table 1. Performance of computing pairwise key in ECDH**

|              | Time      | Energy         | CPU Utilization      |
| ------------ | --------- | -------------- | -------------------- |
| EccM 2.0 [7] | 22.72 sec | 0.54518 Joules | 1.6783 × 10^8 cycles |
| Blass' [4]   | 17.28 sec | 0.41472 Joules | 1.2767 × 10^8 cycles |
| Proposed     | 5.796 sec | 0.13910 Joules | 0.4282 × 10^8 cycles |

**Table 2. Performance of verification in ECDSA**

|              | Time      | Energy         | CPU Utilization      |
| ------------ | --------- | -------------- | -------------------- |
| EccM 2.0 [7] | 23.63 sec | 0.56712 Joules | 1.7458 × 10^8 cycles |
| Blass' [4]   | 24.17 sec | 0.58008 Joules | 1.7857 × 10^8 cycles |
| Proposed     | 7.367 sec | 0.17681 Joules | 0.5443 × 10^8 cycles |
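As a sanity check, the cycle counts in Tables 1 and 2 are internally consistent with the reported execution times on the 7.3828-MHz MICAz, since cycles ≈ time × clock frequency:

```python
# Cross-check Tables 1 and 2: CPU cycles should be roughly
# execution time x clock frequency on the 7.3828-MHz MICAz.
CLOCK_HZ = 7.3828e6

def cycles(seconds: float) -> float:
    return seconds * CLOCK_HZ

def close(x: float, y: float, tol: float = 0.01) -> bool:
    return abs(x - y) / y < tol

assert close(cycles(5.796), 0.4282e8)   # proposed ECDH pairwise key
assert close(cycles(7.367), 0.5443e8)   # proposed ECDSA verification
assert close(cycles(22.72), 1.6783e8)   # EccM 2.0 ECDH row
```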
## 6 Conclusion
To secure WSNs, we have proposed a pairwise key establishment and broadcast authentication protocol. By clustering the entire network, we categorize the pairwise key setup into four types involving the concerned members. In our protocol, the sensor nodes can efficiently establish pairwise keys with ECDH over an insecure channel. Furthermore, the proposed mechanism prevents man-in-the-middle attacks by verifying the other node's signature. By applying the established pairwise keys to TinySec, our protocol provides node-to-node confidentiality and authentication. In the proposed mechanism, the clusterheads are required to verify the signatures of broadcast messages, thereby preventing attackers from impersonating the BS and mounting DoS attacks. Through experiments on the 8-bit, 7.3828-MHz MICAz mote, we provide a performance analysis of our protocol; this analysis bears out the feasibility of the proposed protocol for WSNs.
**Acknowledgement.** The authors would like to thank Dr. Jong-Phil Yang and the anonymous reviewers for their helpful comments and valuable suggestions. This research was supported by Brain Korea 21 of the Ministry of Education of Korea.
## References
1. Perrig, A., Stankovic, J., Wagner, D.: Security in Wireless Sensor Networks. Comm. ACM (2004) 47(6):53–57
2. Perrig, A., et al.: SPINS: Security Protocols for Sensor Networks. Wireless Networks (2002) 8(5):521–534
3. Du, W., et al.: A Pairwise Key Pre-distribution Scheme for Wireless Sensor Networks. Proc. 10th ACM Conf. Comp. and Comm. Security (2003) 42–51
4. Blass, E.O., Zitterbart, M.: Efficient Implementation of Elliptic Curve Cryptography for Wireless Sensor Networks. (2005)
5. Gaubatz, G., et al.: State of the Art in Ultra-Low Power Public Key Cryptography for Wireless Sensor Networks. Proc. of 3rd IEEE Conf. on Pervasive Comp. and Comm. (2005) 146–150
6. Kumar, S., et al.: Embedded End-To-End Wireless Security with ECDH Key Exchange. Proc. of IEEE Conf. on Circuits and Systems (2003)
7. Malan, D.J., Welsh, M., Smith, M.D.: A Public-Key Infrastructure for Key Distribution in TinyOS Based on Elliptic Curve Cryptography. Proc. of IEEE Conf. on Sensor and Ad Hoc Comm. and Networks (2004)
8. Watro, R., et al.: TinyPK: Securing Sensor Networks with Public Key Technology. Proc. of SASN'04, ACM Press (2004) 59–64
9. Karlof, C., Sastry, N., Wagner, D.: TinySec: A Link Layer Security Architecture for Wireless Sensor Networks. Proc. of SenSys'04 (2004) 162–175
10. TinyOS forum. Available at "http://www.tinyos.net/".
11. MICAz Hardware Description. Available at "http://www.xbow.com/Products".
12. Wander, A.S., et al.: Energy Analysis of Public-Key Cryptography for Wireless Sensor Networks. Proc. of IEEE Conf. on Pervasive Comp. and Comm. (2005)
13. Hankerson, D., Hernandez, J.L.: Software Implementation of Elliptic Curve Cryptography over Binary Fields. Proc. of CHES 2000. LNCS 1965 (2000) 1–24
14. Okeya, K., et al.: Signed Binary Representation Revisited. Proc. of CRYPTO 2004. LNCS 3152 (2004) 123–139
15. Certicom Research: SEC 2: Recommended Elliptic Curve Domain Parameters.
## Appendix
This section presents the main idea behind the efficient implementation of ECDH and ECDSA by describing the proposed scalar multiplication algorithm. Algorithm 1 computes a scalar multiplication, which is the dominant computation in ECC. To generate the proper wMOF code on the fly, we have developed Algorithm 2, which derives the appropriate wMOF code from the MOF using a kind of weighted sum. Our algorithms are efficient in terms of both computation and memory. In fact, the scalar multiplication consumes only O(w) bits for the signed representation of the scalar multipliers. Furthermore, with the wMOF code, we reduce the number of additions from O(n/2) to O(n/(w+1)) for an n-bit binary string.
**Algorithm 1. Scalar Multiplication Algorithm using wMOF**

1: INPUT: a point P, window width w, d = (dn−1, ..., d1, d0)2, R ← O
2: OUTPUT: product dP
3: d−1 ← 0; dn ← 0; i ← c + 1 for the largest c with dc ≠ 0
4: Compute Pi = iP, for i ∈ {1, 3, 5, ..., 2^(w−1) − 1}
5: while i ≥ 1 do
6:   R ← ECDBL(R)
7:   **if di−1 = di then**
8:     i ← i − 1
9:   **else if di−1 ≠ di then**
10:     GenerationwMOF(di,...,i−w, indexi, code[w])
11:     **for k ← 0 to w − 1 do**
12:       R ← ECADD(R, code[k] ∗ P)
13:       **if k ≠ w − 1 then**
14:         R ← ECDBL(R)
15:       **end if**
16:     **end for**
17:     i ← i − w
18:   **end if**
19: end while
**Algorithm 2. Generation of wMOF: GenerationwMOF (on the fly)**

1: INPUT: w-bit binary string, index, and w-byte array
2: OUTPUT: w-byte wMOF code
3: check ← true; multiplier ← 1; SUM ← 0; position ← 0
4: **for m ← index − w, n ← w − 1 to index do**
5:   **if check && bm ≠ bm−1 then**
6:     position ← n; check ← false
7:   **end if**
8:   SUM ← SUM + multiplier ∗ (bm − bm−1)
9:   multiplier ← multiplier ∗ 2
10:   n ← n − 1; wMOF[n] ← 0
11: **end for**
12: wMOF[position] ← SUM / 2^(w−position−1)
13: return wMOF[w]
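The shape of Algorithm 1 can be mirrored in a few lines of Python. To keep the sketch self-contained, the "group" here is the plain integers (point addition = +, doubling = ×2), and the standard wNAF recoding stands in for wMOF; both are width-w signed representations, and the resulting addition count lands near the O(n/(w+1)) bound stated above:

```python
# Windowed signed-digit double-and-add: precompute the odd multiples
# P, 3P, ..., (2^(w-1)-1)P, then scan the recoded digits, doubling once per
# bit and adding only at nonzero digits. wNAF stands in for wMOF.
def wnaf(d, w):
    """Width-w NAF digits, least-significant first; odd digits in (-2^(w-1), 2^(w-1))."""
    digits = []
    while d:
        if d & 1:
            di = d % (1 << w)
            if di >= 1 << (w - 1):
                di -= 1 << w
            d -= di
        else:
            di = 0
        digits.append(di)
        d >>= 1
    return digits

def scalar_mul(d, P, w=3):
    """Return (d*P, number of 'ECADD' operations); P is an int stand-in for a point."""
    table = {i: i * P for i in range(1, 1 << (w - 1), 2)}  # P, 3P, 5P, ...
    R, adds = 0, 0
    for di in reversed(wnaf(d, w)):
        R *= 2                                           # ECDBL
        if di:                                           # ECADD at nonzero digit
            R += table[di] if di > 0 else -table[-di]    # negation is cheap on a curve
            adds += 1
    return R, adds

d = (1 << 113) - 1        # a 113-bit scalar; plain binary would need 113 additions
R, adds = scalar_mul(d, 1, w=3)
assert R == d             # correct product (with P = 1)
assert adds < bin(d).count("1")   # far fewer additions than plain double-and-add
```

The real Algorithm 1 performs the same digit scan with ECDBL/ECADD on curve points and generates the digits on the fly via Algorithm 2 instead of materializing the whole recoding.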
## Combining Graph Neural Networks with Expert Knowledge for Smart Contract Vulnerability Detection
###### Zhenguang Liu, Peng Qian, Xiaoyang Wang, Yuan Zhuang, Lin Qiu, and Xun Wang
**Abstract—** Smart contract vulnerability detection draws extensive attention in recent years due to the substantial losses caused by hacker attacks. Existing efforts for contract security analysis heavily rely on rigid rules defined by experts, which are labor-intensive and non-scalable. More importantly, expert-defined rules tend to be error-prone and suffer the inherent risk of being cheated by crafty attackers. Recent researches focus on the symbolic execution and formal analysis of smart contracts for vulnerability detection, yet to achieve a precise and scalable solution. Although several methods have been proposed to detect vulnerabilities in smart contracts, there is still a lack of effort that considers combining expert-defined security patterns with deep neural networks. In this paper, we explore using graph neural networks and expert knowledge for smart contract vulnerability detection. Specifically, we cast the rich control- and data-flow semantics of the source code into a contract graph. To highlight the critical nodes in the graph, we further design a node elimination phase to normalize the graph. Then, we propose a novel temporal message propagation network to extract the graph feature from the normalized graph, and combine the graph feature with designed expert patterns to yield a final detection system. Extensive experiments are conducted on all the smart contracts that have source code in Ethereum and VNT Chain platforms. Empirical results show significant accuracy improvements over the state-of-the-art methods on three types of vulnerabilities, where the detection accuracy of our method reaches 89.15%, 89.02%, and 83.21% for reentrancy, timestamp dependence, and infinite loop vulnerabilities, respectively.
**Index Terms—Deep learning, blockchain, smart contract, vulnerability detection, expert knowledge**
###### 1 INTRODUCTION
Blockchain and its killer applications, e.g., Bitcoin and smart contracts, are taking the world by storm [1–6]. A blockchain is essentially a distributed and shared transaction ledger, maintained by all the miners in the blockchain network following a consensus protocol [7]. The consensus protocol and replicated ledgers make all transactions immutable once recorded on the chain, endowing blockchain with its decentralized and tamper-free nature.
**Smart contract.** Smart contracts are programs running on top of the blockchain [4, 8]. A smart contract can implement arbitrary rules for managing assets by encoding the rules into source code. The defined rules of a contract will be strictly and automatically followed during execution, effectuating the 'code is law' logic. Smart contracts make the automatic execution of contract terms possible, facilitating complex decentralized applications (DApps). Indeed, many DApps are basically composed of several smart contracts as the backend and a user interface as the frontend [9].
• Zhenguang Liu and Peng Qian are with the School of Computer and Information Engineering, Zhejiang Gongshang University and Zhejiang University, China. Email: liuzhenguang2008@gmail.com, messi.qp711@gmail.com
• Yuan Zhuang is with the National University of Singapore, Singapore.
• Xiaoyang Wang is with the School of Computer and Information Engineering, Zhejiang Gongshang University, China.
• Lin Qiu is with the Southern University of Science and Technology, China.
• Xun Wang is with the School of Computer and Information Engineering, Zhejiang Gongshang University and Zhejiang Lab, China. Email: xwang@zjgsu.edu.cn.

Millions of smart contracts have been deployed on various blockchain platforms, enabling a wide range of applications including wallets [10], crowdfunding, decentralized
gambling [11], and cross-industry finance [12]. The number
of smart contracts is still growing rapidly. For example,
within the last six months, over 15,000 new contracts were
deployed on Ethereum alone, which is the most famous
smart contract platform.
**Security issues of smart contracts.** Smart contracts from various fields now hold more than 10 billion dollars worth of virtual coins. Undoubtedly, holding so much wealth makes smart contracts attractive to attackers. In June 2016, attackers exploited the reentrancy vulnerability of the DAO contract [13] to steal 3.6 million Ether, which was worth 60 million US dollars. This case is not isolated: several security vulnerabilities are discovered and exploited every few months [13–15], undermining the trust in smart-contract-based applications.
There are several reasons that make smart contracts particularly prone to errors. First, the programming languages
(e.g., Solidity) and tools are still new and immature, leaving plenty of room for bugs and misunderstandings [8, 16]. Second, since smart contracts are immutable once deployed, developers must anticipate all possible states and environments the contract may encounter in the future, which is undoubtedly difficult. Distinct from
conventional distributed applications that can be updated
when bugs are detected, there is no way to patch the bugs
of a smart contract without forking the blockchain (almost
an impossible task), regardless of how much money the
contract holds or how popular it is [8]. Therefore, effective
vulnerability checkers for smart contracts before their deployment are essential.

_Corresponding authors: Peng Qian, Xun Wang._
**Drawbacks of conventional methods. Conventional**
methods for smart contract vulnerability detection, such as
[8, 16–18], employ classical static analysis or dynamic execution techniques to identify vulnerabilities. Unfortunately,
they fundamentally rely on several expert-defined patterns.
The manually defined patterns carry an inherent risk of being error-prone, and some complex patterns are non-trivial to cover. Crudely applying a few rigid patterns leads to high false-positive and false-negative rates, and crafty attackers may easily bypass the pattern checking with simple tricks. Moreover, as the number of smart contracts increases rapidly, it
is becoming impossible for a few experts to sift through all
the contracts to design precise patterns. A feasible solution
might be: ask each expert to label a number of contracts,
then collect all the labeled contracts from many experts to
train a model that can automatically give a prediction on
whether a contract has a specific type of vulnerability.
Recently, efforts have been made towards adopting deep
neural networks for smart contract vulnerability detection
[19–21], achieving improved accuracy. [19] utilizes LSTM
based networks to sequentially process source code, while
[20] models the source code into control flow graphs. [21]
builds a sequential model to analyze the Ethereum operation code. However, these approaches either treat the
source code or operation code as a text sequence instead
of semantic blocks, or fail to highlight critical variables in
the data flow [20], leading to insufficient semantic modeling
and unsatisfactory results.
To fill the research gap, in this paper, we investigate
more than 300,000 smart contract functions and present
a fully automated and scalable approach that can detect
vulnerabilities at the function level. Specifically, we cast the
rich control- and data-flow semantics of the source code into
graphs. The nodes in the graph represent critical variables
and function invocations, while directed edges capture their
temporal execution traces. Since not all nodes in the graph
are of equal importance and most graph neural networks
are inherently flat during information propagation on the
graph, we design a node elimination phase to normalize the
graph and highlight the key nodes. The normalized graph
is then fed into a temporal message propagation network
to learn the graph feature. In the meantime, we extract the
_security pattern feature from the source code using expert_
knowledge. Finally, the graph feature and security pattern
feature are incorporated to produce the final vulnerability
detection results.
We conducted experiments on all the 40k contracts that
have source code in Ethereum and on all the contracts in
VNT Chain, demonstrating significant improvements over
state-of-the-art vulnerability detection methods: F1 score
from 78% to 86%, 79% to 88%, 74% to 82% for reentrancy,
_timestamp dependence, and infinite loop vulnerabilities, respec-_
tively. Our implementations[1] are released to facilitate future
research.
We would like to point out that this work is clearly
distinct from the previous one [20] in three ways: 1) this
work is to investigate whether combining graph neural
networks with conventional expert patterns could achieve
1 GitHub: https://github.com/Messi-Q/GPSCVulDetector
better vulnerability detection results, while the objective
of the previous work is to explore the possibility of using
neural networks for smart contract vulnerability detection.
2) In this work, we propose to extract vulnerability-specific
expert patterns and combine them with the graph feature.
We also explicitly model the key variables in the data flow.
In contrast, in the previous work, we only utilize the graph
feature while ignoring expert patterns and key variables. 3)
This work consistently outperforms the previous one across
different vulnerabilities, and overall provides more insights
and findings in this field. Note that in the previous paper,
we proposed two neural networks, DR-GCN and TMP, to
explore the applicability of different graph neural networks
on smart contract vulnerability detection. In this paper, we
focus on extending TMP, which delivers better performance
than DR-GCN. We will also extend DR-GCN and compare it
with the extension of TMP.
**Contributions. To summarize, the key contributions are:**
_• To the best of our knowledge, we are the first to investigate the idea of fusing conventional expert patterns
and graph-neural-network extracted features for smart
contract vulnerability detection.
_• We propose to characterize the contract function source_
code as contract graphs. We also explicitly normalize
the graph to highlight key variables and invocations.
A novel temporal message propagation network is proposed to automatically capture semantic graph features.
_• Our methods set the new state-of-the-art performance_
on smart contract vulnerability detection, and overall
provide insights into the challenges and opportunities.
As a side contribution, we have released our implementations to facilitate future research.
###### 2 RELATED WORK
**2.1** **Smart Contract Vulnerability Detection**
Smart contract vulnerability detection is one of the fundamental problems in blockchain security. Early works on
smart contract vulnerability detection verify smart contracts
by employing formal methods [22–25]. For example, [22]
introduces a framework, translating Solidity code (the smart
contract programming language of Ethereum) and the EVM
(Ethereum Virtual Machine) bytecode into the input of an
existing verification system. [25] proposes a formal model of the EVM and reasons about potential bugs in smart contracts
by using the Isabelle/HOL tool. Further, [23] and [24] define
formal semantics of the EVM using the F* framework and
the K framework, respectively. Although these frameworks
provide strong formal verification guarantees, they are still
semi-automated.
Another stream of work relies on generic testing and
symbolic execution, such as Oyente [8], Maian [26], and
Securify [18]. Oyente is one of the pioneering works that
perform symbolic execution on contract functions and flags
bugs based on simple patterns. Zeus [27] leverages abstract
interpretation and symbolic model checking, as well as the
constrained horn clauses to detect vulnerabilities in smart
contracts. [18] introduces compliance (negative) and violation (positive) patterns to filter false warnings.
Researchers also explore smart contract vulnerability
detection using dynamic execution. [17] presents ContractFuzzer to identify vulnerabilities by fuzzing and runtime
contract Attacker:

    1   contract Attacker{
    2       address bank_add = 01f3...32;
    3       function attack(){
    4           bank_add.deposit.value(10 ether)();
    5           bank_add.withdraw();
    6       }
    7
    8       function() payable{
    9           if(count++ < 10)
    10              bank_add.withdraw();
    11      }
    12  }

contract Bank:

    1   contract Bank{
    2       mapping (address => uint) private userBalance;
    3
    4       function deposit() payable{
    5           userBalance[msg.sender] += msg.value;
    6       }
    7       function withdraw() public{
    8           uint amount = userBalance[msg.sender];
    9           require(msg.sender.call.value(amount)());
    10          userBalance[msg.sender] = 0;
    11      }
    12  }

**Fig. 1 A simplified example of reentrancy vulnerability.**
behavior monitoring during execution. Similarly, [28] develops a fuzzing-based analyzer to identify the reentrancy
vulnerability. Sereum [29] uses taint analysis to monitor
runtime data flows during smart contract execution for vulnerability detection. However, dynamic execution methods require a hand-crafted agent contract to interact with the contract under test, which prevents fully automated application and limits their scalability.
Recently, a few attempts have been made to study using
deep neural networks for smart contract vulnerability detection. [19] constructs the sequential contract snippet and feeds
them into the BLSTM-ATT model to detect reentrancy bugs.
[20] proposes to convert the source code of a contract into
the contract graph and constructs graph neural networks as
the detection model. [30] proposes ContractWard, extracting
bigram features from the operation code of smart contracts
and utilizing machine learning algorithms. However, the field of contract vulnerability detection using deep learning is still in its infancy, and detection accuracy remains unsatisfactory. For common
smart contract vulnerabilities and attacks, motivated readers
may refer to [31] for a comprehensive survey.
**2.2** **Graph Neural Network**
With the remarkable success of neural networks, graph
neural network has been investigated extensively in various
fields such as graph classification [32, 33], program analysis
[34, 35], and graph embedding [36]. Existing approaches
roughly fall into two categories: (i) Spectral-based approaches
generalize well-established neural networks like CNNs to
work on graph-structured data. For instance, GCN [37]
implements a first-order approximation of spectral graph
convolutions [38–40] and develops a layer-wise propagation network using the Laplacian matrix, which achieves
promising performance on graph node classification tasks.
[41] proposes a graph CNN which can take data of arbitrary
graph structure as input. (ii) Spatial-based methods inherit
ideas from recurrent GNNs and adopt information propagation to define graph convolutions. Early work such as
[42] directly sums up the nodes’ neighborhood information
for graph convolutions. Another line of work, such as GAT
[43] and GAAN [44], employs attention mechanisms to learn
the weights of different neighboring nodes. Motivated by
these spatial-based approaches, [45] outlines a message-passing neural network framework to predict the chemical
properties of molecules.
Recently, [34, 35, 46, 47] attempt to apply GNNs to
program analysis issues. Specifically, [35] introduces a gated
graph recurrent network for variable prediction, while [46]
proposes Gemini for binary code similarity detection where
functions in binary code are represented by attributed control flow graphs. [34] develops Devign, a general graph neural network-based model for vulnerability identification in
C programming language. Different from these methods, we
focus on the specific smart contract vulnerability task, and
explicitly take into account the distinct roles and temporal
relationships of program elements.
###### 3 PROBLEM STATEMENT
In this section, we first formulate the problem, then introduce the three types of vulnerabilities studied in this
paper, and present the reasons for focusing on these three
vulnerabilities.
**Problem formulation. Given the source code of a smart**
contract, we are interested in developing a fully automated
approach that can detect vulnerabilities at the function level.
In other words, we estimate a label ŷ for each smart contract function f, where ŷ = 1 indicates that f has a specific vulnerability and ŷ = 0 denotes that f is safe. In
this paper, we focus on three types of vulnerabilities, which
will be presented below. Before that, we first introduce the
preliminary knowledge of the fallback mechanism in smart
contracts, which is important in understanding the problem.
**Fallback mechanism. Within a smart contract, each func-**
tion is uniquely identified by a signature, consisting of its
name and parameter types [31]. Upon a function invocation,
the signature of the invoked function is passed to the called
contract. If the signature matches a function of the called
contract, the execution jumps to the corresponding function.
Otherwise, it jumps to the fallback function. A plain money transfer carries an empty signature and will therefore trigger the fallback function as well. The fallback function is a
special function with no name and no argument, which
can be arbitrarily programmed [31]. After introducing this
background knowledge, we now are ready to elaborate on
the three types of vulnerabilities.
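The dispatch rule above can be sketched in a few lines of plain Python (an illustration only, not EVM-accurate; the `Contract` class and its function names are hypothetical):

```python
# Illustrative sketch (not EVM-accurate) of the dispatch rule: a call
# carries a function signature; if no function matches -- including the
# empty signature of a plain money transfer -- the fallback runs.
class Contract:
    def __init__(self):
        self.log = []

    def withdraw(self):
        self.log.append("withdraw")

    def fallback(self):
        self.log.append("fallback")

    def call(self, signature=""):
        functions = {"withdraw()": self.withdraw}  # signature -> handler
        handler = functions.get(signature, self.fallback)
        handler()

c = Contract()
c.call("withdraw()")   # signature matches -> withdraw executes
c.call("unknown()")    # no match -> fallback executes
c.call("")             # plain money transfer -> fallback executes
print(c.log)           # ['withdraw', 'fallback', 'fallback']
```

Because the fallback body is arbitrary code, it is exactly the hook the reentrancy attack below abuses.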
(1) Reentrancy is a well-known vulnerability that caused
the infamous DAO attack. When a smart contract function
_f1 transfers money to a recipient contract C, the fallback_
function f2 of C will be automatically executed. In its fallback function f2, C may invoke back to f1 for conducting an
invalid second-time transfer. Since the current execution of
_f1 waits for the first-time transfer to finish, C can make use_
of the intermediate state of f1 to succeed in stealing money.
A simplified example is shown in Fig. 1, where the withdraw function of contract Bank has a reentrancy vulnerability and contract Attacker steals money by exploiting it. First, Attacker deposits 10 Ether (Ether is the virtual currency of Ethereum) in contract Bank (step 1). Then, Attacker
withdraws the 10 Ether by invoking the withdraw function
(step 2). When the contract Bank sends 10 Ether to Attacker
using call.value (Bank, line 9), the fallback function (Attacker,
lines 8–11) of Attacker will be automatically invoked (step 3).
In its fallback function, Attacker calls withdraw again (step
4). Since the userBalance of Attacker has not yet been set to 0
(Bank, line 10), Bank believes that Attacker still has 10 Ether in
the contract, and thus transfers 10 Ether to Attacker again (step 5). The withdraw loop repeats 9 times (count++ < 10, Attacker line 9). Finally, Attacker obtains far more Ether (100 Ether) than it deposited (10 Ether).
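The attack sequence above can be re-enacted in plain Python (a toy model only: the `Bank` and `Attacker` classes are hypothetical stand-ins that mimic the call ordering, not Solidity semantics):

```python
# Toy re-enactment of the attack ordering described above; these
# classes mimic the call sequence only, not Solidity semantics.
class Bank:
    def __init__(self, funds):
        self.ether = funds              # ether held for other users
        self.user_balance = {}

    def deposit(self, who, amount):
        self.ether += amount
        self.user_balance[who] = self.user_balance.get(who, 0) + amount

    def withdraw(self, who):
        amount = self.user_balance.get(who, 0)
        self.ether -= amount
        who.receive(amount)             # triggers the recipient's fallback
        self.user_balance[who] = 0      # deducted only AFTER the transfer

class Attacker:
    def __init__(self, bank):
        self.bank, self.stolen, self.count = bank, 0, 0

    def attack(self):
        self.bank.deposit(self, 10)     # step 1: deposit 10 ether
        self.bank.withdraw(self)        # step 2: first withdrawal

    def receive(self, amount):          # plays the role of the fallback
        self.stolen += amount
        self.count += 1
        if self.count < 10:             # count++ < 10 in Fig. 1
            self.bank.withdraw(self)    # re-enter before balance is zeroed

bank = Bank(funds=90)                   # 90 ether deposited by victims
thief = Attacker(bank)
thief.attack()
print(thief.stolen, bank.ether)         # 100 0: tenfold the deposit
```

Moving the balance deduction before the transfer (the classic checks-effects-interactions fix) makes every re-entered `withdraw` see a zero balance and the attack yields nothing.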
    pragma solidity 0.4.24;
    contract DAO{
        function withdraw(){
            msg.sender.call.value();
            balance[msg.sender] -= 0;
            ......
        }
    }

_smart contract functions_ _security patterns_

**(a) Security pattern extraction**

_contract graphs_

**(b) Contract graph construction and normalization**

**(c) Vulnerability detection**
**Fig. 2 The overall architecture of our proposed method. (a) The expert pattern extraction phase; (b) the contract graph**
**construction and normalization phase; (c) the vulnerability detection phase.**
(2) Timestamp dependence vulnerability exists when
a smart contract uses the block timestamp as a triggering
condition to execute some critical operations, e.g., using the
_timestamp of a future block as the source to generate random_
numbers so as to determine the winner of a game. The miner
(a node in the blockchain) who mines the block has the
freedom to set the timestamp of the block within a short
time interval (< 900 seconds) [17]. Therefore, miners may
manipulate the block timestamps to gain illegal benefits.
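A toy Python sketch of this manipulation (the lottery contract `pick_winner` is hypothetical; only the < 900-second drift window is taken from the text above):

```python
# Toy sketch: a "lottery" seeded by the block timestamp can be rigged
# by the miner, who may set the timestamp anywhere inside a short
# drift window (< 900 seconds). The lottery rule is hypothetical.
def pick_winner(players, timestamp):
    return players[timestamp % len(players)]

players = ["alice", "bob", "miner"]
honest_time = 1_700_000_001

fair = pick_winner(players, honest_time)        # what honest mining gives
# The miner searches the allowed drift for a timestamp it profits from.
rigged_time = next(
    t for t in range(honest_time, honest_time + 900)
    if pick_winner(players, t) == "miner"
)
print(fair, rigged_time - honest_time)          # alice 2
```

Here a drift of only two seconds, far inside the allowed window, flips the outcome in the miner's favor.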
(3) Infinite loop is a common vulnerability in smart
contracts. The program of a function may contain a loop (e.g., a for loop, a while loop, or a self-invocation loop) that either has no exit condition or whose exit condition can never be reached, namely an infinite loop.
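A minimal sketch of the unreachable-exit-condition case, assuming a gas-metered execution model loosely analogous to the EVM's (the metering scheme here is illustrative):

```python
# The "exit condition cannot be reached" case made concrete: the guard
# tests i < 10 but i is never updated, so only an external resource
# limit stops execution. A gas counter stands in for the EVM gas limit;
# the metering scheme is illustrative.
class OutOfGas(Exception):
    pass

def run_with_gas(gas_limit):
    gas, i = gas_limit, 0
    while i < 10:          # i is never incremented: an infinite loop
        gas -= 1
        if gas == 0:
            raise OutOfGas

exhausted = False
try:
    run_with_gas(gas_limit=1000)
except OutOfGas:
    exhausted = True       # the call burned its entire gas budget
print(exhausted)           # True
```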
**Why focus on these vulnerabilities. We mainly focus**
on the three aforementioned vulnerabilities since: (i) In
real attacks, blockchain networks have suffered more than 100 million USD in losses due to these three vulnerabilities. For instance, reentrancy attacks caused one of the biggest losses in smart contract history (60 million USD in the DAO attack). (ii) We empirically found that the three vulnerabilities may affect a significant number of smart contracts and are non-trivial to detect. Specifically,
we surveyed 40,932 Ethereum smart contracts, observing
that around 5,013 out of 307,396 functions possess at least
one invocation to call.value. Although possessing a call.value
invocation does not necessarily mean that the contract has
a reentrancy vulnerability, the contract has the potential
to be affected by the reentrancy vulnerability and thus requires further checking. Similarly, around 4,833 functions
have used block.timestamp and thus are potentially affected
by the timestamp dependence vulnerability. Many functions
have for or while loops, which may lead to the infinite loop
vulnerability. In contrast, most other contract vulnerabilities
affect a relatively smaller number of functions, e.g., the locked
_ether vulnerability affects less than 900 functions, and the_
_integer overflow vulnerability affects less than 1,000 functions._
###### 4 OUR METHOD
**Method overview. The overall architecture of our proposed**
method is depicted in Fig. 2, which consists of three phases:
(1) a security pattern extraction phase, which obtains the
vulnerability-specific expert patterns from the source code;
(2) a contract graph construction and normalization phase,
which extracts the control flow and data flow semantics
from the source code and highlights the critical nodes; and
(3) a vulnerability detection phase, which casts the normalized contract graph into graph feature using temporal graph
neural network and combines the pattern feature and graph
feature to output the detection result. In what follows, we
elaborate on the details of the three components one by one.
**4.1** **Expert Pattern Extraction**
In this section, we summarize existing patterns and design
new patterns for each of the three vulnerabilities, and implement an open-source tool to automatically
extract these patterns.
**Reentrancy. Conventionally, the reentrancy vulnerability**
is considered as an invocation to call.value that can call back
to itself through a chain of calls. That is, the invocation
of call.value is successfully re-entered to perform the unexpected operation of repeated money transfer. By investigating existing works such as [8, 17, 27], we design three subpatterns. The first sub-pattern is callValueInvocation that
checks whether there exists an invocation to call.value in the
function. The second sub-pattern balanceDeduction checks
whether the user balance is deducted after money transfer
using call.value, which considers the fact that the money
stealing can be avoided if user balance is deducted each time
_before money transfer. The third sub-pattern enoughBalance_
concerns whether there is a check on the sufficiency of
the user balance before transferring to a user. Note that
_enoughBalance is a new pattern designed in this paper._
**Timestamp dependence. Generally, the timestamp de-**
pendence vulnerability exists when a smart contract uses the
block timestamp as part of the conditions to perform critical
operations [17]. By investigating previous works including
[8, 17, 31], we design three sub-patterns that are closely
related to timestamp dependence. First, sub-pattern times**tampInvocation models whether there exists an invocation**
to opcode block.timestamp in the function. Then, the second
sub-pattern timestampAssign checks whether the value of
_block.timestamp is assigned to other variables or passed to_
a function as a parameter, namely whether block.timestamp
is actually used. Last, the third sub-pattern timestampCon**tamination checks if block.timestamp may contaminate the**
triggering condition of a critical operation, which can be
implemented by taint analysis. Sub-pattern timestampCon_tamination is a new pattern designed in this paper._
**Infinite loop. Infinite loop is conventionally considered**
as a loop bug which unintentionally iterates forever, failing
to jump out of the loop and return an expected result.
Specifically, we define three expert patterns for infinite loop
as follows. (1) The first sub-pattern loopStatement checks
whether the function possesses a loop statement such as for
and while. (2) The second sub-pattern loopCondition models whether the exit condition can be reached. For example,
    1   // …
    2   // withdraw function
    3   function withdraw(uint amount) public {
    4       if (Balance[msg.sender] < amount) {
    5           throw;
    6       }
    7       require(msg.sender.call.value(amount)());
    8       Balance[msg.sender] -= amount;
    9   }
    10  // …

| Edge | Vs | Ve | Order | Type |
|------|----|----|-------|------|
| e1   | C1 | C2 | 1     | IT   |
| e2   | C1 | N1 | 2     | IT   |
| e3   | C2 | N1 | 3     | AC   |
| e4   | N1 | C3 | 4     | FW   |
| e5   | C3 | N1 | 5     | AC   |
| e6   | C3 | C3 | 6     | RG   |
| e7   | C3 | C2 | 7     | FW   |
| e8   | C2 | C2 | 8     | AC   |
| e9   | C2 | N1 | 9     | AG   |
| e10  | C3 | F  | 10    | FB   |
| e11  | F  | C1 | 11    | FB   |

**Fig. 3 The contract graph construction and normalization phase. The first figure shows the source code of a contract function, while the second figure visualizes the contract graph extracted from the code. Nodes Ci denote core nodes, nodes Ni represent normal nodes, and node F denotes the fallback node. The third figure illustrates the temporal edges in the extracted graph, where the types of edges are detailed in Table 1. The fourth figure demonstrates the graph after normalization.**
for a while loop, its exit condition i < 10 may not be reached
if i is never updated in the loop. (3) The third sub-pattern
**selfInvocation models whether the function invokes itself**
and the invocation is not guarded by an if statement. This reflects the fact that an unguarded self-invocation loop will never terminate.
**Pattern Extraction Implementations. We implemented**
an open-source tool to extract the designed expert patterns from smart contract functions. In particular, simple sub-patterns such as callValueInvocation, timestampInvocation, and
_loopStatement can be directly extracted by keyword match-_
ing. Sub-patterns balanceDeduction, enoughBalance, loopCondi_tion, timestampAssign, and selfInvocation are obtained by syn-_
tax analysis. Complex sub-pattern timestampContamination is
extracted by taint analysis where we follow the traces of the
data flow and flag all the variables that may be affected
along the traces.
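A much-simplified Python sketch of the keyword-matching layer (the released tool also uses syntax and taint analysis, which are not reproduced here; `extract_subpatterns` and its return keys are illustrative):

```python
import re

# Much-simplified sketch of the keyword-matching layer; the released
# tool also performs syntax and taint analysis, which are omitted here.
def extract_subpatterns(func_name, source):
    return {
        # reentrancy: is call.value invoked at all?
        "callValueInvocation": "call.value" in source,
        # timestamp dependence: is block.timestamp referenced?
        "timestampInvocation": "block.timestamp" in source,
        # infinite loop: does the body contain a loop statement?
        "loopStatement": bool(re.search(r"\b(for|while)\s*\(", source)),
        # infinite loop: does the function invoke itself (beyond its
        # own declaration)?
        "selfInvocation": source.count(func_name + "(") > 1,
    }

src = """function withdraw() public {
    require(msg.sender.call.value(amount)());
    userBalance[msg.sender] = 0;
}"""
flags = extract_subpatterns("withdraw", src)

loop_src = "function run() { while (i < 10) { run(); } }"
loop_flags = extract_subpatterns("run", loop_src)
print(flags["callValueInvocation"], loop_flags["selfInvocation"])  # True True
```

Each boolean flag later becomes one digit of the pattern feature fed to the detection network.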
**4.2** **Contract Graph Construction and Normalization**
Existing works [35, 48] have shown that programs can be
transformed into symbolic graph representations, which are
able to preserve semantic relationships (e.g., data dependency and control dependency) between program elements.
Inspired by this, we formulate smart contract functions into
_contract graphs, and assign distinct roles to different program_
elements (namely nodes). We also construct edges to model
control and data flow between program elements, taking
their temporal orders into consideration. Further, we design
a node elimination process to normalize the contract graph
and highlight important nodes. Next, we introduce contract
graph construction and normalization, respectively.
_4.2.1_ _Contract Graph Construction_
**Nodes construction. Our first insight is that different pro-**
gram elements in a function are not of equal importance in
detecting vulnerabilities. Therefore, we extract three types
of nodes, i.e., core nodes, normal nodes, and fallback nodes.
_Core nodes. Core nodes symbolize the key invocations_
and variables that are critical for detecting a specific vulnerability. In particular, for reentrancy vulnerability, core
nodes model (i) an invocation to a money transfer function
or the built-in call.value function, (ii) the variable that corresponds to user balance, and (iii) variables that can directly
affect user balance. For timestamp dependence vulnerability,
(i) invocations to block.timestamp, (ii) variables assigned by
_block.timestamp, and (iii) invocations to a random function_
that takes block.timestamp as the cardinal seed are extracted
as core nodes. For infinite loop vulnerability, (i) all the loop
statements such as for and while statements, (ii) the loop
condition variables, and (iii) self invocations are considered
as core nodes.
_Normal nodes. While core nodes represent key invoca-_
tions and variables, normal nodes are used to model invocations and variables that play an auxiliary role in detecting
vulnerabilities. Specifically, invocations and variables that
are not extracted as core nodes are modeled as normal ones,
e.g., for timestamp dependence vulnerability, invocations
that do not call block.timestamp and variables indirectly
related to block.timestamp are considered as normal nodes.
_Fallback node. Further, we construct a fallback node F to_
simulate the fallback function of a virtual attack contract,
which can interact with the function under test.
_A simplified example. Taking contract Vulnerable presented_
in the left of Fig. 3 as an example, suppose we are to evaluate
whether its withdraw function possesses a reentrancy vulnerability. As shown by the arrows in the left two figures of Fig.
3, function withdraw itself is first modeled as a core node C1
since its inner code contains call.value. Then, following the
temporal order of the code, we treat the critical variable
_Balance[msg.sender] as a core node C2, while variable_
_amount is modeled as normal node N1. The invocation_
to call.value is extracted as a core node C3, and the fallback
function of a virtual attack contract is characterized by the
fallback node F .
**Edges construction. Our second insight is that the nodes**
are closely related to each other in a temporal manner rather
than being isolated. To capture rich semantic dependencies
between the nodes, we construct three categories of edges,
namely control flow, data flow, and fallback edges. Each edge
describes a path that might be traversed through by the
function under test, and the temporal number of the edge
characterizes its sequential order in the function. We investigated various functions and summarized the semantic edges in Table 1. All edges are classified into three categories.

| Type | Semantics | Category |
|------|-----------|----------|
| AH | assert{X} | Control-flow |
| RG | require{X} | Control-flow |
| IR | if{...} revert | Control-flow |
| IT | if{...} throw | Control-flow |
| IF | if{X} | Control-flow |
| GB | if{...} else {X} | Control-flow |
| GN | if{...} then {X} | Control-flow |
| WH | while{X} do{...} | Control-flow |
| FR | for{X} do{...} | Control-flow |
| FW | natural sequential relationships | Control-flow |
| AG | assign{X} | Data-flow |
| AC | access{X} | Data-flow |
| FB | interactions with fallback function | Fallback |

**TABLE 1 Semantic edges summarization. All edges are classified into three categories, namely control-flow, data-flow, and fallback edges.**
_Control flow edges. Control flow edges capture the control_
semantics of the code. Specifically, a control flow edge is
constructed for a conditional statement or security handle
statement, such as a if, for, assert, and require statement. The
edge is directed from the previously encountered node, which represents the critical function call or variable preceding the current statement, to the node representing the function call or variable in the current statement. In particular, we
use forward edges to describe the natural control flow of the
code sequence. A forward edge connects two nodes in the
adjacent statements. The main benefit of such encoding is to
preserve the programming logic reflected by the sequence of
the source code. The control flow edges are depicted with
red arrows in Fig. 3.
_Data flow edges. Data flow edges track the usage of vari-_
ables. A data flow edge involves the access or modification
of a variable. The data flow edges are demonstrated with
orange arrows in Fig. 3. For example, the access and assign
statement Balance[msg.sender] -= amount (line 8, Vulnerable, Fig. 3) is characterized by two data flow edges, i.e.,
an access edge e7 starting from the Balance[msg.sender]
variable node C2 to itself, and an assign edge e8 starting
from C2 to the amount variable node N1.
_Fallback edges. In order to explicitly model the specific_
fallback mechanism, two fallback edges are constructed. The
first fallback edge connects from the first call.value invocation to the fallback node, while the second edge directs from
the fallback node to the function under test. The fallback
edges are shown by dashed purple edges in Fig. 3.
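The graph of Fig. 3 can be written down directly with these conventions; the sketch below hand-encodes the nodes and temporal edges rather than deriving them from source code:

```python
# The Fig. 3 graph hand-encoded with the conventions above (a real
# pipeline would derive this from source code). Each edge is
# (name, start, end, temporal order, type).
core = {"C1": "withdraw()", "C2": "Balance[msg.sender]", "C3": "call.value"}
normal = {"N1": "amount"}
fallback_node = "F"  # fallback function of a virtual attack contract

edges = [
    ("e1", "C1", "C2", 1, "IT"),  ("e2", "C1", "N1", 2, "IT"),
    ("e3", "C2", "N1", 3, "AC"),  ("e4", "N1", "C3", 4, "FW"),
    ("e5", "C3", "N1", 5, "AC"),  ("e6", "C3", "C3", 6, "RG"),
    ("e7", "C3", "C2", 7, "FW"),  ("e8", "C2", "C2", 8, "AC"),
    ("e9", "C2", "N1", 9, "AG"),  ("e10", "C3", "F", 10, "FB"),
    ("e11", "F", "C1", 11, "FB"),
]

# Temporal orders must form one consecutive execution trace.
orders = [o for (_, _, _, o, _) in edges]
# The two fallback edges connect call.value -> F and F -> the function.
fb_edges = [(s, e) for (_, s, e, _, t) in edges if t == "FB"]
print(orders == list(range(1, 12)), fb_edges)
```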
**Node and edge features. Fig. 4 illustrates the extracted**
features for edges and nodes, respectively. Specifically, the
feature of an edge is extracted as a tuple (Vstart, Vend,
_Order, Type), where Vstart and Vend represent its start and_
end nodes, Order denotes its temporal order, and Type
stands for edge type. For nodes, different kinds of nodes
possess different features. 1) The feature of a node that
models function invocation consists of (ID, AccFlag, Caller,
_Type), where ID denotes its identifier, Caller represents the_
caller address of the invocation, and Type stands for the
node type. Interestingly, the modifier of a smart contract
function Ψ may trigger the pre-check of certain conditions,
e.g., modifier owner will check whether the caller of Ψ is
the owner of the contract before executing Ψ. Therefore,
we use AccFlag to capture this semantics, where AccFlag
= ‘LimitedACC’ specifies the function has limited access
while AccFlag =‘NoLimited’ denotes non-limited access. 2)
In contrast, the feature of a fallback node or a node that models a variable consists of only ID and Type.

| Feature | Fields |
|---------|--------|
| Edge feature | Vstart, Vend, Order, Type |
| Invocation node feature | ID, AccFlag, Caller, Type |
| Fallback node feature | ID, Type |
| Variable node feature | ID, Type |

**Fig. 4 Illustration of the edge feature and node feature.**
_4.2.2_ _Contract Graph Normalization_
Most graph neural networks are inherently flat when propagating information, ignoring that some nodes play more
central roles than others. Moreover, different contract functions yield distinct graphs, hindering the training of graph
neural networks. Therefore, we propose a node elimination
process to normalize the contract graph.
**Nodes elimination. As introduced in Section 4.2.1, the**
nodes of a contract graph are partitioned into core nodes
{Ci} (i = 1, …, |C|), normal nodes {Ni} (i = 1, …, |N|), and the fallback node F.
We remove each normal node Ni, but pass the feature of
_Ni to its nearest core nodes. For example, the normal node_
_N1 in the second figure of Fig. 3 is removed with its feature_
aggregated to nearest core nodes C2 and C3. For a node Ni
that has multiple nearest core nodes, its feature is passed
to all of them. The edges connected to the removed normal
nodes are preserved but with their start or end node moving
to the corresponding core node. The fallback node is also
removed, in the same way as normal nodes.
**Feature aggregation. After removing normal nodes, fea-**
tures of core nodes are updated by aggregating features
from their neighboring normal nodes. More precisely, the
new feature of Ci is composed of three components: (i) the self-feature, namely the feature of core node Ci itself; (ii) the in-features, namely the features of the normal nodes {Pj} (j = 1, …, |P|) that are merged into Ci and have a path pointing from Pj to Ci; and (iii) the out-features, namely the features of the normal nodes {Qk} (k = 1, …, |Q|) that are merged into Ci and have a path directed from Qk to Ci. Note that the features of normal nodes that model variables and those that model invocations are aggregated separately when merged into the same core node.
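A toy numeric sketch of this aggregation rule (the feature vectors and the side-by-side composition operator are illustrative assumptions; the paper aggregates the structured features of Section 4.2.1):

```python
# Toy numeric sketch of the aggregation rule: a core node's new
# feature combines its self-feature with summed in- and out-features
# of the merged normal nodes. Vectors and the side-by-side composition
# are illustrative assumptions.
def vec_sum(vectors, dim):
    total = [0.0] * dim
    for v in vectors:
        total = [a + b for a, b in zip(total, v)]
    return total

def aggregate(self_feat, in_feats, out_feats):
    d = len(self_feat)
    return self_feat + vec_sum(in_feats, d) + vec_sum(out_feats, d)

# A core node absorbing one in-neighbour and one out-neighbour
# (toy two-dimensional features).
new_feat = aggregate(self_feat=[2.0, 1.0],
                     in_feats=[[1.0, 0.5]],
                     out_feats=[[0.0, 0.25]])
print(new_feat)  # [2.0, 1.0, 1.0, 0.5, 0.0, 0.25]
```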
**4.3** **Vulnerability Detection**
In this subsection, we introduce the proposed vulnerability detection network CGE (Combining Graph feature and
Expert patterns). First, we obtain the expert pattern feature
_Pr by passing the extracted sub-patterns (introduced in_
subsection 4.1) into a feed-forward neural network (FNN).
Then, we extract the graph feature Gr from the normalized
_contract graph by our proposed temporal message propaga-_
tion network, consisting of a message propagation phase and
a readout phase. Finally, we use a fusion network to combine
the graph feature Gr and the pattern feature Pr, outputting
the detection results. The process is demonstrated in Fig. 5
with details presented below.
**Security pattern feature Pr extraction.** For the sub-patterns closely related to a specific vulnerability, we utilize a one-hot vector to represent each sub-pattern, and append a 0/1 digit to each vector, which indicates whether the function under test has the sub-pattern. The vectors for all sub-patterns related to a specific vulnerability are concatenated
|Item|Feature fields|
|---|---|
|Edge|Vstart, Vend, Order, Type|
|Fallback node|ID, Type|

|Edge types|Semantic fact|Category|
|---|---|---|
|GB, GN, WH, FR, FW|if{...} else {X}; if{...} then {X}; while{X} do{...}; for{X} do{...}; natural sequential relationships|Control-flow|
|AG, AC|assign{X}; access{X}|Data-flow|
|FB|interactions with fallback function|Fallback|
[Fig. 5 diagram: security patterns → FNN → Pr; contract graph → message propagation phase and readout phase → Gr; convolution and pooling → merged feature Xr → FC layer and sigmoid.]

**Fig. 5 The process of vulnerability detection. First, a feed-forward neural network generates the pattern feature Pr for the security patterns extracted from the source code. Then, the temporal message propagation network is used to extract the graph feature Gr from the contract graph. Finally, the CGE network combines Gr and Pr into the merged feature Xr, which is fed into the FC and sigmoid layers to output the vulnerability detection results.**
into a final vector x. Taking x as the input, and the ground truth of whether the function has the specific vulnerability as the target label, we utilize a feed-forward neural network ϕ(x) to extract the high-dimensional semantic feature Pr ∈ R^d.
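As a rough sketch of this step, the pattern vector x and a small two-layer network ϕ might look as follows. The sub-pattern names, all dimensions, and the random weights are illustrative stand-ins, not the actual learned parameters:

```python
import numpy as np

def pattern_vector(subpatterns, present):
    """Concatenate, per sub-pattern, its one-hot id plus a 0/1 flag telling
    whether the function under test exhibits that sub-pattern."""
    n = len(subpatterns)
    parts = []
    for i, sp in enumerate(subpatterns):
        one_hot = np.zeros(n)
        one_hot[i] = 1.0
        parts.append(np.concatenate([one_hot, [1.0 if sp in present else 0.0]]))
    return np.concatenate(parts)

def fnn(x, W1, b1, W2, b2):
    """A two-layer feed-forward network phi(x) producing Pr in R^d."""
    return np.tanh(W2 @ np.tanh(W1 @ x + b1) + b2)
```

For three hypothetical reentrancy sub-patterns, each contributes a one-hot id of length 3 plus its presence flag, so x has length 12; ϕ then maps x to a d-dimensional feature Pr.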
**Contract graph feature Gr Extraction. After extracting**
security pattern feature Pr, we further obtain the semantic feature of the contract graph by using our proposed
temporal-message-propagation network, which consists of
a message propagation phase and a readout phase. In the
message propagation phase, the network passes information
along the edges successively by following their temporal
orders. Then, it generates the graph feature Gr by using
a readout function, which aggregates the final states of all
nodes in the contract graph.
_Message propagation phase._ Formally, we denote the normalized contract graph as G = {V, E}, where V consists of the core nodes and E consists of all edges. Denote E = {e_1, e_2, ..., e_N}, where e_k represents the k-th temporal edge. Messages are passed along the edges, one edge per time step. At first, the hidden state h_i^0 for each node V_i is initialized with its own node feature. Then, at time step k, a message flows through the k-th temporal edge e_k and updates the hidden state h_ek of the end node of e_k. More specifically, the message m_k is first computed based on the hidden state h_sk of the start node of e_k, and the edge type t_k:
x_k = h_sk ⊕ t_k    (1)

m_k = W_k x_k + b_k    (2)

where ⊕ denotes concatenation, and matrix W_k and bias vector b_k are network parameters. The original message x_k contains information from the start node of e_k and edge e_k itself, which is then transformed into a vector embedding using W_k and b_k.

After receiving the message, the end node of e_k updates its hidden state h_ek by aggregating information from the incoming message and its previous state. Formally, h_ek is updated according to:

ĥ_ek = tanh(U m_k + Z h_ek + b_1)    (3)

h′_ek = softmax(R ĥ_ek + b_2)    (4)

where the network parameters U, Z, R are matrices, while b_1 and b_2 are bias vectors.

_Readout phase._ After successively traversing all the edges in G, we extract the feature for G by reading out the final hidden states of all nodes. Let h_i^T be the final hidden state of the i-th node. We find that the differences between the final hidden state h_i^T and the original hidden state h_i^0 are informative in the vulnerability detection task. Therefore, we generate the graph feature Gr by

s_i = h_i^T ⊕ h_i^0    (5)

g_i = softmax(W_g^(2) tanh(b_g^(1) + W_g^(1) s_i) + b_g^(2))    (6)

o_i = softmax(W_o^(2) tanh(b_o^(1) + W_o^(1) s_i) + b_o^(2))    (7)

Gr = FC( Σ_{i=1}^{|V|} o_i ⊙ g_i )    (8)

where ⊕ denotes concatenation, and ⊙ denotes element-wise product. W_j, b_j^(1), and b_j^(2), with subscript j ∈ {g, o}, are network parameters.
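One forward pass of the message-propagation and readout phases (Eqs. (1)–(8)) can be sketched in NumPy as follows. The dimensions, the weight sharing across time steps, and the toy two-node graph are illustrative assumptions, not the released TensorFlow implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # hidden / feature dimension

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def propagate(node_feat, edges, edge_types, params):
    """Eqs. (1)-(4): pass messages edge by edge in temporal order."""
    h0 = {v: f.copy() for v, f in node_feat.items()}
    h = {v: f.copy() for v, f in node_feat.items()}
    for (s, e), t in zip(edges, edge_types):
        x = np.concatenate([h[s], t])                  # Eq. (1): x_k = h_sk ⊕ t_k
        m = params['W'] @ x + params['b']              # Eq. (2), weights shared over k here
        h_hat = np.tanh(params['U'] @ m + params['Z'] @ h[e] + params['b1'])  # Eq. (3)
        h[e] = softmax(params['R'] @ h_hat + params['b2'])                    # Eq. (4)
    return h0, h

def readout(h0, hT, params):
    """Eqs. (5)-(8): aggregate final node states into the graph feature Gr."""
    acc = np.zeros(d)
    for v in hT:
        s = np.concatenate([hT[v], h0[v]])             # Eq. (5)
        g = softmax(params['Wg2'] @ np.tanh(params['bg1'] + params['Wg1'] @ s) + params['bg2'])  # Eq. (6)
        o = softmax(params['Wo2'] @ np.tanh(params['bo1'] + params['Wo1'] @ s) + params['bo2'])  # Eq. (7)
        acc += o * g                                   # o_i ⊙ g_i
    return params['Wfc'] @ acc                         # Eq. (8): FC over the sum

et = 3  # number of edge types, one-hot encoded
shapes = {'W': (d, d + et), 'b': (d,), 'U': (d, d), 'Z': (d, d), 'b1': (d,),
          'R': (d, d), 'b2': (d,), 'Wg1': (d, 2 * d), 'bg1': (d,), 'Wg2': (d, d),
          'bg2': (d,), 'Wo1': (d, 2 * d), 'bo1': (d,), 'Wo2': (d, d), 'bo2': (d,),
          'Wfc': (d, d)}
P = {k: rng.normal(size=s) for k, s in shapes.items()}

# Toy graph: two core nodes and a single temporal edge C1 -> C2.
feat = {'C1': rng.normal(size=d), 'C2': rng.normal(size=d)}
h0, hT = propagate(feat, [('C1', 'C2')], [np.eye(et)[0]], P)
Gr = readout(h0, hT, P)
```

Note that, per Eq. (4), each updated hidden state is a softmax output and therefore sums to one.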
**Vulnerability detection by combining Pr and Gr.** After obtaining the security pattern feature Pr and the contract graph feature Gr, we combine them to compute the final label ŷ ∈ (0, 1), indicating whether the function under test has the specific vulnerability. To this end, we first filter Pr and Gr using a convolution layer and a max pooling layer, then we concatenate the filtered features and pass them to a network consisting of 3 fully connected layers and a sigmoid layer. The process can be formulated as:

Xr = ψ(Pr) ⊕ ψ(Gr)    (9)

ŷ = sigmoid(FC(Xr))    (10)

The convolutional layer learns to assign different weights to different elements of the semantic vector, while the max pooling layer highlights the significant elements and avoids overfitting. The fully connected layers and the non-linear sigmoid layer produce the final estimated label ŷ.
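A minimal NumPy sketch of the fusion step of Eqs. (9)–(10) is given below. Here ψ is reduced to a single 1-D convolution with max pooling, and the three FC layers are collapsed into one, so this is only a shape-level illustration with random stand-in weights:

```python
import numpy as np

def psi(v, kernel, pool=2):
    """Filter a feature vector: 1-D convolution followed by max pooling."""
    conv = np.convolve(v, kernel, mode='valid')
    trimmed = conv[:len(conv) - len(conv) % pool]
    return trimmed.reshape(-1, pool).max(axis=1)

def detect(Pr, Gr, kernel, W, b):
    """Eqs. (9)-(10): Xr = psi(Pr) ⊕ psi(Gr); y_hat = sigmoid(FC(Xr))."""
    Xr = np.concatenate([psi(Pr, kernel), psi(Gr, kernel)])
    return 1.0 / (1.0 + np.exp(-(W @ Xr + b)))  # one FC layer stands in for three

rng = np.random.default_rng(1)
Pr, Gr = rng.normal(size=8), rng.normal(size=8)  # pretend pattern / graph features
y_hat = detect(Pr, Gr, np.ones(3) / 3, rng.normal(size=(1, 6)), np.zeros(1))
```

The sigmoid maps the fused feature to a score strictly between 0 and 1, which is then thresholded to decide whether the function is flagged as vulnerable.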
###### 5 EXPERIMENTS
In this section, we empirically evaluate our proposed methods on all the Ethereum smart contracts that have source
code verified by Etherscan [49] as well as on all the available
smart contracts on another blockchain platform VNT Chain
[50]. We seek to answer the following research questions:
_• RQ1: Can the proposed method effectively detect the reentrancy, infinite loop, and timestamp dependence vulnerabilities? How does it perform against the state-of-the-art conventional vulnerability detection approaches in terms of accuracy, precision, recall, and F1 score?_
_• RQ2: Is our proposed method useful for detecting new_
types of vulnerabilities, e.g., sharing-variable reentrancy,
which is difficult for existing methods?
_• RQ3: Can the proposed method outperform other neural_
network-based methods?
_• RQ4: How do the proposed security pattern, graph normal-_
_ization, message propagation modules, and different network_
_layers in CGE affect the performance of the proposed_
method?
Next, we first present the experimental settings, followed by
answering the above research questions one by one.
**5.1** **Experimental Settings**
**Datasets. We conducted experiments on two real-world**
smart contract datasets, namely ESC (Ethereum Smart Contracts) and VSC (VNT chain Smart Contracts), which are
collected from Ethereum and VNT Chain platforms, respectively. Experiments for reentrancy and timestamp dependence vulnerabilities are conducted on ESC, while the
infinite loop vulnerability is evaluated on VSC.
_• The ESC dataset consists of 307,396 smart contract_
functions from 40,932 smart contracts in Ethereum [51].
Among the functions, around 5,013 functions possess at
least one invocation to call.value, making them potentially affected by the reentrancy vulnerability. Around
4,833 functions contain the block.timestamp statement,
making them susceptible to the timestamp dependence
vulnerability. Around 56,800 functions contain for or
_while loop statements._
_• The VSC dataset contains 13,761 functions, which are collected from all the available 4,170 smart contracts in the VNT Chain network [50]. VNT Chain is an experimental public blockchain platform proposed by companies and universities from Singapore, China, and Australia. VNT Chain runs smart contracts written in a C-like language._
**Implementation details.** All the experiments are conducted on a computer equipped with an Intel Core i7 CPU at 3.7 GHz, a 1080 Ti GPU, and 32 GB of memory. Our vulnerability detection system consists of three main components: the automatic CodeExtractor tool for extracting the security patterns and contract graphs from the source code; the Normalization tool for normalizing contract graphs; and the CGE network that outputs results by combining the pattern feature and the graph feature. The CodeExtractor and Normalization tools are implemented in Python, while the CGE network is implemented in TensorFlow. The implementation of our vulnerability detection system is available at [https://github.com/Messi-Q/GPSCVulDetector](https://github.com/Messi-Q/GPSCVulDetector).
**Parameter settings.** The Adam optimizer is employed in the CGE network. We apply a grid search to find the best settings of the hyper-parameters: the learning rate l is tuned amongst {0.0001, 0.0005, 0.001, 0.002, 0.005, 0.01}, the dropout rate d is searched in {0.1, 0.2, 0.3, 0.4, 0.5}, and the batch size β in {8, 16, 32, 64, 128}. To prevent overfitting, we tuned the L2 regularization λ in {10⁻⁶, 10⁻⁵, 10⁻⁴, 10⁻³, 10⁻², 10⁻¹}. Without special mention in the text, we report the performance of all neural network models with the following default setting: 1) l = 0.002, 2) d = 0.2, 3) β = 32, and 4) λ = 10⁻⁴. For each dataset, we randomly select 80% of the functions as the training set and the other 20% as the testing set, repeat this split several times, and report the averaged result. The ground truth labels for contract functions are provided by experts.
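The grid search described above can be sketched as follows, where evaluate() is a hypothetical placeholder standing in for training CGE with a given setting and returning its validation score:

```python
import itertools

# Search grids as stated in the text.
grid = {
    'lr':      [0.0001, 0.0005, 0.001, 0.002, 0.005, 0.01],
    'dropout': [0.1, 0.2, 0.3, 0.4, 0.5],
    'batch':   [8, 16, 32, 64, 128],
    'l2':      [1e-6, 1e-5, 1e-4, 1e-3, 1e-2, 1e-1],
}

def grid_search(evaluate):
    """Return the hyper-parameter setting maximizing the evaluate() score."""
    keys = list(grid)
    best, best_score = None, float('-inf')
    for values in itertools.product(*(grid[k] for k in keys)):
        setting = dict(zip(keys, values))
        score = evaluate(setting)  # e.g., validation accuracy of a trained model
        if score > best_score:
            best, best_score = setting, score
    return best, best_score
```

With 6 × 5 × 5 × 6 = 900 combinations, each call to evaluate() would in practice be a full training run on the 80% split, scored on held-out data.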
**5.2** **Comparison with State-of-the-art Existing Methods**
**(RQ1)**
In this section, we benchmark our proposed method
against existing non-deep-learning vulnerability detection
approaches, which include:
_• Oyente [8]: A well-known symbolic verification tool for_
smart contract vulnerability detection, which performs
symbolic execution on the CFG (control flow graph) to
check vulnerable patterns.
_• Mythril [52]: A security analysis method, which uses_
concolic analysis, taint analysis, and control flow checking
to detect smart contract vulnerabilities.
_• Smartcheck [16]: An extensible static analysis tool for_
discovering smart contract code vulnerabilities.
_• Securify [18]: A formal-verification based tool for detect-_
ing Ethereum smart contract bugs, which checks compliance and violation patterns to filter false positives.
_• Slither [53]: A static analysis framework designed to find issues in Ethereum smart contracts by converting a smart contract into an intermediate representation called SlithIR._
**Comparison on reentrancy vulnerability detection.**
First, we compare our CGE approach with the five existing
methods on the reentrancy vulnerability detection task. The
performance of different methods is presented in the left
of Table 2, where the metrics of accuracy, recall, precision, and F1 score are adopted. We would like to highlight that all metrics are computed over only the susceptible smart contract functions having invocation(s) to call.value, i.e., the functions that may be infected with the reentrancy vulnerability. Functions with no call.value invocation are known to be immune to the reentrancy vulnerability and are trivial to handle (using purely keyword matching), thus we do not involve those functions in the calculation, to better investigate the problem. From the quantitative results of Table 2, we have the following observations. First, we find that conventional non-deep-learning methods have not yet achieved a satisfactory
accuracy on the reentrancy vulnerability detection task, e.g.,
the state-of-the-art method (i.e., Slither) yields a 77.12%
accuracy. Second, our proposed method substantially outperforms the existing methods on reentrancy vulnerability
detection. Specifically, CGE achieves an 89.15% accuracy, gaining a 12.03% accuracy improvement over conventional methods. This strong empirical evidence suggests the great potential of combining graph neural networks with expert patterns for reentrancy vulnerability detection.
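For reference, the four reported metrics follow directly from the confusion-matrix counts over the susceptible functions; a direct transcription (the counts below are illustrative, not taken from our experiments):

```python
def metrics(tp, fp, tn, fn):
    """Accuracy, recall, precision, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, recall, precision, f1
```

Note that F1 simplifies to 2·TP / (2·TP + FP + FN), which is why methods with unbalanced precision and recall (e.g., Mythril's high recall but low precision) still obtain a modest F1 score.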
By looking into the existing methods, we believe that
the reasons for the low precision and recall of conventional
methods are: (1) they heavily rely on simple and fixed
patterns to detect vulnerabilities, e.g., Mythril checks whether
_the call value invocation is not followed by any internal function_
**Reentrancy vulnerability detection (ESC):**

|Methods|Acc(%)|Recall(%)|Precision(%)|F1(%)|
|---|---|---|---|---|
|Smartcheck|52.97|32.08|25.00|28.10|
|Oyente|61.62|54.71|38.16|44.96|
|Mythril|60.54|71.69|39.58|51.02|
|Securify|71.89|56.60|50.85|53.57|
|Slither|77.12|74.28|68.42|71.23|
|Vanilla-RNN|49.64|58.78|49.82|50.71|
|LSTM|53.68|67.82|51.65|58.64|
|GRU|54.54|71.30|53.10|60.87|
|GCN|77.85|78.79|70.02|74.15|
|DR-GCN|81.47|80.89|72.36|76.39|
|TMP|84.48|82.63|74.06|78.11|
|**CGE**|**89.15**|**87.62**|**85.24**|**86.41**|

**Timestamp dependence detection (ESC):**

|Methods|Acc(%)|Recall(%)|Precision(%)|F1(%)|
|---|---|---|---|---|
|Smartcheck|44.32|37.25|39.16|38.18|
|Oyente|59.45|38.44|45.16|41.53|
|Mythril|61.08|41.72|50.00|45.49|
|Securify|–|–|–|–|
|Slither|74.20|72.38|67.25|69.72|
|Vanilla-RNN|49.77|44.59|51.91|45.62|
|LSTM|50.79|59.23|50.32|54.41|
|GRU|52.06|59.91|49.41|54.15|
|GCN|74.21|75.97|68.35|71.96|
|DR-GCN|78.68|78.91|71.29|74.91|
|TMP|83.45|83.82|75.05|79.19|
|**CGE**|**89.02**|**88.10**|**87.41**|**87.75**|

**Infinite loop detection (VSC):**

|Methods|Acc(%)|Recall(%)|Precision(%)|F1(%)|
|---|---|---|---|---|
|Jolt|42.88|23.11|38.23|28.81|
|PDA|46.44|21.73|42.96|28.26|
|SMT|54.04|39.23|55.69|45.98|
|Looper|59.56|47.21|62.72|53.87|
|Vanilla-RNN|49.57|47.86|42.10|44.79|
|LSTM|51.28|57.26|44.07|49.80|
|GRU|51.70|50.42|45.00|47.55|
|GCN|64.01|63.04|59.96|61.46|
|DR-GCN|68.34|67.82|64.89|66.32|
|TMP|74.61|74.32|73.89|74.10|
|**CGE**|**83.21**|**82.29**|**81.97**|**82.13**|

**TABLE 2 Performance comparison in terms of accuracy, recall, precision, and F1 score. A total of sixteen methods are investigated in the comparison, including state-of-the-art vulnerability detection methods, neural network-based alternatives, DR-GCN, TMP, and CGE. ‘–’ denotes not applicable.**
_call to detect reentrancy, and (2) the rich data dependencies_
and control dependencies within smart contract code are not
characterized with fine-grained details in these methods.
**Comparison on timestamp dependence vulnerability**
**detection. We further compare the proposed CGE with the**
five methods on the timestamp dependence vulnerability
detection task. The comparison results are demonstrated
in the middle of Table 2. The state-of-the-art conventional method (i.e., Slither) obtains a 74.20% accuracy on timestamp dependence vulnerability detection, which is quite low. This may stem from the fact that most existing methods detect the timestamp dependence vulnerability by crudely checking whether there is a block.timestamp statement in the function. Moreover, consistent with the results on reentrancy vulnerability detection, CGE keeps delivering
the best performance in terms of all the four metrics. In
particular, CGE gains a 14.82% accuracy improvement over
state-of-the-art conventional methods.
**Comparison on infinite loop vulnerability detection.**
We also evaluated our methods on the infinite loop vulnerability. Specifically, we compare our methods against
available infinite loop detection methods including:
_• Jolt [54]: The tool detects infinite loop bugs by monitoring_
the program state of two consecutive loop iterations.
_• SMT [55]: An algorithm that relies on satisfiability mod-_
ulo theories for automated detection of infinite loop bugs.
_• PDA [56]: A method that performs program path-based_
checking for infinite loop detection.
_• Looper [57]: Loop detection based on symbolic execution._
Quantitative results are illustrated in the right of Table 2. From the table, we see that CGE consistently and significantly outperforms other methods on the infinite loop vulnerability detection task. In particular, CGE achieves an 83.21% accuracy and an 82.13% F1 score. In contrast, the corresponding figures for the state-of-the-art detection tool Looper are 59.56% and 53.87%, and for TMP 74.61% and 74.10%. The improvements may come from the fact that we consider key variables and rich dependencies between program elements in smart contracts.
We further visualize the quantitative results of Table 2
in Figs. 6(a), (b), and (c). Specifically, Fig. 6(a) and Fig. 6(b)
present comparison results of reentrancy vulnerability detection and timestamp dependence vulnerability detection,
respectively. The 7 rows (in different colors) from front to back denote the methods Smartcheck, Oyente, Mythril, Securify, Slither, TMP, and CGE, respectively. For each row in the figures, accuracy, recall, precision, and F1 score are respectively demonstrated from left to right. Fig. 6(c) shows comparison
results of infinite loop vulnerability detection, where the 6
rows from front to back denote Jolt, PDA, SMT, Looper, TMP,
and CGE methods, respectively. We can clearly observe that
CGE outperforms existing methods by a large margin.
**5.3** **A Case Study Towards Better Understanding of the**
**Reasons Behind the Results (RQ2)**
In this subsection, we present an interesting case of smart
contract vulnerabilities, which may bring new insights into
the abilities of the studied methods. Particularly, we investigate a new type of reentrancy vulnerability, i.e., sharing-variable reentrancy. To our knowledge, most existing methods cannot precisely detect such vulnerabilities.
Besides classical reentrancy introduced in Fig. 1 and
section 3, a reentrancy attack is also possible when a transfer
function shares internal variables with another function,
which we define as sharing-variable reentrancy.
In Fig. 7, we illustrate a real-world sharing-variable
reentrancy example, where the Malicious contract plays an
attack role against the Vulnerable contract. More specifically,
contract Vulnerable contains two functions: getBonusWithdraw and withdrawAll. Function withdrawAll allows a user to
withdraw all her rewards, while function getBonusWithdraw
allows a user to withdraw all her rewards together with a
0.1 Ether bonus for each new user.
**Attack.** As demonstrated in Fig. 7, contract Malicious first uses its attack function to call the getBonusWithdraw function of contract Vulnerable (step 1). getBonusWithdraw then invokes the withdrawAll function (Vulnerable, line 6) to send the rewards and bonus to Malicious (step 2). This automatically triggers the fallback function of Malicious (step 3), where Malicious invokes getBonusWithdraw again to steal money (step 4). Since the bonus flag Bonus[msg.sender] has not yet been set to true, Vulnerable believes Malicious has not received the new-user bonus yet and thus grants the 0.1 Ether bonus again to Malicious (Vulnerable, line 5); then function withdrawAll is re-entered to withdraw the 0.1 Ether illegal bonus (step 5). Malicious actually invokes getBonusWithdraw 9 times (Malicious, line 9) in its fallback function to steal 1 Ether.
**Underlying issue.** This example reveals that although, in the withdrawAll function, contract Vulnerable updates the user balance (i.e., Reward) before the money transfer, Vulnerable can still be attacked. The novel attack utilizes the shared variable (Reward) to steal money. Although the withdrawAll function itself is safe, the malicious contract may
[Fig. 6: six grouped bar-chart panels, each bar group reporting accuracy, recall, precision, and F1 (%) — (a) Reentrancy comparison of tools, (b) Timestamp comparison of tools, (c) Infinite loop comparison of tools, (d) Reentrancy comparison of networks, (e) Timestamp comparison of networks, (f) Infinite loop comparison of networks.]

**Fig. 6 Visualization of the quantitative results in Table 2: (a) & (d) present comparison results of reentrancy vulnerability detection, while (b) & (e) present comparison results of timestamp dependence detection, and (c) & (f) show comparison results of infinite loop vulnerability detection. In (a) & (b), the 7 rows from front to back denote the Smartcheck, Oyente, Mythril, Securify, Slither, TMP, and CGE methods, respectively. In (c), the 6 rows from front to back denote the Jolt, PDA, SMT, Looper, TMP, and CGE methods, respectively. In (d) & (e) & (f), the 7 rows from front to back denote the Vanilla-RNN, LSTM, GRU, GCN, DR-GCN, TMP, and CGE methods, respectively. For each row in the figures, accuracy, recall, precision, and F1 score are respectively demonstrated from left to right.**
call getBonusWithdraw to modify the shared variable Reward
to enable attacks.
Unfortunately, such attacks cannot yet be detected by existing methods. We empirically checked the Vulnerable contract using the state-of-the-art tools including Oyente [8], Securify [18], Smartcheck [16], Slither [53], and Mythril [52], and manually inspected their generated reports. Oyente, Smartcheck, Slither, and Mythril fail to identify the reentrancy bug, whereas Securify presents many warnings, all at the wrong places, and misses the sharing-variable reentrancy vulnerability as well. In contrast, CGE successfully detects the vulnerability. This evidence reveals that the underlying detection rules of existing
reentrancy vulnerability detection methods can indeed be cheated by the sharing-variable trick, and some vulnerability patterns are hard to cover. The current rules check
only the user balance variable that is directly related to the
_call.value invocation, while ignoring dependencies between_
variables, e.g., other variables may affect the user balance
variable. In this regard, an essential highlight of our method
is the capability of capturing data dependencies between
critical variables.
**5.4** **Comparison with Neural Network-based Methods**
**(RQ3)**
We further compare our methods with other neural network
alternatives to seek out which neural network architectures
could succeed in the smart contract vulnerability detection
task. The compared methods are summarized below.
_• Vanilla-RNN [58]: A two-layer recurrent neural network,_
which takes the code sequence as input and evolves its
hidden states recurrently to capture the sequential pattern
lying in the code.
    contract Vulnerable {
        ...
        function getBonusWithdraw() {
            require(!Bonus[msg.sender]);
            Reward[msg.sender] += 0.1 ether;
            withdrawAll(msg.sender);
            Bonus[msg.sender] = true;
        }
        function withdrawAll() {
            uint amount = Reward[msg.sender];
            Reward[msg.sender] = 0;
            require(msg.sender.call.value(amount)());
        }
    }

    contract Malicious {
        address vul_add = 01a5f...43;
        ...
        function attack() {
            vul_add.getBonusWithdraw();
        }
        function () payable {
            count++;
            if (count < 10) {
                vul_add.getBonusWithdraw();
            }
        }
    }

**Fig. 7 A real-world smart contract with the sharing-variable reentrancy vulnerability.**
_• LSTM [59]: The most widely used recurrent neural net-_
work for processing sequential data. LSTM is short for
long short term memory, which recurrently updates the
cell state upon successively reading the code sequence.
_• GRU [60]: The gated recurrent unit, which uses gating_
mechanisms to handle the code sequence.
_• GCN [37]: Graph convolutional network that takes the_
contract graph as input and implements layer-wise convolution on the graph using graph Laplacian.
_• DR-GCN [20]: The degree-free graph convolutional net-_
work, which increases the connectivity of nodes and
removes the diagonal node degree matrix.
_• TMP [20]: The temporal message propagation network,_
which learns the contract graph feature by flowing information along the edges successively following their
temporal order. The final graph feature is used for vulnerability prediction.
For a feasible comparison, Vanilla-RNN, LSTM, and GRU are fed with the contract function code sequence, represented as vectors; GCN, DR-GCN, and TMP are presented
with the normalized graph extracted from the source code
and are required to detect the corresponding vulnerabilities.
We illustrate the results of different models in terms
of accuracy, recall, precision, and F1 score in Table 2, while
Figs. 6(d), (e), and (f) further visualize the results. Interestingly, experimental results show that Vanilla-RNN, LSTM,
and GRU perform relatively worse than the state-of-the-art
graph neural networks GCN, DR-GCN, and TMP, which
are capable of handling graphs, achieve significantly better results than conventional methods. This suggests that
blindly treating the source code as a sequence is not suitable for
the vulnerability detection task, while modeling the source
code into graphs and adopting graph neural networks is
promising. We conjecture that processing code sequentially
loses valuable information from smart contract code since
they ignore the structural information of contract programs,
such as the data-flow and invocation relationships. The
accuracies of GCN and DR-GCN are lower than those of TMP, which may be due to the fact that GCN and DR-GCN fail to capture
the temporal information induced by data flow and control
flow, which is explicitly considered in TMP using ordered
edges. Further, we attribute the improved performance of
CGE over TMP to that TMP does not consider known
security patterns and ignores key variables.
**5.5** **Ablation Study (RQ4)**
By default, CGE adopts the graph normalization module to highlight the core nodes in the contract graph; it is therefore interesting to study the effect of removing this module. Moreover,
CGE incorporates an expert pattern extraction module and
a message propagation module to aggregate information from
both security patterns and the contract graph. It is useful to
evaluate the contributions of the two modules by removing
them respectively from CGE. Finally, we are also interested
in exploring the effect of different network layers in CGE.
In what follows, we conduct experiments to study the four
aforementioned modules.
**Effect of the graph normalization module. We removed**
the graph normalization module (introduced in subsection 4.2.2) from CGE, and compared it with the default CGE.
The variant is denoted as CGE-WON, where WON is short
for without normalization. Quantitative results are summarized in Table 3. We can observe that with the proposed
graph normalization phase, the performance of CGE is better. For example, for reentrancy vulnerability detection task,
the CGE model obtains a 2.81% and 2.55% improvement in
terms of accuracy and F1 score, respectively.
Figs. 8(a) & (b) & (c) further plot the ROC curves of CGE
and CGE-WON. We adopt Receiver Operating Characteristic (ROC) analysis to show the impact of the graph normalization module. AUC (area under the curve) is used as the
measure of performance; the higher the AUC, the better the performance. Fig. 8(a) demonstrates that CGE performs better
on the reentrancy detection task; the AUC increases by 0.03
with the graph normalization module. On the timestamp
dependence detection task, CGE obtains a 0.03 improvement
in AUC (shown in Fig. 8(b)). On the infinite loop detection
task, CGE gains a 0.04 improvement in AUC (shown in
Fig. 8(c)). In the figures, we also demonstrate the effect
of removing the graph normalization module of another
method, namely TMP. Similar findings are observed. The
experimental results suggest that program elements should
contribute distinctly to vulnerability detection rather than
having equal contributions.
**Effect of the security pattern module. To evaluate the**
effect of our proposed security pattern module, we analyze
the performance of CGE with and without the security pattern module. Towards this, we modify CGE by removing the
expert pattern extraction module, utilizing only the graph
feature for vulnerability learning and detection. This variant
is denoted as CGE-WOE, where WOE is short for without
expert pattern. The empirical findings are demonstrated in
Table 3, while the visual curves are illustrated in Fig. 8(d).
In Fig. 8(d), the red curve demonstrates the accuracy of CGE
over different epochs on the reentrancy vulnerability detection. Obviously, we can observe that the performance of
CGE is consistently superior to CGE-WOE across all epochs,
revealing that incorporating security patterns is necessary
and important to improve the performance. Quantitative
results on all the three vulnerabilities, which are presented
in Table 3, further reconfirm the finding.
We also conduct experiments to extend other neural networks with expert patterns, and empirically compare these
methods with CGE. The results are illustrated in Table 4,
where ‘-EP’ denotes combining with expert patterns. We can
observe that neural networks combined with expert patterns
indeed achieve better results compared to their pure neural
network counterparts. For example, DR-GCN-EP gains a
4.92% accuracy improvement over DR-GCN on average,
and LSTM-EP obtains a 6.91% accuracy improvement over
LSTM. These results indicate the effectiveness of combining
neural networks with expert patterns. On the other hand,
the proposed method CGE consistently outperforms other
approaches, including DR-GCN-EP, which ranks second among the tested methods.
**Effect of the contract graph feature extraction module.**
We further investigate the impact of the contract graph feature extraction module in CGE by comparing it with its variant. Towards this, we remove the proposed contract graph
construction and temporal message propagation module,
while utilizing only the security pattern feature. The new
variant is denoted as CGE-WOG, namely CGE without
contract graph feature. Fig. 8(e) visualizes the results, where
the red curve demonstrates the accuracy of CGE over different epochs, while the blue curve shows the accuracy of
CGE-WOG. Clearly, the performance of CGE is consistently
better compared to its variant across all epochs. Quantitative
results are further presented in Table 3, where all the three
vulnerabilities are involved. The results, together with the
experimental results on CGE-WOE, suggest that the contract
graph feature contributes significant performance gain in
CGE and leads to a higher gain than the security pattern
feature.
**Effect of different feature fusion networks. When com-**
bining security pattern features and contract graph features,
CGE uses a neural network with a convolution layer and
a max pooling layer followed by 3 fully connected layers
and a sigmoid layer. To verify this network architecture,
we also study five other alternatives. First, we replace the
convolution and max pooling layer with a fully connected
layer, which we denote as CGE(FC). We also try replacing
**Reentrancy:**

|Metrics|CGE-WOG|CGE-WOE|CGE-WON|CGE|
|---|---|---|---|---|
|Acc(%)|82.09|84.42|86.34|**89.15**|
|Recall(%)|80.18|82.65|84.38|**87.62**|
|Precision(%)|72.15|78.94|83.35|**85.24**|
|F1(%)|75.95|80.75|83.86|**86.41**|

**Timestamp dependence:**

|Metrics|CGE-WOG|CGE-WOE|CGE-WON|CGE|
|---|---|---|---|---|
|Acc(%)|81.30|83.52|86.61|**89.02**|
|Recall(%)|80.68|82.89|84.06|**88.10**|
|Precision(%)|78.42|80.16|83.90|**87.41**|
|F1(%)|79.53|81.50|83.98|**87.75**|

**Infinite loop:**

|Metrics|CGE-WOG|CGE-WOE|CGE-WON|CGE|
|---|---|---|---|---|
|Acc(%)|72.23|74.68|79.51|**83.21**|
|Recall(%)|70.08|74.21|77.14|**82.29**|
|Precision(%)|71.44|73.86|76.26|**81.97**|
|F1(%)|70.75|74.03|76.70|**82.13**|

**TABLE 3 Accuracy comparison between CGE and its variants on the three vulnerability detection tasks.**
|Variants|Reentrancy Acc(%)|Recall(%)|Precision(%)|F1(%)|Timestamp Acc(%)|Recall(%)|Precision(%)|F1(%)|Infinite loop Acc(%)|Recall(%)|Precision(%)|F1(%)|
|---|---|---|---|---|---|---|---|---|---|---|---|---|
|Vanilla-RNN-EP|56.06|60.24|58.21|59.20|54.58|49.65|59.35|54.07|54.72|52.62|49.94|51.24|
|LSTM-EP|60.15|72.26|58.68|64.77|59.82|63.38|58.28|56.29|56.52|59.98|49.75|54.39|
|GRU-EP|62.08|75.01|60.13|66.75|61.22|64.18|58.45|61.18|57.09|60.54|49.81|54.65|
|GCN-EP|80.96|81.05|76.84|78.89|79.32|79.94|73.65|76.67|70.06|69.81|64.29|66.94|
|DR-GCN-EP|85.14|84.12|79.38|81.68|83.74|84.02|80.59|82.27|74.36|73.08|69.45|71.22|
|CGE(LSTM)|86.74|85.18|82.85|84.00|87.92|85.08|87.13|86.09|79.18|78.25|76.80|77.52|
|CGE(FC)|87.64|85.74|82.97|84.33|88.12|87.98|85.04|86.49|80.62|78.96|77.24|78.09|
|CGE(1-FC)|88.54|86.12|83.80|84.94|86.62|87.82|81.73|84.66|81.43|81.25|80.98|81.11|
|CGE(2-FC)|88.89|86.47|84.51|85.48|87.05|84.96|85.02|84.98|81.82|81.76|80.54|81.15|
|CGE(AP)|88.02|85.92|83.45|84.67|85.25|85.16|81.84|83.47|79.53|78.58|76.94|77.75|
|**CGE**|**89.15**|**87.62**|**85.24**|**86.41**|**89.02**|**88.10**|**87.41**|**87.75**|**83.21**|**82.29**|**81.97**|**82.13**|

**TABLE 4 Upper: Performance comparison between CGE and other neural networks combined with expert patterns. ‘-EP’ denotes combining with expert patterns. Lower: Comparison with other feature fusion network architectures.**
[Fig. 8 here: panels (a)–(c) plot ROC curves against false positive rate for reentrancy, timestamp dependence, and infinite loop; panels (d) and (e) plot accuracy against training epoch.]

**Fig. 8 Curves comparison: (a), (b), and (c) present the ROC analysis of the graph normalization module for TMP, CGE, and their variants on the three vulnerability detection tasks, where AUC stands for area under the curve. In (d), the two curves study the effect of removing the security pattern extraction module, while (e) presents the study on removing the contract-graph feature extraction module.**
them with an LSTM layer, which we term CGE(LSTM). Then, we keep the convolution and max pooling layers but change the 3 fully connected layers to 1 or 2 fully connected layers; the two variants are denoted CGE(1-FC) and CGE(2-FC), respectively. Finally, we explore replacing the max pooling layer with an average pooling layer, namely CGE(AP), while keeping the other layers fixed. The empirical results are presented in Table 4. They reveal that: 1) RNN architectures such as LSTM are not well suited to the feature fusion task, 2) the default setting of CGE yields better results than the five alternatives, and 3) using average pooling or changing the number of fully connected layers leads to a slight performance drop.
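The default fusion head described above can be sketched in a few lines. The following is an illustrative NumPy mock-up, not the authors' implementation: all dimensions and the random weights are assumptions; only the layer sequence (convolution, max pooling, three fully connected layers, sigmoid) follows the text.

```python
import numpy as np

# Illustrative sketch of the CGE fusion head (assumed dimensions and
# untrained random weights): concatenate the security-pattern feature
# and the contract-graph feature, then apply 1-D convolution + ReLU,
# max pooling, three fully connected layers, and a sigmoid output.

rng = np.random.default_rng(0)

def conv1d(x, kernel):
    # valid 1-D convolution over a feature vector
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def max_pool(x, size=2):
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

def fused_score(pattern_feat, graph_feat):
    x = np.concatenate([pattern_feat, graph_feat])        # feature fusion
    x = np.maximum(conv1d(x, rng.standard_normal(3)), 0)  # conv + ReLU
    x = max_pool(x)                                       # max pooling
    for _ in range(3):                                    # 3 FC layers
        W = rng.standard_normal((len(x), len(x))) * 0.1   # small random weights
        x = np.maximum(W @ x, 0)
    w = rng.standard_normal(len(x)) * 0.1
    return 1.0 / (1.0 + np.exp(-(w @ x)))                 # sigmoid -> probability

p = fused_score(rng.standard_normal(8), rng.standard_normal(16))
```

With trained weights, the scalar output would be thresholded to label the function as vulnerable or not; here the point is only the data flow through the fusion layers.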
###### 6 DISCUSSION
**Specialty of our method in dealing with smart contracts.** Distinct from conventional programs, which consume only CPU resources, smart contracts charge users a fee for executing each line of code. The fee, referred to as gas, is approximately proportional to how much code needs to run. Therefore, in the proposed method, we study the infinite loop vulnerability, since an infinite loop consumes a lot of gas yet all of it is consumed in vain: the infinite loop is unable to change any state (any execution that runs out of gas is aborted). Moreover, the function libraries of smart contracts differ substantially from those of other programming languages. For example, _call.value_ and _block.timestamp_ are unique and specially designed in smart contracts. We implement an open-sourced tool to analyze the specific syntax of smart contract statements. We also use core nodes to symbolize invocations and variables closely related to a specific vulnerability, and represent other variables and invocations as normal nodes. We would like to point out that smart contracts have a unique fallback mechanism, which is absent from other programming languages. In the contract graph, we build a fallback node to simulate the fallback function of a virtual attack contract, which can interact with the function under test.
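The gas argument can be made concrete with a toy interpreter. The sketch below is purely illustrative — the cost model, the `execute` API, and the example "contract" are invented; only the abort-and-revert behavior mirrors what the paragraph describes:

```python
import itertools

# Toy gas-metered execution (not the EVM): each operation costs 1 gas.
# When gas runs out, the whole execution aborts and state changes are
# rolled back, so an infinite loop burns the entire fee "in vain".

def execute(program, gas_limit, state):
    snapshot = dict(state)            # shallow copy for rollback
    gas = gas_limit
    for op in program:                # program may be an infinite iterator
        if gas == 0:
            state.clear()
            state.update(snapshot)    # abort: revert every state change
            return "out of gas"
        op(state)
        gas -= 1
    return "ok"

# An "infinite loop" contract: endlessly increments a counter.
looping = (lambda s: s.__setitem__("x", s.get("x", 0) + 1)
           for _ in itertools.count())

state = {"x": 0}
result = execute(looping, gas_limit=1000, state=state)
# result == "out of gas" and state == {"x": 0}: all gas spent, no state change
```

The net effect is exactly the pathology described above: the caller pays for 1000 executed steps, yet the contract's state is identical to what it was before the call.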
**Discussions on the contract graph.** Existing efforts adopted the control flow graph, code property graph, and abstract syntax tree to represent program code. The differences between them and our contract graph can be summarized as follows: (i) A control flow graph uses a node to model a basic block, i.e., a straight-line piece of code without any jumps, and uses edges to represent jumps [61]. It focuses mainly on execution path jumps and tends to treat each node as of equal importance. (ii) A code property graph [62, 63] models statements as nodes and represents the control flow between statements as edges. (iii) An abstract syntax tree [64, 65] adopts a tree representation of the abstract syntactic structure of source code; relying on a tree structure, it has difficulty fully characterizing the rich semantic information between nodes. (iv) In our contract graph, nodes model variables and invocations related to a specific vulnerability and are classified into different categories, i.e., core nodes, normal nodes, and fallback nodes. We also explicitly model the order of the edges following their temporal order in the code, and consider the specific fallback mechanism of smart contracts.
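To make the distinction concrete, here is a toy rendering of such a contract graph. The node names, categories, and edge-ordering scheme are illustrative assumptions, not the paper's actual extraction output:

```python
# Minimal contract-graph sketch (illustrative): nodes carry one of three
# categories (core / normal / fallback), and each edge records its
# temporal order of appearance in the code.

class ContractGraph:
    def __init__(self):
        self.nodes = {}   # name -> category
        self.edges = []   # (src, dst, temporal_order)

    def add_node(self, name, category):
        assert category in {"core", "normal", "fallback"}
        self.nodes[name] = category

    def add_edge(self, src, dst):
        # temporal order follows the order edges are added, mirroring
        # the order the corresponding statements appear in the code
        self.edges.append((src, dst, len(self.edges) + 1))

g = ContractGraph()
g.add_node("call.value", "core")             # invocation tied to reentrancy
g.add_node("balance", "normal")              # ordinary state variable
g.add_node("attacker_fallback", "fallback")  # virtual attack contract's fallback
g.add_edge("balance", "call.value")
g.add_edge("call.value", "attacker_fallback")
```

Unlike a control flow graph, which treats every basic block alike, this structure privileges vulnerability-relevant elements (core nodes) and keeps the fallback interaction explicit.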
###### 7 CONCLUSION AND FUTURE WORK
In this paper, we have proposed a fully automated approach
for smart contract vulnerability detection at the function
level. In contrast to existing approaches, we combine both
expert patterns and contract graph semantics, consider rich
dependencies between program elements, and explicitly
model the fallback mechanism of smart contracts. We also
explore the possibility of using novel graph neural networks
to learn the graph feature from the contract graph, which
contains rich control- and data- flow semantics. Extensive
experiments are conducted, showing that our method significantly outperforms the state-of-the-art vulnerability detection tools and other neural network-based methods. We
believe our work is an important step towards revealing
the potential of combining deep learning with conventional
patterns on smart contract vulnerability detection tasks. For
future work, we will investigate extending this method to smart contracts for which only bytecode is available, and explore this architecture on additional vulnerabilities.
###### ACKNOWLEDGMENTS
This paper was supported by the Natural Science Foundation of Zhejiang Province, China (Grant No. LQ19F020001),
the National Natural Science Foundation of China (No.
61902348, 61802345), and the Research Program of Zhejiang
Lab (2019KD0AC02).
###### REFERENCES
[1] T. T. A. Dinh, J. Wang, G. Chen, R. Liu, B. C. Ooi, and K.-L. Tan,
“Blockbench: A framework for analyzing private blockchains,” in
_ICMD, 2017, pp. 1085–1100._
[2] D. Yaga, P. Mell, N. Roby, and K. Scarfone, “Blockchain technology
overview,” arXiv preprint arXiv:1906.11078, 2019.
[3] C. Badertscher, U. Maurer, D. Tschudi, and V. Zikas, “Bitcoin as
a transaction ledger: A composable treatment,” in Annual Interna_tional Cryptology Conference, 2017, pp. 324–356._
[4] M. Dhawan, “Analyzing safety of smart contracts,” in Proceedings
_of the NDSS, 2017, pp. 16–17._
[5] M. Tsikhanovich, M. Magdon-Ismail, M. Ishaq, and V. Zikas,
“Pd-ml-lite: Private distributed machine learning from lighweight
cryptography,” arXiv preprint arXiv:1901.07986, 2019.
[6] T. T. A. Dinh, R. Liu, M. Zhang, G. Chen, B. C. Ooi, and J. Wang,
“Untangling blockchain: A data processing view of blockchain
systems,” IEEE Transactions on Knowledge and Data Engineering,
vol. 30, no. 7, pp. 1366–1385, 2018.
[7] L. S. Sankar, M. Sindhu, and M. Sethumadhavan, “Survey of
consensus protocols on blockchain applications,” in Proceedings of
_the ICACCS, 2017, pp. 1–5._
[8] L. Luu, D.-H. Chu, H. Olickel, P. Saxena, and A. Hobor, “Making
smart contracts smarter,” in CCS, 2016, pp. 254–269.
[9] A. M. Antonopoulos and G. Wood, Mastering ethereum: building
_smart contracts and dapps, 2018._
[10] A. Bahga and V. K. Madisetti, “Blockchain platform for industrial
internet of things,” Journal of Software Engineering and Applications,
vol. 9, no. 10, p. 533, 2016.
[11] V. Buterin et al., “A next-generation smart contract and decentralized application platform,” white paper, vol. 3, p. 37, 2014.
[12] J. Kokina, R. Mancha, and D. Pachamanova, “Blockchain: Emergent industry adoption and implications for accounting,” Journal of Emerging Technologies in Accounting, vol. 14, no. 2, pp. 91–100, 2017.
[13] “The DAO smart contract,” Website, 2016, http://etherscan.io/address/0xbb9bc244d798123fde783fcc1c72d3bb8c189413.
[14] “King of the Ether,” Website, 2016, https://www.kingoftheether.com/postmortem.html.
[15] “An in-depth look at the parity multisig bug,” Website, 2017, http://hackingdistributed.com/2017/07/22/deep-dive-parity-bug.
[16] S. Tikhomirov, E. Voskresenskaya, I. Ivanitskiy, R. Takhaviev,
E. Marchenko, and Y. Alexandrov, “Smartcheck: Static analysis of
ethereum smart contracts,” in WETSEB, 2018, pp. 9–16.
[17] B. Jiang, Y. Liu, and W. Chan, “Contractfuzzer: Fuzzing smart
contracts for vulnerability detection,” in Proceedings of the ASE,
2018, pp. 259–269.
[18] P. Tsankov, A. Dan, D. Drachsler-Cohen, A. Gervais, F. Buenzli,
and M. Vechev, “Securify: Practical security analysis of smart
contracts,” in Proceedings of the CCS, 2018, pp. 67–82.
[19] P. Qian, Z. Liu, Q. He, R. Zimmermann, and X. Wang, “Towards
automated reentrancy detection for smart contracts based on
sequential models,” IEEE Access, vol. 8, pp. 19 685–19 695, 2020.
[20] Y. Zhuang, Z. Liu, P. Qian, Q. Liu, X. Wang, and Q. He, “Smart
contract vulnerability detection using graph neural network,” in
_Proceedings of the IJCAI-20, 7 2020, pp. 3283–3290._
[21] W. J. Tann, X. J. Han, S. S. Gupta, and Y. Ong, “Towards safer smart
contracts: A sequence learning approach to detecting vulnerabilities,” CoRR, 2018.
[22] K. Bhargavan, A. Delignat-Lavaud, C. Fournet, A. Gollamudi,
G. Gonthier, N. Kobeissi, N. Kulatova, A. Rastogi, T. Sibut-Pinote,
N. Swamy et al., “Formal verification of smart contracts: Short
paper,” in Proceedings of the 2016 ACM Workshop on Programming
_Languages and Analysis for Security, 2016, pp. 91–96._
[23] I. Grishchenko, M. Maffei, and C. Schneidewind, “A semantic
framework for the security analysis of ethereum smart contracts,”
in International Conference on Principles of Security and Trust, 2018,
pp. 243–269.
[24] E. Hildenbrandt, M. Saxena, N. Rodrigues, X. Zhu, P. Daian,
D. Guth, B. Moore, D. Park, Y. Zhang, A. Stefanescu et al., “Kevm:
A complete formal semantics of the ethereum virtual machine,” in
_CSF, 2018, pp. 204–217._
[25] Y. Hirai, “Defining the ethereum virtual machine for interactive
theorem provers,” in International Conference on Financial Cryptog_raphy and Data Security, 2017, pp. 520–535._
[26] I. Nikoli´c, A. Kolluri, I. Sergey, P. Saxena, and A. Hobor, “Finding
the greedy, prodigal, and suicidal contracts at scale,” in Annual
_Computer Security Applications Conference, 2018, pp. 653–663._
[27] S. Kalra, S. Goel, M. Dhawan, and S. Sharma, “Zeus: Analyzing
safety of smart contracts,” in NDSS, 2018.
[28] C. Liu, H. Liu, Z. Cao, Z. Chen, B. Chen, and B. Roscoe, “Reguard:
finding reentrancy bugs in smart contracts,” in Proceedings of the
_ICSE, 2018, pp. 65–68._
[29] M. Rodler, W. Li, G. O. Karame, and L. Davi, “Sereum: Protecting
existing smart contracts against re-entrancy attacks,” in Proceedings
_of the NDSS, 2019._
[30] W. Wang, J. Song, G. Xu, Y. Li, H. Wang, and C. Su, “Contractward:
Automated vulnerability detection models for ethereum smart
contracts,” TNSE, 2020.
[31] N. Atzei, M. Bartoletti, and T. Cimoli, “A survey of attacks on
ethereum smart contracts (sok),” in International Conference on
_Principles of Security and Trust, 2017, pp. 164–186._
[32] M. Zhang, Z. Cui, M. Neumann, and Y. Chen, “An end-to-end
deep learning architecture for graph classification,” in AAAI, 2018.
[33] H. Wang, P. Zhang, X. Zhu, I. W.-H. Tsang, L. Chen, C. Zhang, and
X. Wu, “Incremental subgraph feature selection for graph classification,” IEEE Transactions on Knowledge and Data Engineering,
vol. 29, no. 1, pp. 128–142, 2016.
[34] Y. Zhou, S. Liu, J. Siow, X. Du, and Y. Liu, “Devign: Effective
vulnerability identification by learning comprehensive program
semantics via graph neural networks,” in Advances in Neural
_Information Processing Systems, 2019, pp. 10 197–10 207._
[35] M. Allamanis, M. Brockschmidt, and M. Khademi, “Learning to
represent programs with graphs,” in International Conference on
_Learning Representations (ICLR), 2018._
[36] H. Cai, V. W. Zheng, and K. C.-C. Chang, “A comprehensive survey of graph embedding: Problems, techniques, and applications,”
_IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 9,_
pp. 1616–1637, 2018.
[37] T. N. Kipf and M. Welling, “Semi-supervised classification with
graph convolutional networks,” in Proceedings of the ICLR, 2017.
[38] M. Defferrard, X. Bresson, and P. Vandergheynst, “Convolutional neural networks on graphs with fast localized spectral filtering,” in Advances in Neural Information Processing Systems, 2016, pp. 3844–3852.
[39] X. Zhou, F. Shen, L. Liu, W. Liu, L. Nie, Y. Yang, and H. T.
Shen, “Graph convolutional network hashing,” IEEE transactions
_on cybernetics, 2018._
[40] Y. Wei, X. Wang, L. Nie, X. He, R. Hong, and T.-S. Chua, “Mmgcn:
Multi-modal graph convolution network for personalized recommendation of micro-video,” in Proceedings of the 27th ACM MM,
2019, pp. 1437–1445.
[41] R. Li, S. Wang, F. Zhu, and J. Huang, “Adaptive graph convolutional neural networks,” in AAAI, 2018.
[42] A. Micheli, “Neural network for graphs: A contextual constructive
approach,” IEEE Transactions on Neural Networks, vol. 20, no. 3, pp.
498–511, 2009.
[43] P. Veliˇckovi´c, G. Cucurull, A. Casanova, A. Romero, P. Lio,
and Y. Bengio, “Graph attention networks,” arXiv preprint
_arXiv:1710.10903, 2017._
[44] J. Zhang, X. Shi, J. Xie, H. Ma, I. King, and D.-Y. Yeung, “Gaan:
Gated attention networks for learning on large and spatiotemporal
graphs,” arXiv preprint arXiv:1803.07294, 2018.
[45] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl,
“Neural message passing for quantum chemistry,” in Proceedings
_of the ICML, 2017, pp. 1263–1272._
[46] X. Xu, C. Liu, Q. Feng, H. Yin, L. Song, and D. Song, “Neural
network-based graph embedding for cross-platform binary code
similarity detection,” in CCS, 2017, pp. 363–376.
[47] S. Shen, S. Shinde, S. Ramesh, A. Roychoudhury, and P. Saxena,
“Neuro-symbolic execution: Augmenting symbolic execution with
neural constraints.” in Proceedings of the NDSS, 2019.
[48] R. A. Rossi, R. Zhou, and N. Ahmed, “Deep inductive graph
representation learning,” IEEE Transactions on Knowledge and Data
_Engineering, 2018._
[49] “Etherscan,” Website, 2015, https://etherscan.io/.
[50] “VNT Chain,” Website, 2018, https://github.com/vntchain/go-vnt.
[51] “Ethereum,” Website, 2015, https://github.com/ethereum/go-ethereum.
[52] B. Mueller, “A framework for bug hunting on the ethereum blockchain,” Website, 2017, https://github.com/ConsenSys/mythril.
[53] J. Feist, G. Grieco, and A. Groce, “Slither: a static analysis framework for smart contracts,” in WETSEB, 2019, pp. 8–15.
[54] M. Carbin, S. Misailovic, M. Kling, and M. C. Rinard, “Detecting
and escaping infinite loops with jolt,” in European Conference on
_Object-Oriented Programming, 2011, pp. 609–633._
[55] M. Kling, S. Misailovic, M. Carbin, and M. Rinard, “Bolt: on-demand infinite loop escape in unmodified binaries,” ACM SIGPLAN Notices, vol. 47, no. 10, pp. 431–450, 2012.
[56] A. Ibing and A. Mai, “A fixed-point algorithm for automated static
detection of infinite loops,” in IEEE 16th International Symposium
_on High Assurance Systems Engineering, 2015, pp. 44–51._
[57] J. Burnim, N. Jalbert, C. Stergiou, and K. Sen, “Looper:
Lightweight detection of infinite loops at runtime,” in Proceedings
_of ASE, 2009, pp. 161–169._
[58] C. Goller and A. Kuchler, “Learning task-dependent distributed
representations by backpropagation through structure,” in Pro_ceedings of ICNN, vol. 1, 1996, pp. 347–352._
[59] H. Sak, A. Senior, and F. Beaufays, “Long short-term memory
recurrent neural network architectures for large scale acoustic
modeling,” in Fifteenth annual conference of the international speech
_communication association, 2014._
[60] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, “Empirical evaluation of gated recurrent neural networks on sequence modeling,”
_arXiv preprint arXiv:1412.3555, 2014._
[61] A. V. Phan, M. Le Nguyen, and L. T. Bui, “Convolutional neural
networks over control flow graphs for software defect prediction,”
in 2017 IEEE 29th International Conference on Tools with Artificial
_Intelligence (ICTAI)._ IEEE, 2017, pp. 45–52.
[62] F. Yamaguchi, N. Golde, D. Arp, and K. Rieck, “Modeling and
discovering vulnerabilities with code property graphs,” in 2014
_IEEE Symposium on Security and Privacy._ IEEE, 2014, pp. 590–604.
[63] S. Suneja, Y. Zheng, Y. Zhuang, J. Laredo, and A. Morari, “Learning to map source code to software vulnerability using code-as-a-graph,” arXiv preprint arXiv:2006.08614, 2020.
[64] L. Mou, G. Li, L. Zhang, T. Wang, and Z. Jin, “Convolutional
neural networks over tree structures for programming language
processing,” in Proceedings of the AAAI Conference on Artificial
_Intelligence, vol. 30, no. 1, 2016._
[65] J. Zhang, X. Wang, H. Zhang, H. Sun, K. Wang, and X. Liu, “A
novel neural source code representation based on abstract syntax
tree,” in 2019 IEEE/ACM 41st International Conference on Software
_Engineering (ICSE)._ IEEE, 2019, pp. 783–794.
**Zhenguang Liu** is currently a professor at Zhejiang Gongshang University. He was a research fellow at the National University of Singapore and at A*STAR. He received his Ph.D. and B.E. degrees from Zhejiang University and Shandong University, China, respectively. His research interests include smart contract security and multimedia data analysis. Dr. Liu has served as
technical program committee member for conferences such as ACM MM, CVPR, AAAI, IJCAI,
and ICCV, session chair of ICGIP, local chair of
KSEM, and reviewer for IEEE TVCG, IEEE TPDS, ACM TOMM, etc.
**Peng Qian** received his BSc degree in software engineering from Yangtze University and his MSc degree in computer science from Zhejiang Gongshang University, in 2018 and 2021, respectively. He is currently pursuing a Ph.D. at Zhejiang University.
His research interests include blockchain security, graph neural network, and deep learning.
**Xiaoyang Wang** received the BSc and MSc degrees in computer science from Northeastern University, China, in 2010 and 2012, respectively, and the PhD degree from the University
of New South Wales, Australia, in 2016. He is
a professor in Zhejiang Gongshang University,
Hangzhou, China. His research interest includes
query processing on massive graph data.
**Yuan Zhuang** received her PhD from the College of Computer Science and Technology (CCST), Jilin University, China. Her research interests include blockchain security, machine learning, big data processing, and distributed computing.
**Lin Qiu** is a PhD candidate at the Department of Information Systems and Analytics, National University of Singapore. Before that, she obtained her bachelor's degree from Xiamen University, China. Her research interests lie in deep learning, healthcare, and blockchain.
**Xun Wang** is currently a professor at the School of Computer Science and Information Engineering, Zhejiang Gongshang University, China. He received his BSc in mechanics and his Ph.D. in computer science, both from Zhejiang University, China, in 1990 and 2006, respectively. His research interests include intelligent information processing and computer vision. He has published over 100 papers in high-quality journals and conferences. He is a member of the IEEE and ACM, and a distinguished member of CCF.
-----
**Jurnal Indonesia Sosial Sains**
**E-ISSN: 2723-6595 | P-ISSN: 2723-6692**
**http://jiss.publikasiindonesia.id**

**Pawnshop Digital Service Quality and Its Implication on Customer Satisfaction at PT Pegadaian (Persero) Pondok Labu Branch**

**Aziz Setyawan**
Universitas Pembangunan Nasional Veteran Jakarta, Indonesia
E-mail: azizsetyawan@upnvj.ac.id
**Article info**
**Article history**
Received: 18-07-2022
Revised: 13-08-2022
Approved: 25-08-2022
**Keywords:** service quality; customer satisfaction; digital banking
**Introduction**
**Abstract**
In the current era, services delivered through digital channels make up the
majority of business transactions compared to those carried out through
traditional channels such as branch offices. Starting with the theme
“DigitalisMe”, Pegadaian launched a digital-based service called Pegadaian
Digital. This study aims to empirically explore the service quality of
Pegadaian Digital and their impact on customer satisfaction at PT Pegadaian
(Persero) Pondok Labu Branch. This is a quantitative research, and the sample
in this study amounted to 160 customers who are users of Pegadaian Digital
services. The data collection process uses google forms and scanned barcodes
that are distributed in each unit of the Pegadaian Pondok Labu Branch. The
data were analyzed using the Partial Least Square (PLS) method, and the
results show that: (1) Reliability has an effect on customer satisfaction (2)
Efficiency has no effect on customer satisfaction (3) Security has no effect on
customer satisfaction (4) Responsiveness has no effect on customer
satisfaction (5) Web design has no effect on customer satisfaction.
**Corresponding author:** **Aziz Setyawan**
Email: azizsetyawan@upnvj.ac.id
Open access articles under license
CC BY SA
2022
Consumers are a top priority in modern business thinking and practice; businesses must be able to attract and retain consumers to win the competition (Tjiptono, 2019). Currently, the demands and desires of consumers are increasing, including in the financial services industry: consumers expect to be able to transact anytime and anywhere, without time constraints (Hammoud et al., 2018).
In 2017, McKinsey conducted research on 900 banking consumers across Indonesia about their banking habits; the results showed that the shift to digital channels had increased by 58 percent (Barquin et al., 2019). In addition, awareness of digital banking services in Indonesia has increased during the COVID-19 pandemic, as many customers have changed their behavior towards banking services (Hamilton, 2021).
In today's era, services delivered through digital channels account for the majority of
business transactions compared to those conducted through traditional channels such as
branch offices (Chan et al., 2019). Starting from the theme "DigitalisMe", Pegadaian launched
a digital-based service called Pegadaian Digital.
Pegadaian Digital is one of the application-based Pegadaian services that helps customers make Pegadaian transactions through a smartphone. According to (Amin, 2016), one of the common concerns emphasized regarding the adoption of digital banking services is poor service quality and customer dissatisfaction. Customer satisfaction is closely related to service quality, because one of the criteria for a company's development is its ability to serve its customers (Rohaeni & Marwa, 2018).
Based on a survey of customers who use Pegadaian Digital services at the Pegadaian Pondok Labu Branch Office and its subordinate units, only 53 percent of customers are satisfied with the Pegadaian Digital service, meaning that 47 percent of customers are still not satisfied. This is certainly a problem for the company, given that the application is expected to make it easier for customers to use online services. In addition, companies must work to change customers' perspectives, behavior and habits in interacting with digital banking service offerings (Alsajjan & Dennis, 2010) and (Amin, 2016).
The results of the study (Hammoud et al., 2018), showed that service quality variables
consisting of several dimensions such as reliability, efficiency, security and responsiveness
have a positive and significant influence on customer satisfaction. Other research also says
the quality dimensions of services such as web design have a significant role to play in
improving customer satisfaction in the application of digital banking (Haq & Awan, 2020).
The objects of previous research were located in other countries, such as Lebanon and Pakistan, and differences in customer culture will most likely affect their applicability in this study. As (Amin, 2016) reveals, a service quality scale developed in one culture can differ from that in other cultures, and the interpersonal designs of different industries can also differ from country to country. Thus, this study discusses the quality of Pegadaian Digital services and its implications for customer satisfaction.
Reliability is the company's ability to deliver promised services reliably and
accurately. With reliable service all customer needs will be met and customers will feel
satisfied with it (Haverila et al., 2019). Some previous studies have proven that reliability has
the strongest influence on customer satisfaction (Hammoud et al., 2018); (Haq & Awan,
2020); (Toor et al., 2016); (Hadiyati, 2015). Thus, it can be concluded that reliability has an
influence on customer satisfaction. H1: Reliability affects customer satisfaction.
Efficiency is the extent to which a person believes that using a service will improve performance in his work, so that all his needs can be met without great effort (Yani et al., 2018). Efficiency in the context of digital services means not only helping customers save energy, time and cost, but also offering advanced features that are complete yet simple and easy to use (Widiaty et al., 2020). Some previous studies have proven
that efficiency has an influence on customer satisfaction (Hammoud et al., 2018); (Amin,
2016). Thus, it can be concluded that efficiency has an influence on customer satisfaction.
H2: Efficiency affects customer satisfaction.
Security is an effort to maintain the confidentiality of operations, refrain from sharing personal information and ensure a good level of security for customer information (Hammoud et al., 2018). Security encompasses trustworthiness and freedom from risk or doubt (Tjiptono, 2019).
Some previous studies have proven that security has an influence on customer satisfaction
(Hammoud et al., 2018); (Haq & Awan, 2020); (Huda & Wahyuni, 2019); (Toor et al., 2016).
Thus, it can be concluded that security has an influence on customer satisfaction. H3: Security
affects customer satisfaction.
Responsiveness is a willingness to help customers and offer services quickly. Customers have requests, questions, complaints and issues that can arise at any time; therefore, the service should be available whenever needed to respond to requests, pay special attention and provide solutions (Ahmad
et al., 2019). Some previous studies have shown that responsiveness has an influence on
customer satisfaction (Hammoud et al., 2018); (Ahmad et al., 2019); (Toor et al., 2016). Thus,
it can be concluded that responsiveness has an influence on customer satisfaction. H4:
Responsiveness affects customer satisfaction.
Web design is the user's assessment of service features such as ease, composition,
navigation, availability of information and web compatibility with consumer expectations
(Priscillia & Budiono, 2020). According to (Rita et al., 2019), efficient web design should contain three main categories: information-oriented, transaction-oriented and customer-oriented. Some previous research has proven that web design has an influence on customer
satisfaction (Haq & Awan, 2020); (Priscillia & Budiono, 2020). Thus, it can be concluded
that web design has an influence on customer satisfaction. H5: Web design has an influence
on customer satisfaction.
This research aims to test the effect of the quality of Pegadaian Digital services on customer satisfaction. To the author's knowledge, no such study has yet been conducted in the non-bank financial sector.
**Research Method**
The data in this study are quantitative. According to (Sugiyono, 2017), quantitative data are numerical data that can be analyzed statistically with the aim of testing a predetermined hypothesis.
The population in this study is Pawnshop customers who use Pegadaian Digital
services. Sampling technique using purposive sampling method, which selects samples by
establishing specific characteristics that are in accordance with the purpose of the study. In
determining the number of valid sample calculations, the sample size guideline depends on
the number of indicators multiplied by 5 (Hair et al., 2014). Based on the results of these
calculations, the study took a sample number of 160 respondents.
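The sample-size guideline above can be made concrete with a small sketch. The indicator count of 32 is taken from the questionnaire items later listed in Table 2; the function name is illustrative, not part of any library.

```python
# Minimal sketch of the Hair et al. (2014) sample-size guideline:
# minimum sample = number of indicators x 5.
def minimum_sample_size(n_indicators: int, multiplier: int = 5) -> int:
    """Return the minimum PLS sample size under the 'indicators x 5' rule."""
    return n_indicators * multiplier

# The questionnaire has 32 indicators (5 + 4 + 6 + 4 + 5 + 8 across the six
# constructs in Table 2), which yields the 160 respondents used in the study.
print(minimum_sample_size(32))  # 160
```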
Questionnaires were distributed to 160 respondents who use Pegadaian Digital services at the Pondok Labu Branch Office and its subordinate units.
Data collection using questionnaire techniques, questionnaires are data collection
techniques that are done by giving a set of questions/written statements to respondents to be
answered and asked for responses (Sugiyono, 2017). In this study, questionnaires were
addressed to Pegadaian customers who use Pegadaian Digital services at the Pondok Labu Branch and its subordinate units, in the form of a Google Form accessed through barcode scans available in each unit.
Descriptive analysis in this study used PLS output by looking at the mean (average),
median (middle value), min (smallest value), and max (largest value) of each indicator item.
Inferential statistics is a statistical technique for analyzing sample data and the results will be
applied to the entire population (Sugiyono, 2017). The study used non-parametric statistics because the data analyzed were on an interval scale. Data analysis was carried out with the help of SmartPLS software version 3.0.
The first step of PLS is designing the structural model (inner model), which explains the relationships between variables. In this study, the problem formulation and hypotheses are built on the customer satisfaction variable (Y) and service quality variables with the dimensions reliability (X1), efficiency (X2), security (X3), responsiveness (X4) and web design (X5). The
second step is to design a measurement model (outer model), the characteristics of indicators
and dimensions used by these variables to form the basis of the formation of the measurement
model plan. The third step is to arrange a path diagram, forming a path diagram as an overview
of the results of outer model calculations and the inner model. The fourth step is the
conversion of the path diagram to the equation system.
Then the fifth step is the estimation of parameters; PLS estimation is a least-squares method applied through iteration. The sixth step is the evaluation of goodness of fit, which consists of several tests, namely the validity test, reliability test, and coefficient of determination test (R2). The last step is hypothesis testing: path-coefficient hypotheses in PLS are tested using bootstrap resampling calculations, with the t statistic (t test) as the test statistic.
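The bootstrap resampling idea behind the PLS t-test can be sketched as follows. This is not the SmartPLS implementation: the data are synthetic placeholders and a simple OLS slope stands in for a PLS path coefficient.

```python
# Hedged sketch of bootstrap resampling for a path coefficient's t-statistic:
# resample respondents with replacement, re-estimate the coefficient, and form
# t = original estimate / bootstrap standard error.
import numpy as np

rng = np.random.default_rng(0)
n = 160
x = rng.normal(size=n)                        # stand-in predictor scores
y = 0.6 * x + rng.normal(scale=0.5, size=n)   # stand-in satisfaction scores

def slope(x, y):
    """Simple OLS slope, standing in for a PLS path coefficient."""
    return np.cov(x, y, bias=True)[0, 1] / np.var(x)

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)          # resample respondents
    boot.append(slope(x[idx], y[idx]))

boot = np.array(boot)
t_stat = slope(x, y) / boot.std(ddof=1)       # estimate / bootstrap SE
print(round(t_stat, 2))
```

With a true effect present, the resulting t-statistic comfortably exceeds the critical value, which is the same accept/reject logic applied to Table 4 later in the paper.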
**Results and Discussion**
**Table 1. Descriptive Statistics**
**Demographics Category** **Frequency** **Percentage**
Gender Male 74 46%
Female 86 54%
Age Under 20 years 2 1%
20-30 years 88 55%
30-40 years 56 35%
40-50 years 11 7%
50 years and above 3 2%
Domicile Jakarta 96 60%
Address Depok 46 29%
South Tangerang 13 8%
Other 5 3%
Education Senior High School 42 26%
Associate Degree 16 10%
Bachelor 94 59%
Master and above 8 5%
Profession Government employees 16 10%
Private employees 62 39%
Housewife 8 5%
Student 26 16%
Other 48 30%
Monthly IDR 1.000.000 – IDR 2.000.000 19 12%
Expenses IDR 2.000.000 – IDR 3.000.000 38 24%
IDR 3.000.000 – IDR 4.000.000 22 14%
IDR 4.000.000 – IDR 5.000.000 34 21%
More than IDR 5.000.000 47 29%
Source: Data Processing
Based on Table 1, the sample is fairly evenly distributed by gender, with 54% female respondents and 46% male respondents. The majority of respondents are still relatively
young with 55% aged between 20-30 years. Most of the respondents are those who live in the
city of Jakarta, with a percentage of 60% of the sample. The education level of the majority of
respondents is that they have a bachelor's degree, which is 59% of the sample. Most of the
respondents are those who work in the private sector with a percentage of 39% of the total
sample. Regarding monthly expenses, most of the respondents (29%) claimed to have spent
more than 5 million rupiah in one month.
In brief, the typical Pegadaian Digital user at the Pegadaian Pondok Labu Branch is a woman aged 20-30 years, domiciled in Jakarta, holding a bachelor's degree, working in the private sector, and spending more than 5 million rupiah per month.
**1.** **Validity and Reliability Test**
The next step is to assess the relationship between the indicators and the latent constructs in terms of validity and reliability. These aspects can be assessed from the measurement model's convergent validity, discriminant validity and composite reliability (Hammoud et al., 2018), as shown in Table 2.
Table 2 shows that all statement instruments based on loading factors have a value
of >0.5 or exceeding the recommended threshold (Ghozali, 2018). The lowest value is 0.782
on CS4 items and the highest value is 0.927 on RE4 items. Thus, all indicators in this study
have been declared valid or have met convergent validity.
**Table 2. Factor Loading, AVE and CR**

Customer Satisfaction (AVE = 0.783, CR = 0.947)
  I am satisfied with the transaction processing via Pegadaian Digital services (0.908)
  Pegadaian Digital service can speed up the transaction process (0.913)
  Pegadaian Digital service provides convenience and comfort in transactions (0.913)
  Transaction fees issued through the Pegadaian Digital service are cheaper than coming directly to the branch office (0.782)
  Overall, Pegadaian Digital service is better than my expectation (0.899)

Reliability (AVE = 0.825, CR = 0.950)
  Pegadaian Digital service is reliable and dependable (0.891)
  Pegadaian Digital service provides the exact service as promised (0.916)
  Pegadaian Digital service performs for me the service right on the first time (0.899)
  Pegadaian Digital service can always complete their tasks accurately (0.927)

Efficiency (AVE = 0.775, CR = 0.954)
  The use of Pegadaian Digital service is time saving (0.871)
  The service delivered through the Pegadaian Digital service is quick (0.883)
  Pegadaian Digital service is easy to use (0.888)
  The language in the Pegadaian Digital service is easy to understand (0.903)
  The system in the Pegadaian Digital service provides clear instructions (0.879)
  The Pegadaian Digital system is flexible to interact with (0.858)

Security (AVE = 0.832, CR = 0.952)
  Pegadaian Digital service does not allow others to access my accounts (0.884)
  I feel safe when making transactions through the Pegadaian Digital service (0.925)
  Pegadaian Digital service is guaranteed to be safe from all fraud and hacking (0.917)
  Pegadaian Digital service provides high protection for transaction data and personal information (0.921)

Responsiveness (AVE = 0.804, CR = 0.954)
  Pegadaian Digital services are available 24/7 (0.888)
  Pegadaian Digital service is fast in responding to requests (0.904)
  Pegadaian Digital service is fast in solving problems (0.907)
  Pegadaian Digital service provides answers to your questions (0.915)
  I can talk to employees by telephone/directly at the branch office when a problem occurs (0.869)

Web Design (AVE = 0.777, CR = 0.965)
  The information on the Pegadaian Digital service is effective (0.881)
  The Pegadaian Digital service displays a visually pleasing design (0.887)
  The Pegadaian Digital service has no difficulties with making a payment online (0.839)
  The Pegadaian Digital service displays visually pleasing, easy to read content (0.905)
  The Pegadaian Digital service has a wide variety of products that interest me (0.867)
  The Pegadaian Digital service offers attractive bonuses or promotions (0.869)
  I can interact with the Pegadaian Digital service in order to get information tailored to my specific needs (0.910)
  When I use the Pegadaian Digital service, it doesn't take long to load (0.890)

Source: Data Processing
AVE values for all variables studied have a value of >0.5 or exceeding the
recommended limit (Ghozali, 2018). This means that all variables are declared valid. Based
on both tests it can be concluded that all instruments in this study are able to measure the
variables studied.
Table 2 shows that the Composite Reliability (CR) value for all variables is >0.7, exceeding the recommended threshold (Hair et al., 2014). This means that all statements related to customer satisfaction, reliability, efficiency, security, responsiveness and web design meet the reliability criteria. It can therefore be concluded that if similar research were conducted using the same instrument, the results would be consistent.
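As a check on the figures in Table 2, AVE and CR can be recomputed directly from the factor loadings using the standard formulas (AVE is the mean squared loading; CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²))). The loadings below are the Reliability items from Table 2.

```python
# Recompute AVE and composite reliability from the Reliability loadings.
loadings = [0.891, 0.916, 0.899, 0.927]  # Reliability construct, Table 2

ave = sum(l ** 2 for l in loadings) / len(loadings)
s = sum(loadings)
cr = s ** 2 / (s ** 2 + sum(1 - l ** 2 for l in loadings))

print(round(ave, 3), round(cr, 2))  # 0.825 0.95, matching Table 2
```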
**2.** **Coefficient of Determination Test (R[2])**
At this stage the structural model of the study was tested using the R square test.
Here are the results of the R square test in the table below:
**Table 3. R Square and R Square Adjusted**
**_R Square_** **_R Square Adjusted_**
Customer Satisfaction 0.869 0.865
Source: Data Processing
The adjusted R square of 0.865 for the customer satisfaction variable means that reliability, efficiency, security, responsiveness and web design jointly explain 86.5% of the variation in customer satisfaction, while the remaining 13.5% is explained by variables outside this study.
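The adjusted value in Table 3 can be reproduced from the plain R square using the standard adjustment, with n = 160 respondents and k = 5 predictors:

```python
# R2_adj = 1 - (1 - R2) * (n - 1) / (n - k - 1)
n, k, r2 = 160, 5, 0.869
r2_adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(r2_adj, 3))  # 0.865, matching Table 3
```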
**3.** **Hypothesis Testing**
Hypotheses in this study were tested using the statistical t test. The t-table value of 1.975 is obtained from the formula df = number of samples - number of variables, or df = N - K = 160 - 6 = 154, at an error level of 5% (0.05). The results of data processing for the significance test (t test) can be seen in Table 4.
**Table 4. T-Statistic**

Path                                      Original Sample (O)   T Statistic (|O/STDEV|)   P Value
Reliability -> Customer Satisfaction            0.567                  4.940               0.000
Efficiency -> Customer Satisfaction             0.173                  1.314               0.189
Security -> Customer Satisfaction               0.072                  0.846               0.398
Responsiveness -> Customer Satisfaction         0.077                  0.709               0.478
Web design -> Customer Satisfaction             0.086                  0.761               0.447
Source: Data Processing
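The decision rule applied throughout the discussion (accept a hypothesis when its t-count exceeds the t-table value of 1.975) can be summarized over the Table 4 statistics:

```python
# Apply the accept/reject rule from the text to the Table 4 t-statistics.
T_TABLE = 1.975  # critical value for df = 160 - 6 = 154, two-tailed 5%

t_stats = {
    "Reliability": 4.940,
    "Efficiency": 1.314,
    "Security": 0.846,
    "Responsiveness": 0.709,
    "Web design": 0.761,
}

supported = [name for name, t in t_stats.items() if t > T_TABLE]
print(supported)  # ['Reliability'] -- only H1 is accepted
```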
The results show that reliability has an influence on customer satisfaction, meaning reliability is one of the elements that can cause customer satisfaction, especially in Pegadaian Digital services. This statement is supported by the t-statistics test, with t-count 4.940 > t-table 1.975 and a P value of 0.000 < 0.05. This means that hypothesis H1 is accepted.
The result of the original value sample reliability has a positive relationship to
customer satisfaction, this means that if the Pegadaian Digital service is more reliable then
the level of satisfaction with the service will increase. These results provide empirical
evidence that the reliability of digital banking services such as speed, convenience, cost and
conformity with expectations are indicators that can improve customer satisfaction (Huda &
Wahyuni, 2019). Research by (Haq & Awan, 2020) shows that reliability increases satisfaction in digital banking, especially during the COVID-19 pandemic, when many customers switched from conventional channels to digital banking services in order to meet their needs during the outbreak.
The results of this study are in line with the research of (Fida et al., 2020), (Hadiyati, 2015) and (Toor et al., 2016); reliability is an important element of service quality (Parasuraman et al., 1985). The finding that reliability has the strongest influence on customer satisfaction confirms previous research showing that people rely on stable digital banking services (Hammoud et al., 2018).
Efficiency has no effect on customer satisfaction; this statement is supported by the t-statistics test, with t-count 1.314 < t-table 1.975 and a P value of 0.189 > 0.05. This means that hypothesis H2 is rejected.
The results explained that the ability of Pegadaian Digital services in carrying out tasks
properly, quickly and appropriately will not affect the level of customer satisfaction.
Perceived efficiency may not affect customer satisfaction because the majority of Pegadaian Digital users at the Pondok Labu Branch use the service to pay debts or credit. According to (Reading & Reynolds, 2001) and (Shohib, 2017), credit is the strongest socioeconomic predictor of depression.
Similarly, (Fitch et al., 2007) and (Renanita, 2013) state that people who have debts tend to have mental health problems compared to people who do not, and satisfaction will not arise if a service user is preoccupied with such a problem. This is in line with (Ahmad et al., 2019), who note that little is expected from personalized digital services attuned to special needs when compared with crisis conditions such as the COVID-19 pandemic.
Security has no effect on customer satisfaction; this statement is supported by the t-statistics test, with t-count 0.846 < t-table 1.975 and a P value of 0.398 > 0.05. This means that hypothesis H3 is rejected.
The results explained that the security implemented by The Pegadaian Digital service
in Pondok Labu Branch in the transaction process and in maintaining the confidentiality of
customer data will not affect the level of customer satisfaction.
Basically, the security implementation of the Pegadaian Digital service at the Pondok Labu Branch is good enough: reports of data leaks or transaction fraud are rare, which is enough to make service users feel safe. As a result, users may take the security facilities for granted, so security does not register as a driver of satisfaction (Dewi et al., 2019).
Responsiveness has no effect on customer satisfaction; this statement is supported by the t-statistics test, with t-count 0.709 < t-table 1.975 and a P value of 0.478 > 0.05. This means that hypothesis H4 is rejected.
The results indicate that the readiness of the Pegadaian Digital service at the Pondok Labu Branch to help customers overcome problems quickly does not affect customer satisfaction levels. In addition, most users rarely experience obstacles when using Pegadaian Digital services, so even a quick response has not had an impact on customer satisfaction levels (Stevano et al., 2018).
Viewed from the characteristics of the respondents, users of Pegadaian Digital services are dominated by those between the ages of 20 and 30. According to KemenPPPA, one characteristic of millennials is that they do not pursue satisfaction with a service: millennials do not dwell on something that makes things difficult for them, and instead tend to leave anything they think can hinder their development.
Web design has no effect on customer satisfaction; this statement is supported by the t-statistics test, with t-count 0.761 < t-table 1.975 and a P value of 0.447 > 0.05. This means that hypothesis H5 is rejected.
The results explain that the features and appearance of Pegadaian Digital, which help customers by providing an easier and more concise transaction flow, neither increase nor decrease customers' perception of satisfaction.
This may be because the customers' main purpose in using Pegadaian Digital services is to complete transactions; such users usually pay less attention to the design or features of the service because they are busy or less concerned. For some people time is very important, so they will not linger in the process of a service. This is also evidenced by the characteristics of Pegadaian Digital users, who are dominated by workers in both the private sector and the civil service. This finding is in line with the research of (Tatang & Mudiantono, 2017).
**Conclusion**
The findings suggest that one of the five hypotheses in the study is supported by the data. Reliability, as a service quality variable, contributed the most to customer satisfaction in this study. It can be concluded that reliability is proven to increase customer satisfaction: the more reliable the service, the higher the customer satisfaction, especially during the COVID-19 pandemic, when safety is a top priority. For customers, reliable service is enough to make them feel satisfied, so factors such as features or appearance receive less attention. Reliability in this sense can also encompass the responsiveness, efficiency and security of a service. If customers already consider the service reliable, then the performance of the Pegadaian Digital service is in accordance with the expectations of the customers of the Pegadaian Pondok Labu Branch.
**Bibliography**
Ahmad, S. Z., Ahmad, N., & Papastathopoulos, A. (2019). Measuring service quality and
customer satisfaction of the small- and medium-sized hotels (SMSHs) industry: lessons
from United Arab Emirates (UAE). _Tourism_ _Review,_ _74(3),_ 349–370.
[https://doi.org/10.1108/TR-10-2017-0160](https://doi.org/10.1108/TR-10-2017-0160)
Alsajjan, B., & Dennis, C. (2010). Internet banking acceptance model: Cross-market
examination. _Journal_ _of_ _Business_ _Research,_ _63(9–10),_ 957–963.
[https://doi.org/10.1016/j.jbusres.2008.12.014](https://doi.org/10.1016/j.jbusres.2008.12.014)
Amin, M. (2016). Internet banking service quality and its implication on e-customer
satisfaction and e-customer loyalty. International Journal of Bank Marketing, 34(1), 1–
[5. https://doi.org/10.1108/IJBM-10-2014-0139](https://doi.org/10.1108/IJBM-10-2014-0139)
Barquin, S., HV, V., & Shrikhande, D. (2019). Digital banking in Indonesia: Building loyalty
[and generating growth. McKinsey & Company, February, 6.](https://scholar.google.com/scholar?hl=id&as_sdt=0%2C5&q=Barquin%2C+S.%2C+HV%2C+V.%2C+%26+Shrikhande%2C+D.+%282019%29.+Digital+banking+in+Indonesia%3A+Building+loyalty+and+generating+growth.+&btnG=)
Chan, C. K., Fang, Y., & Li, H. (2019). Relative advantage of interactive electronic banking
adoption by premium customers: The moderating role of social capital. _Internet_
_[Research, 30(2), 357–379. https://doi.org/10.1108/INTR-06-2018-0280](https://doi.org/10.1108/INTR-06-2018-0280)_
Dewi, K. I. L., Yulianthini, N. N., & Telagawathi, N. L. W. S. (2019). Pengaruh Dimensi
Kualitas Pelayanan Terhadap Kepuasan Pelanggan Pengguna Bpjs Kesehatan Di Kota
[Singaraja. Manajemen, 5(2), 82–92. https://doi.org/10.23887/bjm.v5i2.22011](https://doi.org/10.23887/bjm.v5i2.22011)
Fida, B. A., Ahmed, U., Al-Balushi, Y., & Singh, D. (2020). Impact of Service Quality on
Customer Loyalty and Customer Satisfaction in Islamic Banks in the Sultanate of
[Oman. SAGE Open, 10(2). https://doi.org/10.1177/2158244020919517](https://doi.org/10.1177/2158244020919517)
Fitch, C., Chaplin, R., Trend, C., & Collard, S. (2007). Debt and mental health: The role of
psychiatrists. _Advances_ _in_ _Psychiatric_ _Treatment,_ _13(3),_ 194–202.
[https://doi.org/10.1192/apt.bp.106.002527](https://doi.org/10.1192/apt.bp.106.002527)
Ghozali, I. (2018). _Aplikasi Analisis Multivariate dengan Program IBM SPSS 25. Badan_
Penerbit Universitas Diponogoro.
Hadiyati, R. (2015). Pengaruh Kualitas Pelayanan Pt. Garuda Indonesia Airlines Terhadap
[Kepuasan Konsumen. Jurnal EMOR, 1(1), 35–55.](https://scholar.google.com/scholar?hl=id&as_sdt=0%2C5&q=Hadiyati%2C+R.+%282015%29.+Pengaruh+Kualitas+Pelayanan+Pt.+Garuda+Indonesia+Airlines+Terhadap+Kepuasan+Konsumen.+Jurnal+EMOR%2C+1%281%29%2C+35%E2%80%9355.&btnG=)
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2014). Multivariate Data Analysis
(MVDA). In Pharmaceutical Quality by Design: A Practical Approach (Seventh Ed).
[https://doi.org/10.1002/9781118895238.ch8](https://doi.org/10.1002/9781118895238.ch8)
Hamilton, M. (2021). Digital Banking Trend in The New Normal Era.
Hammoud, J., Bizri, R. M., & El Baba, I. (2018). The Impact of E-Banking Service Quality on
Customer Satisfaction: Evidence From the Lebanese Banking Sector. SAGE Open, 8(3).
[https://doi.org/10.1177/2158244018790633](https://doi.org/10.1177/2158244018790633)
Haq, I. U., & Awan, T. M. (2020). Impact of e-banking service quality on e-loyalty in pandemic
times through interplay of e-satisfaction. _Vilakshan - XIMB Journal of Management,_
_[17(1/2), 39–55. https://doi.org/10.1108/xjm-07-2020-0039](https://doi.org/10.1108/xjm-07-2020-0039)_
Haverila, M., Haverila, K., & Arora, M. (2019). Comparing the service experience of satisfied
and non-satisfied customers in the context of wine tasting rooms using the SERVQUAL
model. _International_ _Journal of Wine Business Research,_ _32(2), 301–324._
[https://doi.org/10.1108/IJWBR-12-2018-0070](https://doi.org/10.1108/IJWBR-12-2018-0070)
Huda, A. N., & Wahyuni, S. (2019). Analisis Pengaruh Kualitas Layanan Internet Banking dan
Tingkat Kepuasan Terhadap Loyalitas Nasabah Pada PT Bank Rakyat Indonesia
(Persero) Tbk Kantor Cabang Pembantu Jamsostek Jakarta. ABFII Perbanas Jakarta,
_[7(1), 12. https://doi.org/10.32497/keunis.v7i1.1527](https://doi.org/10.32497/keunis.v7i1.1527)_
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality
and its implications for future research. _Journal of Marketing,_ _49(4), 41–50._
[https://doi.org/10.1177/002224298504900403](https://doi.org/10.1177/002224298504900403)
Priscillia, M., & Budiono, H. (2020). Prediksi Website Design Quality dan Service Quality
[terhadap Repurchase Intention Pada Pelanggan Shopee di Jakarta Dengan Customer](https://scholar.google.com/scholar?hl=id&as_sdt=0%2C5&q=Priscillia%2C+M.%2C+%26+Budiono%2C+H.+%282020%29.+Prediksi+Website+Design+Quality+dan+Service+Quality+terhadap+Repurchase+Intention+Pada+Pelanggan+Shopee+di+Jakarta+Dengan+Customer+Trust+Sebagai+Mediasi.&btnG=)
Trust Sebagai Mediasi. Jurnal Manajerial Dan Kewirausahaan, II(4), 1033–1043.
Reading, R., & Reynolds, S. (2001). Debt, social disadvantage and maternal depression. Social Science and Medicine, 53(4), 441–453. https://doi.org/10.1016/S0277-9536(00)00347-6
Renanita, T. (2013). Faktor-faktor Psikologis Perilaku Berhutang pada Karyawan Berpenghasilan Tetap. Jurnal Psikologi UGM, 40(1), 92–101. https://doi.org/10.22146/jpsi.7069
Rita, P., Oliveira, T., & Farisa, A. (2019). The impact of e-service quality and customer satisfaction on customer behavior in online shopping. Heliyon, 5(10), e02690. https://doi.org/10.1016/j.heliyon.2019.e02690
Rohaeni & Marwa, N. (2018). Kualitas Pelayanan terhadap Kepuasan Pelanggan. Jurnal Ecodemica, 2(2), 312–318.
Shohib, M. (2017). Sikap terhadap uang dan perilaku berhutang. Jurnal Ilmiah Psikologi Terapan, 3(1), 132–143. https://doi.org/10.22219/jipt.v3i1.2133
Stevano, Andajani, E., & Rahayu, S. (2018). Influence Service Quality To Customer Satisfaction and Customer Loyalty Using Self Service Technology: Internet Banking. 5th International Conference on Business, Economic and Social Sciences (ICBESS) 2018, 22.
Sugiyono. (2017). Metode Penelitian Kuantitatif, Kualitatif dan R&D. Alfabeta.
Tatang, M., & Mudiantono. (2017). The impact of website design quality, service quality, and enjoyment on repurchase intention through satisfaction and trust at Zalora. Diponegoro Journal Of Management, 6(5), 1–11.
-----
Tjiptono, F. (2019). Strategi Pemasaran (A. Diana (ed.); 1st ed.). ANDI.
Toor, A., Hunain, M., Hussain, T., Ali, S., & Shahid, A. (2016). The Impact of E-Banking on Customer Satisfaction: Evidence from Banking Sector of Pakistan. Journal of Business Administration Research, 5(2), 27–40. https://doi.org/10.5430/jbar.v5n2p27
Widiaty, I., Ana, Riza, L. S., Abdullah, A. G., & Mubaroq, S. R. (2020). Multiplatform application technology-based heutagogy on learning batik: A curriculum development framework. Indonesian Journal of Science and Technology, 5(1), 45–61. https://doi.org/10.17509/ijost.v5i1.18754
Yani, E., Lestari, A. F., Amalia, H., & Puspita, A. (2018). Pengaruh Internet Banking Terhadap Minat Nasabah Dalam Bertransaksi Dengan Technology Acceptance Model. Jurnal Informatika, 5(1), 34–42. https://doi.org/10.31311/ji.v5i1.2717
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.36418/jiss.v3i8.665?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.36418/jiss.v3i8.665, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYSA",
"status": "HYBRID",
"url": "https://jiss.publikasiindonesia.id/index.php/jiss/article/download/665/1295"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-08-23T00:00:00
|
[
{
"paperId": "dd1fffe6f1fcb2e37ced77495475f6262e9320ee",
"title": "STRATEGI PEMASARAN"
},
{
"paperId": "aba984655d76e8b571fd6300626490e98f89fe4c",
"title": "Impact of e-banking service quality on e-loyalty in pandemic times through interplay of e-satisfaction"
},
{
"paperId": "264888e71fc53c76782779cfbc3d34260564c4cc",
"title": "Prediksi Website Design Quality dan Service Quality terhadap Repurchase Intention Pada Pelanggan Shopee di Jakarta Dengan Customer Trust Sebagai Mediasi"
},
{
"paperId": "bd1a96e67e8c9722652a3eff2d4bbd723205be79",
"title": "Impact of Service Quality on Customer Loyalty and Customer Satisfaction in Islamic Banks in the Sultanate of Oman"
},
{
"paperId": "608e637f81af6d5995da099332374055957ce102",
"title": "Multiplatform Application Technology – Based Heutagogy on Learning Batik: A Curriculum Development Framework"
},
{
"paperId": "abcfd5214277eae26516a890894d16f8d7030ead",
"title": "The impact of e-service quality and customer satisfaction on customer behavior in online shopping"
},
{
"paperId": "530a821a9a6d908ea591c1de2ef0d427530e9a2e",
"title": "Comparing the service experience of satisfied and non-satisfied customers in the context of wine tasting rooms using the SERVQUAL model"
},
{
"paperId": "33ed8392d38f1d0de250281c5c7d963b26a4778a",
"title": "Relative advantage of interactive electronic banking adoption by premium customers"
},
{
"paperId": "6fb5610cce2b91a62493b7f627810a4afc228bf3",
"title": "Measuring service quality and customer satisfaction of the small- and medium-sized hotels (SMSHs) industry: lessons from United Arab Emirates (UAE)"
},
{
"paperId": "d54af3eebdfe06442d48a78a350eb43b9505c2e2",
"title": "Kualitas Pelayanan Terhadap Kepuasan Pelanggan"
},
{
"paperId": "bff631dacae269db183f98a8f81c482743f48de2",
"title": "INFLUENCE SERVICE QUALITY TO CUSTOMER SATISFACTION AND CUSTOMER LOYALTY USING SELF SERVICE TECHNOLOGY:INTERNET BANKING"
},
{
"paperId": "815303cad4440b687881a0af6baf4e6db3d33d6e",
"title": "The Impact of E-Banking Service Quality on Customer Satisfaction: Evidence From the Lebanese Banking Sector"
},
{
"paperId": "bf451cd5570a44b6fd80b330b6f526ae1d0047e6",
"title": "Pengaruh Internet Banking Terhadap Minat Nasabah Dalam Bertransaksi Dengan Technology Acceptance Model"
},
{
"paperId": "e837a307796d048fbca77eb8d829df586a00a541",
"title": "Multivariate Data Analysis (MVDA)"
},
{
"paperId": "5aa5b67ef3e6f008554262a1696767881b483fdb",
"title": "THE IMPACT OF WEBSITE DESIGN QUALITY, SERVICE QUALITY, AND ENJOYMENT ON REPURCHASE INTENTION THROUGH SATISFACTION AND TRUST AT ZALORA"
},
{
"paperId": "b8322369908a337cae78d4cbba46d088fbddf7c2",
"title": "PENGARUH KUALITAS PELAYANAN PT. GARUDA INDONESIA AIRLINES TERHADAP KEPUASAN KONSUMEN"
},
{
"paperId": "aa2f0e96e72e47e7360870013c81ebeeffa56374",
"title": "The Impact of E-Banking on Customer Satisfaction: Evidence from Banking Sector of Pakistan"
},
{
"paperId": "4aa0dab04feb644271a55cd5e6ab35bb9b5fce15",
"title": "Internet banking service quality and its implication on e-customer satisfaction and e-customer loyalty"
},
{
"paperId": "a662b4879b84a09b0f583e40e61338bfcca29a13",
"title": "SIKAP TERHADAP UANG DAN PERILAKU BERHUTANG"
},
{
"paperId": "abbabc85d324273c55d36e355a8ff874d6fae2bf",
"title": "Metode Penelitian Kuantitatif Kualitatif dan R&D"
},
{
"paperId": "7fc41430671285803065be373fb74d2ce409f818",
"title": "Faktor-faktor Psikologis Perilaku Berhutang pada Karyawan Berpenghasilan Tetap"
},
{
"paperId": "f939709b2d79de25a7db01e7e51ec32778a87ab9",
"title": "Analisis Pengaruh Kualitas Layanan Internet Banking dan Tingkat Kepuasan Terhadap Loyalitas Nasabah Pada PT Bank Rakyat Indonesia (Persero) Tbk Kantor Cabang Pembantu Jamsostek Jakarta"
},
{
"paperId": "ed0fd0f25a440d3cd070aa7d409a799d59c47b4c",
"title": "Internet banking acceptance model: Cross-market examination"
},
{
"paperId": "0244d4250b5912cd21e4ac49bb3c6889956fd8a2",
"title": "Debt and mental health: the role of psychiatrists"
},
{
"paperId": "88abcfbd1d427a5b5e7944c4bb0a3af8cce23b9b",
"title": "Debt, social disadvantage and maternal depression."
},
{
"paperId": "43ab9dd7aa318b8cdd6790b95667e8dc930ed342",
"title": "A Conceptual Model of Service Quality and Its Implications for Future Research"
},
{
"paperId": "1a2861a8fb4278ba61ea21814da4ec00b21a96cb",
"title": "Aplikasi Analisis Multivariate dengan Program IBM SPSS 25 edisi 9"
},
{
"paperId": null,
"title": "Digital banking in Indonesia: Building loyalty and generating growth"
},
{
"paperId": null,
"title": "Pengaruh Dimensi Kualitas Pelayanan Terhadap Kepuasan Pelanggan Pengguna Bpjs Kesehatan Di Kota Singaraja"
},
{
"paperId": null,
"title": "Digital Banking Trend in The New Normal Era"
}
] | 9,743
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Linguistics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/03013e291fb3192b286147f5bdb5770e434f91b2
|
[
"Computer Science"
] | 0.862956
|
Do Language Models Plagiarize?
|
03013e291fb3192b286147f5bdb5770e434f91b2
|
The Web Conference
|
[
{
"authorId": "2116713871",
"name": "Jooyoung Lee"
},
{
"authorId": "145535348",
"name": "Thai Le"
},
{
"authorId": "7557913",
"name": "Jinghui Chen"
},
{
"authorId": "2158951945",
"name": "Dongwon Lee"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Web Conf",
"WWW"
],
"alternate_urls": null,
"id": "e07422f9-c065-40c3-a37b-75e98dce79fe",
"issn": null,
"name": "The Web Conference",
"type": "conference",
"url": "http://www.iw3c2.org/"
}
|
Past literature has illustrated that language models (LMs) often memorize parts of training instances and reproduce them in natural language generation (NLG) processes. However, it is unclear to what extent LMs “reuse” a training corpus. For instance, models can generate paraphrased sentences that are contextually similar to training samples. In this work, therefore, we study three types of plagiarism (i.e., verbatim, paraphrase, and idea) among GPT-2 generated texts, in comparison to its training data, and further analyze the plagiarism patterns of fine-tuned LMs with domain-specific corpora which are extensively used in practice. Our results suggest that (1) three types of plagiarism widely exist in LMs beyond memorization, (2) both size and decoding methods of LMs are strongly associated with the degrees of plagiarism they exhibit, and (3) fine-tuned LMs’ plagiarism patterns vary based on their corpus similarity and homogeneity. Given that a majority of LMs’ training data is scraped from the Web without informing content owners, their reiteration of words, phrases, and even core ideas from training sets into generated texts has ethical implications. Their patterns are likely to exacerbate as both the size of LMs and their training data increase, raising concerns about indiscriminately pursuing larger models with larger training corpora. Plagiarized content can also contain individuals’ personal and sensitive information. These findings overall cast doubt on the practicality of current LMs in mission-critical writing tasks and urge more discussions around the observed phenomena. Data and source code are available at https://github.com/Brit7777/LM-plagiarism.
|
## Do Language Models Plagiarize?
### Jooyoung Lee
##### jfl5838@psu.edu Penn State University University Park, PA, USA
### Jinghui Chen
##### jzc5917@psu.edu Penn State University University Park, PA, USA
#### ABSTRACT
Past literature has illustrated that language models (LMs) often
_memorize parts of training instances and reproduce them in natural_
language generation (NLG) processes. However, it is unclear to what
extent LMs “reuse” a training corpus. For instance, models can generate paraphrased sentences that are contextually similar to training
samples. In this work, therefore, we study three types of plagiarism
(i.e., verbatim, paraphrase, and idea) among GPT-2 generated texts,
in comparison to its training data, and further analyze the plagiarism
patterns of fine-tuned LMs with domain-specific corpora which are
extensively used in practice. Our results suggest that (1) three types
of plagiarism widely exist in LMs beyond memorization, (2) both
size and decoding methods of LMs are strongly associated with the
degrees of plagiarism they exhibit, and (3) fine-tuned LMs’ plagiarism patterns vary based on their corpus similarity and homogeneity.
Given that a majority of LMs’ training data is scraped from the Web
_without informing content owners, their reiteration of words, phrases,_
and even core ideas from training sets into generated texts has ethical implications. Their patterns are likely to exacerbate as both
the size of LMs and their training data increase, raising concerns
about indiscriminately pursuing larger models with larger training
corpora. Plagiarized content can also contain individuals’ personal
and sensitive information. These findings overall cast doubt on the
practicality of current LMs in mission-critical writing tasks and urge
more discussions around the observed phenomena. Data and source
_code are available at https://github.com/Brit7777/LM-plagiarism._
### Thai Le
##### thaile@olemiss.edu University of Mississippi Oxford, MS, USA
### Dongwon Lee
##### dongwon@psu.edu Penn State University University Park, PA, USA
**ACM Reference Format:**
Jooyoung Lee, Thai Le, Jinghui Chen, and Dongwon Lee. 2023. Do Language
Models Plagiarize?. In Proceedings of the ACM Web Conference 2023 (WWW
_’23), May 1–5, 2023, Austin, TX, USA. ACM, New York, NY, USA, 12 pages._
https://doi.org/\@acmDOI
#### CCS CONCEPTS
- Computing methodologies → **Natural language generation.**
#### KEYWORDS
Language Models, Natural Language Generation, Plagiarism
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.
_WWW ’23, May 1–5, 2023, Austin, TX, USA_
© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
ACM ISBN 978-1-4503-9416-1/23/04...$15.00
https://doi.org/\@acmDOI
#### 1 INTRODUCTION
Language Models (LMs) have become core elements of Natural
Language Processing (NLP) solutions, excelling in a wide range of
tasks such as natural language generation (NLG), speech recognition, machine translation, and question answering. The development
of large-scale text corpora (generally scraped from the Web) has
enabled researchers to train increasingly large-scale LMs. Especially,
large-scale LMs have demonstrated unprecedented performance on
NLG such that LM-generated texts routinely show more novel and
interesting stories than human writings do [35], and the distinction
between machine-authored and human-written texts has become
non-trivial [52, 53]. As a result, there has been a significant increase
in the use of LMs in user-facing products and critical applications.
Concerning the fast-growing adoption of language technologies, it
is important to educate citizens and practitioners about the potential
ethical, social, and privacy harms of these LMs, as well as strategies
and techniques for preventing LMs from adversely impacting people.
A body of recent studies has attempted to identify such hazards by
examining LMs’ capabilities in generating biased and hateful content
[41], spreading misinformation [3], and violating users’ privacy [12].
Particularly, it was shown that machine-generated texts can include
individuals’ private information such as phone number and email
address due to LMs’ over-memorization of training samples [11].
Some may argue that, since one’s private information was publicly
available in the first place, it is not a problem for LMs to memorize
and emit it in the generated texts. Still, the current data collection
processes (for building training corpora) do not consider how that
particular piece of information has been originally released [9]. For
example, it is possible for malicious attackers to hack an individual’s
private data and intentionally post it online. While training LMs on
corpora explicitly intended for public use with creators’ consents is
ideal, it is challenging to achieve in practice.
Note that over-memorization can be perceived as a threat to the
authorship and originality of training instances, as training sets for
LMs are routinely downloaded from the Internet without the explicit approval of content owners [9]. This behavior is known as **plagiarism**, i.e., the act of exploiting another person's work or idea without referencing the individual as its author [4]. As shown in
-----
WWW ’23, May 1–5, 2023, Austin, TX, USA Lee et al.
| Type | Machine-Written Text | Training Text |
|---|---|---|
| Verbatim | *** is the second amendment columnist for Breitbart news and host of bullets with ***, a Breitbart news podcast. [...] (Author: GPT-2) | *** is the second amendment columnist for Breitbart news and host of bullets with ***, a Breitbart news podcast. [...] |
| Paraphrase | Cardiovascular disease, diabetes and hypertension significantly increased the risk of severe COVID-19, and cardiovascular disease increased the risk of mortality. (Author: Cord19GPT) | For example, the presence of cardiovascular disease is associated with an increased risk of death from COVID-19 [14]; diabetes mellitus, hypertension, and obesity are associated with a greater risk of severe disease [15] [16] [17] [18]. |
| Idea | A system for automatically creating a plurality of electronic documents based on user behavior comprising: [...] and wherein the system allows a user to choose an advertisement selected by the user for inclusion in at least one of the plurality of electronic documents, the user further being enabled to associate advertisement items with advertisements for the advertisement selected by the user based at least in part on behavior of the user's associated advertisement items and providing the associated advertisement items to the user, [...]. (Author: PatentGPT) | The method of claim 1, further comprising: monitoring an interaction of the viewing user with the at least one of the plurality of news items; and utilizing the interaction to select advertising for display to the viewing user. |

**Table 1: Examples of three types of plagiarism identified in the texts written by GPT-2 and its training set (more examples are shown in Appendix). Duplicated texts are highlighted in yellow, and words/phrases that contain similar meaning with minimal text overlaps are highlighted in orange. [...] indicates the texts omitted for brevity. Personally identifiable information (PII) was masked as ***.**
Table 1, for instance, plagiarized content written by a machine may
contain not only explicit text overlap but also semantically similar
information. Existing memorization studies on LMs have focused
only on the memorized sequences that are identical to training sequences [12, 30, 59]. This motivates our main inquiry of this work:
_To what extent (not limited to memorization) do LMs exploit phrases_
_or sentences from their training samples?_
On the other hand, the fine-tuning paradigm is widely used in
LMs for downstream NLP tasks. Specifically, LMs are initially pretrained on a massive and diverse corpus and then fine-tuned using
a smaller task-specific dataset. This enables LMs to create texts in
specific domains such as poetry [16] and song lyrics [49]. These
tasks require creativity and authenticity, at which LMs are prone to fail. Therefore, the generation outputs of LMs have great moral and
ethical implications. Despite increasing efforts to comprehend the
over-memorization of pre-trained LMs, to the best of our knowledge,
no prior literature has studied the memorizing behavior of fine-tuned LMs on both pre-training and fine-tuning corpora.
To fill this void in our understanding of the limits of LMs, in this paper, we examine the plagiarizing behaviors of pre-trained and fine-tuned LMs. Our study is guided by two research questions: (RQ1)
**Do pre-trained LMs plagiarize? and (RQ2) Do fine-tuned LMs**
**plagiarize?** Specifically, we use OpenAI’s GPT-2 [44] for studying
these inquiries.[1] We first construct a novel pipeline for automated
plagiarism detection and use it to identify three types of plagiarism
(i.e., verbatim, paraphrase, idea plagiarism) from passages generated by pre-trained GPT-2 with different combinations of model
sizes and decoding methods. For RQ2, three GPT-2 models are
fine-tuned using datasets in scholarly writing and legal domains,
which are later used for comparing plagiarism from pre-training and
fine-tuning corpora.
Our results demonstrate that machine-generated texts do plagiarize from training samples, across all three types of plagiarism. We
discover three attributes that impact LMs’ plagiarism: 1) model
_size: larger models plagiarize more from a training set than smaller_
1We chose GPT-2 (instead of more recent LMs such as GPT-3) as it is the latest LM
whose replicated training corpus is available. Also, GPT-2 is very popular, ranked as
one of the most downloaded LMs from Hugging Face.
models; 2) decoding methods: decoding the outputs after limiting
the output space via top-p and top-k strategies are positively related
to heightened plagiarism levels as opposed to a raw vocabulary
distribution; 3) corpus similarity and homogeneity: a higher corpus
similarity level across pre-training and fine-tuning corpora, as well
as within fine-tuning corpora, enhances the degree of plagiarism for
a fine-tuned model.
In summary, our work makes the following contributions:
- By leveraging a BERT-based classifier together with Named
Entity Recognition (NER) on top of Sanchez-Perez et al. [48]’s
plagiarism detection model, we empirically highlight that LMs
do more than copy and paste texts in a training set; they further rephrase sentences or mimic ideas from other writings without
properly crediting the source.
- To the best of our knowledge, this is the first work to systematically study the plagiarizing behavior of fine-tuned LMs. Specifically, we find that restricting intra- and inter-corpus similarity
can considerably decrease the rate of plagiarism.
- We provide a deeper understanding of the factors that influence
LMs’ plagiarizing patterns such as model size, decoding strategies, and a fine-tuning corpus. Our results add value to the ongoing discussion around memorization in modern LMs and pave
the way for future research into designing robust, reliable, and
responsible LMs.
#### 2 RELATED WORK
#### 2.1 Memorization in LMs
There is a growing body of literature that aims to study the memorization of neural LMs by recovering texts in the training corpus [31, 47]
or extracting artificially injected canaries [37, 58]. Carlini et al. [12]
and Brown et al. [9] emphasized that data memorization can intentionally or unintentionally lead to sensitive information leakage
from a model’s training set. Meanwhile, recent studies [25, 30] have
shown that training data of LMs tend to contain a large number of
near-duplicates, and overlapping phrases included in near-duplicates
significantly account for memorized text sequences. In order to distinguish rare but memorized texts from trivial examples, Zhang et al.
-----
[59] presented a notion of counterfactual memorization which measures a difference in the expected performance of two models trained
with or without a particular training sample.
Still, none of these works have explored beyond text overlap.
The most relevant research to ours is McCoy et al. [35], which
analyzed the novelty of machine-generated texts. Although the authors
found 1,000 word-long duplicated passages from a training set,
they concluded that neural LMs can integrate familiar parts into
novel content, rather than simply copying training samples. However,
because they did not directly compare identified novel content with
training samples, the level of plagiarism is uncertain.
#### 2.2 Automatic Plagiarism Detection
Automated extrinsic plagiarism detection, in general, can be divided
into two subtasks: document retrieval and text alignment. While document retrieval focuses on fetching all documents that potentially
have plagiarized an existing document, the text alignment subtask
detects the location and content of plagiarized texts. Alzahrani [6]
retrieved candidate documents that share exactly copied sequences
and computed the similarity between overlapping 8-grams. There
are diverse ways to measure text similarity with segmented document pairs. For example, Küppers and Conrad [27] calculated the
Dice coefficient between 250 character chunks of passage pairs, and
Shrestha and Solorio [50] implemented the Jaccard similarity with
n-grams.
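The chunk- and n-gram-based similarity measures cited here (Dice coefficient, Jaccard similarity) are simple set-overlap statistics. A minimal stdlib-only sketch over word n-gram sets (the specific 8-gram and 250-character chunking choices of the cited systems are omitted):

```python
def ngrams(text, n):
    """Set of word n-grams in a lowercased, whitespace-tokenized text."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity: |A ∩ B| / |A ∪ B| over n-gram sets."""
    A, B = ngrams(a, n), ngrams(b, n)
    if not A and not B:
        return 1.0
    return len(A & B) / len(A | B)

def dice(a, b, n=3):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) over n-gram sets."""
    A, B = ngrams(a, n), ngrams(b, n)
    if not A and not B:
        return 1.0
    return 2 * len(A & B) / (len(A) + len(B))
```

Both scores are 1.0 for identical passages and fall toward 0.0 as shared n-grams disappear; Dice weights the overlap more heavily than Jaccard for partially similar pairs.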
More recently, there have been continuous efforts in incorporating
word embedding and advanced machine learning or deep learning
models for plagiarism detection. Agarwal et al. [2] used Convolutional Neural Network (CNN) to obtain the local region information
from n-grams and applied Recurrent Neural Network (RNN) to capture the long-term dependency information. Similarly, Altheneyan
and Menai [5] viewed the task as a classification problem and developed a support vector machine (SVM) classifier using several
lexical, syntactic, and semantic features. In our proposed method, we
combine conventional similarity measurements and state-of-the-art
models to maximize the detection performance.
#### 3 PLAGIARISM: DEFINITION AND DETECTION
#### 3.1 Taxonomy of Plagiarism
Plagiarism occurs when any content including text, source code, or
audio-visual content is reused without permission or citation from
an author of the original work [14, 40]. It has been a longstanding problem, especially in educational and research institutions or
publishers, given the availability of digital artifacts [13]. Plagiarism
can severely damage academic integrity and even hurt individuals’
reputation and morality [18]. To detect such activities, it is necessary
to have extensive knowledge about plagiarism forms and classes.
In this work, we focus on the three most commonly studied plagiarism types: verbatim plagiarism, paraphrase plagiarism, and idea
plagiarism. Verbatim plagiarism, the most naive approach, is to directly copy segments of others' documents and paste them into one's own writing [17]. To make plagiarism less obvious, one may incorporate paraphrase plagiarism by replacing original words with synonyms or rearranging word order [7].
Similarly, back translation, using two independent translators to
translate sentences back and forth, is common in generating paraphrases. Lastly, reuse of the core idea from the original content, also
known as idea plagiarism, is a challenging case for automatic
detection due to limited lexical and syntactic similarities. Hence,
existing literature (e.g., Gupta et al. [21], Vani and Gupta [54]) specified the task to capture whether a document embeds a summary of
another document. While paraphrase plagiarism targets sentence-to-sentence transformations, idea plagiarism reads a chunk of the
content and condenses its main information into fewer sentences
(or vice versa). In essence, in this work, we adopt the following
definition of three plagiarism types:
- Verbatim plagiarism: exact copies of words or phrases without
transformation.
- Paraphrase plagiarism: synonymous substitution, word reordering, and/or back translation.
- Idea plagiarism: representation of core content in an elongated
form.
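Of the three types, verbatim plagiarism is the most mechanical to operationalize: a sufficiently long exact character overlap between two documents. A minimal dynamic-programming sketch is shown below (the 256-character minimum used later in this paper appears here as a parameter; this is an illustration, not the paper's detection pipeline):

```python
def longest_common_substring(a, b):
    """Return the longest exact character overlap between strings a and b.
    Classic DP over character positions, O(len(a) * len(b)) time."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the match ending at (i, j)
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]

def is_verbatim(a, b, min_chars=256):
    """Flag a pair as verbatim plagiarism if the exact overlap is long enough."""
    return len(longest_common_substring(a, b)) >= min_chars
```

Paraphrase and idea plagiarism resist such exact matching, which is why the pipeline described next layers semantic models on top of surface similarity.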
#### 3.2 Automatic Detection of Plagiarism
In this section, we introduce a two-step approach for automated
plagiarism detection. Suppose we have 𝑛 documents in a corpus
𝐷 = {𝑑1, 𝑑2, ..., 𝑑𝑛} and a query document 𝑑𝑞. The goal is to identify a pair of “plagiarized” text segments (𝑠1, 𝑠2) such that 𝑠1 (resp. 𝑠2) is a text segment within a document 𝑑𝑖 ∈ 𝐷 (resp. 𝑑𝑞).
**Step 1 (Finding Top-𝑛′ Candidate Documents): First, for the** given query document 𝑑𝑞, we aim to quickly narrow down to the top-𝑛′ documents (out of 𝑛 documents, where 𝑛′ ≪ 𝑛) which are likely to contain plagiarized pieces of texts. To do this, we utilize a document similarity score as a proxy for plagiarism. Since recent LMs are generally trained on gigantic corpora, it is non-trivial to store them locally and compute a pair-wise document similarity. Hence, we implement a search engine using Elasticsearch[2], an open-source search engine built on Apache Lucene that provides a distributed RESTful search service with a fast response time. After storing the entire set of training documents 𝐷 in Elasticsearch, using a machine-generated document as the query document 𝑑𝑞, we retrieve the top-𝑛′ most-similar documents. Elasticsearch utilizes the Okapi-BM25 algorithm [46], a popular bag-of-words ranking function, by default. We used 𝑛′ = 10 in experiments for the sake of time efficiency.[3]
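The Okapi BM25 ranking that Elasticsearch applies in this step can be reproduced in a few lines of plain Python. The following is a simplified stdlib-only sketch of standard BM25 (with the usual free parameters k1 and b), not the distributed search service used in the paper:

```python
import math
from collections import Counter

def bm25_rank(docs, query, k1=1.5, b=0.75):
    """Score tokenized documents against a tokenized query with Okapi BM25
    and return document indices sorted by descending relevance."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N          # average document length
    df = Counter()                                  # document frequency per term
    for d in docs:
        df.update(set(d))
    scores = []
    for idx, d in enumerate(docs):
        tf = Counter(d)                             # term frequency in this doc
        score = 0.0
        for term in query:
            if df[term] == 0:
                continue                            # term absent from the corpus
            idf = math.log(1 + (N - df[term] + 0.5) / (df[term] + 0.5))
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append((score, idx))
    return [idx for _, idx in sorted(scores, reverse=True)]
```

Taking the first 𝑛′ indices of the returned ranking mirrors the top-𝑛′ candidate retrieval described above, just without Elasticsearch's inverted-index machinery.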
**Step 2 (Finding Plagiarized Text Pairs and Plagiarism Type):**
Next, using the identified 𝑛′ candidates {𝑑1, 𝑑2, ..., 𝑑𝑛′} for the query
document 𝑑𝑞, we aim to find plagiarized text pairs (𝑠1, 𝑠2) such that
_𝑠2 is one of three types of plagiarism against 𝑠1. For this task, we_
exploit text alignment algorithms that locate and extract most-similar
contiguous text sequences between two given documents. Such text
alignment algorithms are applicable to various tasks such as text-reuse detection [51] and translation alignment [33]. In particular,
we employ the improved version of the winning method at the
plagiarism detection competition of PAN 2014.[4] Following, we
2https://www.elastic.co/elasticsearch/
3We performed a post-hoc analysis with a smaller (𝑛′ = 5) and a larger value (𝑛′ = 30) of 𝑛′ using GPT-2 xl to gauge its potential effects on identified plagiarism rates. The results showed a marginal difference (e.g., 1.46% (𝑛′ = 5) vs. 1.54% (𝑛′ = 30) for the temperature setting), indicating that the choice of the 𝑛′ value does not drastically influence our findings.
4https://pan.webis.de/clef14/pan14-web/text-alignment.html
-----
| Scores | PanDataset: Verbatim | PanDataset: Paraphrase | PanDataset: Idea | GptPlagiarismDataset: Verbatim | GptPlagiarismDataset: Paraphrase | GptPlagiarismDataset: Idea |
|---|---|---|---|---|---|---|
| Precision | 0.995 | 1.00 | 1.00 | 0.96 | 0.846 | 0.99 |
| Recall | 0.986 | 0.723 | 0.412 | 0.87 | 0.785 | 0.3 |

**Table 2: Evaluation results of our plagiarism detection pipeline. For PanDataset, we perform the evaluation in a binary classification setting (e.g., verbatim plagiarism vs. no plagiarism). Since GptPlagiarismDataset does not take into account document pairs without plagiarism, we adopt a multi-nomial classification setting (e.g., verbatim plagiarism vs. paraphrase/idea plagiarism).**
explain details on Sanchez-Perez et al. [48] and our improvement
strategies.
**Current Approach (Sanchez-Perez et al. [48]).** Their method consists of five steps: (1) text pre-processing (lowercasing all characters, tokenizing, and stemming); (2) obfuscation-type identification (verbatim/random/translation/summary obfuscation); (3) seeding (deconstructing long passages into smaller segments and finding candidate pairs through sentence-level similarity measurement given two documents); (4) extension (forming larger text fragments that are similar via clustering); and (5) filtering (removing overlapping and short plagiarized fragments). In summary, they transform the suspicious and source sentences into term frequency–inverse document frequency vector weights and then calculate the similarity between the sentence pairs using the Dice coefficient and cosine measure. Adaptive parameter selection is achieved by testing two settings recursively for the summary obfuscation corpus and the other three corpora.
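The core of the seeding step (sentences represented as tf-idf weight vectors and compared with the cosine and Dice measures) can be sketched in plain Python. This is a minimal illustration only; the adaptive parameter selection, extension, and filtering stages are omitted:

```python
import math
from collections import Counter

def tfidf_vectors(sentences):
    """Map tokenized sentences to tf-idf weight dicts, with idf computed
    over the given sentence collection."""
    N = len(sentences)
    df = Counter()
    for s in sentences:
        df.update(set(s))
    vectors = []
    for s in sentences:
        tf = Counter(s)
        vectors.append({t: tf[t] * math.log(N / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def dice_sets(u, v):
    """Dice coefficient over the term sets of two sparse weight dicts."""
    if not u and not v:
        return 1.0
    return 2 * len(set(u) & set(v)) / (len(u) + len(v))
```

Sentence pairs whose cosine and Dice scores both exceed chosen thresholds become "seeds" that the extension step later clusters into larger plagiarized fragments.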
**Our Improvements. To verify the effectiveness of Sanchez-Perez**
et al. [48] on our corpus, we manually inspected 200 plagiarism
detection results. For a fair comparison, the number of sentence pairs
in each category (none/verbatim/paraphrase/idea plagiarism) was
equally distributed. Our evaluation revealed that Sanchez-Perez et al.
[48] induces more false positives than their reported performance,
specifically in detecting the paraphrase type plagiarism (0.51 in
precision). This resulted from the model's tendency to label near-duplicates with one character difference as paraphrases (should be
the “verbatim" plagiarism type) and its inability to distinguish a
minor entity-level discrepancy such as numerical values or dates. To
minimize such errors, after Sanchez-Perez et al. [48] retrieves all
paraphrased text segments, we post-process segments by chunking
them into sentences with NLTK[5]’s sentence tokenizer and apply a
RoBERTa-based paraphrase identification model [38][6] and Named-Entity Recognition (NER)[7] as additional validators. Specifically,
when there is at least one sentence pair whose probability score
(from the paraphrase detection model) ranges from 0.5 to 0.99[8]
and whose entity sets match exactly, we ultimately accept
5https://www.nltk.org
6The RoBERTa classifier has achieved 91.17% accuracy on the evaluation set from the
MSRP corpus (https://www.microsoft.com/en-us/download/details.aspx?id=52398).
7We use SpaCy library (https://spacy.io).
8We specified 0.99 as the upper bound to avoid near-duplicate pairs.
the plagiarism result by Sanchez-Perez et al. [48]. This additional
restriction resulted in the following precision scores: 0.92 for no
plagiarism, 1.0 for verbatim type, 0.88 for paraphrase type, and 0.62
for idea type. To gauge both precision and recall, we utilize two
additional labeled datasets, PanDataset and GptPlagiarismDataset
(refer to Appendix A for more details on datasets). Both precision
and recall scores of each label are reported in Table 2. Note that at
the end, our plagiarism detection pipeline achieves high precision at the
cost of low recall, implying that the number of plagiarism cases
we report subsequently is only a “lower-bound” estimate of the plagiarism rates that actually exist. For subsequent analyses, we utilize
two hyperparameters: (1) the minimum character count of common
substrings between the two documents for verbatim plagiarism is set
to 256; (2) the minimum character count permitted on either side of
a plagiarism case is set to 150. These thresholds are much stricter
than the minimum of 50 tokens (i.e., on average 127 characters) employed
by existing works [10, 30]. Again, this ensures that our following
report on RQ1 and RQ2 is a “lower-bound” estimate of plagiarism
frequencies.
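The post-processing acceptance rule above can be sketched as follows. The paraphrase classifier and NER tagger are abstracted away as precomputed inputs, and the function name is illustrative rather than taken from the authors' code.

```python
def accept_paraphrase_case(sentence_pairs):
    """
    Decide whether to keep a paraphrase-plagiarism case flagged by the
    alignment step. `sentence_pairs` is a list of tuples
    (paraphrase_probability, suspicious_entities, source_entities), where
    probabilities come from a paraphrase classifier and entity sets from
    an NER tagger.
    """
    for prob, susp_entities, src_entities in sentence_pairs:
        # The 0.99 upper bound rejects near-duplicates; the entity sets
        # must match exactly to rule out discrepancies in numbers,
        # dates, or names.
        if 0.5 <= prob <= 0.99 and set(susp_entities) == set(src_entities):
            return True
    return False
```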
#### 4 RQ1: DO PRE-TRAINED LMS PLAGIARIZE?

#### 4.1 Experimental Setup
**Dataset. GPT-2 is pre-trained on WebText, containing over 8 million**
documents retrieved from 45 million Reddit links. Since OpenAI has
not publicly released WebText, we use OpenWebText which is an
open-source recreation of the WebText corpus.[9] It has been reliably
used by prior literature [25, 34].
**Model. GPT-2 is an auto-regressive language model predicting one**
token at a time in a left-to-right fashion. That is, the probability distribution of a word sequence can be calculated through the product
of conditional next-word distributions. In response to an arbitrary
prompt, GPT-2 can adapt to its style and content and generate artificial texts. GPT-2 comes in 4 different sizes — small, medium, large,
and xl, with 124M, 355M, 774M, and 1.5B parameters, respectively.
We utilize all of them for analyses.
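The autoregressive factorization can be made concrete with a toy next-token distribution standing in for GPT-2's neural one; the lookup table in the usage example is purely illustrative.

```python
def sequence_probability(tokens, cond_prob):
    """
    Autoregressive chain rule: P(w_1..w_n) = prod_t P(w_t | w_1..w_{t-1}).
    `cond_prob(context, token)` is any next-token distribution; GPT-2's
    neural distribution is replaced here by a caller-supplied function.
    """
    p = 1.0
    for t, tok in enumerate(tokens):
        p *= cond_prob(tuple(tokens[:t]), tok)
    return p
```

For example, with a hand-built table `{((), "the"): 0.5, (("the",), "cat"): 0.4, (("the", "cat"), "sat"): 0.25}`, the probability of "the cat sat" is the product 0.5 × 0.4 × 0.25.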
**Text Generation. Given that GPT-2 relies on the probability distri-**
bution when generating word-tokens, there exist various decoding
methods which are well known to be critical for performance in
text generation [24]. We primarily consider the following decoding
algorithms:
- Temperature [1]: control the randomness of predictions by dividing the logits by t before applying softmax
- Top-k [19]: filter the k most likely next words and redistribute
the probability mass
- Top-p [22]: choose from the smallest possible set of words whose
cumulative probability exceeds the probability p
It is reported that increasing parameter values (t, k, p) can notably
improve the novelty of machine-generated texts but may also degrade their quality [35]. Conversely, smaller parameter values
tend to yield dull and repetitive sentences [22].
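A minimal sketch of the three decoding algorithms over a raw logits vector is shown below. This is not the actual GPT-2 sampling code, and renormalization details vary across implementations.

```python
import math
import random

def sample_next(logits, t=1.0, k=None, p=None, rng=random):
    """Sample a token index with temperature, optional top-k and top-p truncation."""
    # Temperature: divide logits by t before the softmax.
    scaled = [x / t for x in logits]
    m = max(scaled)
    probs = [math.exp(x - m) for x in scaled]
    total = sum(probs)
    probs = [x / total for x in probs]
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    # Top-k: keep only the k most likely tokens.
    if k is not None:
        order = order[:k]
    # Top-p: keep the smallest prefix whose cumulative probability exceeds p.
    if p is not None:
        kept, cum = [], 0.0
        for i in order:
            kept.append(i)
            cum += probs[i]
            if cum >= p:
                break
        order = kept
    # Renormalize over the surviving tokens and sample.
    mass = sum(probs[i] for i in order)
    r, cum = rng.random() * mass, 0.0
    for i in order:
        cum += probs[i]
        if r <= cum:
            return i
    return order[-1]
```

Setting `k=1` degenerates to greedy decoding, while large `t` with no truncation approaches uniform sampling.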
Considering the difficulties in hyper-parameter tuning that can
confidently guarantee high-quality machine-authored texts, we use
9https://skylion007.github.io/OpenWebTextCorpus/
Do Language Models Plagiarize? WWW ’23, May 1–5, 2023, Austin, TX, USA
**Figure 1: Document percentage w.r.t. three plagiarism types from pre-training data**
off-the-shelf GPT-2 Output Dataset[10] provided by OpenAI. This
dataset has been reliably used by Kushnareva et al. [28] and Wolff
and Wolff [57] for neural text detection. Specifically, it contains
250,000 texts generated by four versions of the GPT-2 model with
the aforementioned decoding approaches. Owners of the repository have
informed us that they used a ‘<|endoftext|>’ token as a prompt and
set t=1, k=40, and 0.8<p<1.[11] In total, there are 12 (i.e., 4 model sizes × 3
decoding methods) combinations, and we analyze 10,000 documents
in each combination.
#### 4.2 Results
We discover that pre-trained GPT-2 families do plagiarize from
the OpenWebText. Figure 1 illustrates the percentage of unique
machine-written documents regarding three plagiarism types based
on different model sizes and decoding strategies[12]. Consistent with
[12, 32], the larger the model size, the more occurrences
of plagiarism we observed when using temperature sampling. The
general trend still holds when GPT-2’s word token is sampled with
top-k and top-p truncation except for the xl model size. However,
interestingly, plagiarism frequencies were the highest when GPT-2
large models were used, not xl. We also find that decoding methods
affect models’ plagiarism. More precisely, top-k and top-p sampling
are more strongly associated with plagiarism than decoding with
temperature regardless of the model size. We conjecture that this
discrepancy is due to the fact that top-k and top-p decoding methods
disregard less probable tokens unlike random sampling, which may
push models to choose a memorized one as the next token.
#### 4.3 Qualitative Examination of Plagiarized Texts
**Lengths and Occurrences. Motivated by prior memorization stud-**
ies [10, 30], we inspect lengths and occurrences of texts that are
associated with verbatim plagiarism. We find that the median length
of memorized texts is 483 characters, and the longest texts contain
5,920 characters. In order to efficiently count the occurrences of plagiarized strings within OpenWebText, we utilize the established Elasticsearch pipeline, which includes setting plagiarized texts as search
10https://github.com/openai/gpt-2-output-dataset
11Equivalent to existing literature [15, 35], we only report results of these specific
hyperparameters because they were recommended by GPT-2 creators [44]. Also, our
findings on the decoding methods were validated by additional experiments with more
diverse parameter values.
12Please note that sentences with proper quotation marks within identified plagiarism
cases were excluded from the analyses, as they do not constitute plagiarism.
**Figure 2: Number of unique PII-exposing substrings associated with plagiarism categories**
queries and retrieving documents that embed provided texts.[13] We
find that some memorized sequences are from highly duplicated
texts throughout the training corpus: the newsletter sign-up text [14]
appeared at most 9,978 times and was memorized. Still, there exist
many instances where models memorize texts without seeing them more
than twice. While the median number of occurrences for memorized
texts is 6, sequences related to paraphrase or idea plagiarism tend
not to appear in the training samples at all (median = 0).
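The paper does not publish its exact Elasticsearch queries; a plausible request body for counting the training documents that embed a plagiarized snippet might look like the following, where the index field name `text` is an assumption. Enabling `track_total_hits` works around the default 10,000-hit cap noted in footnote 13.

```python
def occurrence_query(snippet):
    """
    Build an Elasticsearch request body that counts training documents
    containing `snippet` verbatim. The field name "text" is a hypothetical
    choice for how the corpus was indexed.
    """
    return {
        "query": {"match_phrase": {"text": snippet}},
        "size": 0,                  # we only need the count, not the documents
        "track_total_hits": True,   # exact counts beyond the 10,000-hit cap
    }
```

The body would be passed to a search call against the index holding OpenWebText, and the occurrence count read from the response's total-hits field.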
**Inclusion of Sensitive Information. We now turn our attention to**
whether sequences associated with three plagiarism types contain
individuals’ personal or sensitive data. To achieve this, we use Microsoft’s Presidio analyzer,[15] a Python toolkit for personally identifiable information (PII) entity detection (e.g., credit card information,
email address, phone number). There are a total of 1,193 unique text
sequences (verbatim: 388, paraphrase: 507, and idea: 298) plagiarized by pre-trained GPT-2. We set the confidence threshold to 0.7.
The total number of plagiarized documents that reveal PII entities is
shown in Figure 2. Of 1,193 plagiarized sequences, nearly 28% include at least one element of location information and a person’s full
name. Although no highly sensitive information (e.g., driver license numbers, credit card information, bank numbers, social security
numbers, and IP addresses) is revealed, the results show a possibility of
machine-generated texts disseminating personal data such as phone
number and email address through all three types of plagiarism.
#### 5 RQ2: DO FINE-TUNED LMS PLAGIARIZE?

#### 5.1 Experimental Setup
**Dataset. We choose public English datasets related to scholarly**
and legal writings because plagiarism is deemed more sensitive and
intolerable in these domains [42]. The three datasets are:
- ArxivAbstract: includes 250,000 randomly selected abstracts on
arxiv.org, from the start of the site in 1993 to the end of 2019 [20].
It covers a wide range of disciplines (e.g., Physics, Computer
Science, Economics).
- Cord-19: consists of 500,000 scholarly articles about the COVID-19 virus [55]. Medicine (55%), Biology (31%), and Chemistry
13By default, Elasticsearch does not allow searches to return more than the top 10,000
matching hits.
14“newsletter sign up continue reading the main story please verify you’re not a robot
by clicking the box. invalid email address. please re-enter...”
15https://microsoft.github.io/presidio/analyzer/
WWW ’23, May 1–5, 2023, Austin, TX, USA Lee et al.
| Model | Decoding | Pre-Training: Verbatim | Pre-Training: Paraphrase | Pre-Training: Idea | Fine-Tuning: Verbatim | Fine-Tuning: Paraphrase | Fine-Tuning: Idea |
|---|---|---|---|---|---|---|---|
| Pre-trained GPT | temp | 47 (0.47%) | 16 (0.16%) | 5 (0.05%) | N/A | N/A | N/A |
| Pre-trained GPT | top-k | 65 (0.65%) | 32 (0.32%) | 38 (0.38%) | N/A | N/A | N/A |
| Pre-trained GPT | top-p | **70 (0.7%)** | 32 (0.32%) | 15 (0.15%) | N/A | N/A | N/A |
| PatentGPT | temp | 0 (0%) | 36 (0.36%) | 21 (0.21%) | 0 (0%) | 32 (0.32%) | 17 (0.17%) |
| PatentGPT | top-k | 0 (0%) | **171 (1.71%)** | **161 (1.61%)** | 0 (0%) | 2 (0.02%) | 0 (0%) |
| PatentGPT | top-p | 0 (0%) | 94 (0.94%) | 130 (1.3%) | 0 (0%) | 3 (0.03%) | 0 (0%) |
| Cord19GPT | temp | 0 (0%) | 6 (0.06%) | 6 (0.06%) | 43 (0.43%) | 90 (0.9%) | 42 (0.42%) |
| Cord19GPT | top-k | 0 (0%) | 79 (0.79%) | 122 (1.22%) | 46 (0.46%) | **548 (5.48%)** | **485 (4.85%)** |
| Cord19GPT | top-p | 2 (0.02%) | 57 (0.57%) | 79 (0.79%) | **72 (0.72%)** | 388 (3.88%) | 228 (2.28%) |
| ArxivAbstractGPT | temp | 0 (0%) | 0 (0%) | 0 (0%) | 0 (0%) | 3 (0.03%) | 0 (0%) |
| ArxivAbstractGPT | top-k | 0 (0%) | 0 (0%) | 1 (0.01%) | 0 (0%) | 0 (0%) | 0 (0%) |
| ArxivAbstractGPT | top-p | 0 (0%) | 2 (0.02%) | 0 (0%) | 0 (0%) | 2 (0.02%) | 0 (0%) |

**Table 3: Number (%) of machine-written documents w.r.t. three plagiarism types from pre-training & fine-tuning data. Blue represents the pre-trained model, whereas pink represents the fine-tuned models. The total number of documents we generated for each model and decoding method is 10,000.**
(3%) are the primary domains of this corpus. For fine-tuning purposes, we randomly sample 200,000 documents.[16]
- PatentClaim: is provided by Lee and Hsiang [29] and has 277,947
patent claims in total.
**Model. Using these datasets, we fine-tune three independent GPT-2**
small models[17] and denote them as ArXivAbstractGPT, Cord19GPT,
and PatentGPT, respectively. The details on training configurations
can be found in Appendix B.
**Text Generation. For the three fine-tuned models, we ourselves generate**
10,000 machine-generated texts using the same prompt and parameter settings as the GPT-2 Output Dataset.
#### 5.2 Results
We compare plagiarizing behaviors of three fine-tuned models using
both pre-training (OpenWebText) and fine-tuning datasets (PatentClaim, Cord-19, ArxivAbstract) in Table 3. Our findings show that
fine-tuning significantly reduces verbatim plagiarism cases from
OpenWebText. This observation aligns with GPT-2’s outstanding
adaptability to the writing styles of a new corpus. Yet, not all fine-tuned models are plagiarism-free; for PatentGPT and Cord19GPT,
the remaining plagiarism types regarding OpenWebText occurred
more frequently than the pre-trained GPT. Meanwhile, ArxivAbstractGPT barely plagiarized texts from OpenWebText. Interestingly,
models’ plagiarism behaviors change when we compare their generated texts against the fine-tuning samples. Cord19GPT was strongly
associated with plagiarism, whereas the other two models were not.
These results suggest that, although three models are fine-tuned in
a similar setting (regarding dataset size and training duration), their
patterns of plagiarism vary. We hypothesize that there are external
factors that affect models’ plagiarism. For example, if fine-tuning
and pre-training corpora have multiple similar or duplicated content,
the fine-tuned model would have been immensely exposed to it and
16Since most articles in Cord-19 exceed the length of 1,024 tokens, we only consider
the first five paragraphs starting from the ‘Introduction’ section.
17Due to constraints of computing resource, we only fine-tune the GPT-2 small variation.
**Figure 3: Perplexity (left) and similarity scores (right) of training data. Plagiarism rate represents the average percentage of all plagiarism categories using the three decoding methods.**
may have started to remember it. Lee et al. [30] has shown a positive
relationship between memorized sequences and their frequencies
in a training set. Similarly, it is also possible that over-exposure
to particular texts may have resulted from similar documents
within the fine-tuning data. Next, we analyze the corpus similarity between
fine-tuning and pre-training data and the homogeneity of fine-tuning data in Section 6 to verify our hypotheses.
#### 6 PLAGIARISM VS. INTRA- AND INTER-CORPUS SIMILARITY

#### 6.1 Inter-Corpus Similarity (across Datasets)
**Method. There are various methods to compute a corpus similarity.**
Generally speaking, we first transform document pairs into vectors, apply pair-wise document similarity measurements, and then
aggregate them. Yet, since the size of OpenWebText is huge, it
is computationally expensive to employ conventional approaches.
Thus, inspired by Kilgarriff and Rose [26] and Carlini et al. [12], we
utilize perplexity measures. The perplexity of a sequence estimates
the confidence levels of an LM when predicting the inclusive tokens
in a specific order. To compute the corpus similarities of pre-training
and fine-tuning sets, we retrieve the perplexity of the pre-trained
| Model | Decoding | Before: Verbatim | Before: Paraphrase | Before: Idea | After: Verbatim | After: Paraphrase | After: Idea |
|---|---|---|---|---|---|---|---|
| PatentGPT | temp* | 0 (0%) | 26 (0.52%) | 11 (0.22%) | 0 (0%) | 11 (0.22%) | 9 (0.18%) |
| PatentGPT | top-k* | 0 (0%) | 109 (2.18%) | 109 (2.18%) | 0 (0%) | 79 (1.58%) | 54 (1.08%) |
| PatentGPT | top-p* | 0 (0%) | 66 (1.32%) | 59 (1.18%) | 0 (0%) | 41 (0.82%) | 27 (0.54%) |
| Cord19GPT | temp | 0 (0%) | 7 (0.14%) | 6 (0.12%) | 0 (0%) | 4 (0.08%) | 1 (0.02%) |
| Cord19GPT | top-k* | 0 (0%) | 67 (1.34%) | 106 (1.12%) | 0 (0%) | 56 (1.12%) | 36 (0.72%) |
| Cord19GPT | top-p* | 5 (0.1%) | 54 (1.08%) | 59 (1.18%) | 0 (0%) | 35 (0.7%) | 25 (0.5%) |

**Table 4: Number (%) of machine-generated documents w.r.t. three plagiarism types before ("Before") and after ("After") removing training samples with low perplexity. The total number of generated documents for each model and decoding method is 5,000. * indicates statistical significance (𝑝 < 0.05).**
GPT-2 on the fine-tuning dataset. Due to the limited space, we refer
the readers to the Appendix C for a detailed description of perplexity
calculation.
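For reference, given per-token log-probabilities from the pre-trained model, a sequence's perplexity reduces to one line. This is a sketch of the standard definition, not the exact Appendix C procedure.

```python
import math

def perplexity(token_log_probs):
    """
    Perplexity of a sequence from per-token natural-log probabilities:
    exp(-(1/N) * sum_t log P(w_t | w_<t)). A corpus-level score averages
    this over fine-tuning documents scored by the pre-trained model.
    """
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)
```

For example, a model that assigns every token probability 1/4 yields a perplexity of exactly 4.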
**Results. A low perplexity implies that the LM is not surprised by the**
sequence. In our case, the lower the perplexity score is, the more
comparable a particular fine-tuning corpus is to OpenWebText. We
find that the perplexity score of PatentClaim is the lowest, followed by
Cord-19 and ArxivAbstract (Figure 3). This result concurs with
our initial observation where PatentGPT plagiarizes the most from
OpenWebText. Subsequently, we create two versions of PatentGPT
and Cord19GPT to test the effect of perplexity on plagiarism from
OpenWebText. While the first is trained with a subset of fine-tuning
samples excluding 30% of the documents with the lowest perplexity,
the second does not consider the perplexity.
For a fair comparison, we maintain the same training configurations for all model pairs.[18] Finally, we generate 5,000 documents
for each model using three decoding methods and compare their
plagiarism. As shown in Table 4, omitting low perplexity documents
mitigates the intensity of plagiarism from pre-training data.[19]
#### 6.2 Intra-Corpus Similarity (within Datasets)
**Method. Here we adopt a traditional document similarity measurement**
to quantify the internal similarity levels of the fine-tuning datasets.
For each fine-tuning data, we first convert all instances into term
frequency-inverse document frequency (tf-idf) vectors and then aggregate the averaged cosine similarity over all examples.
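This measurement can be sketched as follows, with a plain-Python tf-idf (using a smoothed idf variant) in place of a library implementation; the exact weighting scheme the authors used is an assumption.

```python
from collections import Counter
from itertools import combinations
from math import log, sqrt

def intra_corpus_similarity(docs):
    """Average pairwise cosine similarity of tf-idf vectors over a tokenized corpus."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))   # document frequencies
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({t: tf[t] * log(1 + n / df[t]) for t in tf})

    def cos(u, v):
        dot = sum(w * v.get(t, 0.0) for t, w in u.items())
        nu = sqrt(sum(w * w for w in u.values()))
        nv = sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    pairs = list(combinations(range(n), 2))
    return sum(cos(vecs[i], vecs[j]) for i, j in pairs) / len(pairs)
```

A corpus of identical documents scores 1.0, while documents with disjoint vocabularies score 0.0.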
**Results. We observe that the intra-corpus similarity of Cord-19 is**
more than twice as high as that of PatentClaim and ArxivAbstract (Figure 3). This result coincides with our observation in RQ2 where
Cord19GPT demonstrates a heightened degree of plagiarism. Moreover, our manual inspection of verbatim plagiarism cases supports
that most of them are frequently occurring substrings. For example,
a part of BMJ’s statement about copyright and authors’ rights[20]
appeared 588 times in the Cord-19 corpus. We further evaluate a correlation between corpus homogeneity and plagiarism by re-training
two Cord19GPT models. Specifically, the former is fine-tuned with
randomly selected 188,880 Cord-19 documents whereas the latter is
fine-tuned using filtered Cord-19 data where 11,120 highly similar
18PatentGPT variations are trained on 189,000 documents for 22,000 steps, whereas
Cord19 variations are trained on 140,000 documents for 40,850 steps.
19Refer to Appendix D for statistical testing results.
20https://authors.bmj.com/policies/copyright-and-authors-rights/
training instances (cosine similarity > 0.8) are removed. They are
both trained for roughly 42,390 steps. Table 5 supports the effectiveness of removing similar training instances in reducing plagiarism
from fine-tuning data.[21]
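One way to realize such filtering is a greedy pass that drops any document too similar to one already kept. The paper does not specify its exact deduplication procedure, so the sketch below is illustrative; `similarity` can be any pairwise scorer, e.g. cosine over tf-idf vectors.

```python
def filter_near_duplicates(docs, similarity, threshold=0.8):
    """
    Greedy dedup: keep a document only if its similarity to every
    already-kept document stays at or below `threshold`.
    """
    kept = []
    for d in docs:
        if all(similarity(d, k) <= threshold for k in kept):
            kept.append(d)
    return kept
```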
#### 7 FINDINGS
**1. Larger LMs plagiarize more. Consistent with Carlini et al. [12]**
and Carlini et al. [10], we find that larger GPT-2 models (large
and xl) generally generate plagiarized sequences more frequently
than smaller ones. Depending on the decoding approaches, however,
the model size that yields the largest amount of plagiarism changes:
when the next token is sampled from truncated distribution, the
GPT-2 large model plagiarizes the most. On the other hand, the
GPT-2 xl becomes more strongly associated with plagiarism than
the GPT-2 large when the temperature setting without truncation is
employed. This discrepancy may be attributable to the error rates
of our paraphrase and idea plagiarism detection tool. Regardless, it
is evident that larger models plagiarize notably more from training
data. Considering the performance improvement of LMs with larger
model sizes, this finding sheds light on a trade-off between the
performance and copyright protection issues.
**2. Decoding algorithms affect plagiarism. Varying effects of de-**
coding methods and parameters on text quality and diversity have
been extensively studied [8, 15], but not from the plagiarism perspective. Particularly, top-p sampling is reported to be the most effective
decoding method in generating high-quality texts [23]. Despite its
efficiency in balancing quality and novelty, our analysis shows that
sampling with top-p or top-k truncation leads to more plagiarism
cases. This result shows that these popular sampling approaches still
have critical flaws because they have not been thoroughly vetted in
terms of plagiarism. Thus, it is necessary to carefully choose and
evaluate decoding methods not only through the lens of quality and
diversity but also through the originality aspect.
**3. Fine-tuning LMs matter. Our findings highlight that fine-tuning**
a model with domain-specific data can mitigate verbatim plagiarism
from the pre-training dataset. Still, other types of plagiarism cases
have surged, in the case of PatentGPT and Cord19GPT, alongside
corpus similarity levels between pre-training and fine-tuning corpora.
Moreover, we observe that models’ plagiarism differs depending on
similarity degrees within a fine-tuning corpus. Our research validates
21Refer to Appendix D for statistical testing results.
| Model | Decoding | Before: Verbatim | Before: Paraphrase | Before: Idea | After: Verbatim | After: Paraphrase | After: Idea |
|---|---|---|---|---|---|---|---|
| Cord19GPT | temp | 15 (0.3%) | 64 (1.28%) | 22 (0.44%) | 10 (0.2%) | 49 (0.98%) | 25 (0.5%) |
| Cord19GPT | top-k* | 11 (0.22%) | 301 (6.02%) | 238 (4.76%) | 11 (0.22%) | 203 (4.06%) | 184 (3.68%) |
| Cord19GPT | top-p* | 21 (0.42%) | 190 (3.8%) | 111 (2.22%) | 11 (0.22%) | 153 (3.06%) | 94 (1.88%) |

**Table 5: Number (%) of machine-generated documents w.r.t. three plagiarism types before ("Before") and after ("After") removing similar training samples. The total number of generated documents for each model and decoding method is 5,000. * indicates statistical significance (𝑝 < 0.05).**
their relationships by comparing the rate of plagiarism before and
after removing syntactically or semantically similar instances in
fine-tuning data. Indeed, restricting inter- and intra-corpus similarity
can reduce the frequency of all plagiarism types. This result can
further serve as a simple mitigation of LMs’ plagiarism issues.
**4. LMs can pose privacy harms. Our qualitative examination of**
plagiarized texts reveals that LMs expose individuals’ sensitive or
private data not only through verbatim plagiarism but also paraphrase and idea plagiarism. Although all identified contents were
publicly available on the Web, emitting such sensitive information
in the generated texts can raise a serious concern. This finding adds
value to the ongoing discussion around privacy breaches from the
memorization of modern LMs.
#### 8 DISCUSSION AND ETHICS
**Discussion. In this work, we develop a novel pipeline for investi-**
gating LMs’ plagiarism in text generation processes and characterize a shift in plagiarism rates resulting from three attributes (i.e.,
model size, decoding methods, and corpus similarities). The datasets
utilized to train the models are the subject of this study. We use
GPT-2 as a representative LM to study because it is one of the most
downloaded LMs from Hugging Face at the end of 2022,[22] and its
reproduced training corpus is publicly accessible (which is a necessary condition to study the plagiarism of LMs). However, different
LMs may demonstrate different patterns of plagiarism, and thus our
results may not directly generalize to other LMs, including more
recent LMs such as GPT-3 or BLOOM. Future work can revisit the
proposed research questions against more diverse or modern LMs.
In addition, automatic plagiarism detectors are known to have
many failure modes (both in false negatives and false positives) [56].
Our plagiarism detection pipeline of Section 3.2 is no exception.
However, achieving a high precision with a low recall is not a major
issue in our problem domain, as we focus on demonstrating the
lower-bound of the plagiarism vulnerability in LMs (and in reality,
there are likely to be many more plagiarism cases that we failed
to detect due to low recall). Likewise, prior memorization works
[12, 25] documented the lower-bound of the plagiarism susceptibility
and showed a small number of memorized instances. Regardless,
they were effective in inspiring others to continue exploring this
important phenomenon. As a result, we hope that our current finding
becomes useful to stimulate and raise public awareness about the
plagiarism behavior of popular LMs like GPT-2.
We also stress that distinguishing whether a reproduction of training datasets is a positive attribute of an LM or not is beyond the scope
22https://huggingface.co/models?sort=downloads
of this work. It is highly context-dependent [30], and thus necessitates more sophisticated methods to disentangle. In our experiments,
we treat all instances of LM-generated texts that reiterate training
examples as “problematic", as the fine-tuning datasets we analyzed
are in academic and legal contexts where originality is valued.
Ultimately, a primary purpose of the exploration of the intra- and inter-corpus similarity in models’ authorship violation is to
support our hypotheses and further motivate researchers to take this
into account when developing new LMs or fine-tuning current ones.
Yet, the current approach fails to completely eradicate plagiarism
occurrences.
**Ethics. Data and code, involving plagiarized texts we identified**
throughout this research, are available to the research community.
Due to the inclusion of individuals’ personal data in generated texts,
we employed data anonymization techniques prior to distribution.
Specifically, we filtered PII such as name, email address, and phone
number using Microsoft’s Presidio Anonymizer.[23] We recommend
that artificial documents generated by fine-tuned GPT-2 be utilized
strictly for research purposes.
#### 9 CONCLUSION
Our work presents the first holistic and empirical analyses of plagiarism in LMs by constructing a pipeline for the automatic identification of plagiarized content. We conclude that GPT-2 can exploit
and reuse words, sentences, and even core ideas (that are originally
included in OpenWebText, a pre-training corpus) in the generated
texts. Further, this tendency is prone to exacerbate as the model
size increases or certain decoding algorithms are employed. We also
discover that reducing corpus similarity and homogeneity can help
alleviate plagiarism by GPT-2. This is the first study to analyze
text generation outputs through the lens of plagiarism. Although the
goal of a supervised machine learning system is to learn to mimic
the distribution of its training data, we deem it crucial for model
users and designers to recognize the observed phenomena. The vulnerability of models to plagiarism can adversely impact societal and
ethical norms, particularly in literary disciplines that are intimately
connected to creativity and originality. Therefore, we recommend
researchers carefully assess the model’s intended usage and evaluate
its robustness before deployment.
#### ACKNOWLEDGMENTS
This work was in part supported by NSF awards #1934782 and
#2114824.
23https://microsoft.github.io/presidio/anonymizer/
#### REFERENCES
[1] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. 1985. A learning
algorithm for Boltzmann machines. Cognitive science 9, 1 (1985), 147–169.
[2] Basant Agarwal, Heri Ramampiaro, Helge Langseth, and Massimiliano Ruocco.
2018. A deep network model for paraphrase detection in short text messages.
_Information Processing & Management 54, 6 (2018), 922–937._
[3] Alim Al Ayub Ahmed, Ayman Aljabouh, Praveen Kumar Donepudi, and
Myung Suh Choi. 2021. Detecting Fake News using Machine Learning: A Systematic Literature Review. arXiv preprint arXiv:2102.04458 (2021).
[4] Asim M El Tahir Ali, Hussam M Dahwa Abdulla, and Vaclav Snasel. 2011.
Overview and comparison of plagiarism detection tools.. In Dateso. 161–172.
[5] Alaa Saleh Altheneyan and Mohamed El Bachir Menai. 2020. Automatic plagiarism detection in obfuscated text. Pattern Analysis and Applications 23, 4 (2020),
1627–1650.
[6] Salha Alzahrani. 2015. Arabic plagiarism detection using word correlation in
N-Grams with K-overlapping approach. In Proceedings of the Workshops at the
_7th Forum for Information Retrieval Evaluation (FIRE). 123–125._
[7] Alberto Barrón-Cedeño, Marta Vila, M Antònia Martí, and Paolo Rosso. 2013.
Plagiarism meets paraphrasing: Insights for the next generation in automatic
plagiarism detection. Computational Linguistics 39, 4 (2013), 917–947.
[8] Sourya Basu, Govardana Sachitanandam Ramachandran, Nitish Shirish Keskar,
and Lav R Varshney. 2020. Mirostat: A neural text decoding algorithm that directly
controls perplexity. arXiv preprint arXiv:2007.14966 (2020).
[9] Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, and
Florian Tramèr. 2022. What Does it Mean for a Language Model to Preserve
Privacy? arXiv preprint arXiv:2202.05520 (2022).
[10] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian
Tramer, and Chiyuan Zhang. 2022. Quantifying Memorization Across Neural
Language Models. arXiv preprint arXiv:2202.07646 (2022).
[11] Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. 2019.
The secret sharer: Evaluating and testing unintended memorization in neural
networks. In 28th USENIX Security Symposium (USENIX Security 19). 267–284.
[12] Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson,
et al. 2021. Extracting training data from large language models. In 30th USENIX
_Security Symposium (USENIX Security 21). 2633–2650._
[13] Roger Clarke. 2006. Plagiarism by academics: More complex than it seems.
_Journal of the Association for Information Systems 7, 1 (2006), 5._
[14] Georgina Cosma and Mike Joy. 2008. Towards a definition of source-code plagiarism. IEEE Transactions on Education 51, 2 (2008), 195–200.
[15] Alexandra DeLucia, Aaron Mueller, Xiang Lisa Li, and João Sedoc. 2020. Decoding methods for neural narrative generation. arXiv preprint arXiv:2010.07375
(2020).
[16] Liming Deng, Jie Wang, Hangming Liang, Hui Chen, Zhiqiang Xie, Bojin Zhuang,
Shaojun Wang, and Jing Xiao. 2020. An iterative polishing framework based on
quality aware masked language model for Chinese poetry generation. In Proceed_ings of the AAAI conference on artificial intelligence, Vol. 34. 7643–7650._
[17] Ish Kumar Dhammi and Rehan Ul Haq. 2016. What is plagiarism and how to
avoid it? Indian journal of orthopaedics 50, 6 (2016), 581.
[18] Julianne East. 2010. Judging plagiarism: a problem of morality and convention.
_Higher Education 59, 1 (2010), 69–83._
[19] Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story
generation. arXiv preprint arXiv:1805.04833 (2018).
[20] R. Stuart Geiger. 2019. ArXiV Archive: A tidy and complete archive of metadata
_for papers on arxiv.org, 1993-2019. https://doi.org/10.5281/zenodo.2533436_
[21] Deepa Gupta, K Vani, and LM Leema. 2016. Plagiarism detection in text documents using sentence bounded stop word n-grams. Journal of Engineering Science
_and Technology 11, 10 (2016), 1403–1420._
[22] Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The
curious case of neural text degeneration. arXiv preprint arXiv:1904.09751 (2019).
[23] Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck.
2019. Automatic detection of generated text is easiest when humans are fooled.
_arXiv preprint arXiv:1911.00650 (2019)._
[24] Daphne Ippolito, Daniel Duckworth, Chris Callison-Burch, and Douglas Eck.
2019. Human and automatic detection of generated text. (2019).
[25] Nikhil Kandpal, Eric Wallace, and Colin Raffel. 2022. Deduplicating training
data mitigates privacy risks in language models. arXiv preprint arXiv:2202.06539
(2022).
[26] Adam Kilgarriff and Tony Rose. 1998. Measures for corpus similarity and homogeneity. In Proceedings of the Third Conference on Empirical Methods for
_Natural Language Processing. 46–52._
[27] Robin Küppers and Stefan Conrad. 2012. A Set-Based Approach to Plagiarism
Detection.. In CLEF (Online Working Notes/Labs/Workshop).
[28] Laida Kushnareva, Daniil Cherniavskii, Vladislav Mikhailov, Ekaterina Artemova, Serguei Barannikov, Alexander Bernstein, Irina Piontkovskaya, Dmitri
Piontkovski, and Evgeny Burnaev. 2021. Artificial text detection via examining
the topology of attention maps. arXiv preprint arXiv:2109.04825 (2021).
[29] Jieh-Sheng Lee and Jieh Hsiang. 2020. Patent claim generation by fine-tuning
OpenAI GPT-2. World Patent Information 62 (2020), 101983.
[30] Katherine Lee, Daphne Ippolito, Andrew Nystrom, Chiyuan Zhang, Douglas Eck,
Chris Callison-Burch, and Nicholas Carlini. 2021. Deduplicating training data
makes language models better. arXiv preprint arXiv:2107.06499 (2021).
[31] Klas Leino and Matt Fredrikson. 2020. Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference. In 29th USENIX
_Security Symposium (USENIX Security 20). 1605–1622._
[32] Sharon Levy, Michael Saxon, and William Yang Wang. 2021. Investigating
Memorization of Conspiracy Theories in Text Generation. _arXiv preprint_
_arXiv:2101.00379 (2021)._
[33] Zehui Lin, Xiao Pan, Mingxuan Wang, Xipeng Qiu, Jiangtao Feng, Hao Zhou, and
Lei Li. 2020. Pre-training multilingual neural machine translation by leveraging
alignment information. arXiv preprint arXiv:2010.03142 (2020).
[34] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen,
Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta:
A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692
(2019).
[35] R Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, and Asli Celikyilmaz. 2021. How much do language models copy from their training data?
Evaluating linguistic novelty in text generation using RAVEN. arXiv preprint
_arXiv:2111.09509 (2021)._
[36] Derek Miller. 2019. Leveraging BERT for extractive text summarization on
lectures. arXiv preprint arXiv:1906.04165 (2019).
[37] Fatemehsadat Mireshghallah, Huseyin A Inan, Marcello Hasegawa, Victor Rühle,
Taylor Berg-Kirkpatrick, and Robert Sim. 2021. Privacy regularization: Joint
privacy-utility optimization in language models. arXiv preprint arXiv:2103.07567
(2021).
[38] John X Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi.
2020. Textattack: A framework for adversarial attacks, data augmentation, and
adversarial training in nlp. arXiv preprint arXiv:2005.05909 (2020).
[39] Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov.
2019. Facebook FAIR’s WMT19 news translation task submission. arXiv preprint
_arXiv:1907.06616 (2019)._
[40] Chris Park. 2003. In other (people’s) words: Plagiarism by university students–
literature and lessons. Assessment & evaluation in higher education 28, 5 (2003),
471–488.
[41] John Pavlopoulos, Jeffrey Sorensen, Lucas Dixon, Nithum Thain, and Ion Androutsopoulos. 2020. Toxicity detection: Does context really matter? arXiv preprint
_arXiv:2006.00998 (2020)._
[42] Diane Pecorari. 2008. Academic writing and plagiarism: A linguistic analysis.
Bloomsbury Publishing.
[43] Robin L Plackett. 1983. Karl Pearson and the chi-squared test. International
_statistical review/revue internationale de statistique (1983), 59–72._
[44] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya
Sutskever, et al. 2019. Language models are unsupervised multitask learners.
_OpenAI blog 1, 8 (2019), 9._
[45] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang,
Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. 2020. Exploring the limits
of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res.
21, 140 (2020), 1–67.
[46] Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu,
Mike Gatford, et al. 1995. Okapi at TREC-3. Nist Special Publication Sp 109
(1995), 109.
[47] Ahmed Salem, Apratim Bhattacharya, Michael Backes, Mario Fritz, and Yang
Zhang. 2020. {Updates-Leak}: Data Set Inference and Reconstruction Attacks
in Online Learning. In 29th USENIX Security Symposium (USENIX Security 20).
1291–1308.
[48] Miguel A Sanchez-Perez, Alexander Gelbukh, and Grigori Sidorov. 2015. Adaptive algorithm for plagiarism detection: The best-performing approach at PAN
2014 text alignment competition. In International Conference of the Cross_Language Evaluation Forum for European Languages. Springer, 402–413._
[49] Zhonghao Sheng, Kaitao Song, Xu Tan, Yi Ren, Wei Ye, Shikun Zhang, and Tao
Qin. 2020. Songmass: Automatic song writing with pre-training and alignment
constraint. arXiv preprint arXiv:2012.05168 (2020).
[50] Prasha Shrestha and Thamar Solorio. 2013. Using a Variety of n-Grams for the
Detection of Different Kinds of Plagiarism. Notebook for PAN at CLEF 2013
(2013).
[51] Ilya Sochenkov, Denis Zubarev, Ilya Tikhomirov, Ivan Smirnov, Artem Shelmanov,
Roman Suvorov, and Gennady Osipov. 2016. Exactus like: Plagiarism detection
in scientific texts. In European conference on information retrieval. Springer,
837–840.
[52] Adaku Uchendu, Thai Le, Kai Shu, and Dongwon Lee. 2020. Authorship attribution for neural text generation. In Conf. on Empirical Methods in Natural
_Language Processing (EMNLP)._
[53] Adaku Uchendu, Zeyu Ma, Thai Le, Rui Zhang, and Dongwon Lee. 2021. TuringBench: A Benchmark Environment for Turing Test in the Age of Neural Text
Generation. In Findings of Conf. on Empirical Methods in Natural Language
-----
WWW ’23, May 1–5, 2023, Austin, TX, USA Lee et al.
_Processing (EMNLP-Findings)._
[54] K Vani and Deepa Gupta. 2017. Detection of idea plagiarism using syntax–
semantic concept extractions with genetic algorithm. Expert Systems with Appli_cations 73 (2017), 11–26._
[55] Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang,
Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, et al.
2020. Cord-19: The covid-19 open research dataset. ArXiv (2020).
[56] Debora Weber-Wulff. 2019. Plagiarism detectors are a crutch, and a problem.
_Nature 567, 7749 (2019), 435–436._
[57] Max Wolff and Stuart Wolff. 2020. Attacking neural text detectors. arXiv preprint
_arXiv:2002.11768 (2020)._
[58] Santiago Zanella-Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew
Paverd, Olga Ohrimenko, Boris Köpf, and Marc Brockschmidt. 2020. Analyzing
information leakage of updates to natural language models. In Proceedings of
_the 2020 ACM SIGSAC Conference on Computer and Communications Security._
363–375.
[59] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian
Tramèr, and Nicholas Carlini. 2021. Counterfactual Memorization in Neural
Language Models. arXiv preprint arXiv:2112.12938 (2021).
#### A EVALUATION DATA FOR OUR PLAGIARISM DETECTION PIPELINE
We use two corpora with plagiarism labels to measure the precision
and recall scores of our proposed pipeline described in Section 3.2.
The first dataset (denoted as PanDataset) was originally introduced as the test set for the fifth international competition on plagiarism detection at PAN 2013.[24] It contains 3,170 source documents and 1,827 suspicious documents in total, of which 1,001 document pairs exhibit no plagiarism and 1,001 pairs contain verbatim plagiarism. To automatically create document pairs for paraphrase plagiarism, the organizers applied machine-driven transformations to the source documents, such as randomly replacing words using a synonym database like WordNet or back-translating sentences with existing translation models (e.g., Google Translate[25]). This resulted in 2,002 pairs. Similarly, 1,186 summary plagiarism cases were generated by existing text summarization models.
Given that PanDataset may exhibit different characteristics from GPT-2-generated texts, we consider a subset of OpenWebText as source documents, create suspicious documents from them, and use the resulting pairs as the second dataset (denoted as GptPlagiarismDataset). More specifically, we construct 1,000 document pairs for verbatim plagiarism by extracting 500-character-long spans from source documents and using them as suspicious documents. For paraphrase plagiarism, we randomly select 5 sentences from each of 1,000 source documents and employ Facebook FAIR’s WMT19 transformer [39] for back-translation (English -> German -> English). Lastly, 1,000 document pairs for summary plagiarism are created with two summarization models: we first shorten the source documents with a BERT-based extractive summarization model [36] and then transform them into meaningful summaries with the T5 transformer [45] for abstractive summarization. This enables us to create more sophisticated summaries with minimal overlapping strings.
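As a minimal sketch of the verbatim-pair construction above (plain-text source documents assumed; the function name `make_verbatim_pair` is ours, not from the paper), a suspicious document is simply a fixed-length span sliced out of a source document:

```python
import random

def make_verbatim_pair(source: str, span_len: int = 500, seed: int = 0):
    """Create a (source, suspicious) pair where the suspicious document
    is a verbatim span copied out of the source document."""
    rng = random.Random(seed)
    if len(source) <= span_len:
        return source, source  # short documents are copied wholesale
    start = rng.randrange(len(source) - span_len)
    suspicious = source[start:start + span_len]
    return source, suspicious

# The suspicious document is guaranteed to appear verbatim inside the
# source, i.e. the pair is a true verbatim-plagiarism instance.
src = "x" * 100 + "some distinctive passage " * 40
_, sus = make_verbatim_pair(src)
assert sus in src and len(sus) == 500
```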
#### B DETAILS ON FINE-TUNING CONFIGURATIONS
Our experimental environment is based on a Google Colab Pro+
with Tesla V100-SXM2-16GB and 55 GB of RAM. For fine-tuning,
we utilize a Python package called GPT-2-simple.[26] We keep the hyperparameters suggested in public repositories: a learning rate of 1e-4, temperature of 1.0, top-k of 40, and a batch size of 1. The ratio of training to validation data is 8:2. To prevent the model from overfitting, we stop training when the gap between training and test losses exceeds 20% of the training loss. Table 7 lists the resulting fine-tuning configurations. Fine-tuning one model for 10,000 steps takes approximately 5 hours.
24https://pan.webis.de/clef13/pan13-web/text-alignment.html
25http://translate.google.com
26https://github.com/minimaxir/gpt-2-simple
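The early-stopping rule above can be stated as a one-line check (a sketch of our criterion as described, not code from the GPT-2-simple package; the function name `should_stop` is ours):

```python
def should_stop(train_loss: float, test_loss: float, tolerance: float = 0.2) -> bool:
    """Stop fine-tuning once the generalization gap exceeds
    `tolerance` (20%) of the training loss."""
    return (test_loss - train_loss) > tolerance * train_loss

# The final losses in Table 7 all remain below the threshold,
# e.g. PatentGPT with training/test loss 1.65 / 1.87:
assert not should_stop(1.65, 1.87)
```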
#### C LM PERPLEXITY CALCULATION
Perplexity is defined as the exponentiation of the cross-entropy between the data and the LM's predictions. Given a tokenized sequence $X = (x_0, x_1, x_2, \ldots, x_m)$, the perplexity of $X$ can be calculated by:

$$\mathrm{perp}(X) = \exp\left(-\frac{1}{m}\sum_{n=1}^{m} \log f_\theta(x_n \mid x_{\le n-1})\right)$$

where $\log f_\theta(x_n \mid x_{\le n-1})$ is the log-likelihood of the $n$-th token conditioned on the preceding tokens. Following the guideline provided
by Huggingface,[27] we rely on a strided sliding-window technique, which repeatedly moves the context window so that the model has a broader context when making each prediction. Here, the window size is a hyperparameter we can adjust. To obtain one aggregated perplexity representing all instances, we first concatenate all documents with newlines and set the window size to 512. For the per-document perplexity calculation on the CORD-19 dataset, we reduce the window size to 50, since documents are not concatenated in this case and many CORD-19 documents are shorter than 512 tokens.
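A minimal, model-agnostic sketch of the strided computation (assuming a `log_prob(context, token)` callable standing in for the LM; all names here are ours): slide a fixed-size window over the token sequence, score only the tokens not covered by previous windows, and exponentiate the negative mean log-likelihood.

```python
import math
from typing import Callable, Sequence

def strided_perplexity(tokens: Sequence[int],
                       log_prob: Callable[[Sequence[int], int], float],
                       window: int = 512,
                       stride: int = 256) -> float:
    """Perplexity via a strided sliding window: each token is predicted
    with up to `window - 1` tokens of context, and windows overlap so
    later tokens keep long context."""
    nlls, scored = [], 0
    for begin in range(0, len(tokens), stride):
        end = min(begin + window, len(tokens))
        # only score tokens that previous windows have not covered
        for i in range(max(begin + 1, scored), end):
            context = tokens[begin:i]
            nlls.append(-log_prob(context, tokens[i]))
        scored = end
        if end == len(tokens):
            break
    return math.exp(sum(nlls) / len(nlls))

# Sanity check with a uniform "model" over 4 tokens: perplexity is 4.
uniform = lambda ctx, tok: math.log(0.25)
print(round(strided_perplexity(list(range(20)), uniform, window=8, stride=4), 4))  # prints 4.0
```

A HuggingFace implementation would replace `log_prob` with a forward pass of the model over each window; the strides then trade compute for tighter (more context-aware) perplexity estimates.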
#### D STATISTICAL TESTING OF FILTERING
We perform Pearson's chi-squared test [43] to verify the statistical significance of the gap observed before and after filtering out low-perplexity and highly similar documents. The test determines whether there is a statistically significant difference between the expected and observed frequencies. Here we treat plagiarism as a binary variable (no plagiarism vs. plagiarism) and count the total number of documents accordingly; for the plagiarized-document count, we do not distinguish between plagiarism types. Table 8 shows the results of the chi-squared test. All of our experiments except for Cord19GPT's temperature setting are found to be statistically significant.
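For a 2x2 contingency table (before vs. after filtering × plagiarized vs. clean), the Pearson statistic can be computed directly; a self-contained sketch with made-up counts, since the per-setting generation totals are not reproduced here (`scipy.stats.chi2_contingency` would additionally return the p-values reported in Table 8):

```python
def chi2_statistic(table):
    """Pearson's chi-squared statistic for a 2x2 contingency table
    given as [[a, b], [c, d]] (observed counts)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Hypothetical counts of plagiarized vs. clean documents, before and
# after filtering (illustrative only, not the paper's actual totals).
print(round(chi2_statistic([[37, 963], [20, 980]]), 3))  # prints 5.219
```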
#### E PLAGIARIZED TEXT EXAMPLES
We present several examples of verbatim, paraphrase, and idea plagiarism from both pre-trained and fine-tuned models (Table 6). For verbatim plagiarism, we identify cases where a social media app ID and its metadata are memorized, as well as an individual's writing. We also frequently find a paragraph concerning journal copyright and authors' rights reproduced verbatim by the model trained on academic papers. Examples of paraphrase plagiarism, especially those authored by GPT-2 and Cord19GPT, demonstrate the models' ability to deliver factual information in a different syntactic form without proper references. PatentGPT's plagiarism cases tend to mimic patent data by rephrasing and elaborating on the processes described by the original patent owners.
27https://huggingface.co/docs/transformers/perplexity
Do Language Models Plagiarize? WWW ’23, May 1–5, 2023, Austin, TX, USA
Type: Verbatim (Author: GPT-2)
Machine-Written Text: Unexpected Error An unexpected error occurred. [...] "facebookAppID":***,"allow_select":true,"allow_filter":true,"allow_sheetlink":true [...]
Training Text: Unexpected Error An unexpected error occurred. [...] "facebookAppID":***,"allow_select":true,"allow_filter":true,"allow_sheetlink":true [...]

Type: Verbatim (Author: GPT-2)
Machine-Written Text: it reminded me of a feeling I’ve had right there on that road before. It reminded me of all the times that people have come out to support the blockade and stood together to make sure those trees stay standing. [...]
Training Text: it reminded me of a feeling I’ve had right there on that road before. It reminded me of all the times that people have come out to support the blockade and stood together to make sure those trees stay standing. [...]

Type: Verbatim (Author: Cord19GPT)
Machine-Written Text: I, the Submitting Author has the right to grant and does grant on behalf of all authors of the Work (as defined in the below author licence), an exclusive licence and/or a non-exclusive licence for contributions from authors who are: i) UK Crown employees; ii) where BMJ has agreed a CC-BY licence shall apply, and/or iii) in accordance with the terms applicable for US Federal Government officers or employees acting as part of their official duties; [...]
Training Text: I, the Submitting Author has the right to grant and does grant on behalf of all authors of the Work (as defined in the below author licence), an exclusive licence and/or a non-exclusive licence for contributions from authors who are: i) UK Crown employees; ii) where BMJ has agreed a CC-BY licence shall apply, and/or iii) in accordance with the terms applicable for US Federal Government officers or employees acting as part of their official duties; [...]

Type: Paraphrase (Author: GPT-2)
Machine-Written Text: REUTERS/Kevin Lamarque U.S. President Donald Trump and First Lady Melania Trump, with their son Barron, arrive for a New Year’s Eve party at his Mar-a-Lago club in Palm Beach, Florida, U.S. December 31, 2017. [...]
Training Text: REUTERS/Kevin Lamarque U.S. President Donald Trump, First Lady Melania Trump and their son Barron while aboard Air Force One on their way to Florida, Mar-a-Lago in Palm Beach, Florida to spend the holiday at Trump International Golf Club Mar-a-Lago. [...]

Type: Paraphrase (Author: Cord19GPT)
Machine-Written Text: The development of natural killer cells (NK cells) is an important element in the immune system as it provides the first line of defense against diverse pathogens.
Training Text: Natural killer (NK) cells are a type of innate lymphoid cell that plays an important role in the first line of immune defense against any viral infection, including COVID-19.

Type: Paraphrase (Author: PatentGPT)
Machine-Written Text: A system, comprising: a sense circuit for receiving an electrical [...] and a digital compensator coupled with the sense circuit and for receiving the output value from the decision circuit and generating a compensation value in accordance with the output value [...]
Training Text: Apple’s First Claim: A touch surface device, comprising: a touch-sensitive panel [...] and a sensing circuit coupled to the compensation circuit, the sensing circuit configured for receiving the compensated output signal.

Type: Idea (Author: PatentGPT)
Machine-Written Text: A method for testing electrical connections, comprising: [...] providing an electric voltage and an electric current to an electrical contact on the test element to transfer the electrical conductivity of the line to ground; wherein the measuring is carried out with the electric current flowing from the electrical contact on the test element through the electric current to the ground; [...]
Training Text: The energy passing between elements A and B is in the form of an electric current through the earth between the two ground connections.

Type: Idea (Author: PatentGPT)
Machine-Written Text: A control system comprising: a processor configured to execute an operation on a memory and to output an instruction stream having a plurality of executable instructions, wherein the output of the plurality of executable instructions is selectively selectable [...]; and a storage device storing a plurality of items of a control structure, each of the control structures containing executable instructions, which when executed by the processor, cause the processor to perform [...]
Training Text: The system also may comprise a memory having stored thereon instructions that, upon execution by the at least one processor, cause the system to perform [...]

Type: Idea (Author: Cord19GPT)
Machine-Written Text: Symptoms of COVID-19 infections are relatively mild, such as fever, dry cough, headache, diarrhea, dyspnoea, body ache, myalgia and sometimes headache. In some infected patients, however, the infection is more rapid and severe with fever, dyspnoea, shortness of breath, cough and other non-specific symptoms such as sore throat, runny nose, dry throat and sputum production. [...] Several factors are strongly associated with mortality in the SARS-CoV-2 outbreak. [...] and comorbidities such as hypertension, obesity, chronic lung disease, obesity and diabetes.
Training Text: The most common symptoms of COVID-19 are headache, loss of smell, nasal congestion, cough, asthenia, myalgia, rhinorrhea, sore throat, fever, shortness of breath, nausea or vomiting, and diarrhea [2, 3]. Commonly reported comorbidities of COVID-19 are hypertension, obesity, diabetes, and cardiovascular disease [4].

**Table 6: Examples of plagiarism identified in texts written by GPT-2 and its training set. Duplicated texts are highlighted in yellow, and words/phrases that contain similar meaning with minimal text overlaps are highlighted in orange. [...] indicates the texts omitted for brevity. Personally identifiable information (PII) was masked as ***.**
Model Name | Training Steps | Training / Test Loss
ArXivAbstractGPT | 30,000 | 2.48 / 2.83
Cord19GPT | 44,000 | 2.6 / 2.68
PatentGPT | 32,300 | 1.65 / 1.87
**Table 7: Fine-tuning configurations**
Model | Decoding | Plagiarized Document # (before filtering vs. after filtering) | 𝑝
PatentGPT | temp | 37 vs. 20 | 0.002
PatentGPT | top-k | 218 vs. 133 | <0.00001
PatentGPT | top-p | 125 vs. 86 | 0.007
Cord19GPT (perplexity) | temp | 13 vs. 5 | 0.059
Cord19GPT (perplexity) | top-k | 173 vs. 92 | <0.00001
Cord19GPT (perplexity) | top-p | 118 vs. 60 | 0.00002
Cord19GPT (similarity) | temp | 101 vs. 84 | 0.207
Cord19GPT (similarity) | top-k | 550 vs. 398 | <0.00001
Cord19GPT (similarity) | top-p | 322 vs. 258 | 0.006
**Table 8: Statistical results of the chi-squared test. The first result regarding Cord19GPT is for perplexity, whereas the second one is for document similarity.**
### Karlsruhe Reports in Informatics 2010,20
##### Edited by Karlsruhe Institute of Technology, Faculty of Informatics
ISSN 2190-4782

# Software Security in Virtualized Infrastructures
##### The Smart Meter Example

B. Beckert, D. Hofheinz, J. Müller-Quade, A. Pretschner, G. Snelting
beckert@kit.edu, dennis.hofheinz@kit.edu, joern.mueller-quade@kit.edu,
alexander.pretschner@kit.edu, gregor.snelting@kit.edu

2010

KIT – University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association
**Please note:**
This Report has been published on the Internet under the following
Creative Commons License:
http://creativecommons.org/licenses/by-nc-nd/3.0/de.
Karlsruher Institut für Technologie (KIT)
#### Abstract

Future infrastructures for energy, traffic, and computing will be virtualized: they will consist of decentralized, self-organizing, dynamically adaptive, and open collections of physical resources such as virtual power plants or computing clouds. The challenges to software dependability, and in particular to software security, will be enormous.

While the problems in this domain transcend any specific instantiation, we use the example of smart power meters to discuss advanced technologies for the protection of integrity and confidentiality of software and data in virtualized infrastructures. We show that approaches based on homomorphic encryption, deductive verification, information flow control, and runtime verification are promising candidates for providing solutions to a plethora of representative challenges in the domain of virtualized infrastructures.
#### 1 Introduction

Future infrastructures for energy, traffic, and computing will be virtualized, and will depend on software to an unprecedented degree. Virtual power plants will consist of dynamically adaptive, heterogeneous collections of physical power sources such as wind power generators or photovoltaic panels. Traffic management will rely on large-scale simulation and multi-modal route planning; future trips will happen in a virtual environment before they take place in the physical world. Cloud computing – the prototype of a virtualized infrastructure – provides computing power through Internet outlets, in the form of Software-as-a-Service, Platform-as-a-Service, or Infrastructure-as-a-Service.

All of these infrastructures will depend on software to an amount previously unimaginable. And while the state of the art perhaps allows the necessary software functionality to be developed, virtualization generates software dependability problems which cannot be handled by today’s software technology. Dependable functionality, communication, fault tolerance, adaptivity, safety, security, and privacy will not only require the adaptation of known dependability techniques, but also the development of new ones. For example, model checking and verification have never been applied to the self-organizing software driving virtual power plants.
Software security will pose a particular challenge in virtualized infrastructures. Recent attacks on SCADA systems controlling electrical grids, e.g. those based on the Stuxnet worm, demonstrate that even today, security is fragile. It is beyond any doubt that this problem will multiply in virtualized infrastructures. In future infrastructures, integrity will be essential, meaning that the input, the output, and the process of critical computations cannot be manipulated from outside. For the protection of privacy, confidentiality will be essential (meaning that private or secret data cannot flow to public ports), as will appropriate filtering and aggregation of data such that, e.g., information about energy demand and supply can no longer be linked directly to specific individuals. Classical IT security techniques such as access control and encryption will need additional breakthroughs, such as homomorphic cryptography, to be useful in cloud computing or traffic infrastructures. New techniques such as semantics-based software security analysis and information flow control will be needed to master integrity and confidentiality challenges.
The cluster initiative “Dependable Software for Critical Infrastructures” (DSCI) will develop new approaches to software dependability in virtualized infrastructures. DSCI focuses on E-Energy, E-Traffic, and Cloud Computing. DSCI will provide guarantees for dependable functionality, communication, fault tolerance, adaptivity, safety, security, and privacy in future infrastructures. A general overview of DSCI can be found in [7].
In particular, DSCI investigates new approaches to software security in virtualized infrastructures, which exploit recent achievements in algorithmics, language-based security, cryptography, and verification technology. DSCI will also build on fundamental results to be developed by the new DFG Priority Programme “Reliably Secure Software Systems” (RS3). Several DSCI researchers are also leading RS3 projects. But note that RS3 is not concerned with critical infrastructures. In general, we are not aware of any report that pinpoints the difficult security problems in such infrastructures. Hence the DSCI contribution, as outlined in the current overview article, can be summarized as follows:

**Contribution** We investigate software security problems in future virtualized infrastructures, using smart metering as an example. We demonstrate how a toolbox of advanced security technologies, such as homomorphic cryptography, information flow control, deductive verification, proof-carrying code, and runtime verification, will be able to protect integrity and privacy in smart metering systems. We indicate how our toolbox can be used to protect other components in critical infrastructures, such as SCADA systems.
**Organization** We start by introducing our exemplary problem domain, that of smart energy meters, in Section 2. As a basis for discussion, we present an exemplary architecture of such a system, perfectly aware that any concrete system is likely to differ in specific details. On these grounds, we derive a set of challenges and describe them in Section 3. In the remaining sections, we show how to use different technologies to tackle a selection of relevant problems: Section 4 shows how to use homomorphic encryption for privacy-preserving aggregation of data. Section 6 shows how to use deductive verification to the end of ensuring correctness and absence of undesired information flows. Section 5 tackles the problem of undesired information flows on the basis of language-based approaches. Section 7 builds on these approaches and adds to the static approaches of Sections 5 and 6 a dynamic approach that is based on runtime verification and that explicitly targets information flow across system boundaries. The conclusion discusses more general applications in critical infrastructures, e.g. for SCADA systems. Related work is discussed throughout the text.
#### 2 Smart Metering Systems
##### 2.1 Background
Smart metering technology makes it possible to continuously measure the consumption of energy, gas, and water. Because the measuring devices are, or at least are planned to be, directly connected to a respective IT infrastructure, it is possible to transmit the measurement data at varying intervals to a piece of data administration software (“cockpit”) which runs on a PC in the respective household or company, or directly to the energy provider or billing company.

The advantages are, depending on the perspective of the various stakeholders, considered manifold: there is no need for people to physically read the meters; households can themselves detect a potential waste of energy by continuously monitoring consumption and comparing it with other households; fine-grained consumption information allows energy providers to tune the load balancing of their networks; and since resources cost differently at different times, households can automatically switch on, say, washing machines at the cheapest moment of the night.

Whether or not all these anticipated advantages will become reality is not the subject of this paper: for instance, we do not discuss whether the energy used for a continuously running DSL modem does not outweigh the saved energy – which in turn is estimated to not exceed roughly Euro 3.00 per month per household, using today’s technology; nor do we discuss whether load balancing will not continue to be done at the level of entire street blocks; nor do we discuss whether local operating networks (LONs) possibly do not necessitate smart metering technology at all to implement intelligent switching of electrical devices when it comes to the anticipated next generation of smart meters that bidirectionally “communicate” with the devices; nor do we touch the legal perspective [26].
We are convinced, however, that smart metering systems exemplify the convergence of business and embedded IT and therefore are highly representative of tomorrow’s virtualized infrastructures. Moreover, it
is a fact that there is a politically motivated desire to install these devices on a large scale; that in terms of smart meters for electricity, a regulation (2006/32/EG) requires new houses to be equipped with respective basic technology for energy efficiency reasons as of January 2010, and that consumption data must be transmitted electronically in standardized form since April 2010; that the EnWG requires the unbundling of energy providers, measurement device operators, and device readers; and that major energy providers are running huge sets of test installations today (e.g., 5000 households in Cologne). On the other hand, the economic benefits of rolling out smart metering technology remain to be proven; information security problems concerning the measuring devices as well as the communication of the measurement data have not been fully solved yet; and it is also true that the population is becoming increasingly aware of the potential privacy issues that emerge from this innovative technology, as highlighted by the example of the 2008 Big Brother award to Yello Strom for their smart metering technology.
##### 2.2 Architecture of a Typical Smart Metering System
In the following, we sketch the architecture of an abstract, yet typical smart metering system for electricity.

Energy is measured in the measuring device itself. The measuring device sends the data to a data concentrator (also called MUC, multi utility communicator; the name is motivated by the connection of a multitude of measuring devices for gas, water, etc.). Taken together, these two devices are usually called the smart meter. Depending on the frequency of transmittal (and, consequently, the degree of aggregation of the measurements), the meter sends measurement data either directly to a mobile phone or PDA, or via a power line or classical DSL modem (1) to a local PC that runs data administration software and (2) to the gateway of the billing company, which can but need not necessarily be the same as the energy provider (unbundling; some solutions also include the energy providers as intermediaries). The data administration software can be used to build personal profiles, and to contrast these personal profiles with other profiles (see below for the back end). We will assume that this software is also used to control appliances [1].
Text messaging and email services are being
implemented that warn members of a household
if they have likely forgotten to switch off, say, an
oven (whether or not the smart metering software and hardware alone can detect if specific
appliances are switched on is the subject of an
ongoing debate [31] – in any case, in conjunction with appliances connected to LONs, this
is clearly possible, even if the metering device
alone does not provide sufficient information for
this task). In any case, remote handling of appliances or radiators in intelligent buildings appears feasible.
Metering data can be sent from the data administration software to many other IT systems,
including Web 2.0 services such as social networks where, among other things, people can
show off how “green” their household is, or where
avatars shrink and grow depending on the energy that has been consumed. Conversely, parts
of the data administration software can be implemented
in the cloud so that access via external PCs becomes possible.
When data is transmitted from the household
to the energy provider or the respective billing
company, a plethora of IT systems enters the
game. These include gateways for the metering data, a web back-end for the end customer’s
data administration software that, among other
things, can provide profiling data of comparable
households, billing services, CRM systems, the
implementation of sending the above warning
text messages or emails, etc. Finally, it is perfectly conceivable that in case customers agree,
their data is sent to third parties, including appliance vendors that, for instance, may offer
class A fridges that are guaranteed to be amortized within a specific period of time, call centers, advertisement providers, marketing companies, etc.
Accordingly, a typical architecture of the overall system – of which every energy provider
of course offers differing instantiations – very
roughly looks as depicted in Figure 1 (boxes are
components, arrows represent data flows).
Figure 1: Smart metering systems: Bird’s eye view

[1] Data management and control of appliances can and should of course physically be implemented in separate components.

The entire smart metering system is characterized by two main features. Firstly, it combines an embedded system (the metering device itself) with several IT systems and, as such, is an excellent example of tomorrow’s integrated cyber-physical systems. Secondly, it is a highly
distributed system with many different areas of
governance, responsibility, and liability: the metering provider and operator, the home cockpit
software and its connection to Web 2.0 media,
the billing company, the energy provider, and
external third parties including call centers and
vendors. To date, it is unclear who will be responsible for the entire system, at least as far as privacy is concerned.
##### 2.3 Trusted Device

For security reasons, certain components of the smart metering system must be physically protected from manipulation. In particular,

• the measuring device itself must be protected from physical manipulation to ensure that the measurement corresponds to the true electricity consumption;

• there must be a trusted device providing certain (software) functionality, including encryption, which is protected from manipulation of its software;

• the devices that physically switch appliances must be protected from manipulations that may make them accept false switching commands. (These devices may be integral components of the appliances or, for “dumb” appliances, they may be part of the socket or plug.)

In our architecture, we assume that the smart meter (i.e., the measuring device and the MUC) and the trusted device are the same logical component.
**Architecture of the trusted device.** Software with different trust levels runs on the trusted device (core, kernel, application). The core cannot be updated remotely. The kernel can be updated, but only from trusted sources. The applications can come from the same source as the cockpit software.

The kernel part is basically a micro kernel providing critical functions.
Typical examples for critical functions that may be provided by the trusted device kernel are:

• (secure) communication (in particular with the provider and third parties); information flow limits may be guaranteed for this communication;

• cryptographic services (signing, encryption, etc.);

• access to the hardware (measuring device);

• switching appliances, i.e., secure communication with the devices that switch appliances (by signing switching commands);

• getting authorisation from the provider for changes in electricity consumption;

• enforcing upper limits for energy consumption (set by the user or by the provider);

• software updates (this includes checking authenticity of updates, and checking proofs in proof-carrying code);

• logging all relevant events.
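Two of the kernel functions above – signing switching commands and enforcing an upper consumption limit – can be sketched together in a few lines of Python. This is an illustrative sketch only: the use of a shared-secret HMAC, the key provisioning, and the limit policy are our assumptions, not part of the architecture described here.

```python
import hashlib
import hmac
import json

class TrustedKernel:
    """Toy model of a trusted-device kernel: it signs switching commands,
    but only if an enforced upper limit on energy consumption permits it."""

    def __init__(self, signing_key, max_watts):
        self._key = signing_key        # provisioned secret; never leaves the device
        self._max_watts = max_watts    # upper limit set by the user or the provider
        self._current_watts = 0

    def authorize_switch(self, appliance, watts):
        """Return a signed switch-on command, or None if the limit is exceeded."""
        if self._current_watts + watts > self._max_watts:
            return None                # refuse: command would exceed the limit
        self._current_watts += watts
        cmd = json.dumps({"appliance": appliance, "watts": watts}).encode()
        tag = hmac.new(self._key, cmd, hashlib.sha256).digest()
        return cmd + tag               # command followed by a 32-byte tag

def verify_switch(signing_key, signed_cmd):
    """What a switching device would check before acting on a command."""
    cmd, tag = signed_cmd[:-32], signed_cmd[-32:]
    expected = hmac.new(signing_key, cmd, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

A real trusted device would presumably use asymmetric signatures and per-device keys; the symmetric HMAC merely keeps the sketch self-contained.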
#### 3 Challenges
In a smart metering environment as described
above, a number of challenges arise, both related
to the integrity and confidentiality of software
and data. Concretely, we can isolate several desirable properties of a smart metering system.
**Confidentiality of customer data.** A customer (i.e., the owner of a household) might be
interested in protecting his or her detailed power
consumption traces. Namely, individual electrical devices (ovens, hair dryers, TV sets, etc.)
have characteristic power consumption patterns
which make it possible to even identify single
appliances [31]. Hence, detailed power traces
reveal a level of information about the customer
that makes it useful for marketing purposes. For
instance, heavy users of kitchen devices are more
likely to be susceptible to food-related advertisements. Heavy computer users might be more
susceptible to advertisements for microelectronics or computer games.
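The appliance-identification risk can be made concrete with a toy Python sketch. This is not a real load-disaggregation algorithm, and the signature values below are invented; it only illustrates that a characteristic power pattern can be located in a consumption trace by simple sliding-window comparison.

```python
def matches_at(trace, signature, offset, tolerance=10):
    """True if the signature fits the trace at this offset, within tolerance watts."""
    window = trace[offset:offset + len(signature)]
    return len(window) == len(signature) and all(
        abs(w - s) <= tolerance for w, s in zip(window, signature))

def find_appliance(trace, signature):
    """Return all offsets at which the appliance's power signature occurs."""
    return [i for i in range(len(trace) - len(signature) + 1)
            if matches_at(trace, signature, i)]

# Invented example: a ~2.1 kW heating element against a 100 W base load.
trace = [100, 100, 2100, 2105, 2098, 100, 100]
oven_signature = [2100, 2100, 2100]
assert find_appliance(trace, oven_signature) == [2]
```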
Detailed information about power consumption patterns can also be used on a larger scale, e.g., by mining power consumption patterns to isolate individuals from certain groups (e.g., jobless persons, night-shift workers, people who arrived home at certain points in time, etc.). Specifically, large-scale data mining could be used for dragnets. Besides, detailed power traces can be used to determine, e.g., how many people live in the household, when the household members are on vacation, or even when they leave the house. In principle, this data is useful for burglars, in particular when such data can be collected and filtered on a large scale. Even more fine-grained data about the household owners can be extracted by matching with typical consumption patterns of, e.g., students, or persons with a full-time or part-time job, or without a job.
Hence, to protect the customer’s privacy, detailed power consumption traces should be protected [31]. Of course, on the other hand, the
energy provider has a legitimate interest in using power consumption information for billing
and to predict power demands and adjust its
infrastructure.
**System Software Integrity.** The integrity
of the system and, in particular, the trusted device must be protected from attacks from the
user, the provider, or third parties. For this,
the design and the correct implementation of
the software in the trusted device plays a central
rôle.
As the cockpit software runs on the user’s PC on a standard operating system, the integrity of the cockpit software is hard to protect from attacks by the user (except by obscurity) or by third parties using malware.
As smart meters will be installed in households for quite some time before they are exchanged, it should be possible to remotely update the software on the trusted device (otherwise updates are too costly). It is a difficult challenge to nevertheless ensure integrity. The core of the trusted device, which cannot be updated itself, has to provide this assurance.
**Authenticity and integrity of measurements.** Measurements exist both in raw and in aggregated form. These aggregations pertain to the dimensions of both time (seconds, hours, days, months) and space (one appliance, a household, a house, a block, a district, a city). Among other things, whenever these aggregations are used for billing or for load balancing, their integrity and authenticity become crucial properties. Otherwise, a possible attack consists of tricking an energy provider into thinking that either too much or too little energy will be needed at a specified moment in time, with potentially hazardous consequences for the infrastructure.
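The two aggregation dimensions can be illustrated with a short Python sketch; the record layout (timestamp, district, household, watt-hours) is an assumption made purely for illustration.

```python
from collections import defaultdict

# Each raw reading: (timestamp_seconds, district, household_id, watt_hours)
readings = [
    (3600 * 0 + 10, "district-a", "h1", 5),
    (3600 * 0 + 70, "district-a", "h2", 7),
    (3600 * 1 + 20, "district-a", "h1", 4),
    (3600 * 0 + 30, "district-b", "h3", 9),
]

def aggregate(readings):
    """Roll raw readings up along time (per hour) and space (per district)."""
    per_hour = defaultdict(int)       # time dimension
    per_district = defaultdict(int)   # space dimension
    for ts, district, _household, wh in readings:
        per_hour[ts // 3600] += wh
        per_district[district] += wh
    return dict(per_hour), dict(per_district)
```

The coarser the chosen granularity, the less an aggregate reveals about any individual household – which is exactly the privacy/utility tradeoff discussed above.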
**Authenticity of control signals.** One has to ensure that control signals are not falsified. Even if they are generated by the cockpit or by the user via a PDA, they cannot be trusted completely. Terrorists could start a distributed denial-of-service attack or worse if they can install malware on the cockpit and thus switch a large number of appliances at the same time, producing a surge in energy consumption and a system breakdown.

The only protection is that switching is done by the trusted device (possibly requiring authorisation from the provider for certain changes in consumption).
**Certification, trust, and adequacy of requirements.** It is not sufficient to build a secure system. Security must be checkable and certifiable. This is particularly important as many stakeholders are involved.
#### 4 Fully Homomorphic Encryption: Operating on Encrypted Data
In this section, we will outline techniques to securely and efficiently aggregate data. This will in particular be useful for our secure metering use case. However, the techniques are of course versatile enough for more general applications. Hence, we first introduce the technical tools, and then comment on their use in our smart metering example.
##### 4.1 Fully homomorphic encryption
**Motivation: Cloud computing.** Virtualized infrastructures such as cloud computing make it possible to outsource computation tasks through Internet outlets. These Internet outlets are not necessarily trustworthy. In fact, in services such as Amazon’s Elastic Cloud, the customer does not even know on which physical machines the data is located. In particular when working with sensitive data, it is of course highly undesirable to send all data in plain to an unknown server.

An obvious solution is to encrypt the data before transmitting it to servers in the cloud. However, conventional encryption schemes do not allow computing on encrypted data: once encrypted, the data can only be decrypted, but not operated on. (In fact, in certain scenarios such as Internet auctions, being able to manipulate encrypted data can become a security weakness: encrypted bids can be modified and then used to overbid a competitor.)
**Fully homomorphic encryption (FHE).** Until very recently, fully homomorphic encryption schemes (i.e., encryption schemes that allow arbitrary computations on encrypted data) were actually deemed impossible. However, in a breakthrough work in 2009, Craig Gentry from the IBM T.J. Watson Research Center finally succeeded in constructing the first FHE scheme [18]. His scheme allows performing arbitrary computations on encrypted data. The result of such a computation is again encrypted, so that the entity who performs the computation learns nothing about the data or about the result.

Fully homomorphic encryption might seem like the obvious way to achieve secure cloud computing: instead of sending all data in plain into the cloud to outsource computations on that data, encrypt all data, and let the cloud compute on this encrypted data. The encrypted result can then be sent back to the customer, who possesses the secret key to decrypt the result.
**The (in)efficiency of general FHE.** However, there is a catch with this idea: as of today, FHE schemes are far too inefficient to be useful in the cloud computing setting. That is, computing on encrypted data is computationally far more expensive than computing on plain, unencrypted data. Depending on the desired level of security, current (June 2010) implementations of FHE schemes require several seconds to perform a single gate operation (i.e., a bitwise and, or, or not operation) on encrypted data. Besides, due to a highly redundant encoding (in current schemes), encrypting data results in a dramatic blowup in storage requirements. Moreover, computations on encrypted data have to be expressed as circuits. In particular, this requires unrolling loops and following all branches of if...then...else... constructs, which makes the computation in itself much more inefficient.
##### 4.2 Additively homomorphic encryption: an example

**Outline.** Current homomorphic encryption techniques hence do not seem ready yet for a direct application to the cloud computing setting. Still, there is hope for practical solutions that only partially rest on the properties of homomorphic encryption. (We will comment on such solutions later on.) Besides, we can still hope for practical solutions that are optimized for specific settings.

**Additively homomorphic encryption.** In our examples, we will only need to operate in a very specific way on encrypted data. Put differently, we will only need to perform a specific class of homomorphic operations on ciphertexts. More specifically, we will only need an additively homomorphic encryption scheme, that is, an encryption scheme which allows computing the encryption of the sum of several encrypted plaintexts.
**Paillier’s scheme.** Such encryption schemes are well-known to exist, and in fact are quite efficient. As an example of an additively homomorphic encryption scheme, we recapitulate Paillier’s encryption scheme [32]. Paillier’s scheme works in the ring Z_{N^2} for a product N = PQ of two large primes P and Q. Its algorithms are defined as follows:

**Key generation.** Choose N = PQ and g ∈ Z_{N^2} with ord(g) = ϕ(N) = (P − 1)(Q − 1). Publish the public key pk = (N, g) and keep the secret key sk = (P, Q).

**Encryption.** To encrypt m ∈ {0, ..., N − 1}, uniformly choose r ∈ {0, ..., N − 1} and compute the ciphertext

    C := r^N (1 + N)^m mod N^2.

**Decryption.** To decrypt C ∈ Z_{N^2}, compute

    C^{(P−1)(Q−1)} = r^{N(P−1)(Q−1)} (1 + N)^{m(P−1)(Q−1)}
                  = (1 + N)^{m(P−1)(Q−1)}
                  = 1 + m(P − 1)(Q − 1)N mod N^2,

from which m(P − 1)(Q − 1) mod N and thus m can be computed. (Note here that 1 + N has order N, since (1 + N)^N = 1 + N · N + · · · = 1 mod N^2.)

A distinguishing feature of Paillier’s encryption scheme is the (additively) homomorphic property: we have

    Enc(pk, m1) · Enc(pk, m2) = r1^N (1 + N)^{m1} · r2^N (1 + N)^{m2}
                              = (r1 r2)^N (1 + N)^{m1+m2}
                              = Enc(pk, m1 + m2),

where r1 r2 and m1 + m2 are computed modulo N. (Technically, to ensure that C := Enc(pk, m1) · Enc(pk, m2) really is a properly distributed encryption of m1 + m2, we have to rerandomize C by multiplying with a fresh random value r^N.)
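The scheme and its homomorphic property can be checked with a short Python sketch. The primes below are illustratively tiny and offer no real security; a deployment would use primes of at least 1024 bits.

```python
import random
from math import gcd

def keygen(P=1009, Q=1013):
    """Toy Paillier key generation with tiny primes (illustration only)."""
    N = P * Q
    lam = (P - 1) * (Q - 1)           # phi(N), used as the decryption exponent
    assert gcd(lam, N) == 1
    return N, lam                      # pk = N, sk = lam

def encrypt(N, m):
    N2 = N * N
    r = random.randrange(1, N)
    while gcd(r, N) != 1:
        r = random.randrange(1, N)
    # C = r^N * (1+N)^m mod N^2
    return (pow(r, N, N2) * pow(1 + N, m, N2)) % N2

def decrypt(N, lam, C):
    N2 = N * N
    x = pow(C, lam, N2)                # = (1+N)^(m*lam) = 1 + m*lam*N mod N^2
    return ((x - 1) // N) * pow(lam, -1, N) % N

# Homomorphic property: multiplying ciphertexts adds the plaintexts.
N, lam = keygen()
c = (encrypt(N, 120) * encrypt(N, 345)) % (N * N)
assert decrypt(N, lam, c) == 465
```

Note that `pow(lam, -1, N)` (Python 3.8+) computes the modular inverse needed to strip the factor (P − 1)(Q − 1) during decryption, exactly as in the derivation above.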
##### 4.3 Applications to smart meter- ing
**Setting.** In a smart metering system, we can
think of securely aggregating measurements before transmitting them to the energy provider,
in order to ensure (a certain degree of) confidentiality of the customer’s data. Concretely, we
can aggregate measurements in two dimensions:
over time (i.e., we can aggregate measurements
from throughout the week), or space (i.e., we
can aggregate from several customers). In both
cases, only an additively homomorphic encryption scheme is necessary.
**A concrete protocol.** [16] explain how the
secure aggregation of measurements across several customers can be performed using an additively homomorphic encryption scheme such
as Paillier’s scheme. Concretely, the idea is as
follows:
1. Each customer i performs his or her own
-----
the aggregation [�]i _[m][i][ of several measure-]_
ments from several customers. Each customer possesses a Paillier public/secret keypair, as does the energy provider.
passing cars and periodically transmit measurements to a base station. Transmitting those
measurements in plain and unencrypted could
result in a loss of anonymity: it could become
possible to track individual cars. Only encrypting measurements and sending them to the base
station would still allow that base station to
track individual cars. However, suppose now
that encrypted measurements could be aggre_gated, in the sense that each sensor_
_• first receives an encryption of the so far ag-_
gregated measurements Enc(pk _,_ [�]j[i][−]=1[1] _[m][j]_ [)]
of the previous station,
_• then homomorphically adds its own (en-_
crypted) measurement Enc(pk _, mi_ ) to that
previous measurement,
_• and sends the encrypted accumulated mea-_
surement Enc(pk _,_ [�]j[i] =1 _[m][i]_ [)][ to the next sta-]
tion.
In this scenario, a base station would only receive accumulated traffic information. Such accumulated information can still be helpful, e.g.,
to detect and potentially prevent traffic jams,
but protects the privacy of individuals. In fact,
we can even have a tradeoff between privacy
and monitoring accuracy by adjusting the degree of accumulation. Thus, we have a system
whose properties can be adjusted by fine-tuning
parameters, similar to systems in algorithm engineering. Depending on the desired concrete
application parameters, as well as security and
efficiency goals, we can hope to find an optimal
point in this continuum for a given specific application.
In this traffic analysis setting, a certain homomorphic property is required from the used
encryption scheme, since it must be possible to
aggregate natural numbers. However, very efficient encryption schemes—such as Paillier’s encryption scheme—with such a limited accumulation property are well-known to exist. In particular, while fully homomorphic encryption would
lead to an impractical solution, a practical solution can be found by using the specific structure
of the problem.
**Secure storage.** Similarly, when merely large
_storage capacities in the cloud are required,_
again efficient solutions exist. For instance,
consider a scenario in which a large medical database that includes individual patient
records is to be outsourced into the cloud. En
2. All customers engage in an efficient multiparty protocol to compute an encryption
of the aggregation [�]i _[m][i][ of measurements]_
under the energy provider’s public key.
�
_C := Enc(pk_ _,_ _mi_ ).
_i_
(Note that this does not involve point-topoint communication among the customers,
but only a link from each customer to the
energy provider.)
3. In the end, the energy provider decrypts C
and (only) learns the aggregation of measurements, while no customer learns anything (on top of his or her own measurement of course).
**How to go further.** This approach of secure aggregation demonstrates the applicability of (limited) homomorphic encryption to the
smart metering setting. In particular, [16] show
how cryptography can be used to simultaneously achieve seemingly contradictory requirements (the energy provider’s desire to gather information vs. the customer’s privacy). Our goal
is to extend these ideas for the use in a practical
smart metering system.
For instance, as outlined in Section 3, an additional requirement present in a smart metering system is the integrity (i.e., authenticity) of
measurements. Such an authenticity requirement can be fulfilled by digitally signing the
measurements. However, signed measurements
can no longer be easily accumulated (e.g., inside a Paillier encryption). It is an interesting
and unique challenge to combine such authentication methods with aggregation techniques.
(More specifically, we would want to aggregate
signed pieces of data, such that an aggregated
signatures authenticates the accumulated data.)
##### 4.4 More examples
**Secure aggregation of traffic data.** As another example of the use of (limited) homomorphic encryption, consider the secure aggregation
of traffic data. We could imagine a number
-----
to protect the privacy of individual patients. At
the very least, a potentially curious server in the
cloud might learn which (encrypted) records are
accessed more often.
However, in this setting, cryptographic technologies such as private information retrieval
_(PIR) can be employed. PIR techniques allow_
for a comparatively efficient access to encrypted
stored data, while the actual server on which
that data is stored learns essentially only that
an access took place (but not which part of the
data was accessed).
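To give a flavour of how PIR can hide the accessed position, consider the classic two-server, information-theoretic construction (in the style of Chor et al.); the toy bit database below is our own illustration, not part of the system described here.

```python
# Toy two-server information-theoretic PIR (in the style of Chor et al.).
# Each server sees a query that is a uniformly random subset of indices,
# so neither server learns which record the client wants; the XOR of the
# two answers is exactly the requested bit. Database contents invented.
import secrets

db = [1, 0, 1, 1, 0, 0, 1, 0]         # replicated on both servers
n = len(db)

def server_answer(query):
    """A server XORs together the bits at the queried positions."""
    ans = 0
    for j in query:
        ans ^= db[j]
    return ans

def pir_read(i):
    """Client-side: fetch db[i] without revealing i to either server."""
    q1 = {j for j in range(n) if secrets.randbits(1)}
    q2 = q1 ^ {i}                     # symmetric difference: flips index i
    return server_answer(q1) ^ server_answer(q2)

assert all(pir_read(i) == db[i] for i in range(n))
```

Because the two queries differ only at position i, the XOR of the answers cancels everything except db[i], while each query in isolation is uniformly random.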
##### 4.5 Supporting FHE by other tools
Orthogonally, we can hope to use (fully) homomorphic encryption techniques in an efficient
way when they are supported by additional
cryptographic tools. For instance, imagine a
small tamper-proof hardware device that performs arbitrary computations. Of course, one
has to be careful in making such assumptions,
since
(a) it takes a significant effort in hardware design to protect such a device against physical attacks, and
(b) the device is small, so we cannot assume that it is computationally very powerful.
Such hardware tokens can be used to bootstrap
a very general class of secure computations. In
particular, hardware tokens alone enable arbitrary secure two-party computations. (In a secure two-party computation, both parties get
an input x1, resp. x2, and eventually receive an
output f (x1, x2), where the function f is agreed
upon. It should be stressed that both parties do
_not learn anything about the other party’s in-_
put, beyond f (x1, x2) of course.) Many real-life
protocol tasks (e.g., negotiations for a price) can
be expressed as such a secure two-party computation.
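The two-party functionality described above can be stated as an "ideal functionality" sketch, in which a trusted party (standing in for the hypothetical tamper-proof token) sees both inputs but releases only f(x1, x2); all names and values are invented for illustration.

```python
# Ideal-functionality view of secure two-party computation: a trusted
# party receives both inputs and releases only f(x1, x2). A real
# protocol replaces this trust with cryptography (e.g., garbled
# circuits or homomorphic encryption).

def ideal_two_party(f, x1, x2):
    # Neither party sees the other's input; only the result leaves.
    return f(x1, x2)

# Price negotiation: both parties learn only whether a deal is possible,
# not the counterpart's limit (values are made up).
deal = ideal_two_party(lambda bid, ask: bid >= ask, 90, 80)
assert deal is True
```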
However, constructing tamper-proof devices
requires an expensive dedicated hardware design. In particular, if we have k different devices for the use in different contexts, we will
have to design and protect k different pieces of
hardware. Fully homomorphic encryption can
come to our aid here: instead of designing k
different hardware devices, we only design one
universal decryption device. Such a device contains the secret key that is necessary to decrypt
(fully homomorphic) encryptions. We can now encrypt the inputs x1 and x2 (this can be done using a public encryption key), and then homomorphically compute an encryption of f(x1, x2) from the encrypted xi. Finally, the result is decrypted and output. Observe that only the initial encryption
and final decryption steps actually have to be
protected; the actual computation takes place
on encrypted data and thus could even be performed publicly.
Along these lines, fully homomorphic encryption makes it possible to generically construct arbitrary
tamper-proof hardware from one single and very
specific piece of hardware for encryption and decryption. In particular, an expensive hardware
protection process has only to be performed
once and for all. Arbitrary tamper-proof hardware devices can be derived almost canonically.
Of course, we still have to ensure that our
solution is reasonably efficient. In particular,
when using fully homomorphic encryption, we
still suffer from a considerable slowdown. However, hardware tokens are already only used
for certain protocol-critical operations for which
hardware support is required to ensure security.
(One can think of using hardware tokens only
to store and thus physically protect long-term
secret keys.) Hence we can hope that the use
of computationally very expensive techniques
like fully homomorphic encryption is much more
practical than, e.g., in a generic cloud computing setting.
##### 4.6 A tradeoff
We believe that the preceding examples demonstrate that it is crucial to use “heavy” cryptographic techniques like fully homomorphic encryption with care, with a lot of fine-tuning for
the actual application. Again we have a tradeoff between efficiency and security, where the
degree to which a cryptographic tool like fully
homomorphic encryption is used determines the
characteristics of an implementation.
#### 5 Language-Based Security
Traditional software security mechanisms, such as access control, certification of origin, protocol verification, and intrusion detection, will of course be necessary in virtualized infrastructures, but will not be sufficient. For DSCI, _integrity_ will be essential, meaning that critical
data cannot be manipulated from the outside. For the protection of privacy, _confidentiality_ will be essential, meaning that private data cannot flow to public ports. However, both
cannot be guaranteed with classical techniques
alone: classical approaches do not really give
guarantees about the behaviour of software, but
rather about its origin.
Fortunately, research in software security has
developed techniques such as proof-carrying
code and information flow control (IFC), which
analyze the true semantics of software, and provide guarantees about software behavior and not
just its “packaging”. As such analyses examine the program source code, they are called
“language-based”. Modern program analysis
based on interprocedural dataflow analysis, abstract interpretation, or model checking has
developed very powerful tools for discovering
anomalies in software. Experimental security
infrastructures based on these techniques have
been developed in large European projects [5].
IBM developed a tool for IFC which can analyse large programs written in full Java [35]. New
results concerning central notions such as noninterference and declassification are pursued in
the new DFG priority program “Reliably Secure Software Systems” (RS3). RS3 integrates
software security with advanced verification and
program analysis. In the following, we will describe some of the new security techniques, as
well as their application to smart meters. Note
that these techniques have many other applications in virtualized infrastructures.
##### 5.1 Proof Carrying Code
Proof carrying code is code for software components (typically mobile components), which
comes with an (encoded) formal proof of some
desirable property of the software. Properties might be functional, safety, or security related. Proofs are written in some formal logic,
and refer to the program text of the software
(e.g. loop invariants in Hoare logic). Upon installation or plug-in, the proof must automatically be checked for correctness, and it must be
checked that the proof does indeed correspond
to the software component. Proof carrying code
is based on the fact that checking a proof can
be done efficiently, in contrast to the expensive
(manual) construction of the proof. In the literature, appropriate formal logics as well as efficient proof checkers have been described in detail.
class PasswordFile {
    private String[] names;        /* P: confidential */
    private String[] passwords;    /* P: secret */

    // Pre: all strings are interned
    public boolean check(String user,
                         String password /* P: confidential */) {
        boolean match = false;
        try {
            for (int i = 0; i < names.length; i++) {
                if (names[i] == user
                        && passwords[i] == password) {
                    match = true;
                    break;
                }
            }
        }
        catch (NullPointerException e) {}
        catch (IndexOutOfBoundsException e) {}
        return match;  /* R: public */
    }
}
Figure 2: A Java password checker
A security infrastructure based on proof carrying code has been developed, which is used for Java code in mobile devices.
In the smart meter application, proof carrying code could be very helpful once new software versions are downloaded to the smart meter. Integrity and privacy properties must be
formalized when developing the software to be
downloaded, and corresponding formal proofs
be constructed (this will be a nontrivial task).
The checker is based on theorem prover technology, and must be part of the trusted device
(see section 2.3). Upon download, the checker
will guarantee functionality and security, or – if
proof checking fails – will disallow installation.
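The division of labour behind proof carrying code — expensive proof construction by the producer, cheap proof checking by the consumer — can be sketched as follows. The interval-arithmetic "proof" format, the instruction set, and all names are our own simplification, not an actual PCC logic.

```python
# Minimal proof-carrying-code sketch. The producer ships a straight-line
# program together with a "proof": one claimed interval fact per
# instruction. The consumer checks each fact cheaply by interval
# arithmetic -- checking is easy even though finding good facts may be
# hard.

def check(program, proof, x_range):
    """program: list of ('add', c) or ('mul', c) ops on a value x;
    proof: claimed (lo, hi) bounds after each op; x_range: initial
    bounds. Returns the certified final bounds or raises ValueError."""
    lo, hi = x_range
    for (op, c), (clo, chi) in zip(program, proof):
        if op == 'add':
            lo, hi = lo + c, hi + c
        elif op == 'mul':              # assume c >= 0 for simplicity
            lo, hi = lo * c, hi * c
        if not (clo <= lo and hi <= chi):
            raise ValueError("proof step rejected")
        lo, hi = clo, chi              # continue from the certified fact
    return lo, hi

prog = [('add', 3), ('mul', 2)]
proof = [(3, 13), (6, 26)]             # facts for x initially in [0, 10]
assert check(prog, proof, (0, 10)) == (6, 26)
```

If a single claimed fact fails to cover the computed bounds, the checker rejects the whole component, mirroring the "disallow installation" behaviour described above.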
##### 5.2 Information Flow Control
Proof carrying code can guarantee arbitrary
functional or security related properties, but requires expensive proof preparation and nontrivial checkers. As an alternative, new techniques
for language-based security can be applied to
guarantee integrity and privacy. In particular,
_information flow control analyses the program_
source or byte code for security leaks. Data
which are marked confidential (e.g. power consumption traces) must not flow to public ports
(e.g. the gateway of the energy provider), or
perhaps only in aggregated form as discussed
Figure 3: Program dependency graph for figure
2 (exceptions included)
(e.g. appliance switching commands) must not
be manipulated from outside (e.g. by the billing
company – but perhaps manipulation from the
“cockpit” is allowed).
Technically, information flow control is difficult, in particular for realistic programs (e.g.
100 kLOC) written in realistic languages (e.g.
full Java byte code). Concurrency and multithreading make information flow control particularly demanding. The theoretical foundations, such
as noninterference and declassification, are still
subject to ongoing research. The Mobius
project delivered the first information flow infrastructure for Java Card applications on mobile devices; it is based on security type systems.
In Germany, the new SPP “reliably secure software” integrates information flow control with
modern program analysis and verification technology. Let us thus describe one such approach,
as developed in the group of G. Snelting [22], in
more detail.
Security type systems, as used by Mobius,
have been an important step and are quite efficient, but can be imprecise, resulting in false alarms. A more precise analysis must exploit flow-sensitive, object-sensitive, and context-sensitive information as computed by interprocedural dataflow analysis. The results of such
an analysis can be encoded in form of a program dependency graph, as indicated in figure
3. Without going into details, note that information can flow in the program only along
paths in the dependency graph. If there is no
Figure 4: Program dependency graph for figure 2 (exceptions excluded) with computed security levels (white=public, grey=confidential,
dark=secret). The program contains a security
leak showing up as a level conflict in the return
node (upper right). Indeed, there is an information flow from the secret password table to the
public output, which can be exploited.
path between two nodes in the graph, then there can be no flow of information between them. This fundamental property
(for which a machine-checked formal proof exists [38]) makes dependency graphs so suitable
for information flow control. Note that in the
presence of procedures, arrays, objects, exceptions, etc. the construction of the graph becomes very complex. Hundreds of papers have
been written on the subject; today, two dependency graph implementations for full Java exist
(one, the JOANA tool, developed in Snelting’s
group), as well as a commercial implementation
for C/C++, called CodeSurfer.
For information flow control, input and output ports in the graph must be annotated with
security levels. For the rest of the program resp.
its dependency graph, security is checked by a
fixpoint iteration which is based on the following
fundamental equations:
S(x) ≥ P(x) ⊔ ⊔_{y ∈ pred(x)} S(y)   if x ∈ dom(P)
S(x) ≥ ⊔_{y ∈ pred(x)} S(y)   otherwise
R(x) ≥ S(x)   if x ∈ dom(R)

where S(x) is the security level of a graph node x, P the annotation of an input port, and R the annotation of an output port. For figure 2, the resulting security levels are shown in figure 4. The JOANA analysis can handle full Java bytecode and scales up to 50kLOC; it is implemented as an Eclipse plug-in (figure 5). Full details can be found in [22, 20, 21]. The analysis is currently adapted for mobile components.
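The fixpoint iteration over the dependency graph can be sketched as follows; the graph, lattice, and annotations are invented for illustration and are not taken from the JOANA tool.

```python
# Minimal sketch of PDG-based information-flow checking: security
# levels propagate along dependency edges by fixpoint iteration, and a
# leak shows up as a level conflict at an output port.

LEVELS = {'public': 0, 'confidential': 1, 'secret': 2}

def ifc_check(edges, P, R):
    """edges: list of (src, dst) dependencies; P: input-port levels;
    R: output-port levels. Returns nodes where R(x) < S(x), i.e. leaks."""
    nodes = {n for e in edges for n in e} | set(P) | set(R)
    S = {n: LEVELS.get(P.get(n), 0) for n in nodes}
    changed = True
    while changed:                         # fixpoint iteration
        changed = False
        for src, dst in edges:
            lvl = max(S[dst], S[src])      # join over predecessors
            if lvl != S[dst]:
                S[dst], changed = lvl, True
    return [n for n, lvl in R.items() if LEVELS[lvl] < S[n]]

# The secret password table flows into the comparison and then into the
# return value, which is annotated public -> conflict, as in figure 4.
edges = [('passwords', 'cmp'), ('user', 'cmp'), ('cmp', 'return')]
leaks = ifc_check(edges, {'passwords': 'secret', 'user': 'confidential'},
                  {'return': 'public'})
assert leaks == ['return']
```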
Figure 5: Eclipse plugin for information flow control
For smart meters and many other software in virtualized infrastructures, it will be necessary to apply information flow control to (selected parts of) the source code. In particular, the analysis can guarantee that the integrity of the trusted device cannot be broken by software attacks. This is in turn essential for dependable cryptography and proof checking. Information flow control can also guarantee that household appliances cannot be controlled directly by external software, thus protecting safety and integrity of the appliances. Information flow control will guarantee privacy protection by introducing appropriate security levels for secret, encrypted, aggregated, and public data; by analysing the information flow for all such data in the smart meter and the cockpit; and by carefully introducing declassification [30], e.g. at aggregation points.

For software outside the smart home, a complete information flow analysis will not be possible due to the tremendous software size and the number of stakeholders. Still, information flow control can analyse critical software kernels, but must be combined with more traditional technology such as cryptography, certificates and mandatory access control. Static information flow control will also be extended by dynamic analysis and runtime verification, as described in section 7.

#### 6 Deductive Program Verification

##### 6.1 Deductive Verification for Ensuring Confidentiality and Integrity in Smart Meters

Various measures can be taken to ensure the confidentiality and integrity of software in smart meters. But for the smart meter to be trustworthy, in the end it is indispensable that the kernel software in the trusted device is functionally correct.

Even if other techniques (e.g., run-time checking or proof-carrying code) are used to ensure critical properties, certain functionality of the kernel must be verified in addition (e.g., it must be shown that the run-time checker is implemented correctly). One may use information-flow control to show that communication mechanisms are used in such a way that confidentiality is preserved. But one must still verify that these communication mechanisms are implemented correctly and do not allow information leaks.

Thus, validating the functional correctness of the trusted device's system kernel is central to ensuring the integrity of the smart metering system. And since bugs in the kernel could be exploited for system-wide attacks against a critical
infrastructure, it is essential to apply formal methods – such as deductive program verification – to ensure the kernel's correctness.
Formal methods are also needed because
smart metering technology combines two tradeoffs in a complex way: confidentiality vs. intelligent control, and integrity vs. adaptability and
openness. This entails that properties need to
be ensured that balance these trade-offs and are
accordingly complex and difficult to formulate.
The integration with the physical world adds
further complexity.
There are different possibilities for who performs the verification of different system parts.
In particular, we distinguish between the kernel itself and the applications and device drivers running on top of the trusted kernel. A certification agency may be involved in different roles. It may validate the software itself, it may check a verification performed by the system developer, it
may certify tools used for verification, or it may
provide (formalisations of) properties, and/or
tools that allow the user to check evidence provided by the developer (proof-carrying code).
##### 6.2 Deductive Verification of Sys- tem Code
**Overview.** The field of deductive program
verification, i.e., formal reasoning about the behaviour of programs, is old. The idea of applying deduction to programs goes back at least
to the work of Scott, Plotkin, and Milner in
the late 1960s. Recent years have brought
tremendous advances in both scope and practicality, however. Today, program verification
is applied to real-world software. For example, security-critical system software is verified
in the Verisoft XT project (see, e.g., the paper
by [9] on deductive verification in Verisoft XT)
and the L4.verified project (see the overview paper by [27]).
As an example of a successful method for deductive verification of system code, we describe below the approach used in the Verisoft XT project. While Verisoft XT did not lead to a full verification of a microkernel (mostly due to a lack of time and manpower), it clearly demonstrated that a complete verification is feasible. The kernel considered in Verisoft XT,
SYSGO’s PikeOS, may very well serve as the
basis for implementing a trusted device kernel
**Verisoft XT: Verifying the PikeOS Microkernel.** In the first phase of the Verisoft
project it has been shown that pervasive formal
verification of an academic operating system including its execution environment, like the underlying hardware and the compiler, is feasible.
In the subproject Avionics of the successor
project Verisoft XT, this knowledge was applied
and refined to the verification of a real world
implementation of a microkernel used in industrial embedded systems, namely PikeOS from
SYSGO AG which operates in safety-critical environments. One goal of the Verisoft XT subproject Avionics was to prove functional properties of the source code of the microkernel using
Microsoft’s verification tool VCC [10].
PikeOS (see http://www.pikeos.com/) consists of a microkernel acting as paravirtualizing
hypervisor and a system software component.
The PikeOS kernel is particularly tailored to
the context of embedded systems, featuring realtime functionality and orthogonal partitioning
of resources such as processor time, user address
space memory and kernel resources.
PikeOS could easily be adapted to the requirements of a smart metering system. It would, of
course, have to be extended by additional functionality for this particular application, such as
encrypted communication, switching appliances
etc.
**The Verifying Compiler Approach.** It is
widely recognized that interaction is indispensable in deductive verification of real-world code.
Verification engineers have to guide the proof
search and provide information reflecting their
insight into the workings of the program. Lately
we have seen a shift towards a paradigm, called
verifying compilers [25], where the required information is provided in form of program annotations instead of interactively during proof
construction. This has some interesting consequences upon the verification process and the
way annotations are used to specify programs
as the lines between requirement specification
and information required for proof construction
and proof search guidance get blurred.
Also, verifying compilers allow for new ways
of coping with programming language semantics. Instead of directly axiomatizing the complex semantics of the high-level programming
language, verification is done at the level of an
-----
mantics. A prominent example is Microsoft’s
BoogiePL [13], which is used in Spec# and VCC
among other tools. Annotated code in the intermediate language is typically obtained from
annotated source code by using compiler technology. Despite additional problems – the transformation from annotated source code to intermediate code, for example, obfuscates the verification problem and makes it harder to map verification results back to the source code level –,
the use of an intermediate language offers substantial advantages. It facilitates adaptation to
other programming languages, but foremost it
allows a separation of concerns, namely the semantics of the source code programming language on the one hand and the genuine verification problem on the other. Also, intermediate languages usually have constructs, such as a
non-deterministic choice operator, that are difficult to include in a real programming language
but are very useful for formal specification and
verification.
Tools following the verifying compiler
paradigm include Spec# [4], VCC [34], and
Caduceus [15]. They are all based on powerful fully-automatic provers and decision
procedures, and they support real-world programming languages such as C and C#. VCC
was used in the Verisoft XT project.
Verification in VCC is modular, both with respect to threads and functions. Functions are
equipped with contracts in form of pre- and
post-conditions, giving all necessary conditions
to call the function and the guarantees on the
state, when the function returns. Callers are
then verified with respect to the contracts, not
bodies, of the called functions. The program is
verified as if it were executed by a single thread
but, to handle concurrency, predicates describing knowledge about the state are weakened at
possible points of interleavings to simulate the
effects of other threads.
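The contract-based, modular style of verification can be illustrated with a small sketch. Here contracts are merely checked at run time (a real verifier such as VCC discharges them statically for all inputs), and all names are our own.

```python
# Sketch of modular, contract-based verification in the VCC style:
# each function carries a pre-/post-condition, and callers are checked
# against the contract only, never against the body. The decorator
# enforces contracts dynamically, purely to illustrate the idea.

def contract(pre, post):
    def wrap(f):
        def checked(*args):
            assert pre(*args), "precondition violated by caller"
            result = f(*args)
            assert post(result, *args), "postcondition violated by body"
            return result
        return checked
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r >= x for x in xs))
def maximum(xs):
    m = xs[0]
    for x in xs[1:]:
        if x > m:
            m = x
    return m

assert maximum([2, 9, 4]) == 9
```

A caller of `maximum` is verified against the pre-/post-condition pair alone, which is what makes verification of large systems modular.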
**Specifying a microkernel with a simula-**
**tion relation.** As explained above, properties
of a smart metering system that need to be verified to ensure integrity and confidentiality are
rather complex and difficult to formulate.
The standard approach to specifying the required properties on an abstract level is a simulation theorem. The proof is conducted by inductively showing that each step of the specifi
smart meter’s trusted device, is realized by a
certain number of steps in the implementation.
For example, a simulation theorem has been
developed and proven in the first Verisoft
project [36]. While a real-world micro kernel differs from the “academic” system used in
Verisoft I (e.g., full C semantics, interruptible
kernel, shared memory, real-world architecture),
the principal approach to show its correctness
remains the same: formally verifying a simulation theorem between an abstract specification
and the concrete implementation of the system.
For a simulation proof we need to look at
the system at different layers of abstraction. In
our case there are three of them. The first
and most abstract one is cvm – the specification model. It consists of an abstract kernel
that specifies the user-visible parts of the implementation and hides hardware functionality.
The other part consists of the additional (possibly untrusted) processes running on the system.
We interpret these processes as separate virtual
_machines that communicate with each other_
only via defined channels (e.g. shared memory,
IPC). The concrete kernel layer represents the
C and assembly implementation which precisely
describes the functionality of most parts of the
kernel, given one has assigned an unambiguous
semantics to C by fixing a compiler and an architecture. Finally, the architecture layer models
the physical hardware on which assembly code,
compiled C code, and the additional processes
are executed. Formally, we can model these layers as follows:
• cvm – the abstract model, consisting of:

  – cvm.vm(i) – the virtual machine of the i-th process, consisting of a CPU context vm(i).cpu and a virtual memory portion vm(i).m of some adjustable size.

  – cvm.c(i) – the C configuration of the abstract kernel thread i, comprising components like program code or a local memory stack and sharing global memory with the other threads. Note that c(i) only becomes active when vm(i) enters the kernel (e.g., via a system call).

• k(i) – the C configuration of the concrete kernel thread which implements c(i), including additional data structures not visible in the abstract model.
• h – the model of the underlying hardware architecture, basically comprising the CPU context h.cpu and physical memory h.m.
One can then define relations connecting the different layers. For instance, we define a B-relation [17] that relates specification and implementation of the additional processes. It
states that the context of the active process
agrees with the CPU registers and all other processes are encoded in dedicated data structures
of the kernel. For the virtual machines’ memory
it demands that memory contents are equal to
those of the corresponding regions on the physical machine. There is also an abstraction relation between abstract and concrete kernel, as well as a compiler consistency relation between C code and compiler-generated assembly code, which guarantee that the concrete kernel program is correctly executed on the underlying hardware.
Formally, one combines these relations into an overall relation cvm-sim(cvm, k, h) stating that the cvm model is simulated by the concrete kernel k and the hardware state h. In addition, there are implementation invariants impl-inv(cvm, k, h) for specific layers, which specify that the contained components and data structures remain well-defined.
For any n execution steps in the cvm model
a trace of m hardware steps can be found that
simulates the cvm execution, such that all three layers are consistent with each other.
With transitions on the cvm model and the hardware defined by step functions δ_cvm and δ_h, the overall simulation theorem between cvm, concrete kernel and architecture layer can be stated as follows. Assuming validity and induction-start preconditions on the initial configurations cvm^0 and h^0, we have:
∀n ∃m ∃k : impl-inv(δ_cvm^n(cvm^0), k, δ_h^m(h^0)) ∧ cvm-sim(δ_cvm^n(cvm^0), k, δ_h^m(h^0))
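The inductive structure of such a simulation proof can be illustrated with a deliberately tiny pair of machines, invented for this sketch and unrelated to the actual Verisoft models: one abstract step is matched by two concrete steps, and an abstraction relation is re-established after each matched pair.

```python
# Tiny illustration of a simulation proof obligation: every abstract
# step of a spec machine is matched by some number of concrete steps
# such that an abstraction relation keeps holding.

def spec_step(s):            # abstract model: counter increments by 1
    return s + 1

def impl_step(c):            # concrete model: two half-steps per increment
    lo, hi = c
    return (0, hi + 1) if lo else (1, hi)

def abstraction(c):          # relation: abstract state = completed increments
    lo, hi = c
    return hi

# Inductive check of the simulation: one spec step ~ two impl steps.
s, c = 0, (0, 0)
for _ in range(5):
    s = spec_step(s)
    c = impl_step(impl_step(c))
    assert abstraction(c) == s   # the cvm-sim analogue holds after each step
```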
##### 6.3 Deductive Verification of Information-flow Properties
As said above, tremendous progress has been
achieved in formal verification of functional
properties of software. At the same time seminal
papers have been published showing that it is in
principle possible to formulate information-flow
problems as proof obligations in program logics.
We will build on our own experience in formal methods for functional properties in order to specify and verify information-flow properties.
In the simplest case, a confidentiality policy
can be formalized as non-interference [12] and
described in terms of an indistinguishability relation on states. That is, two program states are
indistinguishable for L if they agree on values
of L variables. The non-interference property
says that any two runs of a program starting
from two initial states indistinguishable for L,
yield two final states that are also indistinguishable for L variables. This notion is employed
and made explicit in the information-flow analysis.
In a smart-metering system, more complex
properties such as controlled information release
need to be assured. Verifying such properties is
a current hot research topic. We will carry out
related research in the DeduSec project within
the DFG Priority Programme 1496 “Reliably Secure Software Systems – RS3”. In this project,
we plan to define syntax and semantics of a specification language for information-flow properties at the level of (Java) programs. The goal
is a language that is expressive enough to allow security requirements at the system level to
be easily and flexibly broken down into program
level requirements. Further, we will design and
implement a system for verifying programs annotated with security properties and specifications. More specifically, we will be concerned
with the rule-based generation of first-order verification conditions from annotated Java programs. The technological basis will be the KeY
system (co-developed by us) [8].
Our project is based on recent advances in
using program logics (such as Hoare Logic or
Dynamic Logic) for the specification and verification of information-flow properties at code
level. Using program logics, non-interference
can be directly formalized (e.g., [6, 12, 37]); or
it can be translated into dependence properties, which in turn can be formalized in program logics (this has been investigated for
a simple imperative language [2, 1], for a simple
object-oriented language [3], and for sequential
Java [19]). Non-interference can also be translated into proof obligations that can – in principle – be handled by unmodified existing program verification tools using a technique called
_self-composition_ [12, 11, 6].
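The idea behind self-composition — comparing two runs that agree on the low inputs — can be sketched dynamically as well; the example program and all variable names are hypothetical.

```python
# Dynamic sketch of the self-composition idea: run the program twice on
# states that agree on the low (L) variables but differ on high ones,
# and compare the low outputs.

def leaky(state):
    # low output depends on the high variable `secret` -> interference
    return {'low_out': state['low_in'] + (1 if state['secret'] > 0 else 0)}

def noninterferent(prog, s1, s2):
    """s1 and s2 agree on low inputs; equal low outputs are necessary
    for non-interference (a single unequal pair refutes it)."""
    return prog(s1)['low_out'] == prog(s2)['low_out']

s1 = {'low_in': 5, 'secret': 0}
s2 = {'low_in': 5, 'secret': 7}
assert not noninterferent(leaky, s1, s2)   # witness of an illegal flow
```

A static verifier proves the equality of low outputs for all such state pairs, whereas this dynamic check can only refute non-interference for one pair.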
We also plan to adapt the concept of owner
ship for information-flow properties. This concept has been developed
the context of deductive verification of functional properties to specify that complex data
structures are not changed in unexpected ways
(e.g. [28]). For information-flow properties,
ownership has to be adapted so that one can
specify that data structures are not read in an
unintended way.
##### 6.4 Further Challenges
**Adequacy of Requirements and Certification.** As explained in this report, enforcing confidentiality and integrity of a smart metering system involves a variety of measures. While the
deductive methods described above mostly apply to the implementation at code-level, the verified properties must be related to higher-level
requirements (e.g., policies). Relating these levels to each other is a scientific challenge that still
demands research.
Also, it is important for certification that not only are the verified properties adequate and the validation and analysis techniques correctly applied, but also that this adequacy and correctness can be checked and validated by third parties.
This requires further research and extensions of
existing verification methods.
**Verification of Evolving Software.** For
evolving software, analyses of properties have to
be repeated. This fact has not been addressed
in current software verification and certification
approaches, which are design-once-change-never
oriented. Most quality assurance methods are
challenged by adaptability. How to adapt and
“repair” verification proofs and formal models
after an adaptation is an unsolved problem, and
verifying self-adaptive systems is a great challenge.
#### 7 Data Usage Control with Runtime Verification and Dynamic Data Flow Analysis
The system architecture in Figure 1 depicts
several data flows some of which are potentially privacy-sensitive and deserve protection.
The data types in question include raw sensor data, but also traffic data that is created whenever the customer interacts with any of the various other stakeholders. The problem, then, is to
make sure that these different kinds of data are used in compliance with laws and regulations, but also with customer-defined requirements.
This problem spans three dimensions. The
first dimension is usage control proper, as found
in digital rights management systems: given a
specific data item, how can the usage – events
including printing, saving, copying, etc. – of
this data be controlled. Typical solutions to
this problem include, among other things, runtime verification (the scientific roots of which
are temporal logics and automata theory), and
complex event processing (the scientific roots
of which are active data base technology and
event-condition-action rules). The second problem dimension is data flow analysis across different representations. Usage control mechanisms,
as mentioned above, are fundamentally bound
to the notion of events and usually do not consider data at all. As such, the events in question
are usually parameterized with one concrete representation of a sensitive data item. However,
usage control policies are usually meant to be
concerned not with one but rather with all representations of the data. For instance, if the
customer’s master data should not be copied,
this requirement applies to both some textual
representation and the pixel representation on
a screen. Similarly, daily energy consumption
comes in the form of a number of raw measurements as well as in the form of some graph on
a screen. If the raw data must not be copied,
then this means that a screenshot of the graphical representation must not be taken as well.
The scientific core of this problem is, on the
one hand, data flow and information flow analysis within one layer of abstraction, e.g., within
.NET CIL or within some RTL language. On
the other hand, data flows in-between these levels of abstraction must be monitored, which is
a rather new and open problem. Finally, the
third problem dimension is distribution: Data
flows must not only be detected (second problem dimension) and controlled (first problem dimension) within one of the IT systems represented by boxes in Figure 1, but also in-between
different systems and governance domains. In
other words, different representations may exist
on different machines, and all of them must be
controlled.
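One way to picture usage control across representations is taint-style policy propagation, where every derived representation inherits the restrictions of its source; the classes and the "no-copy" policy below are illustrative inventions, not an existing mechanism.

```python
# Sketch of usage control across representations: every derived
# representation of a data item inherits its usage policy, so a
# "no-copy" restriction follows the data from raw values to a rendered
# graph.

class Tracked:
    def __init__(self, value, policy):
        self.value, self.policy = value, set(policy)

def derive(item, transform):
    # any new representation carries over the source's restrictions
    return Tracked(transform(item.value), item.policy)

def copy(item):
    if 'no-copy' in item.policy:
        raise PermissionError("usage policy forbids copying")
    return Tracked(item.value, item.policy)

raw = Tracked([3.1, 2.8, 3.4], {'no-copy'})
graph = derive(raw, lambda xs: f"chart({xs})")   # e.g. pixel representation
try:
    copy(graph)                                   # screenshot analogue
    leaked = True
except PermissionError:
    leaked = False
assert leaked is False
```

In a distributed setting, the policy set would additionally have to travel with the data across machines and governance domains.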
metering device and sent to the customer’s data
management system on a per-second basis, and
to the frontend of the energy provider on a 15-minute basis. The data management software
computes profiles, deltas with other people’s
profiles and historical data, and displays the result of these computations in graphical form.
Because the customer has provided his consent,
this fine-grained measurement data is sent to a
vendor of appliances who can recommend some
class A fridge. At the same time, the customer
may not fully understand his monthly bill and
contact a call center which, in turn, has access
to a plethora of different kinds of data. In this
setting, there are different kinds of data in different representations on different machines in different governance (and liability) domains. The
problem, then, is how this data can be controlled.
This is a real problem: Among other things,
only recently, a variety of Android mobile phone
applications—that could be part of the smart
metering system—have been shown to disclose
location information to advertisement servers or
SIM and phone numbers to other stakeholders
without explicitly asking for the user’s consent
[14].
##### 7.1 Runtime Verification
Roughly speaking, runtime verification denotes
a set of techniques that implement decision procedures for whether a future or past temporal
logic formula is satisfied, open, or violated for a
finite prefix of a possibly infinite trace of (sets of) events. As such, runtime verification is, in
contrast to model checking or deductive theorem proving, a technique that is solely used dynamically. Statements on the truth value of a
formula are hence made for one given trace and
one moment in time rather than for all traces of
the system under consideration.
Runtime verification is relevant in smart metering contexts when it comes to monitoring the usage of data. Roughly, monitors are implemented that listen to the events that happen in the system. These events include the access to possibly sensitive data items and the copying of these items, but also deletion requirements. These events happen at different levels of abstraction, including the level of machine language, data bases, runtime systems such as .NET or Java virtual machines, infrastructure applications such as X11, and applications within these layers; at each of these levels, events that relate to sensitive data items must be observed. This is done by
(automatically) transforming the temporal logic
formulas that specify adequate data usage into
respective monitors at the respective layers of
abstraction.
There is a variety of algorithms for performing
runtime verification with a variety of optimality results concerning, among other things, the
possibility to decide on truth or falsity of a formula at the earliest possible moment in time,
the number of states that need to be stored,
etc. [29].
For controlling data usage, a simple temporal logic with abstractions for limited cardinality
constraints is the Obligation Specification Language, or OSL [23]. As we will explain below,
traces are sequences of sets of events. Then,
given an OSL formula ϕ and a trace (prefix)
t, runtime verification decides at runtime, for
each moment in time n, whether or not ϕ is
true at n (can never be violated in the future),
violated (can never become true in the future),
or whether this decision cannot be taken yet. It
is possible to automatically synthesize monitors
from policies written in OSL. These generated
monitors allow us to detect runtime violations
of properties like those described in Section 3.
With minor extensions, it is in many cases also
possible to prevent a policy violation.
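To make the three-valued verdicts concrete, the following Python sketch hand-codes a monitor for a simple safety policy of the kind discussed above (e.g., "data must never be copied"). It is purely illustrative: the OSL tooling synthesizes such monitors automatically from policies, and the names and event encoding here are our own assumptions.

```python
# Three-valued runtime verification, sketched by hand (illustrative only;
# the OSL tooling generates such monitors automatically from policies).
TRUE, VIOLATED, INCONCLUSIVE = "true", "violated", "inconclusive"

def monitor_always_not(forbidden):
    """Monitor for the safety policy 'always not(forbidden)'.

    On a finite trace prefix the verdict can become VIOLATED (the event
    occurred, so the formula can never become true again) but never TRUE,
    since a violation may still happen in the future.
    """
    state = {"verdict": INCONCLUSIVE}

    def step(events):  # events: set of event names observed at this time step
        if state["verdict"] == INCONCLUSIVE and forbidden in events:
            state["verdict"] = VIOLATED
        return state["verdict"]

    return step

step = monitor_always_not("copy")
print(step({"send"}))  # inconclusive: no violation yet, future still open
print(step({"copy"}))  # violated: 'copy' occurred
```

Note that the verdict is monotone: once VIOLATED, the monitor never changes its answer, matching the "can never become true in the future" reading above.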
**7.1.1** **System Model**
We introduce the syntax and semantics of OSL.
We formalize both in Z, a formal language based
on typed set theory and first-order logic with
equality. We have chosen Z because of its rich
notation, which we explain as it is encountered.
We have also given a more user-friendly syntax
to OSL [24], which we do not present here for
brevity’s sake. The current version of OSL supports all usage control requirements identified
above, except environment conditions.
The semantics of our language is defined over
traces with discrete time steps. At each time
step, a set of events can occur. An event corresponds to the execution of an action and we use
these two terms interchangeably.
Each Event has a name and parameters, specifying additional details about the event. For example, a usage event can indicate on which data
item it is performed or by which device. An example of an event is (snd, {(obj, o), (rcv, r)}), an event named snd whose parameter obj has value o while the value of rcv is r—intuitively, the object o is sent to receiver r.
Each event belongs to an event class. Possible
event classes include usage and other, the latter
standing for all non-usage events, e.g., payments
or notifications. This distinction enables us to
prohibit all usages on a data item while still allowing other events such as payments.
**7.1.2** **Syntax**
An OSL policy consists of a set of event declarations and a set of obligational formulae. Each
obligational formula consists of the data consumer’s name and a logical expression.
Φ defines the syntax of the logical expressions
contained in obligational formulae, as shown in
Figure 6. For brevity’s sake, we omit the formal
definition of the set of events, Event, here—we
may simply assume this set to be given. Efst (e)
refers to the start of an event e and Eall (e) to
ongoing events.
We define an additional restriction on the policy syntax (omitted here): we demand that all
events that are mentioned in a policy are compliant with the event declaration, i.e., they may
only contain parameters that are declared and
corresponding values. Fewer parameters are allowed in a policy, because of the implicit universal quantification over unspecified parameters.
**7.1.3** **Informal Semantics**
We informally describe the semantics of OSL’s
operators here; a formal definition is provided
elsewhere [23]. They are classified into propositional operators, temporal operators, cardinality operators, and permit operators, the latter
of which we do not discuss here.
**Propositional Operators** The operators
not, and, or, and implies have the same semantics as their propositional counterparts ¬, ∧, ∨,
and ⇒.
**Temporal Operators** The until operator
corresponds to the weak until operator from
LTL [33]. We use the weak version of the until
operator because it is better suited for expressing usage control requirements (cf. §??). We
generalize the next operator of LTL to after,
which takes a natural number n as input. With after, we can express concepts like during (something
must hold constantly during a given time interval) and within (something must hold at least
once during a given time interval).
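As a hypothetical illustration of how such derived operators can be monitored, the Python sketch below implements within(n, e) for a single event e: the event must occur at least once during the first n time steps. The encoding (event names as strings, time steps as sets of names) is our own simplification, not OSL's.

```python
def monitor_within(n, wanted):
    """Monitor for within(n, wanted): the event 'wanted' must occur at
    least once during the first n time steps of the trace."""
    state = {"remaining": n, "verdict": "inconclusive"}

    def step(events):  # events: set of event names observed at this step
        if state["verdict"] != "inconclusive":
            return state["verdict"]     # verdicts are final once reached
        state["remaining"] -= 1
        if wanted in events:
            state["verdict"] = "true"      # satisfied, cannot be violated
        elif state["remaining"] <= 0:
            state["verdict"] = "violated"  # window elapsed without the event
        return state["verdict"]

    return step

m = monitor_within(3, "chck")
print(m({"snd"}))  # inconclusive
print(m({"snd"}))  # inconclusive
print(m({"snd"}))  # violated: three steps passed without 'chck'
```

A monitor for during(n, e) would be the dual: it becomes violated as soon as a step without e occurs inside the window, and true once the window closes.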
**Cardinality Operators** Cardinality operators restrict the number of occurrences of a specific event or the accumulated duration of an
event. The repuntil operator limits the maximum number of times an event may occur until
another event occurs. For example,
repuntil (15, Efst ((snd, {(obj, sd), (rcv, ep)})),
Efst ((chck, ∅)))
states that sensor data sd is sent at most
15 times to the energy provider ep before a
self check event chck must take place. With
repuntil, we can also define repmax, which is
syntactic sugar for defining the maximum number of times an event may occur in the unlimited
future.
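Under one plausible, simplified reading of repuntil (the until-event is checked first and discharges the bound; we make no claim that this matches every detail of the formal semantics in [23]), a monitor for the pattern above could look as follows, with a bound of 2 to keep the demo short:

```python
def monitor_repuntil(limit, counted, until):
    """Monitor for repuntil(limit, counted, until): the event 'counted'
    may occur at most 'limit' times before 'until' occurs.

    Simplified reading: once 'until' happens, the obligation is
    discharged; if the count exceeds 'limit' first, it is violated.
    """
    state = {"count": 0, "verdict": "inconclusive"}

    def step(events):  # events: set of event names observed at this step
        if state["verdict"] != "inconclusive":
            return state["verdict"]
        if until in events:
            state["verdict"] = "true"       # bound discharged
        elif counted in events:
            state["count"] += 1
            if state["count"] > limit:
                state["verdict"] = "violated"
        return state["verdict"]

    return step

m = monitor_repuntil(2, "snd", "chck")
print(m({"snd"}))  # inconclusive (1st send)
print(m({"snd"}))  # inconclusive (2nd send, at the bound)
print(m({"snd"}))  # violated: third send before any self check
```

repmax(limit, e) then corresponds to monitor_repuntil with an until-event that never occurs.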
A policy is satisfied by a trace iff all obligations specified in the policy are satisfied by the
trace. The definition of obligation satisfaction
builds on the above semantics but requires a
system model that includes activations of obligations. Such a complete system model is presented in [24].
##### 7.2 Dynamic data flow analysis
In the following, we assume a reserved parameter, obj, indicating which object the event is
related to and a reserved value for that object
nil, used to indicate no object. In the case of
events that need more than one object parameter (like copy, which requires a source and a
destination), we assume the presence of a single obj parameter only; other parameters will
be defined using different names. For instance,
the syntax for a send command will be similar
to send ({(obj, obj1), (dst, obj2)}).
**7.2.1** **Data Items and Data Containers**
For the purposes of data flow analysis, we need to introduce the distinction between data items and containers for data items. Roughly, the idea
is that in order to control all copies of a data
item, we keep track of all its representations,
or containers. Containers are the different representations of data, including files, database
records, network packets, memory regions, etc.
-----
Φ ::= true | false | Efst ⟨⟨Event⟩⟩ | Eall ⟨⟨Event⟩⟩ | not ⟨⟨Φ⟩⟩ | and ⟨⟨Φ × Φ⟩⟩ | or ⟨⟨Φ × Φ⟩⟩ |
implies ⟨⟨Φ × Φ⟩⟩ | until ⟨⟨Φ × Φ⟩⟩ | always ⟨⟨Φ⟩⟩ | after ⟨⟨N × Φ⟩⟩ | within ⟨⟨N × Φ⟩⟩ |
during ⟨⟨N × Φ⟩⟩ | repmax ⟨⟨N × Φ⟩⟩ | repuntil ⟨⟨N × Φ × Φ⟩⟩
Figure 6: Syntax of OSL
We distinguish two classes of events, according to the type of the obj parameter: events of class dataUsage define actions on data objects, while events of class containerUsage refer to a single container.
Within the system, only events of class containerUsage can happen, because each monitored event in a trace is related to a specific
representation of the data. DataUsage events
are used only in the definition of policies, where
it is possible to define a rule abstracting from
the specific representation of the information.
Accordingly, we define two subsets of events, CEvent and DEvent, for events of class containerUsage and dataUsage, respectively.
We demand that all events of class usage have an object parameter. This parameter indicates the object the event refers to. So, as discussed before, the parameter obj has to be mapped to a data item for events of class dataUsage, and to a container for events of class containerUsage.
Of course, the system has to satisfy some
additional sanity constraints that we omit for
brevity’s sake.
**7.2.2** **Data State**
In order to integrate information flow detection
capabilities into the semantic model of OSL, we also need to add functions for modeling the relationship between containers and data.
The same data can be stored in multiple containers. Multiple data items can be stored in a
single container. We model this n-to-n relation
with two functions, one from Data to a set of
Container, and another one from a Container
to a set of Data. Although one can be derived
from the other, we use two functions for simplicity’s sake.
We also have to define an InitialCont function, a bijective mapping between data and containers that represents the initial container that
stores a data item as soon as it starts to be monitored by the system.
Moreover, we introduce the Alias function to capture containers that are connected to other containers. By connected, we mean that a content update of the first implies a content update of the
other ones. This happens when, for instance,
multiple containers are mapped to (totally or
partially) overlapping memory areas. In this
case, writing data in one of them implies writing in the other ones. Last but not least, we
define the Naming function from a set of names
(a subset of the set of parameter values, ParamValues) to Container. As discussed before, this
is useful to model renaming activities.
Name : P ParamValue
Alias : Container ⇸ P Container
Naming : Name ⇸ Container
Now we are ready to define a state (IFState)
of the information flow model as the triple
Storage × Alias × Naming. The transition relation among states is of course dependent on the system that is modeled. We define IFR as IFState × Event ⇸ IFState. According to the semantic model of OSL, at each time step, a trace consists of a set of events rather than a single one. For this reason we need to define a state transition relation IFRSet of type IFState × P Event ⇸ IFState. If all the events in the set are independent, then this is equivalent to the union of IFR applied to each event in the set. Since this is not always the case, however, we must consider transitions caused by a set of events.
We need to consider a particular state Σi
where the storage function contains only an
empty mapping for the reserved object nil and
the alias function is empty. W.l.o.g., we can assume this to be the initial state of the system.
IFState : Storage × Alias × Naming
IFR : IFState × Event ⇸ IFState
IFRSet : IFState × P Event ⇸ IFState
Σi == ((nil, ∅), ∅, ∅)
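This state model can be mirrored in plain Python dictionaries and sets. The sketch below is a hypothetical rendering (function and event names are ours, not taken from the implementation) showing Σi and one IFR transition for a copy event that also propagates along aliases:

```python
from copy import deepcopy

def initial_state():
    """Sigma_i: storage maps only the reserved container 'nil' to no data;
    the alias and naming functions start out empty."""
    return {"storage": {"nil": set()},  # container -> set of data items
            "alias": {},                # container -> set of aliased containers
            "naming": {}}               # name -> container

def ifr_copy(state, src, dst):
    """One IFR transition for a containerUsage event copy(src, dst):
    all data stored in src now also flows into dst and into every
    container aliased with dst (content updates propagate)."""
    new = deepcopy(state)  # transitions produce a new state
    data = new["storage"].get(src, set())
    for c in {dst} | new["alias"].get(dst, set()):
        new["storage"][c] = new["storage"].get(c, set()) | data
    return new

s = initial_state()
s["storage"]["file1"] = {"profile"}     # InitialCont: profile starts in file1
s["alias"]["screen"] = {"framebuffer"}  # screen and framebuffer overlap
s = ifr_copy(s, "file1", "screen")
print(s["storage"]["screen"])       # {'profile'}
print(s["storage"]["framebuffer"])  # {'profile'}: propagated via the alias
```

A full IFRSet would additionally have to order or combine simultaneous events, which this sketch deliberately omits.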
**7.2.3** **Syntax and State based formulae**
In order to monitor data flows, we keep track of
the data state, i.e., which containers contain which data items. Formulae are then evaluated not only over traces of events, but also over states of the data flow model.
To define state-based formulae, we add an operator state⟨⟨Φs⟩⟩ to Φ, based on a new set of state-based operators Φs. In order to express constraints on data instead of containers, we introduce three new operators: denyC, denyD, and limit.
Φs ::= denyC ⟨⟨Data × P Container⟩⟩ |
limit ⟨⟨Data × P Container⟩⟩ |
denyD ⟨⟨Data × Data⟩⟩
**7.2.4** **Informal Semantics**
We can concentrate on the new construct state()
and on the set Φs . The state() operator is
needed to syntactically merge the new state formulae with the original system while keeping the two models separate: pure state formulae appear as arguments of the state() function.
Intuitively, denyC(d, C) forbids the presence of data d in any of the containers in set C. This operator is useful to express constraints like, for instance, “profile s must not be distributed over the network”, which becomes denyC(s, {cnet}). The rule denyD(d1, d2) claims that data d1 and data d2 cannot be combined, which means they can never be in the same container.
limit(d, C) is the dual of denyC: it expresses the constraint that data d can only be in containers of set C. If AC is the set of all possible containers of the system, then denyC(d, C) is equivalent to limit(d, AC \ C). This can be used to express concepts like “data d must be deleted”, limit(d, ∅), which is useful for forensic analyses.
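Evaluated against a concrete storage function, the three operators reduce to simple set-membership checks. The following Python is an illustrative rendering with names of our choosing, not the formal Z semantics:

```python
def deny_c(storage, d, containers):
    """denyC(d, C): data d must not be present in any container of C."""
    return all(d not in storage.get(c, set()) for c in containers)

def limit(storage, d, allowed):
    """limit(d, C): data d may only reside in containers of C; with
    C = {} this expresses 'data d must be deleted'."""
    return all(c in allowed for c, items in storage.items() if d in items)

def deny_d(storage, d1, d2):
    """denyD(d1, d2): d1 and d2 must never share a container."""
    return not any({d1, d2} <= items for items in storage.values())

# A tiny data state: the profile is in a file, location data on the network.
storage = {"file1": {"profile"}, "cnet": {"location"}}
print(deny_c(storage, "profile", {"cnet"}))    # True: profile not on the network
print(deny_d(storage, "profile", "location"))  # True: never in the same container
print(limit(storage, "profile", set()))        # False: profile not yet deleted
```

The duality from the text holds by construction: deny_c(storage, d, C) agrees with limit(storage, d, AC \ C) for any universe AC of containers covering the storage function.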
**7.2.5** **Implementation**
We have implemented generic technology to perform this data flow tracking. In the SPP 1496,
Reliably Secure Software Systems, we are currently working on a general schema for connecting different layers of abstraction (which, to reiterate, has not been done in the case of the smart metering system yet). It is noteworthy here that the static information flow detection technologies from Sections 5 and 6 are likely to, at one single layer, substitute for dynamic detection techniques at the same layer. This is particularly appealing if the static techniques prove to be applicable to Java bytecode, for instance, when not too much dynamic binding takes place.
**7.2.6** **Application to Smart Metering**
With the help of OSL, augmented by constructs
to speak of a system’s data state, it is possible
to specify policies that allow or disallow the flow
of information within a distributed system, even
when the boundaries of internal components are
crossed. This is formally captured by containers that may or may not contain specific data
items. Because OSL can be expressed in LTL, it
is almost trivial to automatically derive generic
monitors from usage control policies. In order
to be applied to the smart metering system, we
need to connect these generic monitors to the
concrete different subsystems, thus yielding a
controlled system where it is possible to detect
or prevent the flow of data from, say, the data
management software, to, say, a call center.
More concretely, we would need to deploy
several of these monitors at different locations
in the system. One monitor tracks the data
flow within the smart meter itself (that is, the
trusted device). In reality, this will not be one
monitor but rather a set of monitors that monitor data flows at and in-between the different
levels of abstraction within the trusted device,
including the operating system and application
layers. Another monitor is required for the cockpit. This is a full-fledged PC, so the monitor
again consists of a set of monitors that track
data flow at and in-between the different layers
of abstraction of this PC, including the operating system, window manager, data bases, and
applications like web browsers or email clients.
At this stage, the granularity of data to be monitored also changes; we are more likely to speak of user profiles than of single measurements. If the cockpit communicates with third-party software, be it Web 2.0 media or billing or CRM software, then these systems
need to be monitored in an identical way; and
this process continues when considering the fact
that data may be forwarded to call centers.
For all of these different systems, we need to
either write or generate OSL policies to configure the generic runtime monitors that implement usage control and data flow detection.
As mentioned above, some of the monitors (or
submonitors at one layer of abstraction) are
likely to leverage results from the static information flow detection work described above. Once such a system is in place, we can
provide guarantees in terms of system-wide data
flows in the overall distributed smart metering
system, thus addressing the important privacy
challenges described in Section 3.
#### 8 Conclusions
The recent Stuxnet attacks on SCADA systems
controlling industrial plants demonstrate that
the software security risk is high for today’s critical infrastructures. It will be even higher for
tomorrow’s virtualized infrastructures such as
E-Energy, E-Traffic, and Cloud Computing. In
this report, we have described a mix of techniques which will reduce security and privacy
risks in such infrastructures. Concentrating on
smart metering, we have shown:
_• Homomorphic encryption schemes, as well_
as their combination with authentication
methods, allow E-Energy providers to collect usage profiles in aggregated form, while
customer privacy is still protected.
_• Language-based security methods analyse_
the true semantics of smart metering software, instead of just providing guarantees
about its origin.
_• Proof carrying code allows software to be_
securely downloaded into the smart meter
while its functionality is checked. The necessary proof checker (as well as the encryption
software) resides in a trusted device inside
the smart meter.
_• Information flow control protects critical_
computations, such as control of household
appliances, and discovers privacy leaks.
IFC is also used to protect integrity of the
trusted device.
_• Deductive verification can guarantee func-_
tional correctness for, e.g., the proof checker
and the encryption software, as well as for
the smart meter kernel. Verification can also support IFC.
_• Runtime verification can dynamically de-_
tect information flows in smart meters
against predefined privacy policies expressed as dynamic (temporal) properties,
in case static IFC is not possible or too imprecise, or system boundaries need to be crossed.
While we have concentrated on the smart metering example, let us conclude with an outlook
on how our technology will help to prevent attacks on SCADA systems (SCADA being abundant in critical infrastructures), such as the recent Stuxnet attacks:
_• Stuxnet used stolen certification keys. This_
highlights the approach of DSCI and RS3,
namely that we need to analyse the true
semantics of a program and not just certify its origin. It is not clear whether
today’s language-based security techniques
can analyse the full Stuxnet code, but program analysis and IFC are becoming more
powerful every year.
_• Current SCADA systems lack a trusted de-_
vice, which would greatly reduce the risk of
infiltration.
_• Stuxnet relies on a whole set of zero-day ex-_
ploits. The latter are often based on software bugs or attacks such as buffer overflow attacks. Modern program analysis has
developed powerful tools for bug-finding or
IFC, which help to discover such anomalies.
_• Verification, while expensive, can today_
formally verify realistic systems such as
SCADA security cores or even operating
systems.
_• Proof carrying code techniques prevent_
downloading malware, and runtime verification can dynamically discover illegal information flow.
We do not claim that we can prevent Stuxnet
with our current toolbox of DSCI security approaches. But techniques such as those proposed in the
current article will certainly make attacks much
more difficult, not just on smart meters, but on
general SCADA systems, and on critical infrastructures as a whole.
We plan to actually develop and apply the
techniques in the scope of the DSCI cluster initiative. If funding is agreed, work will start in
2012. We plan to engineer available methods
for usage in E-Energy, E-Traffic, and Cloud systems; as well as to develop new approaches to
security. Several demonstrators will be used to
evaluate the DSCI approach, such as the “KIT
Smart Home” and the “KIT Federated Cloud”.
The Smart Meter example will be the first realistic case study for our new approaches to de
-----
#### References
[1] Torben Amtoft, Sruthi Bandhakavi, and
Anindya Banerjee. A logic for information flow in object-oriented programs. In
J. Gregory Morrisett and Simon L. Peyton Jones, editors, _Proceedings of the_
_33rd ACM SIGPLAN-SIGACT Symposium_
_on Principles of Programming Languages,_
_POPL 2006, Charleston, South Carolina,_
_USA, January 11-13, 2006, pages 91–102._
ACM, 2006.
[2] Torben Amtoft and Anindya Banerjee. Information flow analysis in logical form. In
Roberto Giacobazzi, editor, Static Anal_ysis, 11th International Symposium, SAS_
_2004, Verona, Italy, August 26-28, 2004,_
_Proceedings, volume 3148 of LNCS, pages_
100–115. Springer, 2004.
[3] Torben Amtoft and Anindya Banerjee. A
logic for information flow analysis with an
application to forward slicing of simple imperative programs. Sci. Comput. Program.,
64(1):3–28, 2007.
[4] Mike Barnett, Rustan Leino, and Wolfram
Schulte. The Spec# programming system: An overview. In Construction and
_Analysis of Safe, Secure, and Interopera-_
_ble Smart Devices (CASSIS), International_
_Workshop, 2004, Marseille, France, Re-_
_vised Selected Papers, LNCS 3362, pages_
49–69. Springer, January 2005.
[5] Gilles Barthe, Lennart Beringer, Pierre
Crégut, Benjamin Grégoire, Martin Hofmann, Peter Müller, Erik Poll, Germán
Puebla, Ian Stark, and Eric Vétillard.
Mobius: Mobility, ubiquity, security. Objectives and progress report. In TGC
_2006: Proceedings of the second symposium_
_on Trustworthy Global Computing, LNCS._
Springer-Verlag, 2006.
[6] Gilles Barthe, Pedro R. D’Argenio, and
Tamara Rezk. Secure information flow by
self-composition. In 17th IEEE Computer
_Security Foundations Workshop, CSFW-_
_17, Pacific Grove, CA, USA, pages 100–_
114. IEEE Computer Society, 2004.
[7] B. Beckert, T. Dreier, A. Grunwald,
T. Leibfried, J. Müller-Quade, R. Reuss
S. Tai, W. Tichy, P. Vortisch, D. Wagner, and M. Zitterbart. Dependable software for critical infrastructures: Computing, energy, mobility. Project proposal,
Karlsruher Institut für Technologie, 2010.
[8] Bernhard Beckert, Reiner Hähnle, and Peter H. Schmitt, editors. _Verification of_
_Object-Oriented Software:_ _The KeY Ap-_
_proach. LNCS 4334. Springer-Verlag, 2007._
[9] Bernhard Beckert and Michał Moskal. Deductive verification of system software in
the Verisoft XT project. KI, 2009. Online
first version available at SpringerLink.
[10] Ernie Cohen, Markus Dahlweid, Mark
Hillebrand, Dirk Leinenbach, Michał
Moskal, Thomas Santen, Wolfram Schulte,
and Stephan Tobies. VCC: A practical
system for verifying concurrent C. In Proc.
_TPHOLs 2009, LNCS 5674, pages 23–42._
Springer, 2009. Invited paper.
[11] Ádám Darvas, Reiner Hähnle, and Dave
Sands. A theorem proving approach to
analysis of secure information flow. In
Roberto Gorrieri, editor, Workshop on Is_sues in the Theory of Security, WITS. IFIP_
WG 1.7, ACM SIGPLAN and GI FoMSESS, 2003.
[12] Ádám Darvas, Reiner Hähnle, and Dave
Sands. A theorem proving approach to
analysis of secure information flow. In Dieter Hutter and Markus Ullmann, editors,
_Proc. 2nd International Conference on Se-_
_curity in Pervasive Computing,_ volume
3450 of LNCS, pages 193–209. Springer,
2005.
[13] Rob DeLine and K. Rustan M. Leino. BoogiePL: A typed procedural language for
checking object-oriented programs. Technical Report MSR-TR-2005-70, Microsoft
Research, 2005.
[14] William Enck, Peter Gilbert, Byung-Gon
Chun, Landon Cox, Jaeyeon Jung, Patrick
McDaniel, and Anmol Sheth. Taintdroid:
An information-flow tracking system for realtime privacy monitoring on smartphones.
In Proc. 9th USENIX Symposium on Operating Systems Design and Implementation, 2010.
-----
[15] Jean-Christophe Filliâtre and Claude
Marché. Multi-prover verification of C
programs. In Formal Methods and Software
_Engineering,_ LNCS 3308, pages 15–29.
Springer, 2004.
[16] Flavio Garcia and Bart Jacobs. Privacyfriendly Energy-metering via Homomorphic Encryption. In 6th Workshop on Se_curity and Trust Management (STM 2010),_
2010.
[17] Mauro Gargano, Mark A. Hillebrand, Dirk
Leinenbach, and Wolfgang J. Paul. On
the correctness of operating system kernels. In Joe Hurd and Thomas F. Melham, editors, Theorem Proving in Higher
_Order Logics, 18th International Confer-_
_ence, TPHOLs 2005, Oxford, UK, August_
_22-25, 2005, Proceedings, volume 3603 of_
_LNCS, pages 1–16. Springer, 2005._
[18] Craig Gentry. Fully homomorphic encryption using ideal lattices. In Proceedings
_of the 41st Annual ACM Symposium on_
_Theory of Computing (STOC 2009), pages_
169–178, 2009.
[19] Christian Haack, Erik Poll, and Aleksy
Schubert. Explicit information flow properties in JML. In 3rd Benelux Workshop
_on Information and System Security (WIS-_
_Sec), November 2008._
[20] Christian Hammer. Information Flow Con_trol for Java - A Comprehensive Approach_
_based on Path Conditions in Dependence_
_Graphs. PhD thesis, Universität Karlsruhe_
(TH), Fak. f. Informatik, July 2009. ISBN
978-3-86644-398-3.
[21] Christian Hammer. Experiences with pdgbased ifc. In Proc. International Sympo_sium on Engineering Secure Software and_
_Systems (ESSoS’10), February 2010._
[22] Christian Hammer and Gregor Snelting. Flow-sensitive, context-sensitive, and
object-sensitive information flow control
based on program dependence graphs. Int.
_J. Inf. Sec., 8(6):399–422, 2009._
[23] Manuel Hilty, Alexander Pretschner, David
Basin, Christian Schaefer, and Thomas
Walter. A policy language for usage control. In 12th European Symposium on Research in Computer Security, pages 531–546, 2007.
[24] Manuel Hilty, Alexander Pretschner,
Thomas Walter, and Christian Schaefer. A
system model and an obligation language
for distributed usage control. Technical
Report I-ST-20, DoCoMo Euro-Labs,
2006.
[25] C. A. R. Hoare. The verifying compiler:
A grand challenge for computing research.
_Journal of the ACM, 50(1):63–69, 2003._
[26] M. Karg. Datenschutzrechtliche Rahmenbedingungen beim Einsatz intelligenter
Zähler. Datenschutz und Datensicherheit,
34(6):365–372, 2010.
[27] Gerwin Klein, June Andronick, Kevin Elphinstone, Gernot Heiser, David Cock,
Philip Derrin, Dhammika Elkaduwe, Kai
Engelhardt, Rafal Kolanski, Michael Norrish, Thomas Sewell, Harvey Tuch, and Simon Winwood. seL4: formal verification
of an operating-system kernel. _Commun._
_ACM, 53(6):107–115, 2010._
[28] K. Rustan M. Leino and Peter Müller. Object invariants in dynamic contexts. In
_Proc. ECOOP 2004, LNCS 3086. Springer,_
2004.
[29] Martin Leucker and Christian Schallhart.
A brief account of runtime verification.
_Journal of Logic and Algebraic Program-_
_ming, 78(5):293–303, may/june 2009._
[30] Alexander Lux and Heiko Mantel. Declassification with explicit reference points. In
_ESORICS, pages 69–85, 2009._
[31] K. Müller. Gewinnung von Verhaltensprofilen am intelligenten Stromzähler. Daten_schutz und Datensicherheit, 34(6):359–364,_
2010.
[32] Pascal Paillier. Public-key cryptosystems based on composite degree residuosity
classes. In Advances in Cryptology (EURO_CRYPT 1999), pages 223–238, 1999._
[33] Amir Pnueli. The temporal semantics of
concurrent programs. In Proc. Interna_tional Sympoisum on Semantics of Concur-_
_rent Computation, pages 1–20, 1979._
[34] Wolfram Schulte, Xia Songtao, Jan Smans,
and Frank Piessens. A glimpse of a verifying C compiler. In Proceedings, C/C++
-----
[35] Omer Tripp, Marco Pistoia, Stephen J.
Fink, Manu Sridharan, and Omri Weisman.
TAJ: effective taint analysis of web applications. In PLDI ’09: Proceedings of the
_2009 ACM SIGPLAN conference on Pro-_
_gramming language design and implemen-_
_tation, pages 87–97. ACM, 2009._
[36] Alexandra Tsyban. Formal Verification of a
_Framework for Microkernel Programmers._
PhD thesis, Dept. Computer Science, Saarland Univ., 2009. `http://www-wjp.cs.uni-sb.de/publikationen/Tsy09.pdf`.
[37] Martijn Warnier. _Language Based Secu-_
_rity for Java and JML. PhD thesis, Rad-_
boud University, Nijmegen, The Netherlands, 2006.
[38] Daniel Wasserrab. Backing up slicing:
Verifying the interprocedural two-phase
horwitz-reps-binkley slicer. In Gerwin
Klein, Tobias Nipkow, and Lawrence
Paulson, editors, The Archive of Formal
_Proofs._ `http://afp.sf.net/entries/HRB-Slicing.shtml`, November 2009.
Formal proof development.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1524/itit.2011.0636?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1524/itit.2011.0636, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "other-oa",
"status": "GREEN",
"url": "https://publikationen.bibliothek.kit.edu/1000020452/1978716"
}
| 2,011
|
[
"JournalArticle"
] | true
| null |
[] | 22,285
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Economics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0305aca1dd5a596bd9e7bf9df3ba7fd2900c36ad
|
[] | 0.886825
|
Research on the status of e-commerce development based on big data and Internet technology
|
0305aca1dd5a596bd9e7bf9df3ba7fd2900c36ad
|
International Journal of Electronic Commerce Studies
|
[
{
"authorId": "1657479882",
"name": "Chung-Lien Pan"
},
{
"authorId": "2116072000",
"name": "Ya Liu"
},
{
"authorId": "2270068",
"name": "Yu-chun Pan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int J Electron Commer Stud"
],
"alternate_urls": [
"http://ijecs.academic-publication.org/home"
],
"id": "a0704247-a92d-4f97-9a68-bb530c54b300",
"issn": "2073-9729",
International Journal of Electronic Commerce Studies
Vol. 13, No. 2, pp. 27-48, 2022
doi: 10.7903/ijecs.1977
# Research on the Status of E-Commerce Development Based on Big Data and Internet Technology
Chung-Lien Pan
Guangzhou Nanfang College
peter5612@gmail.com
Ya Liu*
Guangzhou Nanfang College
liuyahnu@163.com
Yu-Chun Pan
National Taiwan University of Science and Technology
alice719tw@gmail.com
## ABSTRACT
Cross-border cooperation in big data, Internet technology, and e-commerce plays an
important role in guiding the people-oriented development of technology applications. To
capture the latest research fronts of e-commerce development in the new era, this study
used VOSviewer to systematically review the development status of e-commerce
supported by big data and Internet technology, mapping 265 publications retrieved from
the Web of Science database covering 1989 to 2020. The paper produces a concise
research cluster map based on the co-occurrence network of key-phrase data. The clusters
cover a keyword overview, major countries, organizations, top-level sources, co-citation
networks, and bibliographic coupling networks. The analysis of the key-phrase map
shows that there are still substantial gaps in e-commerce research. With the progress and
popularization of the Internet, the public has become increasingly interested in electronic
transactions, and e-commerce has become more popular. The analysis of the country and
organization clusters shows that countries with dominant data resources, with China, the
United States, and the United Kingdom as typical examples, have a greater influence on
the organization and source clusters and are more closely related to each other. More
in-depth coupled-cluster and bibliographic analyses of sources show that e-commerce
topics concentrate on production and economics research: the "International Journal of
Production Research", "MIS Quarterly", and "Sustainability-Basel" have the highest
publication rates, and in the e-commerce and network technology research field
"Sustainability" is the dominant top journal. At the same time, publications with high
co-citation rates have a high degree of bibliographic coupling.
**Keywords:** Internet, Big Data, technology, E-commerce, co-occurrence network
## 1. INTRODUCTION
Research fronts are the focus of many researchers in recent years. Research fronts are
usually represented by a set of articles that discuss the same or similar issues [1]. Research
fronts can reveal theoretical trends and the emergence of new topics[2]. In recent years,
the development of Internet technology, the Internet of Things, big data and e-commerce
has become research fronts, attracting wide attention and exerting a wide and far-reaching
impact on society, economy, and politics.
Internet technology is a base for the expansion of electronic marketing, especially in
developed countries. Internet technology is an information technology (IT) that diffuses
at exponential rates among business-to-business organizations[3]. Big Data has captured
a lot of interest in industry, with anticipation of better decisions, efficient organizations,
and many new jobs. Much of the emphasis is on the challenges of the four V's of Big Data
(Volume, Variety, Velocity, and Veracity) and on technologies that handle the volume,
including storage and computational techniques to support analysis. However, the most
important feature of Big Data, its raison d'être, is none of these four V's but
value[4]. One possible definition of electronic commerce (E-commerce) is "business
transactions done electronically rather than by physical means, this includes not only
transactions related to trading in goods and services but also interchanges between trading
partners, such as sales support, logistics and customer services"[5]. Industry 4.0 is the
fourth industrial revolution. It is formed on the building blocks of the Industrial Internet
of Things, real-time data collection, and predictive analytics using big data analytics,
artificial intelligence, and cloud manufacturing[6]. By using Internet platforms and
information and communication technologies, "Internet Plus" combines the Internet with
modern information technology applications such as the Internet of Things, cloud
computing, big data, and the mobile Internet to create a new ecosystem for modern
industries. With the gradual development of national industrial policy, electronic
commerce in China officially entered the "Internet Plus" era[7]. Based on analysis of
customer data, paying attention to the different features of customers and carrying out
accurate, personalized marketing with big data technology has become the direction of
electronic commerce[8]. The term "big intelligence moving cloud" is a combination
of big data, intelligence, mobile Internet, and cloud computing. It denotes a technological
revolution in which these technologies cross-fuse into a new whole supported by multiple
information technologies. Their interaction can build an accounting big data platform
integrating finance, management, and business, and improve the timeliness and efficiency
of logistics cost management in e-commerce enterprises[9]. The
Industry 4.0 phenomenon offers opportunities and challenges to all business models.
Despite the literature advances in this field, little attention has been paid to the interplay
of smart production systems (SPSs), big data analytics (BDA), cyber-physical systems
(CPS), the internet of things (IoT), and the potential business process management (BPM)
improvements[10]. As the country attaches great importance to the development of the
"big intelligence moving cloud", how sustainable is the development of the related
technologies? Are there new directions and areas for development? With the support of
Internet technology and big data, what breakthroughs has e-commerce made? What new
sparks have erupted from the collision of Internet technology, big data, and e-commerce?
These questions capture the current trends of the era and deserve focused research
attention.
The study of electronic commerce in the world began in the late 1970s. The
implementation of e-commerce can be divided into two steps: EDI business started in the
mid-1980s, and Internet business began in the early 1990s. The 1990s was an information
age and an era of the knowledge economy; the Internet began to spread and gradually
changed people's way of life. Since 1991, commerce and trade activities that had
previously been excluded from the Internet have officially entered this realm, making
e-commerce the biggest hotspot of Internet applications. Dell, an American company known for its
direct-to-consumer online direct sales model, had online sales of up to $5 million in May
1998. The revenues of Amazon's online bookstore, another Internet upstart, soared from
$15.8m in 1996 to $400m in 1998. After decades of Internet development, big data, as a
new term, began to attract the attention of theoretical circles in 2010. Its concepts and
characteristics were further enriched, and relevant data-processing technologies emerged
one after another. Big data began to show its vitality and maintained peak development
from 2011 onward. The successful integration of e-commerce, the Internet, and big
data has injected fresh vitality into the development of the social economy in continuous
collision and integration. Exploring the frontier of its development is very necessary to
summarize its glorious history and reveal its future innovation trend.
Under the above background, the purpose of this scientometric review is to summarize
the research status from 1989 to 2020, conduct statistical and visual processing of the
results and data retrieved from the Web of Science (WoS) to make them easier to
understand, and comprehensively capture the development of this field through the
scientific cartography system. To achieve a systematic review of the development status
of e-commerce with the support of big data and Internet technology, we used the scientific
mapping tool VOSviewer to carry out interactive visualization and multiple bibliometric
analysis of literature. Therefore, this paper provides a deep and broad perspective for the
academic and practical circles to understand the basic knowledge structure and evolution
process of the interdisciplinary field of e-commerce.
Section 2 of this paper describes the theoretical and literature basis of this study. Section 3
describes the literature retrieval and analysis methodology and scientifically maps the
knowledge domain. Section 4 presents co-occurrence analysis, keyword analysis, co-citation analysis, and bibliographic coupling
analysis of all relevant bibliographic records collected from the Web of Science (WoS),
and summarizes hot research issues in this field. Finally, the fifth part summarizes the
research results and guides future research and practice.
## 2. LITERATURE REVIEW
Technologies such as the Internet of Things (IoT), Cyber-Physical Systems (CPS), Cloud
Computing (CC), Artificial Intelligence (AI), Big Data Analytics (BDA), and the Digital
Twin (DT) have greatly advanced the development of sustainable smart manufacturing
throughout the lifecycle[11]. The Internet of Things, the blockchain, and big data technologies are
potential enablers of sustainable agriculture supply chains; smart agriculture is
transforming the agricultural sector in terms of economic, social, and environmental
sustainability[12], [13]. Big data analytics (BDA) and the Internet of Things (IoT) tools
are considered crucial investments for firms to distinguish themselves among competitors.
Drawing on a strategic management perspective, BDA and IoT capabilities can create
significant value in business processes if supported by a good level of data quality, which
will lead to better competitive advantage[14]. Smart manufacturing is the fourth
revolution in the manufacturing industry and is also considered a new paradigm[15].
At present, the Internet of Things is still in its initial stage of development, and achieving
a more intelligent life still faces many problems and challenges; this has attracted many
scholars to the field. Scientometric research on the Internet of Things shows that hot
research topics include applications, communication protocols, operating systems, and
so on [16]. Some scholars have studied the coexistence of Bluetooth, wireless multi-domain networks, Wi-Fi, and other communication technologies, as well as the
identification of things, integration, and management of big data[17], [18].
Several challenges exist in IoT, such as security, bandwidth management, interfacing
interoperability, connectivity, packet loss, and data processing[19]. Industry 4.0 is not
only a new industrial revolution, but also a crucial integration challenge that involves
several actors from the IoE, which are people, data, services, and things. Moving to
Industry 4.0 involves the collection of massive amounts of data and the development of
big data applications that can ensure a quick data flow between different systems,
including massive amounts of data and information collected from smart sensors, and
sending them to cloud applications that allow real-time data monitoring and processing.
Securing and protecting the transmitted data represents a big issue to be discussed and
resolved[20], [21]. The positive impact of the Internet of things on human life is profound,
and its derivative value chain will improve the sustainable development of the
economy[22]. A balance must be struck between the identity and access control required
by the Internet of things and the user's right to privacy and identity[23]. More research is
needed to understand the differences between benefits and risks and how individuals and
organizations interact in different Internet of things systems[24].
As more and more devices are connected to the Internet, once they reach a certain scale
they will create value for individual consumers and companies, driving development
across all walks of life. For example, in the e-commerce industry, e-retailers can
use the Internet of Things to select the most suitable product delivery service provider for
customers or to provide accurately positioned services, achieving synergies, improving
customer satisfaction, and delivering a better shopping experience[25, 26]. In the Internet era of
information sharing, users' word of mouth plays an important role in e-commerce
websites[27]. The research trend of Internet technology mainly focuses on artificial
intelligence, big data and other aspects[28, 29].
The manufacturing industry has recently been focusing on improving energy efficiency
to reduce greenhouse gas emissions and achieve sustainable growth. The focus is on
combining existing energy technologies with new information and communication
technologies as the Fourth Industrial Revolution approaches[30]. Smart manufacturing
can only be achieved by combining the physical manufacturing world and the digital
world to realize a series of smart manufacturing activities such as active perception,
real-time interaction, automatic processing, intelligent control, and real-time
optimization[31]. With the rapid development of the Internet of Things, Cyber-Physical
Systems, and Big Data, sustainable smart manufacturing provides a new strategy
for energy management by applying advanced information technologies[32]. The
emergence of the Internet of Things (IoT) as the new paradigm of Information and
Communication Technology (ICT) and rapid changes in technology and urban needs urge
cities around the world to formulate smart city policies[33]. The Internet of Things has
enhanced the effectiveness of response operations in terms of resource accountability,
specialized actions, situation assessment, resource allocation, and multi-organization
coordination[34]. The Internet of Things can be used to collect ever more data, which
decision-makers can use to obtain the necessary information, as in the case of ATI Mely
Fashion[35]. IoT and data analytics will change the entire supply chain process,
and this has the potential to revolutionize management[36]. In an empirical application,
the Internet of things can help asset managers make the right decisions at the right time
by providing sufficient quality data to generate the required information, thus benefiting
asset management organizations[37].
The Internet of Things is not only a valuable technology for the remote and networked
control of devices and data sources; it also shares a concern with other internet-connected
devices: the potential security and privacy issues associated with their use[38-41].
Therefore, the Internet of Things and remote network control have huge development
potential, but also security and privacy problems that cannot be
ignored[42]. The fact that data related to Internet of Things devices are sent over
the Internet and stored in the cloud makes them vulnerable to attacks and may expose
the devices to hackers[43]. When Internet of Things devices are used
with sensitive personal data related to medical treatment, their security and privacy are
particularly important[44]. Scientists call for a new regulatory approach that can intercept
attacks, validate data, control access, and guarantee customer privacy[45].
The Internet has shaped the acceptance of the Internet of Things: since the 2000s, the
Internet age has become a global phenomenon, and in its continuous development all
walks of life face different challenges [46, 47]. However, across human development it is
not difficult to find that people have the technical literacy to adopt innovative new
solutions [48–50]. With the increasing popularity of Wi-Fi and 4G LTE
Internet connection, the use of IoT devices is becoming more and more common in our
daily life[51]. In terms of the use of the Internet, some scholars have found from 2014 to
2017 that house buyers and tenants aged between 18 and 49 are highly involved in Internet
activities, so they are most willing to use the Internet of things in their daily
consumption[52, 53]. Using the scientometric method, some scholars grouped frequently
appearing terms from the Scopus paper database according to keywords, titles, and
abstracts. Their study found a remarkable increase in the number of IoT articles in each
category. The scientometric method lets the analysis focus on shifts in IoT characteristics
and themes toward directions researchers had not yet explored, serving as a
comprehensive guide for further research and for industry strategies directed at the
concepts that support the fourth industrial revolution[54]. Some scholars have analyzed
the research output data on ‘Big Data’ during 2010-2014 indexed in both the Web of
Knowledge and Scopus. The
analysis maps comprehensively the parameters of total output, growth of output,
authorship and country-level collaboration patterns, major contributors (countries,
institutions, and individuals), top publication sources, thematic trends, and emerging
themes in the field[55]. Researchers reviewed papers published in the ten top journals to
investigate the contributions of the Information Systems & MIS articles in the electronic
commerce literature. The bibliometric study examines the extant literature on Information
Systems & MIS and international business, and the results provide a global perspective
of the field, identifying the works that have had the greatest impact, the intellectual
interconnections among authors and published papers, and the main research traditions
or themes that have been explored in Information Systems & MIS studies. Structural and
longitudinal analyses reveal changes in the intellectual structure of the field over time
[56]. Some researchers, using scientometric data extracted from Scopus, explored how
the Internet has become a powerful knowledge machine that forms part of the scientific
infrastructure not just across technology fields, but also across the social sciences,
sciences, and humanities[57].
To sum up, the development of Internet technology, the Internet of Things, big data, and
e-commerce has become a hot topic in recent years. Although there are many articles in
individual fields, there are few studies that combine the three comprehensively, so it is
impossible to get a complete picture of both the trees and the forest. On the other
hand, visualization research on the hotspots and trends of e-commerce supported by big
data and Internet technology through scientometrics is still lacking. This paper
makes up for the lack of comprehensive research in these three areas by mapping key
phrases, including major country, organization, and source clusters, based on the WoS
database. These maps will help to track and explore interdisciplinary cooperation over
the years, laying the foundation for the application of Internet technology and big data in
the field of e-commerce.
## 3. DATA AND METHODS
The concept of the research fronts was first proposed by Price to describe the dynamic
nature and ideological status of the research field[58]. Research fronts are the focus of
many researchers in recent years[1]. They are usually represented by a set of articles that
discuss the same or similar issues. Typically, the research fronts consist of about 40 or 50
recently published articles; the study of changes in a relatively small literary network can
help to track the trajectory of an uncounted number of documents[58]. Research fronts
can reveal theoretical trends and the emergence of new topics[59].
To obtain literature related to e-commerce, Internet technology, and big data, the
scientometric analysis in this paper uses the Web of Science (WoS) "advanced search",
taking the subjects under study as the core keywords (including similar-meaning phrases)
and searching within the scope of the subjects' main research areas. The keywords and
research fields were set as follows:
TS= ("Electronic Commerce" OR "Internet Technology" OR "Big Data") AND TS=
("Digital Technology" OR " Virtual Technology" OR "Online Communication" OR
"Mobile Technology" OR "Internet of Things" OR "New Media") AND SU= ("Business
& Economics" OR "Government & Law" OR "Social Sciences" OR "Management" OR
"Communication" OR "Technology"). Various search terms were selected from the
Oxford Bibliographies. After removing terms with overly narrow search results and too
little data for discussion, 265 articles (indexed in SCI-Expanded and SSCI) were retrieved
in February 2020 as the sample for this study. The VOSviewer and Python visualization
packages were then used for mapping, and cluster diagrams of the keyword overview,
major countries, organizations, top-level sources, co-citation networks, and bibliographic
coupling networks were drawn. By creating the various clusters, checking the sizes of the
nodes, and checking the relationships and proximity of the nodes, a thorough analysis is
carried out one by one.
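The query above can be assembled programmatically. The following is a minimal Python sketch, assuming the WoS field-tag syntax shown in the text (spacing normalized; the helper function is our own naming, not part of the study's tooling):

```python
# Sketch: composing the Web of Science "advanced search" query used in this
# study from its three term lists. TS (topic) and SU (research area) are the
# WoS field tags quoted in the text.
def wos_clause(tag, terms):
    """Join double-quoted terms with OR under a single WoS field tag."""
    joined = " OR ".join('"{}"'.format(t) for t in terms)
    return "{}=({})".format(tag, joined)

core = wos_clause("TS", ["Electronic Commerce", "Internet Technology", "Big Data"])
context = wos_clause("TS", ["Digital Technology", "Virtual Technology",
                            "Online Communication", "Mobile Technology",
                            "Internet of Things", "New Media"])
subjects = wos_clause("SU", ["Business & Economics", "Government & Law",
                             "Social Sciences", "Management",
                             "Communication", "Technology"])

# The three clauses are combined with AND, exactly as in the search above.
query = " AND ".join([core, context, subjects])
print(query)
```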
## 4. RESEARCH MAPPING RESULTS
This section provides a comprehensive analysis of the research results in terms of
publication and citation trends, top research institutions, and keyword clusters, using
graphs and tables.
## 4.1 Annual trends
As shown in Figure 1 and Figure 2, publications on this topic began to appear in 2006,
increased rapidly and exponentially from 2015, and peaked in 2019 (the data for 2020 are
incomplete, so 2020 is excluded from the comparison). The
increasing trend of the number of citations is consistent with the increasing trend of
publications.
**Figure 1. Trends in publications from 2000 to 2020**
**Figure 2. Variation trend of citations from 2000 to 2020**
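The annual trend behind Figure 1 is a simple per-year tally of the retrieved records. A minimal sketch with hypothetical records (a real WoS export stores the publication year in its "PY" field):

```python
# Sketch: tabulating publications per year from bibliographic records.
# The records here are toy data standing in for a WoS export.
from collections import Counter

records = [{"PY": 2006}, {"PY": 2015}, {"PY": 2019}, {"PY": 2019}, {"PY": 2019}]
per_year = Counter(r["PY"] for r in records)

# Print counts in chronological order, as plotted in Figure 1.
for year in sorted(per_year):
    print(year, per_year[year])
```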
## 4.2 Keyword graphs and clusters
To build a keyword network, this paper uses the VOSviewer software to construct the
author-keyword co-occurrence network graph. The authors supplied 954 keywords in
total. After screening, 56 more important keywords were selected for analysis, where
"more important" means a keyword appeared at least 3 times. According to the co-occurrence
relationship, the 56 keywords studied in this paper were divided into 6 clusters, each
cluster corresponding to a different color. Each circular node in the figure represents a
keyword. The larger the area of the node is, the more critical the keyword is in the study.
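The construction just described (keep author keywords occurring at least a minimum number of times, then weight a link by how many articles two keywords share) can be sketched as follows; the records are toy data, and the threshold is lowered from the study's 3 to fit them:

```python
# Sketch: an author-keyword co-occurrence network in the style of VOSviewer.
# Keywords below a minimum occurrence count are dropped; each pair of kept
# keywords sharing an article adds 1 to the link weight.
from collections import Counter
from itertools import combinations

MIN_OCC = 2  # the study used 3; smaller here to suit the toy records

records = [
    {"internet of things", "big data", "cloud computing"},
    {"big data", "e-commerce"},
    {"internet of things", "big data"},
]

occurrences = Counter(kw for rec in records for kw in rec)
kept = {kw for kw, n in occurrences.items() if n >= MIN_OCC}

links = Counter()
for rec in records:
    for a, b in combinations(sorted(rec & kept), 2):
        links[(a, b)] += 1

print(links[("big data", "internet of things")])  # co-occurrence link weight
```

In VOSviewer terms, `links` holds the edge weights and `occurrences` the node sizes.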
According to Figure 3, the green cluster includes 11 keywords such as the Internet of
things, big data analysis, cloud computing, and the fourth industrial revolution. The
Internet of things (IoT) is an information carrier based on the Internet, traditional
telecommunication network, etc., which enables all objects to form an interconnected
network. Through observation, it is found that the keyword Internet of things is most
closely related to other clusters. Therefore, we can see that in the era of big data, people
are more inclined to infiltrate the Internet of things into various industries and use cloud
computing and Industry 4.0 to create new business models. There are 9 keywords in the
dark blue cluster, the main ones being privacy, data protection, deep learning, and security.
Among them, the keywords of privacy and security in this cluster are correlated with
those of service and commodity. The focus of this cluster is on the privacy and security
problems brought by the application of big data and how to effectively utilize the deep
learning of big data for data protection in the commodity and service industry. The red
cluster has 16 keywords, dominated by smart city, data mining, energy efficiency, and
sustainability. This cluster is less related to other clusters, among which smart cities are
most closely related to sustainability. The keywords of the whole group focus on energy
efficiency and the sustainable development of each industry system.
There are 7 keywords in the yellow cluster. Blockchain, Internet of things, digitization,
e-commerce, and innovation are the most influential keywords in the cluster. The focus
of this cluster is the relatively emerging digital science. As shown in the figure, although
the yellow cluster is not mature and large, each node has begun to communicate with
other fields in the figure. This shows that the scientific application of emerging data has
begun, and has full development potential. The light blue cluster involves a few nodes,
but the big data node is one of the most important central nodes in the whole picture. The
association emanating from this central node radiates almost to the main node of each
population, connecting the areas of interest in this paper. In addition to big data, the nodes
represented by artificial intelligence and digital transformation are more significant than
other keywords in the cluster, and they are related to blockchain, digitalization, and other
fields by themselves. However, compared with other keywords in the same cluster, the
relationship with e-commerce in the yellow cluster is weak. By comparison, it can be
found that the application of big data and other related technologies in the field of e
-----
commerce is highly feasible, and there is a large development space, which is worthy of
further study by scholars in this region. In the purple cluster, the nodes of data analysis,
intelligent manufacturing, cybersecurity, and risk management occupy a prominent
position. The purple nodes shown in the figure are not as concentrated as those of other
clusters. However, even in this relatively dispersed form, through the interrelations
among the purple nodes they still exert non-negligible correlation influences on the
Internet of Things, big data analysis, privacy, data protection, deep learning, and other
fields.
Combined with the overall view, the largest nodes "Internet of things" and "big data" in
the diagram are closely related to smart cities, data analysis, and the fourth industrial
revolution, but all of them are sparsely linked to e-commerce. Although technologies such
as big data and the Internet of things have been well developed in various fields, it is
obvious that there is still a big gap in the research of e-commerce in this aspect. With the
progress and popularity of the Internet, the public is more and more interested in
electronic transactions, and e-commerce is more and more popular. As a direct result of
the development of the Internet, e-commerce is a new development direction for the
application of big data technology. Therefore, it is an important task to explore the use of
Internet technology and big data analysis to promote the systematic transformation of e-commerce.
**Figure 3. Keyword cluster overview**
## 4.3 Top organizations
In this study, the distribution of author institutions was examined to determine their
geographical distribution. Table 1 lists all countries accounting for more than 5% of the
articles. The top three are China, the US, and the UK, together accounting for 60.4 percent
of the total. China overtook
the United States to assume the top spot. The booming development of the Internet has
driven the rapid development of e-commerce in China. The annual "double eleven" event,
ranging from tens of millions to hundreds of billions RMB, is a great epitome of the
development of the B2C model in China. Whether measured by user scale or market scale,
China's e-commerce is undoubtedly the fastest growing in the world.
**Table 1. Main geographical distribution of authors**

| Country | Number of articles | % |
| --- | --- | --- |
| China | 59 | 22.30% |
| America | 58 | 21.90% |
| Britain | 43 | 16.20% |
| Korea | 20 | 7.60% |
| Australia | 18 | 6.80% |
| India | 17 | 6.40% |
| Spain | 16 | 6.00% |
| Others | 34 | 12.80% |
Figure 4 shows the status of collaborating on a topic. As shown in figure (a), China, the
United States, and the United Kingdom dominate the national network and become the
three main parts of the research. Figure (b) clearly shows that six major organizational
clusters, namely the Chinese academy of sciences in China, northwestern polytechnical
university, Zhejiang university, Pennsylvania state university in the United States, the
University of Oxford in the United Kingdom, and the University of Melbourne in
Australia, are closely related to each other. In the distribution channels of figure (c), the
main central clusters include journals on sustainability, international production research,
technology forecasting and social change, business vision, production planning and
control, and sustainable cities and society, and are closely related to each other. Through
the comparative analysis of the three clusters, it can be found that the countries that
occupy the dominant position in data resources have a greater influence on the
organization cluster and source cluster, and have closer connections with each other.
China, the United States, and the United Kingdom are notable cases.
**Figure 4. (a) Collaboration between states and institutions**
**Figure 4. (b) Organization cluster**
**Figure 4. (c) Source cluster**
In order to explore the bibliographic coupling between publications and the co-citation
between cited sources, VOSviewer software was used to construct the co-occurrence
network graph. Table 2 lists the major journals related to e-commerce under big data and
Internet technology, with 4 clusters. The "Discipline" section lists the areas of WoS
research, and the "Journal" section lists the rankings based on the number of articles
published. Through the data sorted out in Table 2, the corresponding display diagram is
drawn, and the visualization results are shown in Figure 5.
**Table 2. Top-level publication source details based on the bibliographic coupling relationship network**

| Cluster | Journal | Discipline |
| --- | --- | --- |
| #1 | (3-1) Symmetry-Basel, (3-3) Sustainable Cities and Society, (5) Big Data & Society, (6-1) Computer Law & Security Review, (7-3) Nano Energy, (7-2) Telecommunications Policy, (7-1) IEEE Systems Journal, (8-7) Automation in Construction, (8-4) Arabian Journal for Science and Engineering | Multidisciplinary Sciences; Green & Sustainable Science & Technology; Construction & Building Technology; Energy & Fuels; Social Sciences, Interdisciplinary; Law; Nanoscience & Nanotechnology; Physics, Applied; Chemistry, Physical; Materials Science, Multidisciplinary; Communication; Information Science & Library Science; Telecommunications; Operations Research & Management Science; Computer Science, Information Systems; Engineering, Electrical & Electronic; Engineering, Civil |
| #2 | (3-4) Technological Forecasting and Social Change, (4-1) Business Horizons, (7-4) Complexity, (8-8) Journal of Systems Science and Systems Engineering, (8-2) Journal of Retailing and Consumer Services, (8-6) Professional De La Information, (8-1) Business Process Management Journal | Regional & Urban Planning; Business; Multidisciplinary Sciences; Mathematics, Interdisciplinary Applications; Operations Research & Management Science; Communication; Information Science & Library Science; Management |
| #3 | (2) International Journal of Production Research, (4-2) Production Planning & Control, (6-2) Journal of Manufacturing Systems, (8-3) International Journal of Production Economics | Operations Research & Management Science; Engineering, Manufacturing; Engineering, Industrial |
| #4 | (1) Sustainability, (3-2) Journal of Cleaner Production | Green & Sustainable Science & Technology; Environmental Sciences; Environmental Studies; Engineering, Environmental |
**Figure 5. Top-level publication source details**
Because the journal "Journal of Cleaner Production" contains most of the papers, it is omitted from this part of the analysis to keep the results reasonable. Figure 6 shows the co-citation relationships among the references cited by the sampled papers. Based on these relationships, the 34 clusters fall mainly into three groups, namely clusters (a), (b), and (c); the larger a node, the greater the strength of its co-citation links or the number of its citations. The three clusters are centered on "International Journal of Production Research", "MIS Quarterly", and "Sustainability". Among them, "International Journal of Production Research" in cluster (a) has the strongest co-citation relationship, followed by "International Journal of Production Economics", also in cluster (a). This indicates that topics such as e-commerce draw mainly on the production and economics research disciplines. "Technological Forecasting and Social Change", "Harvard Business Review", "Production Planning & Control", and other sources are also near the top of the list, indicating that they are cited by many scholars, have high citation rates, apply to a wide range of research fields, and carry substantial influence.
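The co-citation measure used above links two references whenever they appear together in the same paper's reference list; the link strength is the number of papers that cite both. A minimal sketch of this count is shown below — the citing-paper records are hypothetical toy data, not drawn from the study's corpus, and this is a generic illustration rather than the tool actually used in the study.

```python
from itertools import combinations
from collections import Counter

# Each citing paper is represented by the set of sources it references.
# These records are hypothetical, for illustration only.
citing_papers = [
    {"Int J Prod Res", "Int J Prod Econ", "Sustainability"},
    {"Int J Prod Res", "Int J Prod Econ", "MIS Quarterly"},
    {"MIS Quarterly", "Sustainability"},
]

# Two sources are co-cited each time they appear together in one
# reference list; the running count is the co-citation link strength.
cocitation = Counter()
for refs in citing_papers:
    for a, b in combinations(sorted(refs), 2):
        cocitation[(a, b)] += 1

print(cocitation[("Int J Prod Econ", "Int J Prod Res")])  # → 2
```

In a network visualization such as figure 6, these pair counts become edge weights, and node size reflects the total citations or link strength of each source.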
Figure 7 shows the bibliographic coupling network of the top 22 publication sources. The sources with a high degree of coupling and a high citation count are mainly located in cluster (c). Among research on the integration of e-commerce and Internet technologies, "Sustainability" publishes the most articles and occupies a dominant position among the top-level publications. It also has high link strength with "International Journal of Production Research" and "Production Planning & Control", both of which sit in cluster (c). These sources cite many of the same publications, with a high degree of cross-referencing, mainly at the intersection of sustainability, production, business, and engineering. This is broadly consistent with the result of figure 5: publications with high co-citation rates also show a high degree of bibliographic coupling. Another cluster (d), centered on "Sustainable Cities and Society", focuses on social science, big data, architecture, and engineering. Cluster (a), centered on "Symmetry", focuses on areas such as the digital economy, mathematics, and computer law. In cluster (b), centered on "Technological Forecasting and Social Change", the research fields mainly relate to business management, retail services, and information science. The aims and scope of the journals in these four clusters are similar: all focus on multidisciplinary research concentrating on the business economy, data, and science.
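Bibliographic coupling, in contrast to co-citation, links two publication sources by the number of references they share. A minimal sketch follows; the reference sets attached to each source are hypothetical examples, and the source names are used only for illustration.

```python
# Bibliographic coupling strength between two publication sources is
# the size of the intersection of their reference sets.
# The reference identifiers below are hypothetical.
refs_by_source = {
    "Sustainability": {"r1", "r2", "r3", "r4"},
    "Int J Prod Res": {"r2", "r3", "r5"},
    "Prod Plan Control": {"r3", "r4", "r5"},
}

def coupling_strength(a: str, b: str) -> int:
    """Number of shared references between sources a and b."""
    return len(refs_by_source[a] & refs_by_source[b])

# All unordered pairs and their coupling strengths (edge weights
# of a bibliographic coupling network such as figure 7).
pairs = {(a, b): coupling_strength(a, b)
         for a in refs_by_source for b in refs_by_source if a < b}
print(pairs)
```

The key difference from co-citation is the direction of the shared link: coupling compares the reference lists of the citing sources, so it can be computed as soon as the papers are published, whereas co-citation strength accumulates only as later papers cite the pair together.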
**Figure 6. Top cited sources: a co-citation relationship network visualization**
**Figure 7. Top publication sources: a bibliographic coupling relationship network**
visualization
-----
## 5. CONCLUSIONS
## 5.1 Findings and Contributions
Based on the co-occurrence data networks, the following can be found:

(1) Since 2015, China, the United States, and the United Kingdom have occupied leading positions in the rapidly growing body of publications. Although a developing country, China ranks among the top three, which is closely related to its booming e-commerce sector; the birth and popularity of new shopping "festivals" such as "Double 11" and "Double 12" have helped China occupy a core position in the literature.

(2) Current research capacity is concentrated in institutions from China, the United States, and the United Kingdom, while researchers from industry — the enterprises and communities at the core of practice — are lacking. This is related to industry attributes and the nature of the work, and to some extent it has caused a disconnection between theory and practice, suggesting that further research should focus on cross-border collaboration among industry, theory, and practice.

(3) Cluster analysis of the major publication sources reflects the interdisciplinary nature of this research subject, and the "Internet +", "big data +", and "e-commerce +" research modes will become new research directions and hot fields. The citation rate of a paper is positively correlated with its degree of bibliographic coupling. Classic journals such as "International Journal of Production Research", "Production Planning & Control", and "Sustainability" are widely recognized, with broad scopes of application and great influence.

(4) From the keyword cluster analysis, the largest nodes, "Internet of things" and "big data", are closely related to smart cities, data analysis, and the fourth industrial revolution, but less so to e-commerce. This shows that, with the support of big data and Internet technology, there is still much room for the development of e-commerce. As the Internet becomes more widespread and the public grows increasingly adept at and enthusiastic about electronic transactions, e-commerce will usher in a new round of explosive growth.

These findings will provide guidance and assistance to researchers in e-commerce, big data, and related technologies.
## 5.2 Limitations and Further Research Direction
This research, however, is subject to several limitations. On the one hand, the data for 2020 are incomplete, so the numbers of articles published and cited for that year may not be comparable with other years. On the other hand, this paper analyzes the frontier of e-commerce development supported by big data and Internet technology and explores the current development trends and key issues of concern, but it does not examine what measures e-commerce should take under these trends, or how faster and more stable development can be achieved with the help of big data, the Internet of Things, artificial intelligence, and other technologies. This is the focus of future research.
-----
## 6. CITATIONS
[1] J. S. Liu, L. Y. Y. Lu, W. M. Lu., “Research fronts in data envelopment analysis,”
_Omega, Vol. 58, pp. 33-45, 2016._
[2] X. W. Su, X. Li, Y. X. Kang., “A bibliometric analysis of research on intangible
cultural heritage using citespace,” SAGE Open, Vol. 9, No. 2, pp. 1-15, 2019.
[3] G. J. Avlonitis and D. A. Karayanni., “The impact of internet use on business-to-business marketing,” Industrial Marketing Management, Vol. 29, No. 5, pp. 441-459, 2000.
[4] A. Sheth., “Transforming big data into smart data: Deriving value via harnessing
volume, variety, and velocity using semantic techniques and technologies,” in Conf.
_2014 IEEE 30th International Conference on Data Engineering, Chicago, IL, USA,_
2014, pp. 2-2.
[5] G. Oosthuizen., “Security issues related to E-commerce,” Network Security, Vol.
1998, No. 5, pp. 10-11, 1998.
[6] K. Tiwari and M. S. Khan., “Sustainability accounting and reporting in the industry
4.0,” Journal of Cleaner Production, Vol. 258, pp. 1-12, 2020.
[7] J. Li, C. Ma, M. R. Guo, L. L. Gao, G. Q. Gu., “Development status and technical
framework of internet-plus modern agriculture,” in Proc. 2018 the 10th
_International Conference on Computer and Automation Engineering, Brisbane,_
Australia, 2019, pp. 1-7.
[8] C. Lu, W. H. Qiu, X. L. Cheng., “Research on aviation big data and e-commerce
applications,” in Proc. 2017 International Conference on Electronic, Control,
_Automation and Mechanical Engineering,_ Sanya, China, 2018, pp.446-450.
[9] C. Zhang and X. Pei., “Research on logistics cost control of e-commerce enterprises
under the background of ‘Dazhi Yiyun’—Taking Jingdong mall as an example,”
_Modern Management, Vol. 9, No. 3, pp. 408-413, 2019._
[10] M. M. Queiroz, S. Fosso Wamba, M. C. Machado, R. Telles., “Smart production
systems drivers for business process management improvement: An integrative
framework,” Business Process Management Journal, Vol. 26, No. 5, pp. 1075-1092, 2020.
[11] Y. Liu, Y. F. Zhang, S. Ren, M. Y. Yang, Y. T. Wang, D. Huisingh., “How can
smart technologies contribute to sustainable product lifecycle management?”
_Journal of Cleaner Production, Vol. 249, pp. 1-4, 2020._
[12] S. S. Kamble, A. Gunasekaran, S. A. Gawankar., “Achieving sustainable
performance in a data-driven agriculture supply chain: A review for research and
applications,” International Journal of Production Economics, Vol. 219, pp. 179-194, 2020.
-----
[13] A. M. Ciruela-Lorenzo, A. R. Del-Aguila-Obra, A. Padilla-Meléndez, J. J. Plaza-Angulo., “Digitalization of agri-cooperatives in the smart agriculture context. Proposal of a digital diagnosis tool,” Sustainability, Vol. 12, No. 4, pp. 1325, 2020.
[14] N. Côrte-Real, P. Ruivo, T. Oliveira., “Leveraging internet of things and big data
analytics initiatives in European and American firms: Is data quality a way to
extract business value?” Information & Management, Vol. 57, No. 1, pp. 48-51,
2020.
[15] H. S. Kang, J. Y. Lee, S. S. Choi, H. Kim, J. H. Park, J. Y. Son, B. H. Kim, S. D.
Noh., “Smart manufacturing: Past research, present findings, and future directions,”
_International Journal of Precision Engineering and Manufacturing-Green_
_Technology, Vol. 3, No. 1, pp. 111-128, 2016._
[16] J. Ruiz-Rosero, G. Ramirez-Gonzalez, J. Williams, H. Liu, R. Khanna, G.
Pisharody., “Internet of things: a scientometric review,” Symmetry, Vol. 9, No. 12,
pp. 301, 2017.
[17] J. Iqbal, M. Khan, M. Talha, H. Farman, B. Jan, A. Muhammad, H. A. Khattak.,
“A generic internet of things architecture for controlling electrical energy
consumption in smart homes,” Sustainable Cities and Society, Vol. 43, pp. 443-450,
2018.
[18] L. Uden and W. He., “How the internet of things can help knowledge management:
a case study from the automotive domain,” Journal of Knowledge Management,
Vol. 21, No. 1, pp. 57-70, 2017.
[19] P. K. Khatua, V. K. Ramachandaramurthy, P. Kasinathan, J. Y. Yong, J.
Pasupuleti, A. Rajagopalan., “Application and assessment of internet of things
toward the sustainability of energy systems: Challenges and issues,” Sustainable
_Cities and Society, Vol. 53, pp. 2-10, 2020._
[20] M. Sanchez, E. Exposito, J. Aguilar., “Industry 4.0: survey from a system
integration perspective,” International Journal of Computer Integrated
_Manufacturing, Vol. 33, No. 10-11, pp. 1017-1041, 2020._
[21] C. E. Cotet, G. C. Deac, C. N. Deac, C. L. Popa., “An innovative industry 4.0
cloud data transfer method for an automated waste collection system,”
_Sustainability, Vol. 12, No. 5, pp. 1839, 2020._
[22] J. Nagy, J. Oláh, E. Erdei, D. Máté, J. Popp., “The role and impact of industry 4.0
and the internet of things on the business strategy of the value chain—the case of
hungary,” Sustainability, Vol. 10, No. 10, pp. 3491, 2018.
[23] S. Wachter., “Normative challenges of identification in the internet of things:
Privacy, profiling, discrimination, and the GDPR,” Computer Law & Security
_Review, Vol. 34, No. 3, pp. 436-449, 2018._
-----
[24] A. J. A. M. van Deursen and K. Mossberger., “Any thing for anyone? A new digital divide in internet-of-things skills,” Policy & Internet, Vol. 10, No. 2, pp. 122-140, 2018.
[25] J. Yu, N. Subramanian, K. Ning, D. Edwards., “Product delivery service provider
selection and customer satisfaction in the era of internet of things: A Chinese e-retailers’ perspective,” International Journal of Production Economics, Vol. 159,
pp. 104-116, 2015.
[26] Y. T. Tsai, S. C. Wang, K. Q. Yan, C. M. Chang., “Precise positioning of
marketing and behavior intentions of location-based mobile commerce in the
internet of things,” Symmetry, Vol. 9, No. 8, pp. 139, 2017.
[27] X. Y. Yu, S. K. Roy, A. Quazi, B. Nguyen, Y. Q. Han., “Internet entrepreneurship
and ‘the sharing of information’ in an internet-of-things context: The role of
interactivity, stickiness, e-satisfaction and word-of-mouth in online SMEs’
websites,” Internet Research, Vol. 27, No. 1, pp. 74-96, 2017.
[28] T. Saarikko, U. H. Westergren, T. Blomquist., “The internet of things: Are you
ready for what’s coming?” Business Horizons, Vol. 60, No. 5, pp. 667-676, 2017.
[29] J. Serrano-Cobos, “Tendencias tecnológicas en internet: hacia un cambio de
paradigma,” El Profesional de la Información, Vol. 25, No. 6, pp. 843-850, 2016.
[30] K. T. Park, Y. T. Kang, S. G. Yang, W. B. Zhao, Y. S. Kang, S. J. Im, D. H. Kim,
S. Y. Choi, S. Do Noh., “Cyber physical energy system for saving energy of the
dyeing process with industrial internet of things and manufacturing big data,”
_International Journal of Precision Engineering and Manufacturing-Green_
_Technology, Vol. 7, No. 1, pp. 219-238, 2020._
[31] A. Vatankhah Barenji, X. L. Liu, H. Y. Guo, Z. Li., “A digital twin-driven
approach towards smart manufacturing: reduced energy consumption for a robotic
cellular,” International Journal of Computer Integrated Manufacturing, Vol. 34,
No. 7-8, pp. 1-16, 2020.
[32] S. Y. Ma, Y. F. Zhang, S. Ren, H. D. Yang, Z. F. Zhu., “A case-practice-theory-based method of implementing energy management in a manufacturing factory,”
_International Journal of Computer Integrated Manufacturing, Vol. 34, No. 7-8, pp._
1-15, 2020.
[33] N. Noori, T. Hoppe, M. de Jong., “Classifying pathways for smart city
development: comparing design, governance and implementation in Amsterdam,
Barcelona, Dubai, and Abu Dhabi,” Sustainability, Vol. 12, No. 10, pp. 4030, 2020.
[34] L. Yang, S. H. Yang, L. Plotnick., “How the internet of things technology
enhances emergency response operations,” Technological Forecasting and Social
_Change, Vol. 80, No. 9, pp. 1854-1867, 2013._
-----
[35] M. Kumar, G. Graham, P. Hennelly, J. Srai., “How will smart city production
systems transform supply chain design: a product-level investigation,” International
_Journal of Production Research, Vol. 54, No. 23, pp. 7181-7192, 2016._
[36] S. Fosso Wamba, S. Akter, A. Edwards, G. Chopin, D. Gnanzou., “How ‘big data’
can make big impact: Findings from a systematic review and a longitudinal case
study,” International Journal of Production Economics, Vol. 165, pp. 234-246,
2015.
[37] P. Brous and M. Janssen., “Advancing e-government using the Internet of things:
A systematic review of benefits,” in Electronic Government, E. Tambouris, M.
Janssen, H. J. Scholl, M. A. Wimmer, K. Tarabanis, M. Gascó, B. Klievink, I.
Lindgren, P. Parycek, Eds. Cham: Springer International Publishing, 2015, pp. 156-169.
[38] T. Xu, J. B. Wendt, M. Potkonjak., “Security of IoT systems: Design challenges
and opportunities,” in Conf. International Conference on Computer-Aided Design,
San Jose, CA, USA, 2014, pp. 417-423.
[39] M. U. Farooq, M. Waseem, A. Khairi, S. Mazhar., “A critical analysis on the
security concerns of internet of things (iot),” International Journal of Computer
_Applications, Vol. 111, No. 7, pp. 1-6, 2015._
[40] Y. H. Hwang., “IoT Security & Privacy: Threats and Challenges,” in Proc. The 1st
_ACM Workshop on IoT Privacy, Trust, and Security - IoTPTS ’15, Singapore,_
Republic of Singapore, 2015, pp. 1-1.
[41] K. Zhao and L. Ge., “A survey on the internet of things security,” in Conf. 2013
_Ninth International Conference on Computational Intelligence and Security,_
Emeishan, China, 2013, pp. 663-667.
[42] M. Tellez, S. El-Tawab, H. M. Heydari., “Improving the security of wireless
sensor networks in an IoT environmental monitoring system,” in Conf. 2016 IEEE
_Systems and Information Engineering Design Symposium, Charlottesville, VA,_
USA, 2016, pp. 72-77.
[43] M. Stanislav and T. Beardsley., “Hacking iot: a case study on baby monitor
exposures and vulnerabilities,” Rapid7, pp. 17, 2015.
[44] S. M. Riazul Islam, Daehan Kwak, M. Humaun Kabir, M. Hossain, Kyung-Sup
Kwak., “The internet of things for health care: a comprehensive survey,” IEEE
_Access, Vol. 3, pp. 678-708, 2015._
[45] R. H. Weber., “Internet of Things – New security and privacy challenges,”
_Computer Law & Security Review, Vol. 26, No. 1, pp. 23-30, 2010._
[46] T. Correa., “Digital skills and social media use: How Internet skills are related to
different types of Facebook use among ‘digital natives’,” Information,
_Communication & Society, Vol. 19, No. 8, pp. 1095-1107, 2016._
-----
[47] A. Šorgo, T. Bartol, D. Dolničar, B. Boh Podgornik., “Attributes of digital natives
as predictors of information literacy in higher education: Digital natives and
information literacy,” British Journal of Educational Technology, Vol. 48, No. 3,
pp. 749-767, 2017.
[48] J. Andersson, H. Hellsmark, B. A. Sandén., “Shaping factors in the emergence of
technological innovations: The case of tidal kite technology,” Technological
_Forecasting and Social Change, Vol. 132, pp. 191-208, 2018._
[49] F. Caputo, V. Scuotto, E. Carayannis, V. Cillo., “Intertwining the internet of things
and consumers’ behaviour science: Future promises for businesses,” Technological
_Forecasting and Social Change, Vol. 136, pp. 277-284, 2018._
[50] I. K. Wang and R. Seidle., “The degree of technological innovation: A demand
heterogeneity perspective,” Technological Forecasting and Social Change, Vol.
125, pp. 166-177, 2017.
[51] J. Gubbi, R. Buyya, S. Marusic, M. Palaniswami., “Internet of Things (IoT): A
vision, architectural elements, and future directions,” Future Generation Computer
_Systems, Vol. 29, No. 7, pp. 1645-1660, 2013._
[52] S. Gurtner, R. Reinhardt, K. Soyez., “Designing mobile business applications for
different age groups,” Technological Forecasting and Social Change, Vol. 88, pp.
177-188, 2014.
[53] E. C. Tzavela, C. Karakitsou, E. Halapi, A. K. Tsitsika., “Adolescent digital
profiles: A process-based typology of highly engaged internet users,” Computers in
_Human Behavior, Vol. 69, pp. 246-255, 2017._
[54] M. Dachyar, T. Y. M. Zagloel, L. R. Saragih., “Knowledge growth and
development: internet of things (IoT) research, 2006–2018,” Heliyon, Vol. 5, No. 8,
pp. 1-12, 2019.
[55] V. K. Singh, S. K. Banshal, K. Singhal, A. Uddin., “Scientometric mapping of
research on ‘Big Data’,” Scientometrics, Vol. 105, No. 2, pp. 727-741, 2015.
[56] W. Glänzel., “Expression of concern: Bibliometric study of electronic commerce
research in information systems & mis journals,” Scientometrics, Vol. 114, No. 3,
pp. 1423, 2018.
[57] E. T. Meyer and R. Schroeder., “The net as a knowledge machine: How the
Internet became embedded in research,” New Media & Society, Vol. 18, No. 7, pp.
1159-1189, 2016.
[58] D. J. de Solla Price., “Networks of scientific papers,” Science, Vol. 149, No. 3683,
pp. 510-515, 1965.
[59] C. Chen., “CiteSpace II: Detecting and visualizing emerging trends and transient
patterns in scientific literature,” Journal of the American Society for Information
_Science & Technology, Vol. 57, No. 3, pp. 359-377, 2005._
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.7903/ijecs.1977?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.7903/ijecs.1977, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "http://academic-pub.org/ojs/index.php/ijecs/article/download/1977/445"
}
| 2,021
|
[
"JournalArticle",
"Review"
] | true
| 2021-09-01T00:00:00
|
[
{
"paperId": "f70054444a2d4c115256fa23f5205a80458e220c",
"title": "Industry 4.0: survey from a system integration perspective"
},
{
"paperId": "4e3a744d6aacb15dea3a75633096f0d89a2985d7",
"title": "A case-practice-theory-based method of implementing energy management in a manufacturing factory"
},
{
"paperId": "9c3bb27f3ecfbd998bd9943b1f2f38de87104804",
"title": "Sustainability accounting and reporting in the industry 4.0"
},
{
"paperId": "a9cd925d4079eeefc9005cbf76948785b5131f46",
"title": "Classifying Pathways for Smart City Development: Comparing Design, Governance and Implementation in Amsterdam, Barcelona, Dubai, and Abu Dhabi"
},
{
"paperId": "1154cfe89c8b6ef1ff5d6d69ffe9ecd273dee1c8",
"title": "How can smart technologies contribute to sustainable product lifecycle management?"
},
{
"paperId": "0647d0cce94c7c582de67a3dfdd4b4516de398af",
"title": "An Innovative Industry 4.0 Cloud Data Transfer Method for an Automated Waste Collection System"
},
{
"paperId": "8b1a74426a7da667b13861d5734e1a534c6cf447",
"title": "Smart production systems drivers for business process management improvement: An integrative framework"
},
{
"paperId": "bdc3c011de1d53396f922caed01ba689e2a3d446",
"title": "Digitalization of Agri-Cooperatives in the Smart Agriculture Context. Proposal of a Digital Diagnosis Tool"
},
{
"paperId": "3857730c001294efab3d3fec0948f50b98a684d5",
"title": "Application and assessment of internet of things toward the sustainability of energy systems: Challenges and issues"
},
{
"paperId": "9cea8e31d6d84fec1c37087604e2b4ea69ef3be2",
"title": "Leveraging internet of things and big data analytics initiatives in European and American firms: Is data quality a way to extract business value?"
},
{
"paperId": "7c198475e8f9c756524a22ca03d4b1cd2ab140b3",
"title": "Knowledge growth and development: internet of things (IoT) research, 2006–2018"
},
{
"paperId": "8986e5bc6d6768dcb9566c154b919408b9fd763a",
"title": "Research on Logistics Cost Control of E-Commerce Enterprises under the Background of “Dazhi Yiyun”—Taking Jingdong Mall as an Example"
},
{
"paperId": "fcf8e1ddff5b53fdb323236c46e28f1f913522f7",
"title": "A Bibliometric Analysis of Research on Intangible Cultural Heritage Using CiteSpace"
},
{
"paperId": "660749e2a940bf5a8caccede2b2318a7616956ed",
"title": "Cyber Physical Energy System for Saving Energy of the Dyeing Process with Industrial Internet of Things and Manufacturing Big Data"
},
{
"paperId": "c08463cd4abe2693379d58f64d9b89a3a1feffbe",
"title": "Development Status and Technical Framework of Internet-Plus Modern Agriculture"
},
{
"paperId": "c1f7ec5cd3be2d0fcedd5cc830fb133c3699b18c",
"title": "A generic internet of things architecture for controlling electrical energy consumption in smart homes"
},
{
"paperId": "b2fed7c11c728a0f2d23185564b0d963312055b4",
"title": "Intertwining the internet of things and consumers' behaviour science: Future promises for businesses"
},
{
"paperId": "0429ccb5c4b97af8a3378a7d487079c3d81d7c7d",
"title": "The Role and Impact of Industry 4.0 and the Internet of Things on the Business Strategy of the Value Chain—The Case of Hungary"
},
{
"paperId": "ec4d7baa60d19fedba5aae3cd23e284ad0c78d32",
"title": "Shaping factors in the emergence of technological innovations: The case of tidal kite technology"
},
{
"paperId": "853e7384f8c03168cb7e7693c6e644f479af69c3",
"title": "Normative challenges of identification in the Internet of Things: Privacy, profiling, discrimination, and the GDPR"
},
{
"paperId": "81f4486ddc9049a92cb6e3d0566455ce7f01e142",
"title": "Any Thing for Anyone? A New Digital Divide in Internet‐of‐Things Skills"
},
{
"paperId": "5fcf3ca83102e72790bdbafbfab6b800e3b82760",
"title": "Research on Aviation Big Data and E-commerce Applications"
},
{
"paperId": "8c25ed01a50161a0f44058f1c754a600ccf3eb65",
"title": "Internet of Things: A Scientometric Review"
},
{
"paperId": "73f603b35c6660bda2a5413224b762bd2997f099",
"title": "The degree of technological innovation: A demand heterogeneity perspective"
},
{
"paperId": "13a02bff9c3d6465dd11f58e3d42895cf795ce86",
"title": "Precise Positioning of Marketing and Behavior Intentions of Location-Based Mobile Commerce in the Internet of Things"
},
{
"paperId": "e793023aa620bfefbf50f8615ffaea57d34915bf",
"title": "Attributes of digital natives as predictors of information literacy in higher education"
},
{
"paperId": "49c7b7f9357ce054d73a8d30d60cb1e87c8dea17",
"title": "Adolescent digital profiles: A process-based typology of highly engaged internet users"
},
{
"paperId": "54c4bb8f6201216f300bfdab0ffdaefa72a5d875",
"title": "How the Internet of Things can help knowledge management: a case study from the automotive domain"
},
{
"paperId": "0e44a32fe0ca4f2800135a4e7c3fa5a2eaaeb638",
"title": "Internet entrepreneurship and \"the sharing of information\" in an Internet-of-Things context: The role of interactivity, stickiness, e-satisfaction and word-of-mouth in online SMEs' websites"
},
{
"paperId": "f708d7f907051784064795f6f6d2ca60365b0a20",
"title": "Tendencias tecnológicas en internet: hacia un cambio de paradigma"
},
{
"paperId": "d599f9ac4afc242723ebe54172835ca4450e01f2",
"title": "Digital skills and social media use: how Internet skills are related to different types of Facebook use among ‘digital natives’"
},
{
"paperId": "c21d1a3e010b44348f9857b41a3c1b89e2565bf3",
"title": "How will smart city production systems transform supply chain design: a product-level investigation"
},
{
"paperId": "def478653fc12389126271bd66b237307d62c950",
"title": "The net as a knowledge machine: How the Internet became embedded in research"
},
{
"paperId": "c81928564160c0ae0c8c1952366b7c1948764241",
"title": "Improving the security of wireless sensor networks in an IoT environmental monitoring system"
},
{
"paperId": "50ab0482224df90b59f5972f28c07626d89a1fab",
"title": "Smart manufacturing: Past research, present findings, and future directions"
},
{
"paperId": "524eebc740d60ec2e8a64094d5c3b658c6f20d72",
"title": "Scientometric mapping of research on ‘Big Data’"
},
{
"paperId": "eebe35863cc4571828455e2aa341c846186dbbd6",
"title": "Advancing e-Government Using the Internet of Things: A Systematic Review of Benefits"
},
{
"paperId": "cddb22908f28a1636cbbdeb3a4f0e00f9cef05a9",
"title": "The Internet of Things for Health Care: A Comprehensive Survey"
},
{
"paperId": "21ab698a7103bc00f49890fefd77ed687fd8fdfd",
"title": "IoT Security & Privacy: Threats and Challenges"
},
{
"paperId": "c55c743586c74b220fde94f8a6092d5cda3c6fee",
"title": "A Critical Analysis on the Security Concerns of Internet of Things (IoT)"
},
{
"paperId": "09e72c1ddaa2f5b26f6c3a04583005bddaa030c7",
"title": "How ‘Big Data’ Can Make Big Impact: Findings from a Systematic Review and a Longitudinal Case Study"
},
{
"paperId": "54c74ae0e8d9066925767ebb54715292a11dfd66",
"title": "Security of IoT systems: Design challenges and opportunities"
},
{
"paperId": "2b9ac2f42bcfda1600119b29cd42b667206720a3",
"title": "Designing mobile business applications for different age groups"
},
{
"paperId": "3a0b1f0995976908b394d2b5d74da5e985fc161e",
"title": "Transforming Big Data into Smart Data: Deriving value via harnessing Volume, Variety, and Velocity using semantic techniques and technologies"
},
{
"paperId": "6d81fa9ae2f96560bd5f6bf9377c0243ac4f6d55",
"title": "A Survey on the Internet of Things Security"
},
{
"paperId": "706c4f5930962606ce351fee5cb290993ed40fec",
"title": "How the internet of things technology enhances emergency response operations"
},
{
"paperId": "72c4d8b64a9959ea45677ca1955d3491ef0f1c62",
"title": "Internet of Things (IoT): A vision, architectural elements, and future directions"
},
{
"paperId": "75cf13f35bb4456270c1c7cb835e234d42b90af4",
"title": "Web site design with the patron in mind: A step-by-step guide for libraries"
},
{
"paperId": "53e0f9d837fca5a1c41eac02b3cba97458fe9d50",
"title": "The Impact of Internet Use on Business-to-Business Marketing"
},
{
"paperId": "38e467b8ea4e062930958f5163991c9f0d2cfd5d",
"title": "Feature: Security issues related to E-commerce"
},
{
"paperId": "81b141fc665e8d0e3b2cc6aa6df23a860997f2d2",
"title": "NETWORKS OF SCIENTIFIC PAPERS."
},
{
"paperId": null,
"title": "A digital twin-driven approach towards smart manufacturing: reduced energy consumption for a robotic cellular"
},
{
"paperId": "f15feac99bd55f8166c753599fdb806258aacb07",
"title": "Achieving sustainable performance in a data-driven agriculture supply chain: A review for research and applications"
},
{
"paperId": "b2b3361c6aabd688d2ce2498e2bfdd36c2a5974c",
"title": "Expression of Concern: Bibliometric study of Electronic Commerce Research in Information Systems & MIS Journals, Scientometrics, 2016, 109(3), 1455–1476 (https://doi.org/10.1007/s11192-016-2142-8)"
},
{
"paperId": "399595cb145d6a90668024c66edba013bdf61750",
"title": "Research fronts in data envelopment analysis"
},
{
"paperId": "020e4a7ac702dab2a37037bfc8a643b9332f6bf7",
"title": "Product delivery service provider selection and customer satisfaction in the era of internet of things: A Chinese e-retailers’ perspective"
},
{
"paperId": null,
"title": "Hacking iot: a case study on baby monitor exposures and vulnerabilities"
},
{
"paperId": "9d6508a44e957d837490d69936db5a211432a411",
"title": "Internet of Things - New security and privacy challenges"
}
] | 12,159
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Materials Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0306030a938e25d0d71862636fff937b53d1cf77
|
[
"Medicine"
] | 0.886318
|
A multi modal approach to microstructure evolution and mechanical response of additive friction stir deposited AZ31B Mg alloy
|
0306030a938e25d0d71862636fff937b53d1cf77
|
Scientific Reports
|
[
{
"authorId": "16380318",
"name": "S. Joshi"
},
{
"authorId": "2157934497",
"name": "Shashank Sharma"
},
{
"authorId": "2151447416",
"name": "M. Radhakrishnan"
},
{
"authorId": "1395656570",
"name": "M. Pantawane"
},
{
"authorId": "2164885977",
"name": "Shreyash M. Patil"
},
{
"authorId": "2154182529",
"name": "Yuqi Jin"
},
{
"authorId": "1799377133",
"name": "Teng Yang"
},
{
"authorId": "2165072119",
"name": "Daniel A. Riley"
},
{
"authorId": "5738327",
"name": "R. Banerjee"
},
{
"authorId": "4336511",
"name": "N. Dahotre"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Sci Rep"
],
"alternate_urls": [
"http://www.nature.com/srep/index.html"
],
"id": "f99f77b7-b1b6-44d3-984a-f288e9884b9b",
"issn": "2045-2322",
"name": "Scientific Reports",
"type": "journal",
"url": "http://www.nature.com/srep/"
}
|
Current work explored solid-state additive manufacturing of AZ31B-Mg alloy using additive friction stir deposition. Samples with relative densities ≥ 99.4% were additively produced. Spatial and temporal evolution of temperature during additive friction stir deposition was predicted using multi-layer computational process model. Microstructural evolution in the additively fabricated samples was examined using electron back scatter diffraction and high-resolution transmission electron microscopy. Mechanical properties of the additive samples were evaluated by non-destructive effective bulk modulus elastography and destructive uni-axial tensile testing. Additively produced samples experienced evolution of predominantly basal texture on the top surface and a marginal increase in the grain size compared to feed stock. Transmission electron microscopy shed light on fine scale precipitation of Mg17Al12 within feed stock and additive samples.
The fraction of Mg17Al12 reduced in the additively produced samples compared to feed stock. The bulk dynamic modulus of the additive samples was slightly lower than the feed stock. There was a ∼ 30 MPa reduction in 0.2% proof stress and a 10–30 MPa reduction in ultimate tensile strength for the additively produced samples compared to feed stock. The elongation of the additive samples was 4–10% lower than feed stock. Such a property response for additive friction stir deposited AZ31B-Mg alloy was realized through distinct thermokinetics driven multi-scale microstructure evolution.
|
## OPEN
# A multi modal approach to microstructure evolution and mechanical response of additive friction stir deposited AZ31B Mg alloy
#### Sameehan S. Joshi[1,2], Shashank Sharma[1,2], M. Radhakrishnan[1,2], Mangesh V. Pantawane[1,2], Shreyash M. Patil[1,2], Yuqi Jin[1,2], Teng Yang[1,2], Daniel A. Riley[1,2], Rajarshi Banerjee[1,2] & Narendra B. Dahotre[1,2][*]
**Current work explored solid-state additive manufacturing of AZ31B-Mg alloy using additive friction**
**stir deposition. Samples with relative densities ≥ 99.4% were additively produced. Spatial and**
**temporal evolution of temperature during additive friction stir deposition was predicted using multi-**
**layer computational process model. Microstructural evolution in the additively fabricated samples**
**was examined using electron back scatter diffraction and high-resolution transmission electron**
**microscopy. Mechanical properties of the additive samples were evaluated by non-destructive**
**effective bulk modulus elastography and destructive uni-axial tensile testing. Additively produced**
**samples experienced evolution of predominantly basal texture on the top surface and a marginal**
**increase in the grain size compared to feed stock. Transmission electron microscopy shed light on fine**
**scale precipitation of Mg17Al12 within feed stock and additive samples. The fraction of Mg17Al12 reduced**
**in the additively produced samples compared to feed stock. The bulk dynamic modulus of the additive**
**samples was slightly lower than the feed stock. There was a ∼ 30 MPa reduction in 0.2% proof stress**
**and a 10–30 MPa reduction in ultimate tensile strength for the additively produced samples compared**
**to feed stock. The elongation of the additive samples was 4–10% lower than feed stock. Such a**
**property response for additive friction stir deposited AZ31B-Mg alloy was realized through distinct**
**thermokinetics driven multi-scale microstructure evolution.**
Magnesium alloys find applications in automobile, aerospace, and biomedical industries due to high specific
strength resulting from a low density of these materials[1][–][5]. Mg alloys also have excellent bio-compatibility[6][,][7] and
electromagnetic shielding capability[8]. However, Mg alloys have a tendency to oxidize during casting and to develop strong texture during deformation, which limits the processing of Mg alloys by conventional methods such as casting and cold working[4][,][9]. Therefore, researchers have explored strategies to overcome these
limitations by using additive manufacturing (AM) routes such as laser beam additive manufacturing (LBAM),
wire arc additive manufacturing (WAAM), and additive friction stir deposition (AFSD)[10][–][12]. LBAM and WAAM
techniques are based on fusion of the feed material which is in the form of powder or wire. Both LBAM and
WAAM techniques depend on melting and consolidation of the precursor material. On the other hand, AFSD is
a solid-state method. The feed material used during AFSD is in the form of commercially available rods or chips, avoiding the use of powder[13]. This is especially important for Mg as its powder is highly pyrophoric[14].
AFSD works on a principle similar to friction stir processing (FSP). However, instead of the solid tool utilized
for FSP, a hollow non-consumable tool is employed during AFSD. The feed material is fed through the hollow
rotating tool which deforms plastically due to frictional heat generated between the tool, feed material, and the
substrate. Such a friction results in softening of the feed material followed by its extrusion underneath the tool.
The tool is then traversed for subsequent deposition of a layer. AFSD has evolved recently with development
1Department of Materials Science and Engineering, University of North Texas, 3940 N Elm St, Denton, TX 76207,
USA. [2]Center for Agile and Adaptive Additive Manufacturing, University of North Texas, 3940 N Elm St, Denton,
TX 76207, USA.
of AM machines such as MELD[®]. It has the ability of producing fully dense large components with complex
geometries[15][,][16]. AM of conventional ferrous[17] and non-ferrous[18][–][20] alloys has been explored through AFSD.
To date, very few reports have been published related to AFSD of Mg alloys[21][–][23]. Work by Calvert demonstrated successful deposition of WE43 Mg alloy through AFSD, but did not explain the evolution of microstructures in correlation with the process attributes[21]. Robinson et al. demonstrated AFSD of AZ31B-Mg
and examined the microstructural as well as mechanical property evolution[22]. The tensile test results showed that
there was ∼ 20% drop in 0.2% proof stress (0.2% PS) and identical ultimate tensile strength (UTS) for the AFSD
processed AZ31B-Mg compared to the wrought AZ31B-Mg material. This work provided a limited explanation
and rationale behind such a lowering of the mechanical properties. In another effort, Williams et al. deposited
WE43 Mg alloy through AFSD[23]. Although these authors reported a ∼ 22 times reduction in grain size for the
AFSD fabricated material compared to the feed stock, they still observed a ∼ 80 MPa reduction in 0.2% PS, ∼
100 MPa reduction in UTS, and 11% reduction in elongation compared to the feed material. Whilst this work examined various processing conditions during AFSD, it lacked a physical explanation of the structure-property evolution in AFSD WE43 Mg alloy.
Based on the above discussion, the mechanisms behind the process-structure-property response in AFSD produced
Mg alloys are not fully explored. Furthermore, compared to conventional FSP, AFSD involves addition of multiple layers which may result in subjecting the previously deposited material to repetitive thermokinetics thereby
potentially impacting the microstructure evolution. Experimental monitoring of thermophysical parameters
during such a complex process is difficult and limited in terms of spatial as well as temporal resolution. In light
of this, computational modeling of the multi layer additive deposition process can provide insights into the
thermokinetic effects experienced by the AFSD produced material throughout the process. Such predictions
of thermokinetics could be vital in uncovering the processing-structure-property response in the AFSD fabricated material. While there are multiple computational modeling efforts related to conventional FSP and rotary
friction welding (RFW)[24][–][27], the literature related to simulation of the AFSD process is sparse. Recently, a
smooth particle hydrodynamics-based AFSD model has been proposed[28]. However, the model was restricted to
a single deposition track, thus lacking in prediction of the effects of repetitive thermokinetics associated with
subsequently added layers. Furthermore, the reported computational run time was substantially long (> 30 h).
In light of the limited experimental and computational efforts related to the AFSD process highlighted above,
the current work systematically investigated the multi scale microstructure evolution and resultant mechanical
property response in AFSD AZ31B-Mg alloy. The microstructure observations were explained using spatial and
temporal thermokinetics predicted by a multi layer computational process model. The mechanical properties of
the AFSD AZ31B-Mg were evaluated using non destructive effective bulk modulus elastography (EBME) and
destructive uni-axial tensile tests. The observed property response was analyzed based on the micro and nano
scale structural changes experienced by the AFSD processed material compared to the feed stock. The current
work formed as a part of continuation of efforts by the present research group focusing on the advanced processing of the Mg alloys[2][,][6][,][29][–][37].
### Methods and materials
#### Additive friction stir deposition. AFSD fabrication was conducted on a MELD[®] machine equipped with a hollow cylindrical tool containing a coaxial cavity of 9.5 × 9.5 mm² cross-section (Fig. 1a). The outer diameter and height of the AFSD tool were 38.1 mm and 138 mm respectively. Commercially available AZ31B-Mg (chemical composition in wt%: Mg-3%Al-1%Zn-0.5%Mn) bar stock in H24 temper condition with dimensions 9.5 × 9.5 × 460 mm³ was fed into the actuator setup through the hollow AFSD tool. The H24 temper treatment for the
feed material consisted of forming the material below 160 °C followed by annealing in the temperature range of
150–180 °C[38]. AZ31B-Mg plate was utilized as the substrate plate during AFSD. It is worth noting here that, the
current study formed as a continuation effort of the previous publication by the authors related to the process
optimization aspects of the AFSD fabrication of AZ31B Mg alloy[37]. Several preliminary trials were conducted to
carry out AFSD of AZ31B Mg and to select the process parameters leading to successful fabrication. The tool rotation velocity was maintained at 400 rpm, whereas the tool linear velocities
of 4.2 and 6.3 mm/s were implemented in the AFSD processing during the current work. It was observed during
initial multiple trials that the successful deposition with minimal flash occurred when the feed rate for the bar
stock was maintained at ∼ 50% of the tool linear velocity. A layer of material was deposited with 140 mm length
and the tool was shifted upwards by 1 mm to deposit a subsequent layer. A total of 5 layers were deposited with
each set of processing conditions. The onboard sensors monitored the variation in tool torque and actuator force as a function of time during each AFSD condition. A type K thermocouple was embedded 4 mm below the surface of the substrate plate, at a location directly below the center of the AFSD deposit, to monitor the temporal variation of
temperature during deposition.
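As a rough, back-of-the-envelope consistency check (not part of the paper's methodology), mass conservation links the ∼50% feed-rate fraction, the feed cross-section, and the 1 mm layer height to an idealized track width:

```python
# Rough mass-conservation sketch (illustrative, not from the paper): material
# pushed through the 9.5 x 9.5 mm feed channel at ~50% of the tool linear
# velocity must end up in a 1 mm thick layer, ignoring flash losses.
A_FEED = 9.5 * 9.5        # feed bar cross-section, mm^2
LAYER_H = 1.0             # layer height, mm
FEED_FRACTION = 0.5       # feed rate ~ 50% of tool linear velocity

# Width drops out independent of v_linear because the feed rate scales with it:
# A_FEED * (FEED_FRACTION * v) = width * LAYER_H * v
width = A_FEED * FEED_FRACTION / LAYER_H
print(f"ideal track width ~ {width:.1f} mm")
```

The idealized width (∼45 mm) somewhat exceeds the ∼38 mm tool-diameter-wide deposit reported later; the gap is plausibly absorbed by flash and geometry effects, so this should be read purely as an order-of-magnitude check.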
The tool residence time ( t_tool ) and feed residence time ( t_feed ) were estimated using the following equations:

$$t_{tool} = \frac{2 R_{tool}}{V_{linear}} \tag{1}$$

where R_tool is the outer radius of the tool and V_linear is the tool linear velocity, and

$$t_{feed} = \frac{2 R_{feed}}{V_{linear}} \tag{2}$$

where R_feed is the equivalent circular radius of the feed material (5.3 mm).
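Eqs. (1)–(2) can be evaluated directly from the geometry given in this section; the sketch below uses the 38.1 mm tool diameter, the 5.3 mm equivalent feed radius, and the two tool velocities employed in this work:

```python
# Residence times per Eqs. (1)-(2); R_tool from the 38.1 mm tool diameter,
# R_feed = 5.3 mm equivalent feed radius, both from the Methods section.
R_TOOL = 38.1e-3 / 2      # outer tool radius, m
R_FEED = 5.3e-3           # equivalent circular feed radius, m

def residence_times(v_linear_m_s):
    """Return (t_tool, t_feed) in seconds for a given tool linear velocity."""
    return 2.0 * R_TOOL / v_linear_m_s, 2.0 * R_FEED / v_linear_m_s

for v in (4.2e-3, 6.3e-3):                      # the two velocities used here
    t_tool, t_feed = residence_times(v)
    print(f"v = {v*1e3:.1f} mm/s -> t_tool = {t_tool:.2f} s, t_feed = {t_feed:.2f} s")
```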
The heat input imparted by the tool (Htool ) due to tool torque was expressed as[39][,][40]
**Figure 1. Schematics of (a) the AFSD process, (b) important AFSD process parameters and attributes employed**
in the current work, (c) non destructive testing via EBME Method, and (d) location of tensile specimen
machined along tool traverse direction through the thickness of the AFSD deposits.
$$H_{tool} = \frac{4\pi^2}{3}\,\omega\,\frac{\tau_{tool\,average}}{R_{tool}\,(A_{tool} - A_{feed})}\,\left(R_{tool}^{3} - R_{feed}^{3}\right) \tag{3}$$
where ω is the rotational velocity of tool-feed assembly, τtool average is the average torque experienced by the AFSD
tool during the deposition (Fig. 1b), A_tool is the cross-sectional area of the tool, and A_feed is the cross-sectional area of the feed. Similarly, the heat input corresponding to the feed stock ( H_feed ) was derived as follows[39][,][40]:
$$H_{feed} = \frac{4\pi^2}{3}\,\mu\,\omega\,\frac{F_{actuator\,average}}{A_{feed}}\,\left(R_{feed}^{3} - 3R_{feed}^{2}\,h\right) \tag{4}$$
where µ is the coefficient of friction (0.6) between feed stock and the base plate[41], F actuator average is the average
actuator force acting upon feed material during deposition (Fig. 1 b), and h is the layer thickness.
Finally, the total energy input per unit area Q_total was estimated as

$$Q_{total} = Q_{tool} + Q_{feed} = \frac{H_{tool}\,t_{tool}}{A_{tool}} + \frac{H_{feed}\,t_{feed}}{A_{feed}} \tag{5}$$

where Q_tool and Q_feed are the energy inputs per unit area for the tool and the feed stock respectively. The process parameters,
values of average tool torque, average actuator force, and computed total energy inputs are presented in Fig. 1b.
Further details about the computations of heat and energy inputs during AFSD process can be located in previous publication by the present research group[37].
#### Examination of multi‑scale microstructure. As an initial step, the as fabricated samples were visually observed and then sectioned for successive analysis. Density of the sectioned samples was evaluated using
Archimedes method with the aid of a high precision Sartorius micro-balance based on the protocol provided
in ASTM B962 standard[42]. At least 3 samples were evaluated for density for each AFSD processing condition.
Microstructural characterization of the as-received feed stock and AFSD processed AZ31B-Mg samples was
performed in X-Z plane by electron back-scattered diffraction (EBSD) in a scanning electron microscope (SEM)
and transmission electron microscopy (TEM) techniques. The samples were sectioned from the central steady
state zone. Samples for EBSD were prepared with preliminary mechanical polishing employing SiC papers in
the range of 800–1200 grit with ethanol as a lubricant. The samples were then transferred to Buehler textmet
cloths containing diamond suspensions with average particle sizes of 1 and 0.25 μm respectively to obtain a mirror-finished surface. The mechanically polished AZ31B-Mg samples appeared to develop an oxide layer, which
prevented obtaining Kikuchi signals during EBSD. This issue was addressed by ion polishing using a Gatan 682
precision etching coating system with the ion beam current of 190 μA and voltage of 5 keV. The sample surface
was inclined at 4° with respect to the ion beam and polished for 30 s. EBSD was performed using a ThermoFisher Nova NanoSEM 230 operating at 20 keV equipped with a Hikari super EBSD detector. The sample surface
was tilted with respect to the primary electron beam by mounting on 70° pre-tilted holder kept at a working
distance of 12 mm. The generated data were further analyzed in TSL OIM analysis 8.0 software, where orientation image maps (OIM) and pole figures were generated. To represent the micro-texture on normal plane of the
processed samples, measured data in the X-Z plane of the AFSD sample were rotated by 90° around X-axis. A
similar approach was adopted for the feed stock material. For better statistics and data consistency of grain sizes
and micro-texture, multiple OIM scans (5) were taken from each sample condition.
Cross-sectional TEM foils were prepared using a Thermo-Fisher Nova 200 Nanolab dual beam focused ion
beam (FIB) microscope. A 30 KV Ga[2][+] beam was used in making trenches and for initial thinning of the foils.
Final thinning to foil thickness less than 100 nm was made with a 5 keV Ga[2][+] beam. A platinum coating was
deposited to protect the processed sample surface from ion beam damage. TEM imaging was performed using a
Thermo-Fisher Tecnai G2 F20 microscope operating at 200 keV to obtain both bright field and dark field micrographs along with corresponding selected area diffraction patterns (SADP).
#### Mechanical evaluation. As a first level of mechanical property evaluation, dynamic elastic constants of
the feed stock and AFSD samples were measured using the non-destructive EBME method (Fig. 1c). These tests
were performed inside a 480 mm × 300 mm × 180 mm glass tank filled with commercially available cutting oil,
where the sample and longitudinal transducer were completely immersed, as depicted in Fig. 1c. An Olympus
V211 0.125-inch diameter 20 MHz planar immersion-style transducer was used to excite a broadband pulse
from 13 to 27 MHz with a pulse repetition period of 2 ms. The scanning motion was accurately controlled by the UR5
robotic arm using MATLAB script. A JSR Ultrasonic DPR 500 Pulse/ Receiver provided the pulse source and
time trigger, and the data was collected by a Tektronix MDO 304 at 1 GHz sampling rate. The contours were
raster-scanned over areas of 100 mm × 25 mm at 1 mm spatial intervals. At each scanned location, the
scan was paused for 20 s for collecting the average of the 512 acoustic signals. The transducer surface aligned
parallel to the sample surface (XY plane) with a distance of more than 2 wavelengths. In present experiments,
the recorded signals were the reflections from the upper and lower sample surfaces. The additional fundamental
details of the EBME process employed to obtain the dynamic elastic constants are provided in the earlier reports
of the authors[43][,][44].
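The full EBME procedure is described in the cited reports[43][,][44]; the basic pulse-echo relation behind it, however, is simple: the time separation between the top- and bottom-surface reflections gives the longitudinal wave speed, and ρv² gives a dynamic (longitudinal) modulus. A sketch with hypothetical numbers:

```python
# Simplified pulse-echo sketch (NOT the full EBME method of refs. 43-44):
# the echoes from the upper and lower sample surfaces are separated by dt,
# the longitudinal wave speed is v = 2*d/dt (two-way travel), and rho*v^2
# is a dynamic longitudinal modulus. All numbers below are hypothetical.
def dynamic_modulus(thickness_m, echo_dt_s, density_kg_m3):
    v = 2.0 * thickness_m / echo_dt_s     # m/s, from two-way travel time
    return density_kg_m3 * v * v          # Pa

# e.g. a 5 mm thick Mg sample, 1.74 us between echoes, rho ~ 1770 kg/m^3
M = dynamic_modulus(5.0e-3, 1.74e-6, 1770.0)
print(f"dynamic longitudinal modulus ~ {M / 1e9:.1f} GPa")
```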
The next level of mechanical evaluation of the AZ31B-Mg feed stock and AFSD samples was carried out using uni-axial tensile testing. Flat dog-bone-shaped tensile specimens with a gage length of 25 mm and a thickness of 1.5 mm, in accordance with ASTM E8 standard[45], were machined out along the length of the deposited sample using a wire electrical discharge machine (EDM) (Fig. 1d). The tensile tests were conducted as per ASTM E8 standard employing a strain rate of 10⁻⁴ s⁻¹ on an Instron universal testing machine with a 25 kN load cell, equipped with an
extensometer. At least 4 samples were tested for each AFSD condition and the feed material. Values of Young’s
modulus, 0.2% PS, UTS, and % elongation were estimated from the recorded engineering stress-strain curves.
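The 0.2% PS values are read off the stress-strain curves by the standard offset method; a minimal sketch of that reduction step, applied to a synthetic bilinear curve rather than measured AZ31B-Mg data, is:

```python
# Sketch of the 0.2% offset method for reading proof stress from an
# engineering stress-strain curve. The bilinear curve below is synthetic,
# not measured AZ31B-Mg data.
def proof_stress_02(points, E):
    """points: list of (strain, stress); E: elastic modulus (stress units).
    Returns stress at the first crossing of the 0.2%-offset elastic line."""
    for strain, stress in points:
        if stress < E * (strain - 0.002):   # curve falls below the offset line
            return stress
    return None

E = 45e3                                     # MPa, near Mg's Young's modulus
curve = [(i * 1e-5, min(E * i * 1e-5, 180 + 800 * i * 1e-5))
         for i in range(2001)]               # toy elastic / linear-hardening law
print(round(proof_stress_02(curve, E), 1))
```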
#### Multi layer computational process model. A computational model of the multi-layer process was employed to predict the spatial and temporal variation in temperature during AFSD fabrication of AZ31B-Mg alloy. Compared to other friction-based processing techniques, AFSD comprises multiple unique phenomena such as feed rod deformation, material extrusion, stirring, and deposition[18]. Sequential events of interactions
among feed, tool, and substrate materials during AFSD as discussed in the “Introduction” section were taken
into consideration while formulating the computational process model (Fig. 2a–c). These steps were repeated
during simulation of total 5 layers. In AFSD, the primary source of heat generation can be attributed to frictional
contact between feed rod/substrate interface and the extruded material/tool shoulder interface (Fig. 2). A multilayer frictional heating thermal model for AFSD was developed employing the governing equation pertaining to
conduction-based heat transfer as expressed below:
$$\rho C_p \frac{\partial T}{\partial t} + \rho C_p\,(\vec{u} \cdot \nabla T) = \nabla \cdot (k \nabla T) + q_p''' \tag{6}$$

In the above equation, T is temperature, t is time, ρ represents density (kg/m³), C_p is specific heat (J/(kg·K)), and
u⃗ is the advection velocity. Importantly, the term q_p''' represents volumetric heat generation; in the context of AFSD it can be related to heat generation due to plastic deformation. However, formulation of q_p''' requires detailed information about plastic strain rates and flow stress, which is computationally taxing (a thermomechanical or CFD model is required) and challenging, especially for a multi-layer modeling framework[25][,][46]. In light of
this, only frictional heating during AFSD was considered in a surface heat flux boundary condition based on a
simple theory of pure conduction models associated with friction stir welding (FSW)[24][,][47]. Thus, the boundary
heat flux q f due to frictional contact between feed rod/substrate interface can be expressed as:
**Figure 2. Schematics of multi-layer computational model methodology adopted in the current work showing**
(a) steps in AFSD process, (b) model boundary conditions, (c) multi layer formulation approach, and (d)
validation exercise with the aid of time temperature plots showing thermocouple readings and multi layer
computational process model predictions.
$$q_f = \tau_{yield} \times (\omega R - V_{linear}\,\sin\theta);\quad 0 < R \le R_{feed} \tag{7}$$
τyield corresponds to shear stress experienced by the deforming material at the feed rod/substrate interface and
R is the distance from center of the feed towards the feed edge. The assumption underlying above formulation
is based on the existence of a fully sticking contact at the interface under plastic deformation of the feed material (Fig. 2b). When the feed material thermally softens via plasticization, the shear stress τyield under sticking
assumptions can be expressed as[48]
$$\tau_{yield} = \frac{\sigma_{yield}}{\sqrt{3}} \tag{8}$$
where σyield is the temperature dependent yield strength of the depositing material available for AZ31B-Mg alloy
in the open literature[41]. Similarly, the surface heat flux at extruded material/tool shoulder interface (Fig. 2b) can
be expressed as following
$$q_s = M_{tool\,average} \times (1 - \delta) \times (\omega R - V_{linear}\,\sin\theta);\quad R_{feed} < R \le R_{tool} \tag{9}$$
δ corresponds to the slip rate signifying the sliding/sticking contact state of the extruded material under the tool shoulder: δ = 0 corresponds to a fully sticking regime, δ = 1 denotes a fully sliding regime, and in a mixed sliding/sticking regime δ ranges between the two. The term M_tool average is derived from back
calculation using experimentally obtained tool torque τtool average data during deposition[37], as explained below
$$\tau_{tool\,average} = \int_{0}^{2\pi}\!\!\int_{R_{feed}}^{R_{tool}} \eta \, M_{tool\,average}\, R^{2} \, dR \, d\theta \tag{10}$$
where η is mechanical efficiency. Furthermore, the slip rate[49] can be expressed as
$$\delta = 1 - \exp\!\left(-\,\frac{\delta_o\,\omega\,(R - R_{feed})}{\omega_o\,(R_{tool} - R_{feed})}\right) \tag{11}$$
where δ_o is a scaling constant and ω_o is a reference value for the rotational tool speed. These values are adjusted according to experimental observations to represent material flowability under the tool shoulder. For instance,
for a given material that gets readily extruded, covering a large portion of the tool shoulder area, the term (1 − δ)
should gradually shift from 1 towards zero as R changes from Rfeed to Rtool and vice versa. Thus, the above two
position-dependent boundary heat flux conditions prescribe the thermal contribution in the developed model.
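The two flux conditions, together with the slip-rate expression of Eq. (11), can be sketched as follows; the yield-stress, M_tool average, and slip-model constants are hypothetical placeholders (in the model they come from literature data and back-calculated torque):

```python
import math

# Sketch of the position-dependent boundary heat fluxes, Eqs. (7)-(11).
# sigma_yield, M_tool_avg, delta0, and omega0 are HYPOTHETICAL placeholders;
# in the paper these come from literature data and back-calculated torque.
R_TOOL, R_FEED = 19.05e-3, 5.3e-3       # tool / equivalent feed radius, m
OMEGA = 400 / 60 * 2 * math.pi          # tool rotational speed, rad/s
V_LIN = 6.3e-3                          # tool linear velocity, m/s

def slip_rate(R, delta0=2.0, omega0=OMEGA):
    """Eq. (11): 0 = fully sticking, approaching 1 = fully sliding."""
    return 1.0 - math.exp(-delta0 * OMEGA * (R - R_FEED)
                          / (omega0 * (R_TOOL - R_FEED)))

def q_boundary(R, theta, sigma_yield=80e6, M_tool_avg=5e6):
    """Surface heat flux (W/m^2) at radius R, angular position theta."""
    rel_v = OMEGA * R - V_LIN * math.sin(theta)       # local relative speed
    if R <= R_FEED:                                   # Eqs. (7)-(8)
        return (sigma_yield / math.sqrt(3)) * rel_v
    return M_tool_avg * (1.0 - slip_rate(R)) * rel_v  # Eq. (9), shoulder zone

print(q_boundary(3e-3, 0.0), q_boundary(15e-3, math.pi / 2))
```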
Figure 2b illustrates the schematic representation of the longitudinal cross-section of the computational
domain. A quiet element activation/deactivation strategy was employed to incorporate the multi-layer
deposition[50][,][51]. For any given point during deposition, the material preceding the moving tool area corresponds
to deposited material. Hence, the material properties of the consolidated material were assigned to those elements. For the rest of the elements, material properties of air were assigned.
Lastly, all the boundaries associated with deposited material (contingent upon tool position and activation
status) were assigned convective and radiative boundary conditions as expressed below:
$$q_{loss} = h_\infty\,(T_\infty - T) + \varepsilon\sigma\,(T_\infty^{4} - T^{4}) \tag{12}$$
where q_loss is the flux due to heat losses, h_∞ is the convection coefficient, T_∞ is the ambient temperature, ε is the emissivity, and σ is the Stefan-Boltzmann constant. The thermophysical parameters discussed above are temperature
dependent. The above mathematical model was executed on the commercial FEA software COMSOL[®] Multiphysics. An adaptive meshing strategy (dependent upon temperature and thermal gradient of mesh elements) was
employed to achieve reasonable computational time considering the pure conduction problem. The dimension
of each deposited track was 140 × 38 × 1 mm³. Accordingly, the adaptive meshing strategy ensured a minimum
element size of 1 mm in the thermally optimum region. The choice of 1 mm element size was based on mesh
sensitivity analysis. The computational time for consecutive 5-layer deposition was less than 20 minutes on an
Intel[®] Xeon[®] Gold 6252 processor (2.10 GHz, 190 GB).
The validation of the proposed thermal model was assessed using thermocouple temperature measurements.
Figure 2d depicts the comparison between thermokinetic parameters (time and temperature) at any given locations within the AFSD layers measured by a thermocouple and predicted by a computational simulation. The
temperature-time cycles in Fig. 2d are associated with the locations at the center of each AFSD layer corresponding to thermocouple based measurements and computational predictions. As can be observed, the thermal model
provides reasonable agreement with the actual thermal evolution during the AFSD process. The minor variations
from the actual temperature profile (Fig. 2d) can be attributed to heat generation due to plastic dissipation being
neglected in the thermal model and smaller computational domain size compared to the experimentally used
AZ31B-Mg base plate. Nevertheless, the proposed thermal model provides valuable information on layer-by-layer
thermal evolution during AFSD. As a side note, a parallel study is underway in the current research group focusing on coupled thermal and thermomechanical phenomena during the AFSD process, and the authors intend to report
these results in a separate manuscript. Nonetheless, attempts were made to explain the microstructure evolution
in correlation with the computationally predicted thermokinetic parameters in AFSD fabricated AZ31B-Mg.
**Figure 3. EBSD data showing OIM, texture plots, and grain size distributions corresponding to (a) feed stock,**
(b) 82 J/mm[2], and (c) 116 J/mm[2] samples.
### Results and discussion
The AFSD fabricated samples were examined visually prior to cutting for microstructure observations. Although
oxidation is a concern during additive fabrication of Mg based materials, and it is likely that there may be some
oxygen pickup during AFSD of AZ31B Mg, no oxide layers were detected during visual observations of the AFSD
fabricated samples. In general, AFSD being a solid-state process, oxygen diffusion in the solid is likely too slow to introduce a recognizable amount of oxygen into AZ31B Mg during processing. After visual observations, the
samples were cut and prepared for successive set of observations. The Archimedes density of sectioned samples
was measured as per ASTM B962 standard[42]. The average density values were 1.761 ± 0.006 and 1.768 ± 0.006
g/cm³ for the samples corresponding to input energies of 82 and 116 J/mm² respectively, as against the density value of 1.77 g/cm³ for the feed stock material. This corresponds to relative density values of 99.4 and 99.8% for the 82 and 116 J/mm² samples respectively, indicating that reasonable consolidation of material was achieved
during AFSD process under the set of processing parameters employed in the present efforts.
First level of microscopy observations on AFSD AZ31B-Mg were performed using SEM-EBSD. OIM maps
qualitatively indicated that the AFSD samples experienced a recognizable increase in the grain size compared to
the feed stock (Fig. 3). This was also statistically confirmed from the grain size distribution, where the average
grain size in both 82 J/mm[2] (15 ± 4 μm) and 116 J/mm[2] (18 ± 3 μm) AFSD samples was 1.4–1.6 fold higher
compared to the feed stock (11 ± 3 μm) (Fig. 3). An increase in grain size after the AFSD process can occur due to
dynamic recrystallization and grain growth mechanisms as the feed stock undergoes severe plastic deformation
accompanied by the simultaneous generation and accumulation of heat during the AFSD process[52].
In addition to grain size, the crystallographic texture evolution after the AFSD process can also be noticed
in 0001 pole figures (Fig. 3). The crystallographic textures in all three samples were close to basal plane texture,
and the texture appears to sharpen with an increase in the input energy from 82 J/mm[2] to 116 J/mm[2] . The feed
stock exhibited a substantially large spread ( ∼ 30°) around the maximum texture intensity and the location of
**Figure 4. TEM data showing bright field images for (a) feed stock with inset showing magnified view of**
the precipitates, (b) 82 J/mm[2] AFSD sample, (c) 116 J/mm[2] AFSD sample, and (d) high resolution view of
precipitate in 82 J/mm[2] AFSD sample with inset showing the coherent interface between precipitate and the
matrix. The selected area diffraction pattern in (e) corresponds to α-Mg matrix and (f) is the fast Fourier
transform image depicting β Mg17Al12 precipitate.
maximum intensity was 15° away from the ideal basal pole location (Fig. 3a). For 82 J/mm[2] and 116 J/mm[2]
samples, the maximum texture intensities were observed to deviate 35 and 15.5° respectively from the basal pole,
and the orientation spread was found to be ∼ 25° around the maximum intensity in both the cases (Fig. 3b, c).
To seek further insight into microstructure and phase evolutions, the AZ31B-Mg feed stock and AFSD samples were observed using high resolution TEM imaging (Fig. 4). The bright field (BF) TEM image corresponding
to the feed stock revealed a uniform distribution of nm-sized second-phase precipitates (Fig. 4a). These precipitates exhibited both spherical and elongated morphologies in the TEM images. However, it should be noted
that both morphologies are likely to be the same type of precipitate, viewed along two orthogonal directions.
Therefore, it is likely that these precipitates have a cylindrical or cigar shaped morphology in three dimensions
(inset of Fig. 4a). The sizes of the precipitates ranged between 20 and 60 nm. Although, the fraction of precipitates in both the AFSD samples was significantly reduced (Fig. 4b, c) compared to the feed stock (Fig. 4a),
these second phase precipitates possessed an atomically coherent interface with the matrix (Fig. 4d). The SADP
analysis revealed matrix as α-Mg phase (Fig. 4e) while the second phase precipitates were β Mg17Al12 phase as
confirmed by the FFT pattern (Fig. 4f). In addition, no oxide phases were detected during high resolution TEM
observations which was consistent with the visual observations noted before. A qualitative comparison of the
microstructures suggests that with increasing deformation energy imposed during the AFSD processing, the
fraction of precipitates significantly reduced. Additionally, the AFSD processed samples exhibited coarser grain
size (Fig. 4b–c) as confirmed earlier through EBSD analysis (Fig. 3). In addition, the matrix grains of both the
AFSD samples (82 and 116 J/mm[2] ) appeared to be free of dislocation contrast pointing towards possible restoration mechanisms (Fig. 4b–c). The process-induced dissolution of precipitates is attributed to the combination of
spatial and temporal thermokinetic effects, which are discussed in the subsequent paragraphs.
In order to realize the thermokinetic effects of AFSD process on the distinct microstructure evolution in
processed AZ31B-Mg described above, the spatial and temporal variation of temperature during AFSD as predicted by the multi layer computational process model was examined (Fig. 5). The temperature was probed at
the center of AFSD track at the interface between layer 1 and the substrate as well as at a location within layer
3 (100 μm above interfaces between layers 2 and 3). A virtual probe location at the interface of layer 1 and the
substrate experienced a first single thermal cycle during fabrication of layer 1, where it achieved the maximum
temperature of 430 °C for 82 J/mm[2] sample (Fig. 5a) and 450 °C for 116 J/mm[2] sample (Fig. 5b) at the instance
of deposition. Subsequent thermal cycles (#s-2, 3, 4, and 5) were experienced by the probe location during the
fabrication of successive layers (layers 2–5), resulting in the reheating of deposited material at the probe location
**Figure 5. Predicted time-temperature plots for AFSD process using a multi layer computational process model**
corresponding to (a) 82 J/mm[2] and (b) 116 J/mm[2] samples. Important phase transition and conventional heat
treatment temperature ranges are indicated for reference.
in the corresponding preceding layers for both the AFSD conditions. The peak temperatures developed during
deposition of subsequent layers were above 400 °C at any virtual location in layer 1 for both the AFSD conditions
(Fig. 5). Notably, a slight increase ( ∼ 5–10 °C) in the maximum temperature of the second reheating thermal cycle
due to heat accumulation was observed in both the AFSD samples (Fig. 5a and b). The maximum temperature
achieved at any virtual location in layer 1 due to subsequent reheating thermal cycles (corresponding to layers
3–5) decreased gradually in both the AFSD samples as a result of the increasing distance between the probing location and the layer being deposited (Fig. 5a and b). The lowest temperature within layer 1 during the reheating cycle while layer 2 was deposited on top of it was above 150 °C, and the subsequent deposition of layers 3–5 reheated
the material in layer 1 above 200 °C.
It is apparent that the probe location in layer 3 experienced thermal cycles only thrice, during the fabrication of layers 3, 4, and 5, as predicted in Fig. 5. The heat accumulation effect is evident from the maximum temperature of the first thermal cycle experienced by the location in layer 3 compared to that of the location lying at the interface of the substrate and layer 1 for both AFSD conditions (Fig. 5). In addition, due to the higher linear deposition velocity for the 82 J/mm[2] condition than for the 116 J/mm[2] condition, the durations of the corresponding thermal cycles were ∼ 30 and 38 s respectively (Fig. 5). Such distinct characteristics of the heating-reheating cycles imposed on the AFSD fabricated material influenced the microstructure evolution as described below.
According to the equilibrium Mg–Al phase diagram, above 200 °C, the β phase (Mg17Al12 ) is thermodynamically unstable and undergoes dissolution to form a single-phase α-Mg[53]. As discussed before, during the entire
AFSD process, the reheating experienced by the previously deposited material kept the temperatures in the single
α-Mg phase regime at any virtual location within the previously deposited material (Fig. 5). The solutionizing
temperatures for AZ31B-Mg have been reported to be in the range of 250–400 °C[54][,][55]. Upon conclusion of the
AFSD process, the deposit cooled down to room temperature with the cooling rates in the range of 1–2 °C/s.
However, re-precipitation of the β phase may occur below 200 °C provided there is no significant diffusion of Al away from the precipitate. In conventional processing, aging of a few hours is required to uniformly
precipitate β phase[56]. Based on the spatial and temporal thermal history predicted by the computational process
model (Fig. 5), it was likely that the deposited AZ31B-Mg material remained in single α-Mg phase field during
the entire time of the AFSD process. To further quantitatively verify the dissolution of the β phase during deposition and the possibility of its re-precipitation during cooling, the extent of Al diffusion driven by the process
thermokinetics was computed for both the AFSD conditions. The computationally predicted thermal cycles,
especially those corresponding to the location lying within the layer 3 were more relevant to understand the β
precipitate dissolution/re-precipitation as the microscopy observations were conducted in this region. Since the
β precipitate becomes thermodynamically unstable above 200 °C, precipitate dissolution occurs and aluminum
atoms driven by local temperature rise can diffuse away from the precipitate site. The solution to Fick’s second
law of diffusion with varying diffusion coefficients gives concentration spread with distance and time. Using its
general solution, the diffusion length (x’) could be estimated over a period of time as follows:
x′ = √(D(T) · t)    (13)
where D(T) is the diffusion coefficient as a function of temperature. The diffusion coefficient is expressed in
Arrhenius form giving its temperature dependence as follows:
D(T) = D0 exp(−E / (R T))    (14)
where D0 is the diffusion constant (3.275×10⁻⁵ m²/s[57]), R is the gas constant (8.314 J/(mol K)), and E is the activation energy corresponding to a stress-free lattice (E = 130.4×10³ J/mol[57]). However, E is also affected by the overall residual stress present in the material; the nature of that stress decides the resultant value of the activation energy. For instance, an overall compressive stress would increase the activation energy while a tensile stress would decrease it[58]. Accordingly, the following form of the diffusion coefficient, dependent on both temperature and stress, was considered:
D(T) = D0 exp(−[E ± (σ × Ω)/3] / (R T))    (15)
where σ is the stress (130 MPa, the limiting experimentally observed value in the present case) and Ω is the molar volume (1.399×10⁻⁵ m³/mol). With the above equation, diffusion is primarily dependent on temperature. However, with the temperature-time relation obtained from the computational model (Fig. 5), the time dependence of the diffusion coefficient was obtained. This allowed integration of Eq. 13 over a definite time range as follows:
x′² = ∫_{t1}^{t2} D(T) dt    (16)
The above equation was solved numerically to obtain the diffusion length of Al during heating and cooling events
of each thermal cycle experienced by a location in layer 3. Figure 6 provides cumulative diffusion length with each
thermal cycle in 82 J/mm[2] and 116 J/mm[2] samples. The total diffusion spread of Al atoms in 116 J/mm[2] sample
is broader (14 μm) compared to 82 J/mm[2] (4 μm) due to comparatively lower linear deposition speed and higher
heat accumulation for the 116 J/mm[2] sample. Such broad diffusion lengths of Al atoms can effectively dissolve
the precipitate and homogenize the alloy during the thermal process such as AFSD. During the cooling phase,
the diffusion of Al in Mg decelerated and became sluggish making it difficult for the β phase to re-precipitate.
Therefore, the AFSD samples had a significant reduction in β phase fraction (Fig. 4b–c). Such a thermokinetics
driven microstructure evolution affected the mechanical response of the AFSD samples.
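The numerical evaluation of Eq. 16 can be sketched in a few lines. The constants are taken from the text; the triangular thermal cycle below is an illustrative stand-in for the model-predicted temperature history of Fig. 5, not the actual simulation output, so the resulting length is only indicative.

```python
import math

R = 8.314         # gas constant, J/(mol K)
D0 = 3.275e-5     # diffusion constant for Al in Mg, m^2/s (from the text)
E = 130.4e3       # stress-free activation energy, J/mol (from the text)
SIGMA = 130e6     # limiting residual stress, Pa (130 MPa, from the text)
OMEGA = 1.399e-5  # molar volume, m^3/mol (from the text)

def diff_coeff(T, stress_sign=0):
    """Eqs. 14/15: Arrhenius diffusion coefficient; stress_sign = -1 lowers
    the barrier (tensile stress), +1 raises it (compressive), 0 is stress-free."""
    E_eff = E + stress_sign * SIGMA * OMEGA / 3.0
    return D0 * math.exp(-E_eff / (R * T))

def diffusion_length(times, temps, stress_sign=0):
    """Eq. 16: x'^2 = integral of D(T(t)) dt, evaluated by the trapezoidal rule."""
    x_sq = 0.0
    for i in range(1, len(times)):
        d_avg = 0.5 * (diff_coeff(temps[i - 1], stress_sign)
                       + diff_coeff(temps[i], stress_sign))
        x_sq += d_avg * (times[i] - times[i - 1])
    return math.sqrt(x_sq)  # metres

# Illustrative 30 s triangular cycle: 200 °C (473 K) up to 430 °C (703 K) and back.
times = list(range(31))
temps = [473 + (703 - 473) * (t / 15 if t <= 15 else (30 - t) / 15)
         for t in times]
x_per_cycle_um = diffusion_length(times, temps) * 1e6  # micrometres
```

Summing such per-cycle lengths over the heating-reheating history of Fig. 5 gives cumulative curves of the kind shown in Fig. 6; the tensile branch of Eq. 15 (stress_sign = -1) yields a slightly larger spread, consistent with the discussion above.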
The AFSD samples were first examined using the non-destructive EBME technique described in the “Methods and materials” section. The scanned data of the three-dimensional volume of the AFSD samples from the top XY
plane was collected and rendered as contour plots of the average spatial distribution of dynamic bulk modulus.
Along the same lines, the contour plot of dynamic bulk modulus was rendered for the feed stock scanned from
the normal plane. These contour plots of dynamic bulk modulus are presented in Fig. 7a, b, and c, corresponding to feed stock, 82 J/mm[2], and 116 J/mm[2], respectively. The spatial distribution of dynamic bulk modulus for
the feed stock was confined to the narrow range of 57.5–60.0 GPa (Fig. 7a). A similar range of dynamic bulk
modulus (57.0–60.5 GPa) was recorded for the 82 J/mm[2] AFSD sample (Fig. 7b). However, this range was considerably shifted towards lower modulus values of 54.5–57.0 GPa for the 116 J/mm[2] AFSD sample (Fig. 7c). The
values of dynamic bulk modulus obtained via ultrasound qualitatively reflect the extent of residual stress in the
material[43][,][44]. The elastic modulus is the inherent property of the material associated with inter-atomic potential
**Figure 6. Computed cumulative diffusion lengths over the entire AFSD cycle for 82 J/mm[2] and 116 J/mm[2]**
conditions.
energy and spacing. The presence of residual stress is associated with the elastically strained lattice, which affects
the inter-atomic spacing and decreases the potential energy, thereby reducing the elastic modulus of the material.
The feed stock is likely to have the lowest residual stresses as it received H24 treatment[38], which justifies a
higher dynamic modulus of feed stock. However, the difference in the dynamic moduli of AFSD samples indicates a difference in residual stress. This discrepancy can be addressed by analyzing the OIM micrographs at
higher magnification (Fig. 7d–f). The OIM micrographs taken at higher magnification indicated the presence
of mechanical twins in both AFSD samples, while they were not observed in the feed stock. Moreover, it can
be observed that mechanical twins were more prevalent in 82 J/mm[2] sample while they were scarcely observed
in 116 J/mm[2] sample (Fig. 7e and f). The presence of mechanical twins in the 82 J/mm[2] sample indicates that
deformation is heavily accompanied by twins in addition to slip. Moreover, the formation of mechanical twins
accommodates extensive lattice strain[59][,][60], thereby reducing the overall residual stress in 82 J/mm[2] sample, which
is also reflected in its EBME map with a dynamic modulus similar to feed stock (Fig. 7a and b). On the contrary,
the scarcity of mechanical twins in 116 J/mm[2] sample suggests deformation majorly via slip. Moreover, the strain
rate generated during the 116 J/mm[2] sample fabrication due to lower linear velocity is likely to be lower[37]. Also,
the longer duration of thermal cycles associated with the fabrication of 116 J/mm[2] sample sustains the heat for
a longer duration (Fig. 5b). These rationalize the scarcity of mechanical twins in 116 J/mm[2] sample. As the slip
accommodates lower lattice strain, the residual stress in 116 J/mm[2] sample is likely to be higher compared to
82 J/mm[2] sample, which is justified through EBME maps showing reduced dynamic modulus (Fig. 7b and c).
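The EBME method itself is defined in the paper’s “Methods and materials” section, which is not reproduced here; as a hedged illustration, the standard isotropic relation K = ρ(v_L² − (4/3)v_S²) converts ultrasonic longitudinal and shear wave speeds into a dynamic bulk modulus. The density and velocities below are assumed illustrative values chosen to land near the reported 57–60 GPa band, not measured data.

```python
def dynamic_bulk_modulus(rho, v_long, v_shear):
    """Standard isotropic relation: K = rho * (vL^2 - (4/3) * vS^2),
    with rho in kg/m^3 and velocities in m/s, returning K in Pa."""
    return rho * (v_long**2 - (4.0 / 3.0) * v_shear**2)

# Assumed inputs: Archimedes density of the feed stock (kg/m^3) and
# plausible ultrasonic velocities (m/s) for AZ31B-Mg.
rho = 1770.0
K = dynamic_bulk_modulus(rho, v_long=6700.0, v_shear=3050.0)  # ~57.5 GPa
```

Because K scales with ρ and the squared velocities, residual stress that perturbs the effective wave speeds shifts the mapped modulus, which is the qualitative basis for reading the EBME contour maps as residual-stress indicators.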
Engineering stress–strain curves for the AZ31B-Mg feed stock and AFSD samples possessed nearly identical slopes
in the elastic regime indicating similar Young’s modulus of 40 GPa for these samples (Fig. 8). However, there was
a reduction of ∼ 30 MPa in the 0.2% PS for the AFSD samples compared to the feed material at 158 ± 15 MPa
(Fig. 8). Such a reduction could be attributed to an increase in the average grain size by 4–7 μm (Fig. 3) and the
reduction in fraction of Mg17Al12 precipitates in the AFSD samples compared to the feed stock (Fig. 4). These
two effects simultaneously led to reduction in the barriers for dislocation motion, thus lowering the 0.2% PS for
the AFSD samples compared to the feed stock. The UTS of feed stock was 258 ± 8 MPa which was marginally
higher by 10 MPa and 26 MPa compared to 82 J/mm[2] and 116 J/mm[2] AFSD samples respectively (Fig. 8). The
AZ31B-Mg feed stock material elongation was 20 ± 2%. On the other hand, the AFSD samples exhibited lower
elongation of 16 ± 4% and 10 ± 4% for 82 J/mm[2] and 116 J/mm[2] samples respectively (Fig. 8). Such a reduction
in elongation could be attributed to evolution of strong basal texture on the XY surface/subsurface of the AFSD
samples (Fig. 3). The samples were loaded in Y direction (perpendicular to the build direction)(Fig. 1c). During
uni-axial tensile loading, the lattice rotates in such a way that the basal slip plane normal is tilted towards loading axis[61]. The material accommodates deformation until the basal plane normal becomes perpendicular to the
loading axis at which the Schmid factor of the slip planes approaches zero. In the current case, the base material
was associated with a diffused basal texture with a 15° offset from the 0001 basal pole (Fig. 3a). On the other hand,
the 82 J/mm[2] sample possessed a sharp basal texture with 35° offset (Fig. 3b). Such an offset with sharper texture
requires higher amount of deformation for bringing the basal plane normal perpendicular to the loading axis.
As a result, the 82 J/mm[2] sample experienced a higher elongation among the AFSD samples. On the other hand,
although the basal texture was sharp, the offset was lower for the 116 J/mm[2] sample (Fig. 3c), hence accommodating lesser deformation than the 82 J/mm[2] sample, before the basal plane normal was aligned perpendicular to
the loading axis, resulting in lower elongation. As a note, reduction in mechanical properties for AFSD fabricated
Mg alloys has been reported before [22][,][23]. However, these works lacked the explanation about the correlation of
process thermokinetics driven multi scale microstructure evolution with the resultant mechanical behavior.
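The geometric argument above can be made concrete with the Schmid factor m = cos φ · cos λ. In the idealized single-slip case sketched below, where the slip direction is taken to lie in the plane containing the loading axis so that λ = 90° − φ, m reduces to sin(2φ)/2: it peaks when the basal plane normal sits 45° from the loading axis and vanishes as the normal rotates to 90°, the saturation condition described in the text. This is a textbook simplification, not the full polycrystal calculation.

```python
import math

def schmid_factor(phi_deg, lam_deg):
    """General Schmid factor m = cos(phi) * cos(lambda): phi is the angle
    between the loading axis and the slip-plane normal, lambda the angle
    between the loading axis and the slip direction."""
    return math.cos(math.radians(phi_deg)) * math.cos(math.radians(lam_deg))

def basal_schmid_idealized(phi_deg):
    """Idealized single-slip case with lambda = 90 - phi, so m = sin(2*phi)/2."""
    return schmid_factor(phi_deg, 90.0 - phi_deg)
```

For example, basal_schmid_idealized(45.0) gives the maximum value of 0.5, while basal_schmid_idealized(90.0) is essentially zero, i.e. basal slip can no longer accommodate deformation once the plane normal is perpendicular to the loading axis.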
As a next step in the analysis, the fracture surfaces of the broken samples from the tensile tests were observed and analyzed using SEM (Fig. 8b–d). The fracture surfaces revealed a brittle failure mode with
**Figure 7. Bulk modulus data obtained by EBME technique corresponding to (a) feed stock, (b) 82 J/mm[2], and**
(c) 116 J/mm[2] samples along with high magnification OIM data for (d) feed stock, (e) 82 J/mm[2], and (f) 116 J/
mm[2] samples.
cleavage-like fracture for both feed stock and AFSD AZ31B-Mg samples. It has been reported that Mg-based materials inherently have low fracture toughness and usually exhibit a cleavage fracture mode during quasi-static tensile loading over a wide temperature range[62][–][64].
### Conclusions
Current work explored solid state additive manufacturing of AZ31B-Mg alloy via AFSD process. The average
Archimedes density values of AFSD fabricated samples were 1.761 ± 0.006 and 1.768 ± 0.006 g/cm[3] for the
processing conditions corresponding to the input energies of 82 and 116 J/mm[2] respectively compared to the
Archimedes density value of 1.77 g/cm[3] for the feed stock material. This translates into relative density values of
99.4 and 99.8% for 82 and 116 J/mm[2] samples respectively indicating a reasonable consolidation of the AFSD
fabricated AZ31B Mg material. The temporal and spatial variation of temperature during AFSD process was
predicted using a multi layer computational process model. The temperature experienced by the material during
the deposition and due to subsequent reheating as a result of added layers on the top remained in single α-Mg
**Figure 8. (a) Representative stress strain curves along with tabulated mechanical properties for feed stock**
as well as AFSD samples and fractographs corresponding to (b) feed stock, (c) 82 J/mm[2], and (d) 116 J/
samples. The insets in the fractographs present high magnification views of the corresponding highlighted
regions.
phase field region (above 200 °C). Such distinct thermokinetic conditions led to an average grain size of 15 ± 4
and 18 ± 3 μm for 82 J/mm[2] and 116 J/mm[2] AFSD conditions respectively compared to 11 ± 3 μm for the feed
stock. The AFSD processed samples developed a strong basal texture on the top surface. The feed stock exhibited
a diffused texture aligned 15° offset to 0001 pole. Both AFSD samples possessed a strong basal texture on the top
surface aligned 35 and 15° offset to 0001 pole for 82 J/mm[2] and 116 J/mm[2] conditions respectively. The higher
temperatures experienced by the AFSD material (greater than 200 °C) during deposition followed by cooling
down to room temperature with 1–2 °C/s rates resulted in a marked reduction in fraction of nano scale β phase
in the AFSD samples compared to the feed stock material. The AFSD sample deposited with 82 J/mm[2] revealed a higher amount of twinning compared to the 116 J/mm[2] sample and the feed stock material. As a result, the non-destructively evaluated bulk modulus was lower for the 116 J/mm[2] sample (54.5–57.0 GPa) compared to the 82 J/mm[2] sample
(57.0-60.5 GPa) and feed stock (57.5-60.0 GPa). Feed stock and AFSD AZ31B-Mg samples exhibited nearly same
Young’s modulus of ∼ 40 GPa during uni-axial tensile tests. However, the AFSD samples deposited with 82 and 116 J/mm[2] input energies possessed a 0.2% PS of 132 ± 15 MPa and 129 ± 13 MPa respectively, which was lower than the 0.2% PS of 158 ± 15 MPa for the feed stock. The UTS of the AFSD samples was 248 ± 10 and 232 ± 19 MPa for the 82 and 116 J/mm[2]
conditions respectively. The feed stock UTS was 258 ± 8 MPa. The elongation of the AFSD AZ31B-Mg was lower
by 4 % and 10 % for 82 and 116 J/mm[2] process conditions respectively compared to the feed stock at 20%. The
distinct thermokinetic effects involving multiple reheating cycles during AFSD led to the unique microstructure
having a coarser grain size and reduced fraction of β phase leading to such a reduction in tensile properties for
the AFSD AZ31B-Mg compared to the feed stock.
### Data availability
The data sets used and/or analyzed during the current study will be made available from the corresponding
author on a reasonable request.
Received: 23 May 2022; Accepted: 27 July 2022
### References
1. Commin, L., Dumont, M., Masse, J.-E. & Barrallier, L. Friction stir welding of AZ31 magnesium alloy rolled sheets: Influence of
processing parameter. Acta Mater. **57, 326–334 (2009).**
2. Joshi, S. S., Mohan, M., Seshan, S., Kumar, S. & Suwas, S. Effect of addition of Al & Ca and heat treatment on the cast Mg-6Zn
alloy. Mater. Sci. Forum **765, 33–37 (2013).**
3. Shrikant, J. S. Development of cast magnesium alloys with improved strength. Master’s thesis (2014).
4. Kulekci, M. K. Magnesium and its alloys applications in automotive industry. Int. J. Adv. Manuf. Technol. **39, 851–865 (2008).**
5. Bagheri, B., Abbasi, M., Abdollahzadeh, A. & Mirsalehi, S. E. Effect of second-phase particle size and presence of vibration on
az91/sic surface composite layer produced by fsp. Trans. Nonferrous Metals Soc. China **30, 905–916 (2020).**
6. Wu, T.-C. et al. Microstructure and surface texture driven improvement in in-vitro response of laser surface processed AZ31B
magnesium alloy. J. Magnes. Alloys **9, 1406–1418 (2021).**
7. Dahotre, N. B. & Joshi, S. Machining of Bone and Hard Tissues (Springer, Cham, Switzerland, 2016).
8. Aghion, E. et al. The art of developing new magnesium alloys for high temperature applications. Mater. Sci. Forum **419, 407–418**
(2003).
9. Aghion, E. & Bronfin, B. Magnesium alloys development towards the 21st century. Mater. Sci. Forum **350, 19–30 (2000).**
10. Karunakaran, R., Ortgies, S., Tamayol, A., Bobaru, F. & Sealy, M. P. Additive manufacturing of magnesium alloys. Bioact. Mater.
**5, 44–54 (2020).**
11. Bär, F. et al. Laser additive manufacturing of biodegradable magnesium alloy WE43: A detailed microstructure analysis. Acta
_Biomater._ **98, 36–49 (2019).**
12. Holguin, D. A. M., Han, S. & Kim, N. P. Magnesium alloy 3D printing by wire and arc additive manufacturing (WAAM). MRS
_Adv._ **3, 2959–2964 (2018).**
13. Yu, H. Z. & Mishra, R. S. Additive friction stir deposition: A deformation processing route to metal additive manufacturing. Mater.
_Res. Lett._ **9, 71–83 (2021).**
14. Angelo, P. & Subramanian, R. Powder Metallurgy: Science, Technology and Applications (PHI Learning Pvt. Ltd., New Delhi, 2008).
15. Gradl, P., Mireles, O. & Andrews, N. Intro to additive manufacturing for propulsion systems. In AIAA Joint Propulsion Conference
(2018).
16. Singh, U., Lohumi, M. & Kumar, H. Additive manufacturing in wind energy systems: A review. In Proceedings of International
_Conference in Mechanical and Energy Technology 757–766 (Springer, 2020)._
17. Asiatico, P. M. The applicability of additive friction stir deposition for bridge repair. Master’s thesis, Virginia Tech (2021).
18. Garcia, D. et al. In situ investigation into temperature evolution and heat generation during additive friction stir deposition: A
comparative study of Cu and Al-Mg-Si. Addit. Manuf. **34, 101386 (2020).**
19. Perry, M. E. et al. Tracing plastic deformation path and concurrent grain refinement during additive friction stir deposition.
_Materialia_ **18, 101159 (2021).**
20. Griffiths, R. J. et al. A perspective on solid-state additive manufacturing of aluminum matrix composites using MELD. J. Mater.
_Eng. Perform._ **28, 648–656 (2019).**
21. Calvert, J. R. Microstructure and mechanical properties of WE43 alloy produced via additive friction stir technology. Master’s
thesis, Virginia Tech (2015).
22. Robinson, T. W. et al. Microstructural and mechanical properties of a solid-state additive manufactured magnesium alloy. J. Manuf.
_Sci. Eng._ **144 (2022).**
23. Williams, M. et al. Elucidating the effect of additive friction stir deposition on the resulting microstructure and mechanical properties of magnesium alloy we43. Metals **11, 1739 (2021).**
24. Schmidt, H. B. & Hattel, J. H. Thermal modelling of friction stir welding. Scr. Mater. **[58, 332–337. https://doi.org/10.1016/j.scrip](https://doi.org/10.1016/j.scriptamat.2007.10.008)**
[tamat.2007.10.008 (2008).](https://doi.org/10.1016/j.scriptamat.2007.10.008)
25. Schmidt, H. & Hattel, J. Modelling heat flow around tool probe in friction stir welding. Sci. Technol. Weld. Join. **[10, 176–186. https://](https://doi.org/10.1179/174329305X36070)**
[doi.org/10.1179/174329305X36070 (2005).](https://doi.org/10.1179/174329305X36070)
26. Zhai, M., Wu, C. S. & Su, H. Influence of tool tilt angle on heat transfer and material flow in friction stir welding. J. Manuf. Process.
**[59, 98–112. https://doi.org/10.1016/j.jmapro.2020.09.038 (2020).](https://doi.org/10.1016/j.jmapro.2020.09.038)**
27. Liu, Q., Han, R., Gao, Y. & Ke, L. Numerical investigation on thermo-mechanical and material flow characteristics in friction stir
welding for aluminum profile joint. Int. J. Adv. Manuf. Technol. **[114, 2457–2469. https://doi.org/10.1007/s00170-021-06978-8](https://doi.org/10.1007/s00170-021-06978-8)**
(2021).
28. Stubblefield, G. G., Fraser, K., Phillips, B. J., Jordon, J. B. & Allison, P. G. A meshfree computational framework for the numerical
[simulation of the solid-state additive manufacturing process, additive friction stir-deposition (AFS-D). Mater. Des.https://doi.org/](https://doi.org/10.1016/j.matdes.2021.109514)
[10.1016/j.matdes.2021.109514 (2021).](https://doi.org/10.1016/j.matdes.2021.109514)
29. Samant, A. N., Du, B., Paital, S. R., Kumar, S. & Dahotre, N. B. Pulsed laser surface treatment of magnesium alloy: Correlation
between thermal model and experimental observations. J. Mater. Process. Technol. **209, 5060–5067 (2009).**
30. Santhanakrishnan, S. et al. Macro-and microstructural studies of laser-processed WE43 (Mg-Y-Nd) magnesium alloy. Metall.
_Mater. Trans. B_ **44, 1190–1200 (2013).**
31. Ho, Y.-H., Vora, H. D. & Dahotre, N. B. Laser surface modification of AZ31B Mg alloy for bio-wettability. J. Biomater. Appl. **29,**
915–928 (2015).
32. Wu, T.-C., Ho, Y.-H., Joshi, S. S., Rajamure, R. S. & Dahotre, N. B. Microstructure and corrosion behavior of laser surface-treated
AZ31B Mg bio-implant material. Lasers Med. Sci. **32, 797–803 (2017).**
33. Lu, J. Z. et al. Optimization of biocompatibility in a laser surface treated Mg-AZ31B alloy. Mater. Sci. Eng. C **105, 110028 (2019).**
34. Kalakuntla, N. et al. Laser patterned hydroxyapatite surfaces on AZ31b magnesium alloy for consumable implant applications.
_Materialia_ **11, 100693 (2020).**
35. Ho, Y.-H. et al. In-vitro bio-corrosion behavior of friction stir additively manufactured AZ31B magnesium alloy-hydroxyapatite
composites. Mater. Sci. Eng. C **109, 110632 (2020).**
36. Ho, Y.-H. et al. In-vitro biomineralization and biocompatibility of friction stir additively manufactured AZ31B magnesium alloyhydroxyapatite composites. Bioact. Mater. **5, 891–901 (2020).**
[37. Joshi, S. S. et al. Additive Friction stir deposition of AZ31B magnesium alloy. J. Magnes. Alloyshttps://doi.org/10.1016/j.jma.2022.](https://doi.org/10.1016/j.jma.2022.03.011)
[03.011 (2022).](https://doi.org/10.1016/j.jma.2022.03.011)
38. Avedesian, M. M. et al. _ASM Specialty Handbook: Magnesium and Magnesium Alloys (ASM International, Materials Park, OH,_
1999).
39. Riahi, M. & Nazari, H. Analysis of transient temperature and residual thermal stresses in friction stir welding of aluminum alloy
6061–T6 via numerical simulation. Int. J. Adv. Manuf. Technol. **55, 143–152 (2011).**
40. Zhang, Z. et al. Experimental and numerical studies of re-stirring and re-heating effects on mechanical properties in friction stir
additive manufacturing. Int. J. Adv. Manuf. Technol. **104, 767–784 (2019).**
41. Singh, A. K., Sahlot, P., Paliwal, M. & Arora, A. Heat transfer modeling of dissimilar FSW of Al 6061/AZ31 using experimentally
measured thermo-physical properties. Int. J. Adv. Manuf. Technol. **105, 771–783 (2019).**
42. ASTM B962. Standard test methods for density of compacted or sintered powder metallurgy (PM) products using Archimedes’ principle. Annual Book of ASTM Standards. ASTM (2001).
43. Pantawane, M. V. et al. Thermomechanically influenced dynamic elastic constants of laser powder bed fusion additively manufactured Ti6Al4V. Mater. Sci. Eng. A **[811, 140990. https://doi.org/10.1016/J.MSEA.2021.140990 (2021).](https://doi.org/10.1016/J.MSEA.2021.140990)**
44. Pantawane, M. V. et al. Crystallographic texture dependent bulk anisotropic elastic response of additively manufactured Ti6Al4V.
_Sci. Rep._ **[11, 1–10. https://doi.org/10.1038/s41598-020-80710-6 (2021).](https://doi.org/10.1038/s41598-020-80710-6)**
45. ASTM E8. Standard test methods for tension testing of metallic materials. Annual Book of ASTM Standards. ASTM (2001).
[46. Meyghani, B. & Wu, C. Progress in thermomechanical analysis of friction stir welding. Chin. J. Mech. Eng. (English Edition)https://](https://doi.org/10.1186/s10033-020-0434-7)
[doi.org/10.1186/s10033-020-0434-7 (2020).](https://doi.org/10.1186/s10033-020-0434-7)
47. Colegrove, P. A., Shercliff, H. R. & Zettler, R. Model for predicting heat generation and temperature in friction stir welding from
the material properties. Sci. Technol. Weld. Join. **[12, 284–297. https://doi.org/10.1179/174329307X197539 (2007).](https://doi.org/10.1179/174329307X197539)**
48. Schmidt, H., Hattel, J. & Wert, J. An analytical model for the heat generation in friction stir welding. Modell. Simul. Mater. Sci.
_Eng._ **[12, 143–157. https://doi.org/10.1088/0965-0393/12/1/013 (2004).](https://doi.org/10.1088/0965-0393/12/1/013)**
49. Nandan, R., Roy, G. G. & Debroy, T. Numerical simulation of three dimensional heat transfer and plastic flow during friction stir
welding. Metall. Mater. Trans. A **[37, 1247–1259. https://doi.org/10.1007/s11661-006-1076-9 (2006).](https://doi.org/10.1007/s11661-006-1076-9)**
50. Nartu, M. et al. Omega versus alpha precipitation mediated by process parameters in additively manufactured high strength
Ti-1Al-8V-5Fe alloy and its impact on mechanical properties. Mater. Sci. Eng. A **[821, 141627. https://doi.org/10.1016/J.MSEA.](https://doi.org/10.1016/J.MSEA.2021.141627)**
[2021.141627 (2021).](https://doi.org/10.1016/J.MSEA.2021.141627)
51. Joshi, S. S., Sharma, S., Mazumder, S., Pantawane, M. V. & Dahotre, N. B. Solidification and microstructure evolution in additively
manufactured H13 steel via directed energy deposition: Integrated experimental and computational approach. J. Manuf. Process.
**[68, 852–866. https://doi.org/10.1016/J.JMAPRO.2021.06.009 (2021).](https://doi.org/10.1016/J.JMAPRO.2021.06.009)**
52. Antoniswamy, A. R., Carter, J. T., Hector, L. G. & Taleff, E. M. Static recrystallization and grain growth in az31b-h24 magnesium
alloy sheet. In Magnesium Technology 2014 139–142 (Springer, 2014).
53. Okamoto, H. & Okamoto, H. Phase Diagrams for Binary Alloys Vol. 44 (ASM International, Materials Park, OH, 2000).
54. Sepehrband, P., Lee, M. & Burns, A. Pre-straining effect on precipitation behaviour of AZ31B. In Magnesium Technology 2016
89–92 (Springer, 2016).
55. Wong, T. W., Hadadzadeh, A., Benoit, M. J. & Wells, M. A. Impact of homogenization heat treatment on the high temperature
deformation behavior of cast az31b magnesium alloy. J. Mater. Process. Technol. **254, 238–247 (2018).**
56. Vagge, S. & Bakshi, S. Effect of precipitation hardening on stress corrosion cracking susceptibility index of az31b magnesium alloy
in simulated body fluid. Mater. Today Proc. **38, 2191–2199 (2021).**
57. Zhou, B. C., Shang, S. L., Wang, Y. & Liu, Z. K. Diffusion coefficients of alloying elements in dilute Mg alloys: A comprehensive
first-principles study. Acta Mater. **[103, 573–586. https://doi.org/10.1016/J.ACTAMAT.2015.10.010 (2016).](https://doi.org/10.1016/J.ACTAMAT.2015.10.010)**
58. Druzhinin, A. V. et al. Effect of internal stress on short-circuit diffusion in thin films and nanolaminates: Application to Cu/W
nano-multilayers. Appl. Surf. Sci. **[508, 145254. https://doi.org/10.1016/J.APSUSC.2020.145254 (2020).](https://doi.org/10.1016/J.APSUSC.2020.145254)**
59. Warwick, J., Jones, N., Rahman, K. & Dye, D. Lattice strain evolution during tensile and compressive loading of CP Ti. Acta Mater.
**60, 6720–6731 (2012).**
60. Sofinowski, K. et al. In situ tension-tension strain path changes of cold-rolled mg az31b. Acta Mater. **164, 135–152 (2019).**
61. Hosford, W. F. Mechanical Behavior of Materials (Cambridge University Press, UK, 2010).
62. Pineau, A., Benzerga, A. A. & Pardoen, T. Failure of metals I: Brittle and ductile fracture. Acta Mater. **107, 424–483 (2016).**
63. Feng, F. et al. Experimental study on tensile property of az31b magnesium alloy at different high strain rates and temperatures.
_Mater. Des._ **57, 10–20 (2014).**
64. Rodriguez, A., Ayoub, G., Mansoor, B. & Benzerga, A. Effect of strain rate and temperature on fracture of magnesium alloy az31b.
_Acta Mater._ **112, 194–208 (2016).**
### Acknowledgements
Authors acknowledge the infrastructure and support of Center for Agile and Adaptive Additive Manufacturing
(CAAAM) funded through State of Texas Appropriation: 190405-105-805008-220 and Materials Research Facility (MRF) at the University of North Texas for access to microscopy and phase analysis facilities. Authors would
like to acknowledge Shelden Dowden for help during AFSD processing and tensile tests.
### Author contributions
S.S.J. and N.B.D. conceived the research idea. S.S.J., S.M.P., and D.A.R. performed the experiments. S.S. conducted the computational modeling. M.R., M.V.P., Y.J., and T.Y. performed the material characterization. S.S.J.,
S.S., M.R., M.V.P., and S.M.P. wrote the manuscript. R.B. and N.B.D. reviewed as well as edited the manuscript.
### Competing interests
The authors declare no competing interests.
### Additional information
**Correspondence and requests for materials should be addressed to N.B.D.**
**Reprints and permissions information is available at www.nature.com/reprints.**
**Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and**
institutional affiliations.
**Open Access This article is licensed under a Creative Commons Attribution 4.0 International**
License, which permits use, sharing, adaptation, distribution and reproduction in any medium or
format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. The images or other third party material in this
article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons licence and your intended use is not
permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from
[the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.](http://creativecommons.org/licenses/by/4.0/)
© The Author(s) 2022
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9346001, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.nature.com/articles/s41598-022-17566-5.pdf"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-08-02T00:00:00
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/03070aa441815ee9b9ecbe5aa14fa39d1fe4898f
|
[] | 0.823457
|
A Practical and Efficient Node Blind SignCryption Scheme for the IoT Device Network
|
03070aa441815ee9b9ecbe5aa14fa39d1fe4898f
|
Applied Sciences
|
[
{
"authorId": "2107999416",
"name": "Ming-Te Chen"
},
{
"authorId": "2149216502",
"name": "Hsuan-chao Huang"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Appl Sci"
],
"alternate_urls": [
"http://www.mathem.pub.ro/apps/",
"https://www.mdpi.com/journal/applsci",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814"
],
"id": "136edf8d-0f88-4c2c-830f-461c6a9b842e",
"issn": "2076-3417",
"name": "Applied Sciences",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814"
}
|
In recent years, Internet of Things (IoT for short) research has become one of the top ten most popular research topics. IoT devices also embed many sensing chips for detecting physical signals from the outside environment. In the wireless sensing network (WSN for short), a human can wear several IoT devices around her/his body such as a smart watch, smart band, smart glasses, etc. These IoT devices can collect analog environment data around the user’s body and store these data into memory after data processing. Thus far, we have discovered that some IoT devices have resource limitations such as power shortages or insufficient memory for data computation and preservation. An IoT device such as a smart band attempts to upload a user’s body information to the cloud server by adopting the public-key crypto-system to generate the corresponding cipher-text and related signature for concrete data security; in this situation, the computation time increases linearly and the device can run out of memory, which is inconvenient for users. For this reason, we consider that, if the smart IoT device can perform encryption and signature simultaneously, it can save significant resources for the execution of other applications. As a result, our approach is to design an efficient, practical, and lightweight, blind sign-cryption (SC for short) scheme for IoT device usage. Not only can our methodology offer the sensed data privacy protection efficiently, but it is also fit for the above application scenario with limited resource conditions such as battery shortage or less memory space in the IoT device network.
|
# applied sciences
_Article_
## A Practical and Efficient Node Blind SignCryption Scheme for the IoT Device Network
**Ming-Te Chen \*,† and Hsuan-Chao Huang \*,†**

Department of Computer Science and Information Engineering, National Chin-Yi University of Technology,
Taichung 41170, Taiwan; mtchen@ncut.edu.tw
\* Correspondence: sc100@ncut.edu.tw; Tel.: +886-4-23924505 (ext. 8775)
† These authors contributed equally to this work.
**Citation:** Chen, M.-T.; Huang, H.-C. A Practical and Efficient Node Blind SignCryption Scheme for IoT Device Network. Appl. Sci. 2022, 12, 278. [https://doi.org/10.3390/app12010278](https://doi.org/10.3390/app12010278)

Academic Editor: Gianluca Lax

Received: 8 November 2021; Accepted: 21 December 2021; Published: 28 December 2021

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/)](https://creativecommons.org/licenses/by/4.0/).
**Abstract:** In recent years, Internet of Things (IoT for short) research has become one of the top ten most popular research topics. IoT devices also embed many sensing chips for detecting physical signals from the outside environment. In the wireless sensing network (WSN for short), a human can wear several IoT devices around her/his body such as a smart watch, smart band, smart glasses, etc. These IoT devices can collect analog environment data around the user's body and store these data into memory after data processing. Thus far, we have discovered that some IoT devices have resource limitations such as power shortages or insufficient memory for data computation and preservation. An IoT device such as a smart band attempts to upload a user's body information to the cloud server by adopting the public-key crypto-system to generate the corresponding cipher-text and related signature for concrete data security; in this situation, the computation time increases linearly and the device can run out of memory, which is inconvenient for users. For this reason, we consider that, if the smart IoT device can perform encryption and signature simultaneously, it can save significant resources for the execution of other applications. As a result, our approach is to design an efficient, practical, and lightweight blind sign-cryption (SC for short) scheme for IoT device usage. Not only can our methodology offer the sensed data privacy protection efficiently, but it is also fit for the above application scenario with limited resource conditions such as battery shortage or less memory space in the IoT device network.
**Keywords: sign-cryption; unsign-cryption; cryptography module; IoT device**
**1. Introduction**
In recent years, Internet of Things (IoT for short) devices have been widely applied in our daily life. From the everyday life of human beings to Industry 4.0, many common machines are composed of several IoT devices, such as air conditioners, electric vehicles, mobile phones, etc. These devices can collect physical signal data and transfer these data in digital form to a powerful gateway device of the IoT network through the Internet. When the gateway has received the sensed data from a sender node, it preserves these records in a database or cloud storage service. However, such IoT devices have limitations compared with a general gateway server, such as less memory space or limited computing power. This situation usually occurs in communications between nodes of a wireless sensing network (WSN for short) and IoT networks. Once an IoT device has collected physical data from a human body, it must then forward these data to the powerful gateway, which can preserve the final result data in a database and perform other cryptographic operations. From the above scenario, we observe that, if an IoT device attempts to perform a heavy encryption/decryption computation, such as modular exponentiation over a large prime modulus in a public-key algorithm, it must then also perform a signature operation for concrete security protection and authentication of the sensed data. This leads to fast power consumption and exhaustion of the free memory of these nodes.
_Appl. Sci. 2022, 12, 278_ 2 of 13
To address the above situation, we adopt the sign-cryption approach: a sensing node performs only a lightweight sign-cryption operation, while the final cipher-text containing its signature is generated with the help of the powerful server side. When the gateway server has received this cipher-text from a sensor node, it can first decrypt the cipher-text and then validate the recovered plain-text with the help of the embedded signature for data authentication.
We consider the following situation: an IoT device DSi attempts to transfer sensed data to a receiver R, where i = 1 ∼ l and l is the total number of sensor nodes. To keep the data confidential, DSi must first encrypt its own data. At this time, it can adopt an efficient encryption/decryption method to generate a cipher-text. Then, DSi can forward this cipher-text to a powerful base station (BS for short), which is equipped with more computing power than all the sensor nodes in the same IoT network. However, DSi must also take its own memory limitation and remaining computing power into account when performing such encryption/decryption computations in sequence. The node DSi may not be able to perform the signature computation after encryption if the remaining power is insufficient for signature generation; thus, it must transfer the heavy computation to a powerful node such as the base station BS.
Given the situations mentioned above, we conclude that, if there were an efficient method allowing IoT devices to perform the encryption and signature operations on the sensed data in a single operation, it would save computing time and energy that could then be used for other computations. In the recent literature, sign-cryption was discussed in [1–4]. The authors claim that the sender only needs to perform the sign-cryption operation once to transfer the data, and that it outputs a cipher-text with a guaranteed signature inside. The receiver can then decrypt the received cipher-text with a secret random number carried inside the corresponding signature. When the signature is verified successfully, the receiver can obtain the random secret value by applying its own secret key. Finally, the receiver R obtains the final data by using this secret random number to decrypt the cipher-text. Unfortunately, the computational efficiency of these schemes is not practical for the above IoT device network scenario. There are some research limitations in our proposed scheme. One is that the sender device S is assumed to be already authenticated with the receiver R; they inherently trust each other within the same IoT network environment, and the authentication mechanism is beyond the scope of this research. Another limitation is that IoT device management is also beyond our research. We can adopt other proposed authentication mechanisms [5–10] for devices to authenticate each other in an IoT device network and to construct an IoT device group with other devices. Our scheme focuses on an efficient signature and encryption scheme for power-limited IoT devices such as Zigbee chips or IoT sensor devices with little embedded memory.
To provide a mechanism that generates a signature and a cipher-text for IoT devices simultaneously, we propose an efficient, practical, and fair sign-cryption scheme based on quadratic residues (QR for short) for the IoT device network. Not only does it offer an efficient and practical solution for IoT devices, but it also reduces the signature and cipher-text generation cost. We also give a formal security proof of our proposed scheme in Appendix A and evaluate the efficiency of our mechanism in this research.
**2. Related Work and Security Definitions**
_Related Work_
In this section, we discuss the related research proposed in [1–4]. In [1], the authors propose a scheme called CPAS for the vehicular sensor network and assume that there exist two trusted authorities (TAs), where one is a tracing authority (TRA for short) and the other is a public key generation center (PKG for short), responsible for tracing the identities and generating the key pairs of all vehicles, respectively. The TRA can produce a pseudo-ID for each vehicle after it has verified the vehicle's real identity. The PKG can also generate the key pairs for these vehicles. If there is a dispute in the protocol, the TRA can determine the real identity behind a pseudo-ID key pair with the help of the PKG. Thus, no vehicle reveals its real identity under this scheme's methodology. On the other hand, the total computation cost of this scheme is 3Pa + 1SM for a single signature verification and 3Pa + (n + 1)SM for batch verification of n signatures, where Pa is a pairing operation and SM is a symmetric encryption operation. We consider the pairing operation to be the dominant cost when comparing our scheme with others in Table 1 for Internet of Things (IoT for short) devices. From the efficiency comparison in Table 1, we can see that our approach is much more efficient than [1]. In [2], we observed that the authors also claim their scheme is more efficient than those in other articles [3,4]. However, the scheme of [2] is still slower than our proposed approach, as shown in Table 1.
From the data authentication aspect, the gateway is unaware of the content of the sensor node's data in our approach: the sensor node blinds the data before forwarding them to the gateway. On the other hand, the gateway also contributes its own random parameters during the signature generation of the offline-sign-cryption phase. This means that each signature is generated from both the gateway's signing parameters and the sensor node's parameters through the offline-sign-cryption and online-sign-cryption phases. Meanwhile, our approach guarantees that no single party can fully control the signature generation, and it provides unlinkability for the signature. In [3], the authors provide an efficient sign-cryption methodology from the traditional public-key crypto-system to the identity-based crypto-system and vice versa. This can be applied in the multi-receiver construction for the IoT device network and provides a general prototype for this crypto-system transformation. We think that this idea is effective and suitable for an IoT device transferring sensing data to another crypto-system construction. However, the sensing node still requires considerable computation for the pairing operation, which can cause a performance bottleneck on these sensor nodes. We also see in [3] that its computation cost is about 3Pa, where Pa is a pairing operation over a large prime number q. Finally, in [4], the authors claim their approach costs only about 4Mu + 2Pa, where Mu is a modular multiplication and Pa is the pairing operation. After converting to the final approximate computation, we find that this scheme still costs 409Mu, more than ours in Table 1. Our contribution in this work is to construct an efficient methodology that generates a signature and an encryption based on the QR at the same time, together with a concrete security proof based on a well-known hard problem, namely the RSA factoring problem [11].
**Table 1. Performance comparison.**
| Scheme | Sign-Cryption | Unsign-Cryption | Total | Approx. |
|---|---|---|---|---|
| [1] | 2Mu + 1Pa | 3Pa + 1Ad + 1⊕ | 4Pa + 2Mu + 1Ad + 1⊕ | 327Mu + 1⊕ |
| [2] | 4Mu + 1Ex + 2Ha + 1⊕ | 1Ex + 2Pa + 2Mu + 2Ha | 2Ex + 2Pa + 6Mu + 4Ha + 1⊕ | 647Mu + 1⊕ |
| [3] | 4Ha + 1Ex + 2⊕ | 3Ha + 1Pa + 2⊕ | 1Ex + 1Pa + 7Ha + 4⊕ | 322.8Mu + 4⊕ |
| [4] | 1Ex + 2Mu + 2Ha + 1⊕ | 2Pa + 3Ha + 1Ad | 1Ex + 2Pa + 2Mu + 1Ad + 5Ha + 1⊕ | 409Mu + 1⊕ |
| Ours | 4Ha + 29Mu + 1⊕ + 1SE | 1SD + 2Ha + 1⊕ | 33Mu + 1SE + 1SD + 6Ha + 2⊕ | 36.2Mu + 2⊕ |

_Ex—modular exponentiation; Ad—addition operation; Mu—modular multiplication; SE—symmetric encryption operation; Ha—hash operation; SD—symmetric decryption operation; Pa—pairing operation; ⊕—XOR bit operation._
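The "Approx." column collapses each row into modular-multiplication (Mu) units. Such a tally can be sketched as below; the per-operation weights are hypothetical stand-ins chosen for illustration, not the unstated conversion factors behind Table 1, so the resulting totals only roughly track the table.

```python
# Sketch: collapsing an operation count into approximate Mu units, in the
# spirit of Table 1's "Approx." column. All weights below are assumptions.
from collections import Counter

WEIGHTS_IN_MU = {
    "Mu": 1,       # modular multiplication (the unit)
    "Ex": 240,     # modular exponentiation (assumed weight)
    "Pa": 80,      # bilinear pairing (assumed weight)
    "Ha": 0.4,     # hash operation (assumed weight)
    "Ad": 0.01,    # addition (assumed weight)
    "SE": 1,       # symmetric encryption (assumed weight)
    "SD": 1,       # symmetric decryption (assumed weight)
    "XOR": 0,      # XOR is treated as free
}

def total_cost(ops: Counter) -> float:
    """Weighted sum of operation counts, in Mu units."""
    return sum(count * WEIGHTS_IN_MU[op] for op, count in ops.items())

ours = Counter({"Mu": 33, "SE": 1, "SD": 1, "Ha": 6, "XOR": 2})
pairing_scheme = Counter({"Pa": 4, "Mu": 2, "Ad": 1, "XOR": 1})  # row [1]
```

Under these assumed weights, `total_cost(ours)` comes out far below `total_cost(pairing_scheme)`, mirroring the order-of-magnitude gap the table reports between pairing-free and pairing-based schemes.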
**3. The Proposed Scheme**
The following is our proposed scheme, which contains four phases: the initial phase, the offline-signing phase, the online-signing phase, and the unsign-cryption phase.
_3.1. Preliminary_

In this subsection, we provide some definitions used in our proposed scheme as follows:

- _n_: A large modulus computed from two large primes p1 and p2 such that n = p1 · p2, where p1 ≡ p2 ≡ 3 (mod 4).
- _l_: The total number of all Internet of Things (IoT for short) nodes.
- _n̂_: A large modulus computed from two large primes p3 and p4 such that n̂ = p3 · p4, where p3 ≡ p4 ≡ 3 (mod 4).
- _DSi_: An IoT data sender, i.e., a sensor node that forwards collected data to the receiver R, where i = 1 ∼ l and l is the number of all sensor nodes.
- _BS_: A base station, which helps to collect the data sent from a sensor node DSi, where i = 1 ∼ l.
- _R_: An IoT data receiver, which receives data from the sender DSi.
- ⊕: An exclusive-or operation for symmetric encryption/decryption usage.
- _H1, H2_: Two collision-resistant secure hash functions, each mapping Z_n^* → {0, 1}^n and outputting n-bit hash strings.
- _E_pkj: An encryption function for the party j with the public key pkj, where j ∈ {DSj, R} and j = 1 ∼ l.
- _D_skj: A decryption function for the party j with the private key skj, where j ∈ {DSj, R} and j = 1 ∼ l.
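As a minimal illustration of the H1 and ⊕ primitives above (used later as C2 = H1(r) ⊕ m in the offline-signing phase), the following Python sketch instantiates an n-bit hash from SHA-256 and shows that XOR masking with H1(r) is invertible. The SHA-256-based construction is an assumption for illustration; the scheme only requires a collision-resistant n-bit hash.

```python
import hashlib

N_BITS = 256  # illustrative digest length; the scheme calls for n-bit outputs

def H1(x: int) -> int:
    """Map an integer to an N_BITS-bit integer (SHA-256-based sketch)."""
    nbytes = (x.bit_length() + 7) // 8 or 1
    digest = hashlib.sha256(x.to_bytes(nbytes, "big")).digest()
    return int.from_bytes(digest, "big") % (1 << N_BITS)

def xor_mask(r: int, m: int) -> int:
    """C2 = H1(r) XOR m; applying it twice with the same r recovers m."""
    return H1(r) ^ m

r, m = 123456789, 42
C2 = xor_mask(r, m)
recovered = xor_mask(r, C2)
```

Because XOR is its own inverse, the same function serves as both the symmetric encryption and decryption of the masked message.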
_3.2. Initial Phase_
In this phase, an IoT node DSi acts as a data sender, where i = 1 ∼ l and l is the total number of nodes. It first selects two large, distinct primes p1 and p2 such that n = p1 · p2, and publishes this n. Given a QR in Z_n^*, there are four different (i.e., 2^2) square roots of the QR in Z_n^*. From this property, we can derive the 2^i-th roots of the QR in Z_n^*, where i must be larger than 1. We also assume that there exists a powerful base station acting as a signer BS in the same IoT network environment; it likewise selects two large primes p3 and p4, computes n̂ = p3 · p4 with n < n̂, and publishes n̂ and its prefix string Ω. In the following, we take Fan and Lei's scheme [12] as our reference. The data receiver (R for short) sets up its own private/public key pair (skR, pkR). When the set-up is finished, it publishes its public key to the IoT network.

- First, a node DSi randomly chooses its own QR numbers (z1, z2, z3) from Z_n^* and computes y1, y2, and y3, where yi = (zi^2 mod n) for i = 1 ∼ 3. The base station BS also selects two random QR numbers α and β such that (β^2/α^2 mod n) belongs to the QRs in Z_n^*. DSi then publishes (n, y1, y2, y3) to the signer BS. Once the signer BS has received them from DSi, BS computes γ = (κ^2 mod n̂) with a random number κ and the identifier ẑ = H1(z) mod n̂ with an identifier number z. After setting up these random numbers, BS forwards (γ, n̂, z, ẑ) to DSi and enters the offline-signing phase.
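The four-square-roots property invoked above can be checked directly with toy parameters. The primes below are illustrative stand-ins (the scheme uses large primes); the square root modulo each prime factor uses the standard exponent (p + 1)/4 available when p ≡ 3 (mod 4).

```python
# Sketch: a quadratic residue modulo a Blum integer n = p1*p2 (p1 ≡ p2 ≡ 3 mod 4)
# has exactly four square roots, computable per prime factor and combined by CRT.
p1, p2 = 7, 11      # toy 3-mod-4 primes for illustration only
n = p1 * p2

def sqrt_mod_blum(y: int) -> set:
    """All four square roots of a QR y modulo n = p1*p2."""
    r1 = pow(y, (p1 + 1) // 4, p1)   # square root of y mod p1
    r2 = pow(y, (p2 + 1) // 4, p2)   # square root of y mod p2
    roots = set()
    for a in (r1, p1 - r1):
        for b in (r2, p2 - r2):
            # Combine the residues with a brute-force CRT (fine at toy size).
            roots.add(next(x for x in range(n) if x % p1 == a and x % p2 == b))
    return roots

z = 4                # gcd(z, n) = 1
y = z * z % n        # y = 16 is a QR in Z_n^*
roots = sqrt_mod_blum(y)
```

Each of the four roots squares back to y, which is the property the scheme exploits when taking repeated square roots during signing.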
_3.3. Offline-Signing Phase_
- When DSi has received (γ, n̂, z, ẑ) from the BS, it first checks that ẑ = H1(z) mod n̂. If the check of z is valid, DSi selects a random number r ∈ Z_n^* and computes the following:

  C1 = E_pkR(r)
  C2 = H1(r) ⊕ m
  C3 = H1(C1, C2, r, ẑ, m)    (1)

- After computing the above equations, DSi sets τ = β^2/α^2 and performs the following:

  C1′ = C1 · τ^2 · γ
  C2′ = C2 · γ
  C3′ = C3 · γ
  h = H1(C1′, C2′, C3′)    (2)

- From the above equations, we see that DSi blinds the sensed data and computes the cipher-text (C1′, C2′, C3′). Then, DSi forwards (C1′, C2′, C3′, h, z, ẑ) to BS. When BS has received these messages from DSi, it verifies them with z, checks h against (C1′, C2′, C3′), and enters the online-signing phase.
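The computations of Equations (1) and (2) can be sketched as follows. The hash construction, the stand-in E_pkR, and the toy values of n, n̂, γ, and τ are all assumptions for illustration; the paper leaves the concrete encryption function and parameter sizes abstract.

```python
import hashlib

# Toy parameters (illustrative only; the scheme uses large primes).
n = 7 * 11                # sender modulus with p1 ≡ p2 ≡ 3 (mod 4)
n_hat = 19 * 23           # signer modulus, chosen so that n < n_hat
gamma = pow(5, 2, n_hat)  # γ = κ² mod n̂ with κ = 5
tau = 9                   # stand-in for τ = β²/α², a QR (9 = 3²)

def H1(*parts) -> int:
    """Hash a tuple of integers to one integer (SHA-256-based sketch)."""
    data = b"|".join(str(p).encode() for p in parts)
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def offline_sign(r: int, m: int, z_hat: int, E_pkR):
    """Equations (1)-(2): form (C1, C2, C3), then blind them with τ and γ."""
    C1 = E_pkR(r)
    C2 = H1(r) ^ m
    C3 = H1(C1, C2, r, z_hat, m)
    C1p, C2p, C3p = C1 * tau**2 * gamma, C2 * gamma, C3 * gamma
    h = H1(C1p, C2p, C3p)
    return C1p, C2p, C3p, h
```

E_pkR is passed in as a callable so any encryption function can be plugged in, e.g. a toy RSA-style `lambda r: pow(r, 3, n)`.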
_3.4. Online-Signing Phase_
- When BS obtains (C1′, C2′, C3′, h, z, ẑ) from DSi, it can verify these cipher-texts. If they are valid, then BS decrypts them with γ^(-1) as follows:

  C1 = C1′ · τ^2 · γ^(-1)
  C2 = C2′ · γ^(-1)
  C3 = C3′ · γ^(-1)    (3)

- After decrypting the above cipher-texts successfully, BS computes the signature as follows with a QR number λ:

  C3′ = C3^(-2) · (β/α)^(-2) · λ^2
  C3″ = C3′ · y1 (mod n)
  C2″ = C2′ · y2 (mod n)
  C1″ = C1′ · y3 (mod n)    (4)

- The signer BS finishes the signing operation and returns the signature (C1″, C2″, C3″) to the data sender DSi. When the node DSi has received this signature, it can unblind it by computing the following operations:

  C1′ = C1″ · y3^(-1)
  C2′ = C2″ · y2^(-1)
  C3′ = C3″ · y1^(-1)
  C3* = C3′ · (1/α)^2 = C3^(-2) · β^(-2) · λ^2    (5)

- Then, DSi computes the final encrypted cipher-text messages (C1‴, C2‴, C3‴) for the BS as follows and enters the unsign-cryption phase:

  C1‴ = C1″ · γ
  C2‴ = C2″ · γ
  C3‴ = C3* · γ    (6)
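Every blind/unblind pair in the two signing phases reduces to multiplying by a factor and later by its modular inverse (y1 and y1^(-1), γ and γ^(-1), and so on). A minimal round-trip sketch with a toy modulus, using Python's `pow(x, -1, n)` for the inverse:

```python
# Multiplicative blinding round trip, as used throughout the signing phases.
n = 7 * 11  # toy modulus; the scheme uses a large n = p1 * p2

def blind(c: int, factor: int) -> int:
    return c * factor % n

def unblind(c_blinded: int, factor: int) -> int:
    # pow(factor, -1, n) is the modular inverse; factor must be coprime to n.
    return c_blinded * pow(factor, -1, n) % n

c, y1 = 25, 16  # gcd(16, 77) = 1, so y1 is invertible mod n
```

`unblind(blind(c, y1), y1)` returns c for any factor coprime to n, which is exactly why the sender can strip off y1, y2, y3 after the signer has applied them.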
_3.5. Unsign-Cryption Phase_
- When BS has received these cipher-text messages from DSi, it can decrypt them by the following operations:

  C3* = C3‴ · γ^(-1)
  t = (C3*)^2 · λ^(-4) = C3^(-4) · β^(-4)
  t* = t · y1    (7)

- After BS has computed this signature t from the above equation, it forwards (t*, z, ẑ) to the node DSi and lets DSi decrypt t* and unblind this signature t as follows:

  t = t* · y1^(-1)
  SR = t · β^4 = C3^(-4) · β^(-4) · β^4 = C3^(-4) mod n    (8)

- Summarizing the above equations, we conclude that the node DSi holds the final signature σR = (SR, C1, C2, C3), where SR^4 = C3 = H1(C1, C2, γ, τ, ẑ, m). Then, the node DSi can forward the sign-cryption signature σR and the cipher-text messages (C1, C2, C3) to the receiver R over the Internet.

- Once the receiver R has obtained this sign-cryption signature σR and the cipher-text messages (C1, C2, C3) from DSi, it can perform the following steps:

  r* = D_skR(C1)
  m =? C2 ⊕ H1(r*)
  C3 =? H1(C1, C2, r*, ẑ, m)
  SR^4 =? C3    (9)

**4. Functionality Comparisons and Security Analysis**

In this section, we provide functionality comparisons with other schemes and a security analysis of our proposed scheme.
_4.1. Fast Sign-Cryption Operation_
The proposed scheme needs only three hash operations, one ⊕ operation, five multiplication operations, and one symmetric encryption in the offline-signing phase. In this respect, our proposed scheme is more efficient than [2]. In addition, the sensor node DSi can blind the sensed data sent to the base station efficiently and with data confidentiality: the base station BS cannot learn the content of the sensed data. Even if the base station is compromised by a malicious attacker, DSi still protects these data from exposure outside the IoT network. At the same time, this also guarantees the protection of the user's personal information.
_4.2. Signer Fair Signature Operation_
Our proposed scheme produces the signature of the sensed data after the base station BS has received the encrypted sensed data from the user. At this point, BS can only apply the square-root operation to these blinded and encrypted data to generate the corresponding signature. In the online-signing operation, the IoT device performs lightweight operations on the user's sensed data and obtains the signing result after the offline-signing phase performed by the signer BS. In the two signing phases above, both the IoT device and the base station contribute random numbers, which prevents the unfair situation in which the signature generation is controlled by a single party.
_4.3. User Data Protection_
In our proposed scheme, we use the sign-cryption method to generate the encrypted data with the corresponding signature embedded within it. As a result, the signer cannot learn the plain-text without the corresponding decryption key. Only the receiver holds the corresponding decryption key to decrypt this cipher-text. Thus, our sign-cryption scheme offers privacy protection for the user's personal sensed information.
_4.4. Efficiency Comparisons_
In this section, we evaluate the efficiency of our approach. First, we assume that the prime numbers p1, p2, p3 and p4 are 1024 bits in length; Ha is the computation time for one hash computation; SE is the time for a symmetric encryption operation, and SD is the time for a symmetric decryption operation. Meanwhile, we also define Ex as the computation time for one modular exponential operation in a 1024-bit module, Mu as the time for one modular multiplication in a 1024-bit module, Mecc as the time for one point addition over an elliptic curve [13], and Pa as the computation time of a bilinear pairing operation of two elements over an elliptic curve. Then, we assume that Ex ≈ 8.24 Mecc for an ARM CPU running at 200 MHz [14]. From the above assumptions, the following relations hold: Ex ≈ 240 Mu ≈ 600 Ha ≈ 3 Pa and Ad ≈ 5 Mu [15–21]. From the above computation time evaluation, we can see that the total computation time of our approach is 33 Mu + 6 Ha + 2 ⊕ + 1 SE + 1 SD. The result is approximately 36 Mu modular multiplication operations. Compared with [2], our approach is much faster under 1024-bit prime numbers. The two simulation results shown in Figures 1 and 2 present the QR-signature simulation and the RSA signature simulation of our approach, respectively. We implemented our approach on the Ubuntu 20.04 operating system with an Intel Core i5-1135G7 CPU @ 2.4 GHz base frequency (up to 4.2 GHz) and 8 GB of memory. The simulation was carried out using the Go language and the Python language with the "crypto/encoding/Matplotlib" libraries on 10 to 50 nodes, as shown in Figures 1 and 2, respectively.
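The operation-count estimate above can be reproduced as a small back-of-the-envelope calculation. The script below expresses all costs in Mu units using the stated relations (Ex ≈ 240 Mu ≈ 600 Ha) and, as an assumption, treats the ⊕, SE, and SD costs as negligible.

```python
# Cost model in units of one 1024-bit modular multiplication (Mu).
MU = 1.0
EX = 240 * MU            # one modular exponentiation: Ex ≈ 240 Mu
HA = EX / 600            # one hash computation: 600 Ha ≈ Ex, so Ha ≈ 0.4 Mu
XOR = SE = SD = 0.0      # assumed negligible here (our simplification)

total = 33 * MU + 6 * HA + 2 * XOR + 1 * SE + 1 * SD
print(round(total, 1))   # ≈ 35.4 Mu, i.e. roughly 36 Mu as stated in the text
```

The dominant term is the 33 Mu of modular multiplications; the six hashes add only about 2.4 Mu.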
**Figure 1. QR Signature Simulation on 10 nodes to 50 nodes.**
_4.5. Security Definitions_
4.5.1. QR Signature Security
We provide the definition of the digital signature's security as follows: In the initial phase, we assume that there exist some functions used in our proposed scheme; one is the signature generating function Sig(·) and the other is the verification function Ver(·), where the signer S can input her/his signing key skS into the signing function together with the message m. Then, σ is the resulting output of the signing function run by S, and the receiver R can verify σ with the verification function Ver(·) using the message m and the signer's public key pkS. The above scheme is based on well-known hard problems such as the RSA factoring problem. If there exists an attacker F whose goal is to forge a valid signature S′ on the message m and pass the verification, i.e., Ver(S′, m, pkS) = 1, and F outputs such a forgery successfully with non-negligible probability larger than ε, then we can use F's ability to solve the RSA factoring problem. In fact, however, the attacker F's advantage is less than ε.
This means that the probability for F to output a forged signature that passes the verification function is less than ε:

Adv = Pr[S′ ←− F^{Sig(skS, m)} | Ver(S′, m, pkS) = 1] < ε.
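As a concrete (but insecure and generic) instance of the Sig(·)/Ver(·) interface above, the sketch below uses textbook RSA with tiny made-up parameters and a toy stand-in hash; it is not the paper's QR scheme.

```python
# Toy textbook-RSA instantiation of Sig/Ver (insecure parameters, illustration only).
p, q = 61, 53
n = p * q                          # 3233
e = 17
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def H(m: bytes) -> int:
    return sum(m) % n              # stand-in hash, NOT cryptographic

def Sig(sk: int, m: bytes) -> int:
    # Signer computes H(m)^sk mod n with the signing key sk = d
    return pow(H(m), sk, n)

def Ver(sigma: int, m: bytes, pk: int) -> bool:
    # Verifier checks sigma^pk mod n against H(m) with the public key pk = e
    return pow(sigma, pk, n) == H(m)

sigma = Sig(d, b"sensed data")
assert Ver(sigma, b"sensed data", e)       # honest signature verifies
assert not Ver(sigma, b"tampered", e)      # tampered message is rejected
```

The unforgeability claim in the text says precisely that producing such a σ without skS succeeds only with probability below ε.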
**Figure 2. RSA Signature Simulation on 10 nodes to 50 nodes.**
4.5.2. Unforgeability
In this proposed scheme, we give the signature definition of our sign-cryption scheme. Building on the digital signature definition above, we discuss the case where there exists a forger F with the ability to forge a valid QR-signature in our scheme. We assume that there are some functions such that F can make hash queries to the hash functions H1(·) and H2(·), the symmetric encryption function EncpkR(·), and the signing function Sig(·). After preparing these functions, F can make its own queries on them. F can ask the i-th query, where i = 1 ∼ l and l is the total number of IoT nodes. After the above qs queries, if F can output qs + 1 signatures in our proposed scheme, we can use F to break the RSA factoring problem:

Adv^{Unf}_{F, Sig(·), H1(·), H2(·), RO1, EpkR(·)}(θ, t′) ≤ 1/(2^l · qs · qe · qd) + ε′.
**Lemma 1.** _First, we assume that there exists a secure digital signature function Sig(·) and a secure hash function H1(·), which can be replaced with a random oracle RO1, together with a secure hash function H2 in our proposed scheme. We also claim that our proposed scheme satisfies the unforgeability (Unf for short) below. In other words, if our scheme is (t′, ε′) unforgeable, then_

Adv^{Unf}_{F, Sig(·), H1(·), H2(·), RO1, EpkR(·)}(θ, t′) ≤ 1/(2^l · qs · qe · qd) + ε′,

_where t′ is the total experiment simulation time, l is an upper bound on the number of IoT devices, the signature oracle is queried at most qs times, the encryption oracle at most qe times, the decryption oracle at most qd times, and ε′ is taken over the coin tosses of our scheme._
4.5.3. Indistinguishability
In this definition, we assume the indistinguishability (Ind for short) game in which there exists an attacker A in the following simulation, which is controlled by a simulator S. First, we define a symmetric encryption/decryption function Epki(·)/Dski(·), where i ∈ {DSj, BS, R}, j = 1 ∼ l, in which DSj is one of the l IoT devices, BS is the base station, and R is the receiver in the outside network. The simulator S prepares all set-up parameters, including the key pairs for the above parties. After set-up is complete, S launches the proposed scheme simulation with A. A can request encryption/decryption of a chosen message m, and S replies with the cipher-text C = Epki(m) or the original message m to A. After the above game simulation, S can replace the encryption/decryption functions with an encryption/decryption oracle (τ, τ^{−1}), which performs the same action as our symmetric encryption/decryption functions. After this training phase, A sends a chosen target message pair (M0, M1) to S; S performs a coin flip b on the messages (Mb, M1−b). Then, S inputs Mb into the encryption oracle Epki to obtain the final result Cb. S forwards Cb to A, which must guess whether Mb is M0 or M1 via its guess b′—that is,

Pr[b′ ←− A^{(Epki(·), Dski(·), τ, τ^{−1})} | b = b′] < 1/2 + ε′.
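The shape of this game can be illustrated with a toy simulation. Here the challenger's bit b is enumerated evenly rather than flipped at random, and the adversary, holding no key material, guesses blindly, so its success rate is exactly 1/2, i.e., zero advantage. The cipher is a hypothetical hash-keystream XOR, for illustration only.

```python
import hashlib

def E(key: bytes, m: bytes) -> bytes:
    # Toy cipher: XOR with a hash-derived keystream (illustration only)
    ks = hashlib.sha256(key).digest()
    return bytes(a ^ b for a, b in zip(m, ks))

M = [b"message-zero----", b"message-one-----"]   # 16-byte chosen messages
wins, trials = 0, 2000
for i in range(trials):
    b = i % 2                        # challenger's bit, enumerated evenly here
    key = i.to_bytes(16, "big")      # fresh key per round, unknown to A
    Cb = E(key, M[b])                # challenge cipher-text handed to A
    b_guess = 0                      # A sees Cb but holds no key: blind guess
    wins += (b_guess == b)
print(wins / trials)                 # exactly 0.5: no advantage over a coin flip
```

An adversary with advantage ε′ would push this rate above 1/2 + ε′, which is what the definition rules out.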
4.5.4. Indistinguishable-Chosen Cipher-Text Attack (Ind-CCA for Short)
In this section, we further define the chosen cipher-text attack security of our SC approach. There exists an attacker A whose goal is to distinguish the cipher-texts of our sign-cryption scheme. First, we assume that there is a simulator S that controls the environment parameters, including the key pairs, security parameters, and hash length. After setting up, S defines the experiment in which A can make queries as follows.

- Phase 1: In this phase, the attacker A can make encryption/decryption queries on a chosen message m. If A makes an encryption query on m for the IoT device i, where i = 1 ∼ l, then S computes Ci,1 = Epki(γi), Ci,2 = m ⊕ H1(γi) and Ci,3 = H1(C1, C2, γi, m), where i = 1 ∼ l. Here, S preserves these parameters in the encryption oracle list entry Ei. On the other hand, when A asks a decryption query on a cipher-text (Ci,1, Ci,2, Ci,3), S checks whether any parameters match this cipher-text in the Ei entry. If the answer is yes, S forwards the original message back to A and keeps this query in the decryption oracle entry Di.
- Challenge: In this phase, A chooses a target IoT device j∗ and a message pair (M0∗, M1∗), where M0∗ and M1∗ have never been asked in an encryption or decryption query before, j∗ ≠ i and i = 1 ∼ l. At this point, S tosses the coin flip b and inputs Mb∗ into the encryption oracle Epkj∗(·). Finally, S returns the target cipher-text (C1,j∗, C2,j∗, C3,j∗) to A. When A has received this target cipher-text, it can still make decryption queries on other cipher-texts except (C1,j∗, C2,j∗, C3,j∗).

In the following, we model the above actions as the game simulation steps that we play with the attacker A.
Exp^{Ind-CCA-b}_{A,SC}(θ)

**Phase 1**

  i ∈ {1, . . ., l}, Mi ←− A^{Epki(·,θ), Dski(·,θ), H1(·)}
  γi ←− {0, 1}∗
  C1,i ←− Epki(γi)
  C2,i ←− Mi ⊕ H1(γi)
  C3,i ←− H1(C1,i, C2,i, γi, Mi)

**Challenge Phase**

  b ∈ {0, 1}, j∗ ≠ i, (Mb∗, M1−b∗) ←− A
  Mb,j∗ ←− S
  C1,j∗ ←− Epkj∗(γj∗)
  C2,j∗ ←− Mi ⊕ H1(γj∗)
  C3,j∗ ←− H1(C1,j∗, C2,j∗, γj∗, Mb,j∗)
  b′ ←− A^{(Epkj∗(·,θ), Dpkj∗(·,θ), τ, τ^{−1})}(C1,j∗, C2,j∗, C3,j∗, Mb∗, M1−b∗)
  Return b′.
The advantage function of the adversary A is defined as Adv^{Ind-CCA}_{A,SC}(θ) = |Pr[Exp^{Ind-CCA-1}_{A,SC}(θ) = 1] − Pr[Exp^{Ind-CCA-0}_{A,SC}(θ) = 1]| < ε′.
**Lemma 2.** _We say that our sign-cryption SC scheme can withstand Ind-CCA attacks if there exists no attacker A that can distinguish the cipher-text in the above experiment Exp with probability non-negligibly larger than ε′, i.e.,_

Adv^{Ind-CCA}_{A,SC}(θ, t) < (1 + ε′)/(2 · qe · qd),

_where t is the time bound, with at most qe encryption queries and at most qd decryption queries under the security parameter θ._
**Theorem 1.** _First, we assume that our sign-cryption SC scheme is an Ind-CCA secure symmetric encryption/decryption scheme with a secure hash random oracle H1 and that it also satisfies the unforgeability (Unf) above. Then, if SC is (t′, ε′) Ind-CCA secure and unforgeable, we have_

Adv^{Unf, Ind-CCA}_{F,A,SC}(θ, t) ≤ (1/(2^l · qs · qe · qd)) · ε + (1 + ε′)/(2 · qe · qd),

_where t is the maximum total experiment time including the adversary's execution time, l is an upper bound on the number of IoT devices, with at most qs signing queries, at most qe encryption oracle queries, and at most qd decryption oracle queries under the security parameter θ in the experiment._
**5. Conclusions**
From the final results, we can see that our approach is suitable for an IoT device to compute the QR signature and the encryption simultaneously. From Table 1, we can also see that our approach is more efficient than the other schemes [1–4]. Our methodology not only computes the encryption and the signature simultaneously and efficiently, but also supports a fair protocol between the two parties during communication between IoT devices. This also prevents a single device, such as the powerful gateway, from being compromised by attackers when IoT devices attempt to perform a signature operation or data exchange with this gateway. At the same time, our approach provides data privacy protection for users. On one hand, our future goal is to develop a lightweight hierarchical sign-cryption scheme for IoT devices that can offer authentication functionality between different levels of IoT devices together with data privacy protection. On the other hand, our approach can be extended in the future to develop a novel and practical IoT data migration methodology for IoT networks.
**Author Contributions: Conceptualization, M.-T.C. and H.-C.H.; methodology, M.-T.C.; software,**
H.-C.H.; validation, M.-T.C. and H.-C.H.; formal analysis, M.-T.C.; investigation, H.-C.H.; resources,
H.-C.H.; data curation, H.-C.H.; writing—original draft preparation, M.-T.C.; writing—review and
editing, H.-C.H.; visualization, H.-C.H.; supervision, H.-C.H.; project administration, H.-C.H.; funding acquisition, H.-C.H. All authors have read and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Acknowledgments: This study was supported in part by grants from the Ministry of Science and**
Technology of the Republic of China (Grant No. MOST 109-2221-E-167-028-MY2).
**Conflicts of Interest: The authors declare no conflict of interest.**
**Appendix A**
**Proof of Theorem 1.** First, we define the experiments of the above two security definitions and each attacker's ability, respectively. We provide the proof of Lemma 1 and define an attacker F whose goal is to forge a signature in the proposed scheme. We also define a simulator S that controls the experiment of the proposed scheme. S is given a signing oracle Sig(·), which performs the same action as the signature generation by the signer in our approach. S also prepares all IoT device key pairs, including the receiver's.

Before beginning the digital signature experiment, S is given a hard RSA problem n∗, and its goal is to use F's ability to factor this n∗. During this time, S also prepares the symmetric encryption/decryption functions for F's encryption/decryption queries. The query types are discussed below.

- Encrypting query: F can make an encrypting query on a chosen message m, the target receiver i, and the corresponding hash value H1(r′i). During this time, S checks the H1 list records to determine the random number r′i. If there is no hash record on the list, S generates the entry (∗, H1(r′i), r′i) for the random number r′i on the list. Then, S generates the corresponding cipher-texts as follows:

  C′1 = Epki(r′i)
  C′2 = m ⊕ H1(r′i)
  C′3 = H1(C′1, C′2, r′i, m).                                               (A1)

  Then, S forwards this cipher-text (C′1, C′2, C′3) back to F to finish the encrypting query and records (C′1, C′2, C′3) into the H1 list, noted as (C′1, C′2, C′3, H1(r′i), r′i).
- Decrypting query Dec(·): When F forwards a cipher-text (C′1, C′2, C′3) to S, S searches the H1 list to see whether there is a matching entry; if yes, S uses H1(r′i) to decrypt the cipher-text (C′1, C′2, C′3). Finally, S returns m back to F.
- QR Signature query: When F makes a signature query on a chosen message m, S generates the following:

  C′1 = Epki(r′i)
  C′2 = m ⊕ H1(r′i)
  C′3 = H1(C′1, C′2, r′i, m)
  (S′R)^4 = C′3                                                             (A2)

  After generating the signature S′R and the corresponding cipher-text (C′1, C′2, C′3), S checks the signature list s1 to see whether there is a matching entry inside; if not, S preserves the signature S′R in the signature list and stores (C′1, C′2, C′3, S′R, H1(r′i), r′i, m) in the s1 list. Then, S transfers S′R back to F. F can make the above signature query several times on the chosen message m. If F has made l signature queries on the message m and F can forge l + 1 signatures on the message m, then we obtain the probability of the adversary F

  Adv^{Unf}_{F, Sig(·), Enc(·), Dec(·)}(θ, t) ≤ (1/(2^l · qs · qe · qd)) · ε,     (A3)

  where there are at most qs signature queries, at most qe encryption queries, and at most qd decryption queries within the polynomial time bound t under the security parameter θ.
Second, we present the proof of Lemma 2 as follows. We assume that there exists an attacker A whose goal is to distinguish a cipher-text (C1, C2, C3) computed from a given message tuple (M0, M1) with non-negligible probability. Before simulating the experiment, we model a simulator S, which is given an RSA hard problem n∗; its goal is to factor n∗ and find the prime factors of n∗. During this time, S also generates all key pairs of the IoT devices, including the base gateway BS and the receiver R. When everything is ready, S allows A to send the following query types.
- Cipher-text query on Enc(·): In this simulation, A can launch a cipher-text query with an input message m, the target receiver i, and the corresponding hash value H1(ri) to S. Upon receiving this query, S checks the H1 list records to find out whether the random number ri and other related records already exist. If there is no hash record on the list, S generates a new entry (∗, H1(ri), ri) for the random number ri on the list. Then, S performs the following steps:

  C1 = Epki(ri)
  C2 = m ⊕ H1(ri)
  C3 = H1(C1, C2, ri, m)                                                    (A4)

  Subsequently, S sends this cipher-text (C1, C2, C3) back to A and stores (C1, C2, C3) in the H1 list, noted as (C1, C2, C3, H1(ri), ri).
- Plain-text query on Dec(·): When A makes a plain-text query on S with a cipher-text (C1, C2, C3), S first searches the H1 list to see whether there is a matching entry; if yes, S uses H1(ri) to decrypt the cipher-text (C1, C2, C3) and returns m back to A.
- Signing query: When A makes a QR signature signing query on the chosen cipher-text (C1, C2, C3), S calculates the following equations:

  C1 = Epki(ri)
  C2 = m ⊕ H1(ri)
  C3 = H1(C1, C2, ri, m)
  SR^4 = C3                                                                 (A5)
After performing the above training, we define it as the Phase 1 training phase of the experiment in the above definition. In the next phase, A can send a target message tuple (M0∗, M1∗) and forward it to S. At this point, S chooses one of them by a coin toss on b. Then, S performs the signing steps as follows:

  C1∗ = Epki(ri∗)
  C2∗ = Mb∗ ⊕ H1(ri∗)
  C3∗ = H1(C1∗, C2∗, ri∗, Mb∗)
  (SR∗)^4 = C3∗                                                             (A6)
After generating the above cipher-text (C1∗, C2∗, C3∗, SR∗), S returns it to A. During this time, A can make decryption queries except on the target cipher-text (C1∗, C2∗, C3∗, SR∗). If A can distinguish the cipher-text (C1∗, C2∗, C3∗, SR∗) computed from Mb∗, we have

  Adv^{Ind-CCA}_{A,SC}(θ) = |Pr[Exp^{Ind-CCA-1}_{A,SC}(θ) = 1] − Pr[Exp^{Ind-CCA-0}_{A,SC}(θ) = 1]|
                          = Pr[Exp^{Ind-CCA-1}_{A,SC}(θ) = 1] − (1 − Pr[Exp^{Ind-CCA-1}_{A,SC}(θ) = 1])     (A7)
                          < ε′.

Then, we can obtain

  Adv^{Ind-CCA}_{F,A,SC}(θ, t) = Pr[Exp^{Ind-CCA-1}_{F,A,SC}(θ) = 1] ≤ (1 + ε′)/(2 · qe · qd),

where there are at most qe encryption queries and at most qd decryption queries within the polynomial time bound t under the security parameter θ. The probability that A can distinguish the above target cipher-text (C1∗, C2∗, C3∗) is less than ε′. Summarizing the above proofs of Lemmas 1 and 2, we obtain

  Adv^{Unf, Ind-CCA}_{F,A,SC}(θ, t) ≤ (1/(2^l · qs · qe · qd)) · ε + (1 + ε′)/(2 · qe · qd).
**References**
1. Shim, K.A. CPAS: An Efficient Conditional Privacy-Preserving Authentication Scheme for Vehicular Sensor Networks. IEEE
_[Trans. Veh. Technol 2012, 61, 1874–1883. [CrossRef]](http://doi.org/10.1109/TVT.2012.2186992)_
2. Naresh, V.S.; Reddi, S.; Kumari, S.; Allavarpu, V.D.; Kumar, S.; Yang, M.H. Practical Identity Based Online/Off-Line Signcryption Scheme
[for Secure Communication in Internet of Things. IEEE Access 2021, 9, 21267–21278. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2021.3055148)
3. Sun, Y.; Li, H. Efficient signcryption between TPKC and IDPKC and its multi-receiver construction. Sci. China Inf. Sci. 2010,
_[53, 557–566. [CrossRef]](http://dx.doi.org/10.1007/s11432-010-0061-5)_
4. Li, F.; Xiong, P. Practical secure communication for integrating wireless sensor networks into the Internet of Things. IEEE Sens. J.
**[2013, 13, 3677–3684. [CrossRef]](http://dx.doi.org/10.1109/JSEN.2013.2262271)**
5. Hammi, B.; Fayad, A.; Khatoun, R.; Zeadally, S.; Begriche, Y. A Lightweight ECC-Based Authentication Scheme for Internet of
[Things (IoT). IEEE Syst. J. 2020, 3, 3440–3450. [CrossRef]](http://dx.doi.org/10.1109/JSYST.2020.2970167)
6. Choi, S.; Ko, J.; Kwak, J. A Study on IoT Device Authentication Protocol for High Speed and Lightweight. In Proceedings of the
2019 International Conference on Platform Technology and Service (PlatCon), Jeju, Korea, 28–30 January 2019; pp. 1–5.
7. Ning, H.; Liu, H.; Yang, L.T. Aggregated-Proof Based Hierarchical Authentication Scheme for the Internet of Things. IEEE Trans.
_[Parallel Distrib. Syst. 2015, 3, 657–667. [CrossRef]](http://dx.doi.org/10.1109/TPDS.2014.2311791)_
8. Kim, B.; Yoon, S.; Kang, Y.; Choi, D. PUF based IoT Device Authentication Scheme. In Proceedings of the 2019 International
Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Korea, 16–18 October 2019;
pp. 1460–1462.
9. Lounis, K.; Zulkernine, M. T2T-MAP: A PUF-Based Thing-to-Thing Mutual Authentication Protocol for IoT. IEEE Access 2021, 9,
[137384–137405. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2021.3117444)
10. Taher, B.H.; Jiang, S.; Yassin, A.A.; Lu, H. Low-Overhead Remote User Authentication Protocol for IoT Based on a Fuzzy Extractor
[and Feature Extraction. IEEE Access 2019, 7, 148950–148966. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2946400)
11. Rivest, R.; Shamir, A.; Adleman, L. A method for obtaining digital signatures and public-key cryptosystems. Commun. ACM 1978,
_[21, 120–126. [CrossRef]](http://dx.doi.org/10.1145/359340.359342)_
12. Fan, C.I.; Lei, C.L. A User Efficient Fair Blind Signature Scheme for Untraceable Electronic Cash. J. Inf. Sci. Eng. 2002, 18, 47–58.
13. [Koblitz, N.; Menezes, A.; Vanstone, S. The state of Elliptic curve cryptography. Des. Codes Cryptogr. 2000, 19, 173–193. [CrossRef]](http://dx.doi.org/10.1023/A:1008354106356)
14. [Lauter, K. The Advantages of Elliptic curve cryptography for wireless security. IEEE Wirel. Commun. 2004, 11, 62–67. [CrossRef]](http://dx.doi.org/10.1109/MWC.2004.1269719)
15. Bertoni, G.; Breveglieri, L.; Chen, L.; Fragneto, P.; Harrison, K.; Pelosi, G. A pairing SW implementation for smart cards. J. Syst.
_[Softw. 2008, 81, 1240–1247. [CrossRef]](http://dx.doi.org/10.1016/j.jss.2007.09.022)_
16. Hankerson, D.; Menezes, A.; Scott, M. Software Implementation of pairings. Identity-Based Cryptogr. Cryptol. Inf. Secur. 2008,
_2, 188._
17. Hohenberger, S. Advances in Signatures, Encryption, and E-Cash from Bilinear Groups. Ph.D. Dissertation, Massachusetts
Institute of Technology, Cambridge, MA, USA, 2006.
18. Li, Z.; Higgins, J.; Clement, M. Performance of Finite Field Arithmetic in an Elliptic Curve Cryptosystem. In Proceedings of
the 9th IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems
(MASCOTS’01), Cincinnati, OH, USA, 15–18 August 2001; pp. 249–256.
19. Ramachandran, A.; Zhou, Z.; Huang, D. Computing cryptography algorithm in Portable and embedded devices. In Proceedings
of the IEEE International Conference on Portable Information Devices, Orlando, FL, USA, 25–29 May 2007; pp. 1–7.
20. Schneier, B. Applied Cryptography, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1996.
21. Takashima, K. Scaling Security of Elliptic Curves with Fast Pairing Using Efficient Endomorphisms. IEICE Trans. Fundam. Electron.
_[Commun. Comput. Sci. 2007, 90, 152–159. [CrossRef]](http://dx.doi.org/10.1093/ietfec/e90-a.1.152)_
## Coo: Consistency Check for Transactional Databases
### Haixiang Li, Yuxing Chen, Xiaoyan Li
##### lihaixiangDB@gmail.com;axinggu@gmail.com;li_xiaoyan@pku.edu.cn
#### ABSTRACT
In modern databases, transaction processing technology provides
ACID (Atomicity, Consistency, Isolation, Durability) features. Consistency refers to the correctness of databases and is a crucial property for many applications, such as financial and banking services.
However, there exist typical challenges for consistency. Theoretically, the current two definitions of consistency express quite different meanings, which are causal and sometimes controversial.
Practically, it is notorious to check the consistency of databases,
especially in terms of the verification cost.
This paper proposes Coo, a framework to check the consistency of databases. Specifically, Coo makes the following advancements. First, Coo proposes the partial order pair (POP) graph, which has better expressiveness for transaction conflicts in a schedule by considering stateful information like Commit and Abort. Using a POP graph with no cycle, Coo defines consistency completely. Second, Coo can construct inconsistent test cases based on POP cycles. These test cases can be used to check the consistency of databases in accurate (all types of anomalies), user-friendly (SQL-based tests), and cost-effective (one-time checking in a few minutes) ways.
We evaluate Coo with eleven databases, both centralized and distributed, under all supported isolation levels. The evaluation shows that these databases do not completely follow the ANSI SQL standard (e.g., Oracle claimed to be serializable but appeared in some inconsistent cases) and have different implementation methods and behaviors for concurrency control (e.g., PostgreSQL, MySQL, and SQL Server performed quite differently at the Repeatable Read level). Coo helps comprehend the gap between coarse isolation levels by finding more detailed and complete inconsistent behaviors.
and each degree gradually forbids the four standard anomalies. This is very mature in the design of the 2PL protocol [31] and the standard isolation levels [34]. The latter checks consistency by verifying whether the result satisfies integrity constraints. However, lacking a quantified standard of consistency may cause confusion or misuse of databases in production. For example, Oracle claimed to support the Serializable level in its databases by preventing all four standard anomalies, yet it proved to offer only the Snapshot Isolation level (more detailed anomalies are shown in Table 4). Practically, it requires huge effort to design a good black-box testing tool for consistency
checks. This is twofold. (ii) It has a high knowledge bar. The learning cost for users is high, from setting up environments and modifying system modules to understanding/modifying test cases and analyzing and debugging anomalies. Application scenarios are sometimes limited, as some database services are closed-source or cloud-based, where users often are not allowed to make changes or collect intermediate profiles. (iii) It has a high verification cost. Neither collecting nor checking is cost-effective [27, 40, 47, 55]. It is proven to be an NP-complete problem [24, 43] to verify a serializable commit order of all transactions using the limited read-from dependency information available from input and output profiles (e.g., Cobra [48]). Some excellent works based on random tests (e.g., Elle [16, 39]) can simulate some anomaly cases, but may waste a lot of time and computation on checking consistent transactions. Worse, the anomalous behaviors produced by these random tests can hardly be analyzed and reproduced.
These real-time [25, 37, 42, 46, 47, 53, 55] or post-verify [16, 24,
48] solutions are often costly and user-side burdened. This drives
us to a root-cause question that can we discover, define, and generate all forms of data anomalies so that we can feed them all into
databases and cost-effectively check once and for all. To address
the question, we discuss current challenges of lacking of standards
from two aspects, i.e., the formal definition of (1) data anomalies
and (2) consistency.
**Challenge 1. Define data anomalies.** The ANSI SQL [34] specifies four isolation levels and four data anomalies. This standard
is classical and has been widely used in real databases. However,
the definition of data anomalies is casual and has been controversial from time to time [19]. The standard anomalies are singleobject and avoided by lock-based protocols, yet more complex data
anomalies, which are occasionally reported case by case as shown
in Table 1, are hardly fit into defined levels. Existing literature
[14, 18, 34] revised the definition to some extent. However, there is
still little research to define and classify complete data anomalies,
resulting in that the anomalies can still be ambiguous interpretations without a formal expression. For example, Long Fork Anomaly [28] and Prefix Violation [26] have the same expression yet reported by different instances. Many deadlocks (e.g., [41]), both local and global, are introduced and discussed, yet we think they are
also a form of anomalies.
**Challenge 2. Relate inconsistency to all data anomalies.**
#### KEYWORDS
Database, ACID, Consistency, Isolation Levels, Data Anomalies
#### 1 INTRODUCTION
Nowadays, real-world applications rely on databases for data storage, management, and computation. Transaction processing is one
of the key components to guaranteeing the consistency of data. Especially, financial industries like securities companies, banks, and
e-commercial companies often have zero tolerance for the inconsistency of any data anomalies in any form for their core transaction data. However, there exist typical challenges for consistency,
and there is no direct and simple method to guarantee or check the
consistency.
**Motivation.** Obtaining consistency for databases is vital yet it
is known to be notorious and challenging from several perspectives. (i) It lacks standards. Theoretically, there exist two classical definitions with different meanings for consistency. These definitions are casual and consistency is guaranteed by either eliminating certain types of anomalies [35] or satisfying integrity constraints [34, 49]. The former divides consistency into four degrees,
-----
Haixiang Li, Yuxing Chen, Xiaoyan Li
**Table 1: A thorough survey on data anomalies in existing literature.**

| No | Anomaly, reference, year | Examples or expressions in original papers | Our expressions (Table 2) |
|----|--------------------------|--------------------------------------------|---------------------------|
| 1 | Dirty Write [34] 1992 | 푊1[푥1]...푊2[푥2]...((퐶1 or 퐴1) and (퐶2 or 퐴2) in any order) | Dirty Write |
| 2 | Lost Update [18] 1995 | 푅1[푥0]...푊2[푥1]...퐶2...푊1[푥2] | Lost Update Committed |
| 3 | Dirty Read [34] 1992 | 푊1[푥]...푅2[푥]...(퐴1 and 퐶2 in either order) | Dirty Read |
| 4 | Aborted Read [52] 2015, [14] 2000 | 푊1[푥:푖]...푅2[푥:푖]...(퐴1 and 퐶2 in any order) | Dirty Read |
| 5 | Fuzzy/Non-repeatable Read [34] 1992 | 푅1[푥]...푊2[푥]...퐶2...푅1[푥]...퐶1 | Non-repeatable Read Committed |
| 6 | Phantom [34] 1992 | 푅1[푃]...푊2[푦 in 푃]...퐶2...푅1[푃]...퐶1 | Phantom |
| 7 | Intermediate Read [52] 2015, [14] 2000 | 푊1[푥:푖]...푅2[푥:푖]...푊1[푥:푗]...퐶2 | Intermediate Read |
| 8 | Read Skew [18] 1995 | 푅1[푥0]...푊2[푥1]...푊2[푦1]...퐶2...푅1[푦1] | Read Skew Committed |
| 9 | Unnamed Anomaly [45] 2000 | 푅3[푦]...푅1[푥]...푊1[푥]...푅1[푦]...푊1[푦]...퐶1...푅2[푥]...푊2[푥]...푅2[푧]...푊2[푧]...퐶2...푅3[푧]...퐶3 | Step IAT |
**Figure 1: Coo framework. Contributions are C1: theoretical basis, C2: consistency check modules, and C3: evaluation and analysis of eleven databases.**
There exist two previous works defining the consistency of databases. The first, by Jim Gray et al. [35], defined several levels of consistency, which are strongly related to the ANSI SQL standard anomalies and the 2PL protocol [36]. The second [34, 49] defined consistency such that the final result is the same as that of one of the serializable schedules. However, neither definition correlates well with newly reported or undiscovered anomalies. For example, new anomalies like Full-write Skew (in Table 2) are hard to quantify within the previous definitions and their levels, not to mention that slightly different schedules (e.g., Non-repeatable Read and Non-repeatable Read Committed) may behave quite differently across databases (results in Table 4). Lacking a complete mapping between data anomalies and inconsistency may lead to incomplete and sometimes non-reproducible consistency checks (e.g., Elle [16, 39]).
**Contribution (C).** This paper proposes Coo, which pre-checks the consistency of databases, filling the gap left by real-time and post-verify solutions. Figure 1 shows the framework of Coo, which contributes the following three aspects:

- C1: Coo has a theoretical basis. We propose the Partial Order Pair (POP) graph, which takes stateful information (i.e., commit and abort) into account and can model any schedule, in contrast to the traditional conflict graph, which is limited to modeling transaction histories. For example, we will show that Read Skew (without stateful information) and Read Skew Committed (with stateful information), which were previously treated as the same, are completely different anomalies (i.e., different formal expressions in Table 2 and different evaluation behaviors in Table 3). By POP cycles, we can define all data anomalies, covering both reported known anomalies (e.g., Dirty Read and deadlocks) and previously unexposed ones.
- C2: Coo is black-box and cost-effective. The core consistency check modules are a generator and a checker, both independent of the database under test. The generator produces SQL-like queries and schedules based on our definition of data anomalies, and the checker recognizes the consistent and inconsistent behaviors of the executed schedules. Each defined anomaly is tested individually by issuing parallel transactions to the tested database via an ODBC driver. The consistency check is accurate (all types of anomalies), user-friendly (SQL-based tests), and cost-effective (one-time checking in a few minutes).
- C3: Eleven databases are evaluated. Through the evaluation of both centralized and distributed databases, we unravel their consistent and inconsistent behaviors under different isolation levels, and we are, to the best of our knowledge, the first to propose methods for distributed evaluation. We can specifically show the occurrence of anomaly types in non-consistent databases or at their weaker isolation levels. Our evaluation found that some databases (e.g., Oracle, OceanBase, and Greenplum) claim to be serializable but cannot avoid some IAT anomalies (defined in Section 3.3). We also analyze in depth the behaviors of different databases at different isolation levels with various implementation methods (e.g., the different behaviors of PostgreSQL, MySQL, and SQL Server on designed anomaly cases at the Repeatable Read level).
The rest of this paper is organized as follows. Section 2 presents
the preliminary. Section 3 introduces our new model to define data
anomalies and correlate inconsistency. Section 4 evaluates our model
with real databases. Section 5 surveys the related work. Section 6
concludes the paper.
Coo: Consistency Check for Transactional Databases
#### 2 PRELIMINARY
This section provides the preliminaries that will be used and extended in the following sections.

Objects, operations, transactions. We consider data **objects** 푂푏푗 = {푥,푦, ...} stored in a database. Operations are divided into two groups, i.e., object-oriented operations and state-expressed operations. **Object-oriented operations** are operations that read or write objects; let 푂푝푖 describe the set of possible invocations, i.e., reading or writing an object by transaction 푇푖. **State-expressed operations** express the states of transactions, consisting of Commit (C) and Abort (A). A **transaction** is a group of operations interacting with objects, with or without a state-expressed operation at the end, representing a committed or an active state. We use subscripts to represent the transaction number. For example, 푂푝푖 [푥푛] denotes an 푥-oriented operation by transaction 푇푖; 퐶푖 and 퐴푗 are the commit and abort operations of 푇푖 and 푇푗, respectively.
Schedules. An Adya [15] history 퐻 comprises a set of transactions 푇 over objects and an order 퐸 over the operations 푂푝 in 푇. 퐸 preserves the order within each transaction and obeys the object version order <푠. A schedule 푆 is a prefix of 퐻.
Example 2.1. We show an example schedule 푆1 in the following:

푆1 = 푅1 [푥0] 푅3 [푥0] 푊1 [푦1] 푅3 [푦1] 퐶3 푊2 [푥1] 푅1 [푦1] 퐴1, (1)

which involves three transactions, where 푇1 = 푅1 [푥]푊1 [푦]푅1 [푦]퐴1, 푇2 = 푊2 [푥], and 푇3 = 푅3 [푥]푅3 [푦]퐶3 are aborted, active, and committed transactions, respectively. The set of operations is 푂푝 (푆1) = {푅1 [푥], 푅3 [푥],푊1 [푦], 푅3 [푦],푊2 [푥], 푅1 [푦]}. For operations on the same object, we have the version order, e.g., 푅1 [푥0] <푠 푊2 [푥1]. Note that we have no version order between two reads, e.g., (푅1 [푥0], 푅3 [푥0]), or between different objects, e.g., (푅3 [푥0],푊1 [푦1]), meaning that swapping such operations yields an equivalent schedule.
Conflict dependency and conflict graph. Every history is associated with a conflict graph (also called a directed serialization graph) [20, 54], whose nodes are committed transactions and whose edges are the conflicts (read-write, write-write, or write-read) between transactions. The conflict graph is used to test whether a schedule is serializable: intuitively, an acyclic conflict graph indicates a serializable schedule, and thus a consistent execution and final state. Figure 2(a) depicts the graphical representation of 푆1.
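To make the test concrete, here is a minimal Python sketch (our own illustration, not the paper's C++ harness; the schedule encoding and helper names are assumptions) that collects the conflict edges of 푆1 and checks the resulting graph for a cycle:

```python
from collections import defaultdict

# Operations of schedule S1 from Example 2.1, in execution order.
# Each entry: (kind, transaction, object); C/A carry no object.
S1 = [("R", 1, "x"), ("R", 3, "x"), ("W", 1, "y"), ("R", 3, "y"),
      ("C", 3, None), ("W", 2, "x"), ("R", 1, "y"), ("A", 1, None)]

def conflict_edges(schedule):
    """Edge Ti -> Tj for each pair of conflicting operations: same
    object, different transactions, at least one write, Ti's op first."""
    edges = set()
    ops = [(k, t, o) for k, t, o in schedule if k in ("R", "W")]
    for i, (k1, t1, o1) in enumerate(ops):
        for k2, t2, o2 in ops[i + 1:]:
            if o1 == o2 and t1 != t2 and "W" in (k1, k2):
                edges.add((t1, t2))
    return edges

def has_cycle(edges):
    """Depth-first search with three colors to detect a back edge."""
    graph = defaultdict(list)
    for a, b in edges:
        graph[a].append(b)
    color = {}  # missing = white, 1 = gray (on stack), 2 = black (done)
    def dfs(v):
        color[v] = 1
        for w in graph[v]:
            if color.get(w) == 1 or (color.get(w) is None and dfs(w)):
                return True
        color[v] = 2
        return False
    nodes = {x for e in edges for x in e}
    return any(color.get(v) is None and dfs(v) for v in nodes)

print(sorted(conflict_edges(S1)))     # [(1, 2), (1, 3), (3, 2)]
print(has_cycle(conflict_edges(S1)))  # False: acyclic, hence "serializable"
```

The conflict graph of 푆1 is acyclic even though 푆1 contains a Dirty Read (푅3 reads 푦1 written by the later-aborted 푇1), which is exactly the limitation that motivates the POP graph of Section 3.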
#### 3 CONSISTENCY MODEL
This section introduces a new consistency model called Coo that can correlate all data anomalies. Specifically, we first propose the Partial Order Pair (POP) graph, which also considers state-expressed operations. We then show that any schedule can be represented by a POP graph and that our checker can detect an anomaly via its POP cycle. Lastly, our generator constructs both centralized and distributed test cases based on POP cycles for the evaluation.
#### 3.1 Partial Order Pair Graph
Adya's model leaves some anomalies without a cycle [15, 16], such as Dirty Read and Dirty Write, because it does not consider state-expressed operations in the conflict graph, even though these operations can sometimes be equivalent to object-oriented ones [29]. We strive to map all anomalies to cycles by taking state-expressed operations into account. We first formally define POPs as extended conflicts in the following.
Definition 3.1. Partial Order Pair (POP). Let푇푖,푇푗 be transactions in a Schedule 푆 and 푇푖 ≠ 푇푗 . A Partial Order Pair (POP) is
the combination of object-oriented and state-expressed operations
from 푇푖 and 푇푗 and satisfies:
- both transactions operate on the same object;
- at least one operation affects the object version (a write or
a rollback of a write).
Lemma 3.2. There exist at most 9 POPs in an arbitrary schedule,
i.e.,푃푂푃 = {푊푊,푊푅, 푅푊,푊퐶푊,푊퐶푅,푅퐶푊, 푅퐴,푊퐶,푊퐴}.
Proof. The proof can be trivially achieved by enumerating all
possible combinations of object-oriented and state-expressed operations. Let 푇푖,푇푗 be transactions in a Schedule 푆 and 푝푖 ∈ 푇푖 with
푞 푗 ∈ 푇푗 being object-oriented operations that access the same object, (푝푖,푞 푗 ) ∈{푊푖푊푗,푊푖푅푗, 푅푖푊푗 }. The following is a list of all
possible combinations.
1. 푝푖 − 푞 푗 : Both transactions 푇푖 and 푇푗 are still active.
The transaction 푇푖 ends before 푇푗 :
2. 푝푖 − 퐶푖 − 푞 푗 : 푇푖 commits before 푞 푗 ;
3. 푝푖 − 퐴푖 − 푞 푗 : 푇푖 aborts before 푞 푗 ;
4. 푝푖 − 푞 푗 − 퐶푖: 푇푖 commits after 푞 푗 ;
5. 푝푖 − 푞 푗 − 퐴푖: 푇푖 aborts after 푞 푗 ;
The transaction 푇푖 ends after 푇푗 :
6. 푝푖 − 푞 푗 − 퐶 푗 : 푇푗 commits after 푝푖;
7. 푝푖 − 푞 푗 − 퐴 푗 : 푇푗 aborts after 푝푖.
The operation 푝푖 does not affect the operation 푞 푗 in combination 3 due to the timely rollback of 푇푖; the same holds for combination 7. We obtain 15 cases by substituting {푊푖푊푗, 푅푖푊푗,푊푖푅푗 } for (푝푖,푞 푗 ) in the remaining 5 combinations.

Among them, 푊푖푊푗퐶 푗 and 푊푖푊푗 have the identical effect of modifying the accessed object by 푊푗, so we group them together as POP 푊푊. Similarly, we use POP 푊푅 to represent 푊푖푅푗 and 푊푖푅푗퐶푖, and POP 푅푊 to represent 푅푖푊푗 and 푅푖푊푗퐶 푗. Because read operations are not affected by a commit or abort, we also put 푅푖푊푗 퐴푖 and 푅푖푊푗퐶푖 into 푅푊, and likewise 푊푖푅푗퐶 푗 into 푊푅. The three cases in which 푇푖 commits first, i.e., 푊푖퐶푖푅푗 [푥], 푊푖퐶푖푊푗 [푥], and 푅푖퐶푖푊푗 [푥], are specified as types 푊퐶푅, 푊퐶푊, and 푅퐶푊, respectively.

Finally, we have three special combination cases, i.e., 푊푖푅푗 퐴푖, 푊푖푊푗퐶푖, and 푊푖푊푗 퐴푖, which are more complex as they contain two version-changing states. For 푊푖푅푗 퐴푖, the first changing state is 푊푖푅푗 and the second is 푅푗 퐴푖; 푊푖푅푗 belongs to POP 푊푅 and 푅푗 퐴푖 belongs to the new POP 푅퐴. Likewise, 푊푖푊푗퐶푖 yields the 푊푊 and 푊퐶 POPs, and 푊푖푊푗 퐴푖 yields the 푊푊 and 푊퐴 POPs.

In summary, these 15 combination cases are grouped into 9 types of POPs, i.e., 푊푊,푊푅, 푅푊,푊퐶푊,푊퐶푅, 푅퐶푊, 푅퐴,푊퐶,푊퐴.

Note that RA, WA, and WC each arise from a combination that already forms a cycle: they exist only when a cycle already exists, namely a 2-transaction cycle on a single object.
Let F : 푃푂푃 (푆) → 푇 (푆) ×푇 (푆) be the map between POPs and the
transaction orders, e.g., F (푊푖퐶푖푅푗 [푥]) = (푇푖,푇푗 ). In terms of POPs
and their orders, we can define POP graphs.
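To make the enumeration concrete, the following Python sketch (our own simplified reading of the proof, not the paper's tool; the encoding and the direction of the extra POPs are assumptions) derives POPs from a schedule: a conflicting pair becomes a 푊퐶푊/푊퐶푅/푅퐶푊 POP when the first transaction commits between the two operations, a base 푊푊/푊푅/푅푊 POP otherwise, and the extra 푅퐴/푊퐶/푊퐴 POPs are emitted when the first transaction ends only after the second operation.

```python
# Schedule S1 from Example 2.1: (kind, txn, obj); obj is None for C/A.
S1 = [("R", 1, "x"), ("R", 3, "x"), ("W", 1, "y"), ("R", 3, "y"),
      ("C", 3, None), ("W", 2, "x"), ("R", 1, "y"), ("A", 1, None)]

def pop_extract(schedule):
    """Derive the POP set of a schedule (simplified sketch of Lemma 3.2)."""
    pops = set()
    # position and kind (C/A) of each transaction's final state operation
    end = {t: (i, k) for i, (k, t, _) in enumerate(schedule) if k in ("C", "A")}
    ops = [(i, k, t, o) for i, (k, t, o) in enumerate(schedule) if k in ("R", "W")]
    for a, (i, k1, t1, o1) in enumerate(ops):
        for (j, k2, t2, o2) in ops[a + 1:]:
            if o1 != o2 or t1 == t2 or "W" not in (k1, k2):
                continue
            ei, ek = end.get(t1, (None, None))
            if ek == "C" and i < ei < j:
                pops.add((k1 + "C" + k2, t1, t2))   # WCW, WCR, RCW
            else:
                pops.add((k1 + k2, t1, t2))         # WW, WR, RW
                # T1 ends only after T2's operation: version-reverting POPs
                if k1 == "W" and ei is not None and ei > j:
                    if k2 == "R" and ek == "A":
                        pops.add(("RA", t2, t1))    # e.g. Dirty Read's back edge
                    elif k2 == "W":
                        pops.add(("WC" if ek == "C" else "WA", t2, t1))
    return pops

print(pop_extract(S1))
# the four POPs of Example 3.4: RW[x] (T1,T2), RCW[x] (T3,T2), WR[y] (T1,T3), RA[y] (T3,T1)
```

On 푆1 this reproduces exactly the POP set derived by hand in Example 3.4 below.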
-----
**Figure 2: Comparison of (a) conflict and (b) POP graphs.**
Definition 3.3. Partial Order Pair Graph (POP graph).
Let 푆 be a schedule. A graph 퐺 (푆) = (푉, 퐸) is called Partial Order Pair Graph (POP graph), if vertices are transactions in 푆 and
edges are the orders in POPs derived from 푆, i.e., (i) 푉 = 푇 (푆); (ii)
퐸 = F (푃푂푃 (푆)).
Conflict and POP graphs differ in edges and expressiveness. Example 3.4 exemplifies the distinction between them.
Example 3.4. Continuing Example 2.1, from 푆1 we obtain the objects 푂푏푗 = {푥,푦} and the operations 푂푝 [푥] = {푅1 [푥0]푅3 [푥0]퐶3푊2 [푥1]} and 푂푝 [푦] = {푊1 [푦1]푅3 [푦1]퐶3푅1 [푦1]퐴1}. Note that we do not put 퐴1 in 푂푝 [푥] because 푇1 has no write on object 푥. We derive the POPs from these operations, i.e., {푅1푊2 [푥], 푅3퐶3푊2 [푥], 푊1푅3 [푦], 푅3퐴1 [푦]}. The conflict graph and the POP graph for 푆1 are shown in Figure 2. Note that the edges from 푇3 to 푇2 differ between the conflict graph (RW) and the POP graph (RCW). This time, in the POP graph, the Dirty Read is expressed by a cycle formed by 푇1 and 푇3.
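As a small companion sketch (Python; our own encoding of POPs as (type, from, to) triples), the POP graph of 푆1 can be built from the POPs of Example 3.4 and searched for transactions lying on a cycle:

```python
from collections import defaultdict

# POPs of S1 derived in Example 3.4, encoded as (type, from_txn, to_txn)
pops = {("RW", 1, 2), ("RCW", 3, 2), ("WR", 1, 3), ("RA", 3, 1)}

# POP graph (Definition 3.3): vertices are transactions, edges the POP orders
edges = {(a, b) for _, a, b in pops}

def cycle_nodes(edges):
    """Transactions lying on some cycle: a node that can reach itself."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
    def reachable(src):
        seen, stack = set(), [src]
        while stack:
            for w in graph[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        return seen
    return {v for v in {x for e in edges for x in e} if v in reachable(v)}

print(cycle_nodes(edges))  # {1, 3}: the Dirty Read cycle T1 <-> T3
```

Unlike the acyclic conflict graph of the same schedule, the POP graph exposes the Dirty Read as the cycle 푇1 → 푇3 → 푇1.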
Lemma 3.5. Arbitrary schedules can be represented by POP graphs.
Proof. Given an arbitrary schedule 푆 with 푂푝 (푆) being the set of operations by transactions T = {푇1,푇2, . . .,푇푛}, we first derive from 푆 the sets of operations per object, {푂푝 [푥] | 푥 ∈ 푂푏푗 (푆)}. Then we find all the combination cases in each object operation set 푂푝 [푥]. Finally, we classify them into POPs, referring to the proof of Lemma 3.2. Through the above method, we obtain the POP set 푃푂푃 (푆) corresponding to the schedule 푆. Then, by F, we obtain the ordering between transactions based on the POPs. We can thus model the POP graph using the transaction set and the dependency orders between transactions.
#### 3.2 Consistency and Consistency Check
With POP cycles, we are now ready to define data anomalies, and then to define consistency as the absence of data anomalies.

Definition 3.6. Data Anomaly. A schedule exhibits a data anomaly if its represented POP graph has a cycle.

The definition of data anomalies by POP graphs differs from the conflict-graph one in three aspects. Firstly, POP graphs model schedules instead of histories (e.g., Full Write in Table 2). Secondly, POP graphs can express all anomalies involving state-expressed operations (e.g., Dirty Read in Example 3.4). Thirdly, POP graphs can model more distinct anomalies (e.g., Read Skew and Read Skew Committed in Table 2 are different but are considered the same by the conflict graph).
We now define the consistency of a schedule.
Definition 3.7. Consistency. A schedule 푆 satisfies consistency if its represented POP graph contains no cycle.
Checker. By Definition 3.7, consistency, the absence of data anomalies, and an acyclic POP graph are equivalent; likewise, inconsistency, the existence of data anomalies, and the existence of POP cycles are equivalent. A **consistency checker** therefore tests whether a schedule exhibits a data anomaly, i.e., whether its represented graph has a cycle. In theory, the consistency check is sound: if it reports an anomaly in a schedule, then that anomaly exists in every history extending that schedule. The consistency check is also complete: if a schedule contains an anomaly, then a POP cycle exists in that schedule, so the anomaly is reported. As a schedule is a prefix of a history, an anomaly occurring in the schedule also occurs in the corresponding histories, so soundness holds; and since we define an anomalous schedule as one containing a POP cycle, completeness holds as well.
#### 3.3 Consistency Check in Practice
This part discusses the consistency check in practice. As each POP cycle may express an anomaly scenario, it is neither cost-effective nor possible to test infinitely many cycles. Our test cases trade off cost and time against completeness: we want as few test cases as possible that express as many of a database's inconsistent behaviors as possible. By soundness, an anomaly may exist in different schedules or histories. We therefore explore the simplest form of each data anomaly, which will be used for the design and classification of data anomalies in the evaluation. As most known data anomalies (e.g., Dirty Write and Dirty Read) are single-object, we start with one-object POP cycles.
Lemma 3.8. A POP cycle over one object (푁푂푏푗 = 1) with three transactions (푁푇 = 3) contains a cycle with two transactions.
Proof. We exclude the POPs RA, WA, and WC from the discussion, as these POPs appear only in a two-transaction one-object cycle, which needs no proof. We assume the POP cycle is 퐺 = {{푇1,푇2,푇3}, {(푇1,푇2), (푇2,푇3), (푇3,푇1)}}. We let {(푝1,푞2), (푝2,푞3), (푝3,푞1)} be the object-oriented operations forming the cycle edges {(푇1,푇2), (푇2,푇3), (푇3,푇1)}, and let <푠 denote the version order. The graph can thus be represented by {푝1 <푠 푞2; 푝2 <푠 푞3; 푝3 <푠 푞1}. As each POP must contain a write operation, we have the following situations.

If 푝1 = 푊: (i) if 푝1 happens before 푝2, i.e., 푝1 <푠 푝2, then since 푝2 <푠 푞3 we have 푝1 <푠 푞3, meaning a POP from 푇1 to 푇3; together with the original POP from 푇3 to 푇1, 푇1 and 푇3 form a cycle. (ii) if 푝1 happens later than 푝2, i.e., 푝2 <푠 푝1, there is a POP from 푇2 to 푇1, so 푇1 and 푇2 form a cycle.

If 푝1 = 푅, then 푞2 = 푊. Likewise, (i) if 푞2 <푠 푝3, then 푇1 and 푇2 form a cycle. (ii) if 푝3 <푠 푞2, then 푇2 and 푇3 form a cycle.
Lemma 3.9. A POP cycle over one object (푁푂푏푗 = 1) with any number of transactions (푁푇 ≥ 3) contains a cycle with two transactions.
Proof. The proof is by induction. The theorem holds for 푁푇 = 3 by Lemma 3.8. Assume the theorem holds for all 푁푇 < 푘.

When 푁푇 = 푘, we assume the POP cycle is 퐺 = {{푇1,푇2, ...,푇푘 }, {(푇1,푇2), (푇2,푇3), ..., (푇푘,푇1)}}. We let {(푝1,푞2), (푝2,푞3), ..., (푝푘,푞1)} be the object-oriented operations forming the cycle edges {(푇1,푇2), (푇2,푇3), ..., (푇푘,푇1)}, and let <푠 denote the version order between operations. The graph can thus be represented by {푝1 <푠 푞2; 푝2 <푠 푞3; ...; 푝푘 <푠 푞1}. As each POP must contain a write operation, we have the following cases.
If 푝1 = 푊: (i) if 푝1 happens before 푝푘−1, i.e., 푝1 <푠 푝푘−1, then since 푝푘−1 <푠 푞푘 we have 푝1 <푠 푞푘, meaning a POP from 푇1 to 푇푘; together with the original POP from 푇푘 to 푇1, 푇1 and 푇푘 form a cycle. (ii) if 푝1 happens later than 푝푘−1, i.e., 푝푘−1 <푠 푝1, there is a POP from 푇푘−1 to 푇1, and we remove 푇푘 to obtain a new cycle 퐺′ = {(푇1,푇2), (푇2,푇3), ..., (푇푘−1,푇1)}. By the induction hypothesis, the theorem holds for 푁푇 = 푘 − 1.

If 푝1 = 푅, then 푞2 = 푊. Likewise, (i) if 푞2 <푠 푝푘−1, then 푇1, 푇2, and 푇푘 form a cycle, which can be reduced to a 2-transaction cycle by Lemma 3.8. (ii) if 푝푘−1 <푠 푞2, we remove 푇1 and 푇푘 to obtain a new cycle 퐺′ = {(푇2,푇3), (푇3,푇4), ..., (푇푘−1,푇2)}. By the induction hypothesis, the theorem holds for 푁푇 < 푘.
In general, if a cycle involves only one object, we can find a representative cycle of exactly two transactions. This property is meaningful: when only one object is involved, evaluating two-transaction cycles is sufficient to represent cycles with more transactions. Next, we consider POP cycles with more than one object.
Lemma 3.10. If a POP cycle contains more than two POPs accessing one object, then it contains a cycle with at most two connected POPs accessing this object.
Proof. We first assume the POP cycle is 퐺 = {{푇1,푇2, . . .,푇푛}, {(푇1,푇2), (푇2,푇3), . . ., (푇푛,푇1)}}. The POP edges accessing the same object 푥 are F (푃푂푃푖 [푥]) = (푇푖,푇푖+1) and F (푃푂푃 푗 [푥]) = (푇푗,푇푗+1), with 푗 ≠ 푖. We assume {(푝푖,푞푖+1 [푥]), (푝 푗,푞 푗+1 [푥])} are the object-oriented operations forming the edges 푃푂푃푖 [푥] and 푃푂푃 푗 [푥]. Then 퐺 can be simplified into the following graphs.
If 푝푖 = 푊: (i) if 푝푖 <푠 푝 푗, then since 푝 푗 <푠 푞 푗+1 we have 푝푖 <푠 푞 푗+1, meaning a POP from 푇푖 to 푇푗+1. We get 퐺′ = {{푇1,푇2, ...,푇푖,푇푗+1, ...,푇푛}, {(푇1,푇2), ..., (푇푖,푇푗+1), ..., (푇푛,푇1)}} with a new POP edge (푇푖,푇푗+1) accessing 푥. (ii) if 푝 푗 <푠 푝푖, there is a POP from 푇푗 to 푇푖, and we get 퐺′ = {{푇푖,푇푖+1, ...,푇푗 }, {(푇푖,푇푖+1), ..., (푇푗−1,푇푗 ), (푇푗,푇푖)}} with a new POP edge (푇푗,푇푖). The adjoining edges (푇푗,푇푖) and (푇푖,푇푖+1), with ordering 푝 푗 <푠 푝푖 <푠 푞푖+1, both access the same object 푥. (ii-a) No new POP edge arises between them only when 푝 푗 = 푞푖+1 = 푅, i.e., F⁻¹(푇푗,푇푖) ∈ {푅푗푊푖, 푅푗퐶 푗푊푖 } and F⁻¹(푇푖,푇푖+1) ∈ {푊푖푅푖+1,푊푖퐶푖푅푖+1}. (ii-b) Otherwise, there is a POP from 푝 푗 to 푞푖+1, so the POP cycle can be further simplified to 퐺′ = {{푇푖,푇푖+1, ...,푇푗 }, {(푇푗,푇푖+1), ..., (푇푗−1,푇푗 )}} with a new POP edge (푇푗,푇푖+1).
If 푝푖 = 푅, then 푞푖+1 = 푊. (i) If 푞푖+1 <푠 푝 푗, then since 푝 푗 <푠 푞 푗+1 we have 푞푖+1 <푠 푞 푗+1, meaning a POP from 푇푖+1 to 푇푗+1. We get 퐺′ = {{푇1,푇2, ...,푇푖,푇푖+1,푇푗+1, ...,푇푛}, {(푇1,푇2), ..., (푇푖+1,푇푗+1), ..., (푇푛,푇1)}} with a new POP edge (푇푖+1,푇푗+1). The adjoining edges (푇푖,푇푖+1) and (푇푖+1,푇푗+1), with ordering 푝푖 <푠 푞푖+1 <푠 푞 푗+1, both access the same object 푥. (ii-a) If 푞 푗+1 = 푊, the graph 퐺′ can be further simplified by the POP F⁻¹(푇푖,푇푗+1) ∈ {푅푖푊푗+1, 푅푖퐶푖푊푗+1}. (ii-b) Otherwise, if 푞 푗+1 = 푅, the POP edges are F⁻¹(푇푖,푇푖+1) ∈ {푅푖푊푖+1, 푅푖퐶푖푊푖+1} and F⁻¹(푇푖+1,푇푗+1) ∈ {푊푖+1푅푗+1,푊푖+1퐶푖+1푅푗+1}.
By repeating the above steps on object 푥, we obtain a cycle with only one or two edges operating on this object; and if two edges remain, then these two edges are connected.
Theorem 3.11. A POP cycle with 푁푂푏푗 (푁푂푏푗 ≥ 1) objects contains a POP cycle with at most 2푁푂푏푗 transactions.
Proof. When 푁푂푏푗 = 1, Lemma 3.9 has proven the theorem.
**Figure 3: A 4-transaction cycle to its simplified cycles.**
When 푁푂푏푗 ≥ 2, we prove it by contradiction. Without loss of generality, assume there exists a cycle with 2푁푂푏푗 + 1 transactions and 푁푂푏푗 objects that cannot be simplified. The cycle must then include three POP edges accessing the same object, e.g., 푥. However, by Lemma 3.10, we can simplify the cycle to at most two POPs accessing 푥, reducing the original cycle to at most 2푁푂푏푗 transactions, which contradicts the assumption that the cycle cannot be simplified.
Example 3.12. Figure 3(a) depicts a 4-transaction POP cycle 퐺 = {{푇1,푇2,푇3,푇4}, {(푇1,푇2), (푇2,푇3), (푇3,푇4), (푇4,푇1)}} with 푃푂푃푠 = {푅1푊2 [푥], 푅2퐶2푊3 [푦], 푅3퐶3푊4 [푥], 푅4푊1 [푥]}. To simplify: (i) if 푅3 <푠 푊2, we obtain a new POP from 푇3 to 푇2, and a 2-transaction POP cycle 퐺′ = {(푇2,푇3), (푇3,푇2)}, as shown in Figure 3(b). (ii) if 푊2 <푠 푅3, then 푇1, 푇2, and 푇4 form a cycle, as shown in Figure 3(c), by a new POP from 푇2 to 푇4. By Lemma 3.8, we keep simplifying: (ii-a) if 푊2 <푠 푅4, then since 푅4 <푠 푊1 we have 푊2 <푠 푊1, meaning a POP from 푇2 to 푇1 (Figure 3(d)); (ii-b) if 푅4 <푠 푊2, then 푇2 and 푇4 form a cycle (Figure 3(e)).
Generator. We provide two classifications. The first is based on the primitive conflict dependencies, i.e., WR, WW, and RW: (i) **Read Anomaly Type (RAT)**, if the cycle has at least one 푊푅 POP; (ii) **Write Anomaly Type (WAT)**, if the cycle has no 푊푅 POP but at least one 푊푊 POP; (iii) **Intersect Anomaly Type (IAT)**, if the cycle has neither 푊푅 nor 푊푊 POPs. This classification closely relates to the three traditional conflicts and current knowledge, leading to a better evaluation and analysis of POP behaviors. Based on this classification, Read Skew (푅1 [푥0]푊2 [푥1]푊2 [푦1]푅1 [푦1]) and Read Skew Committed (our naming) (푅1 [푥0]푊2 [푥1]푊2 [푦1]퐶2푅1 [푦1]) are different anomalies in different categories: Read Skew, with a WR POP, belongs to RAT, while Read Skew Committed, with neither WR nor WW, belongs to IAT.
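This first classification is mechanical once a cycle's POP types are known. A minimal sketch (assuming POP types are encoded as the strings used in Table 2):

```python
def primitive_type(pop_types):
    """Classify a POP cycle by primitive conflicts (Section 3.3):
    RAT if it contains a WR POP; else WAT if it contains WW; else IAT."""
    if "WR" in pop_types:
        return "RAT"
    if "WW" in pop_types:
        return "WAT"
    return "IAT"

# Read Skew  R1[x0] W2[x1] W2[y1] R1[y1]          -> POPs {RW[x], WR[y]}
print(primitive_type({"RW", "WR"}))   # RAT
# Read Skew Committed  R1[x0] W2[x1] W2[y1] C2 R1[y1] -> POPs {RW[x], WCR[y]}
print(primitive_type({"RW", "WCR"}))  # IAT
# Dirty Write  W1[x1] W2[x2] ...                  -> POPs containing WW
print(primitive_type({"WW", "WA"}))   # WAT
```

Note how adding a single commit turns Read Skew's WR POP into a WCR POP and moves the anomaly from RAT to IAT.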
By Theorem 3.11, given finite numbers of transactions (푁푇) and objects (푁표푏푗), the simplified cycles are also finite and can be deterministically evaluated. This classification controls the actual number of evaluation cases. The second classification is based on the 푁푇 and 푁표푏푗 of cycles, i.e.: (i) **Single Data Anomaly (SDA)**, if 푁푇 = 2, 푁표푏푗 = 1; (ii) **Double Data Anomaly (DDA)**, if 푁푇 = 2, 푁표푏푗 = 2; (iii) **Multi-transaction Data Anomaly (MDA)**, otherwise. So the SDAs and
**Table 2: Data anomaly formal expression, classification, and their POP combinations in POP cycles.**

| Type | Class | No | Anomaly | Formal expression | POP combinations |
|------|-------|----|---------|-------------------|------------------|
| RAT | SDA | 1 | Dirty Read [14, 34, 52] | 푊푖[푥푚]...푅푗[푥푚]...퐴푖 | 푊푖푅푗[푥] − 푅푗퐴푖[푥] |
| RAT | SDA | 2 | Non-repeatable Read [34] | 푅푖[푥푚]...푊푗[푥푚+1]...푅푖[푥푚+1] | 푅푖푊푗[푥] − 푊푗푅푖[푥] |
| RAT | SDA | 3 | Intermediate Read [14, 52] | 푊푖[푥푚]...푅푗[푥푚]...푊푖[푥푚+1] | 푊푖푅푗[푥] − 푅푗푊푖[푥] |
| RAT | SDA | 4 | **Intermediate Read Committed** | 푊푖[푥푚]...푅푗[푥푚]...퐶푗...푊푖[푥푚+1] | 푊푖푅푗[푥] − 푅푗퐶푗푊푖[푥] |
| RAT | SDA | 5 | **Lost Self Update** | 푊푖[푥푚]...푊푗[푥푚+1]...푅푖[푥푚+1] | 푊푖푊푗[푥] − 푊푗푅푖[푥] |
| RAT | DDA | 6 | **Write-read Skew** | 푊푖[푥푚]...푅푗[푥푚]...푊푗[푦푛]...푅푖[푦푛] | 푊푖푅푗[푥] − 푊푗푅푖[푦] |
| RAT | DDA | 7 | **Write-read Skew Committed** | 푊푖[푥푚]...푅푗[푥푚]...푊푗[푦푛]...퐶푗...푅푖[푦푛] | 푊푖푅푗[푥] − 푊푗퐶푗푅푖[푦] |
| RAT | DDA | 8 | **Double-write Skew 1** | 푊푖[푥푚]...푅푗[푥푚]...푊푗[푦푛]...푊푖[푦푛+1] | 푊푖푅푗[푥] − 푊푗푊푖[푦] |
| RAT | DDA | 9 | **Double-write Skew 1 Committed** | 푊푖[푥푚]...푅푗[푥푚]...푊푗[푦푛]...퐶푗...푊푖[푦푛+1] | 푊푖푅푗[푥] − 푊푗퐶푗푊푖[푦] |
| RAT | DDA | 10 | **Double-write Skew 2** | 푊푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...푅푖[푦푛] | 푊푖푊푗[푥] − 푊푗푅푖[푦] |
| RAT | DDA | 11 | Read Skew [18] | 푅푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...푅푖[푦푛] | 푅푖푊푗[푥] − 푊푗푅푖[푦] |
| RAT | DDA | 12 | **Read Skew 2** | 푊푖[푥푚]...푅푗[푥푚]...푅푗[푦푛]...푊푖[푦푛+1] | 푊푖푅푗[푥] − 푅푗푊푖[푦] |
| RAT | DDA | 13 | **Read Skew 2 Committed** | 푊푖[푥푚]...푅푗[푥푚]...푅푗[푦푛]...퐶푗...푊푖[푦푛+1] | 푊푖푅푗[푥] − 푅푗퐶푗푊푖[푦] |
| RAT | MDA | 14 | **Step RAT [26, 28]** | ...푊푖[푥푚]...푅푗[푥푚]..., and 푁표푏푗 ≥ 2, 푁푇 ≥ 3 | ...푊푖푅푗[푥]... |
| WAT | SDA | 15 | Dirty Write [34] | 푊푖[푥푚]...푊푗[푥푚+1]...퐴푖/퐶푖 | 푊푖푊푗[푥] − 푊푗퐴푖/퐶푖[푥] |
| WAT | SDA | 16 | **Full Write** | 푊푖[푥푚]...푊푗[푥푚+1]...푊푖[푥푚+2] | 푊푖푊푗[푥] − 푊푗푊푖[푥] |
| WAT | SDA | 17 | **Full Write Committed** | 푊푖[푥푚]...푊푗[푥푚+1]...퐶푗...푊푖[푥푚+2] | 푊푖푊푗[푥] − 푊푗퐶푗푊푖[푥] |
| WAT | SDA | 18 | Lost Update [18] | 푅푖[푥푚]...푊푗[푥푚+1]...푊푖[푥푚+2] | 푅푖푊푗[푥] − 푊푗푊푖[푥] |
| WAT | SDA | 19 | **Lost Self Update Committed** | 푊푖[푥푚]...푊푗[푥푚+1]...퐶푗...푅푖[푥푚+1] | 푊푖푊푗[푥] − 푊푗퐶푗푅푖[푥] |
| WAT | DDA | 20 | **Double-write Skew 2 Committed** | 푊푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...퐶푗...푅푖[푦푛] | 푊푖푊푗[푥] − 푊푗퐶푗푅푖[푦] |
| WAT | DDA | 21 | **Full-write Skew** | 푊푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...푊푖[푦푛+1] | 푊푖푊푗[푥] − 푊푗푊푖[푦] |
| WAT | DDA | 22 | **Full-write Skew Committed** | 푊푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...퐶푗...푊푖[푦푛+1] | 푊푖푊푗[푥] − 푊푗퐶푗푊푖[푦] |
| WAT | DDA | 23 | **Read-write Skew 1** | 푅푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...푊푖[푦푛+1] | 푅푖푊푗[푥] − 푊푗푊푖[푦] |
| WAT | DDA | 24 | **Read-write Skew 2** | 푊푖[푥푚]...푊푗[푥푚+1]...푅푗[푦푛]...푊푖[푦푛+1] | 푊푖푊푗[푥] − 푅푗푊푖[푦] |
| WAT | DDA | 25 | **Read-write Skew 2 Committed** | 푊푖[푥푚]...푊푗[푥푚+1]...푅푗[푦푛]...퐶푗...푊푖[푦푛+1] | 푊푖푊푗[푥] − 푅푗퐶푗푊푖[푦] |
| WAT | MDA | 26 | **Step WAT** | ...푊푖[푥푚]...푊푗[푥푚+1]..., and 푁표푏푗 ≥ 2, 푁푇 ≥ 3, and not including (...푊푖1[푦푛]...푅푗1[푦푛]...) | ...푊푖푊푗[푥]... |
| IAT | SDA | 27 | Non-repeatable Read Committed [34] | 푅푖[푥푚]...푊푗[푥푚+1]...퐶푗...푅푖[푥푚+1] | 푅푖푊푗[푥] − 푊푗퐶푗푅푖[푥] |
| IAT | SDA | 28 | **Lost Update Committed** | 푅푖[푥푚]...푊푗[푥푚+1]...퐶푗...푊푖[푥푚+2] | 푅푖푊푗[푥] − 푊푗퐶푗푊푖[푥] |
| IAT | DDA | 29 | Read Skew Committed [18] | 푅푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...퐶푗...푅푖[푦푛] | 푅푖푊푗[푥] − 푊푗퐶푗푅푖[푦] |
| IAT | DDA | 30 | **Read-write Skew 1 Committed** | 푅푖[푥푚]...푊푗[푥푚+1]...푊푗[푦푛]...퐶푗...푊푖[푦푛+1] | 푅푖푊푗[푥] − 푊푗퐶푗푊푖[푦] |
| IAT | DDA | 31 | Write Skew [19] | 푅푖[푥푚]...푊푗[푥푚+1]...푅푗[푦푛]...푊푖[푦푛+1] | 푅푖푊푗[푥] − 푅푗푊푖[푦] |
| IAT | DDA | 32 | **Write Skew Committed** | 푅푖[푥푚]...푊푗[푥푚+1]...푅푗[푦푛]...퐶푗...푊푖[푦푛+1] | 푅푖푊푗[푥] − 푅푗퐶푗푊푖[푦] |
DDAs are finite and will be evaluated one by one, while the MDAs are infinite and will each be evaluated by one typical case. The four standard anomalies are SDAs. We believe this classification is sufficient to illustrate the core idea and to explore relatively complete inconsistent behaviors, but we do not preclude finer classifications with a one-to-one mapping of anomalies to fixed numbers of transactions and objects for a more detailed evaluation. As future work, we also plan to test databases with more random cycles over larger numbers of transactions and objects.
Table 2 shows all data anomaly types and their classification. The anomaly names in BOLD are 20+ new types of anomalies that have never been reported (we name an anomaly with "Committed" when it has a WCW, WCR, or RCW POP); those reported as Step RAT and Step IAT are only a tiny portion of them. Unlike previous tools (e.g., Elle [16]), which issue queries randomly and find anomalies by accident, our generator provides exact sequences of schedules (more details in Section 4.2), making the consistency check deterministic and explainable, i.e., easy to reproduce and to debug and analyze.
Corollary 3.13. If a schedule satisfies consistency, then the schedule does not have any data anomalies in Table 2.
Current research mainly focuses on centralized databases. There is little research on distributed consistency, and it remains ambiguous how to perform a distributed check. We first define distributed data anomalies.
Definition 3.14. Distributed Data Anomaly. A distributed data anomaly exists if the represented POP graph has a cycle and the cycle involves at least two objects stored in different partitions.
The **distributed consistency check** tests whether a distributed data anomaly exists. The standard anomalies are single-object, so they are not distributed and are insufficient for a distributed check. By our classification, we can construct a distributed data anomaly from a DDA or an MDA. We specifically designed the test cases to access different objects from different partitions, and sometimes from different tables. The design requires table partitioning, and the data is expected to be inserted/updated in different partitions/shards (e.g., via PARTITION BY RANGE in SQL).
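Definition 3.14 can be checked mechanically once object placement is known. A minimal sketch (the `partition_of` map is an assumption standing in for the placement produced by, e.g., PARTITION BY RANGE):

```python
def is_distributed_anomaly(cycle_pops, partition_of):
    """Sketch of Definition 3.14: a cyclic POP set is a distributed
    anomaly if its objects span at least two partitions."""
    return len({partition_of[o] for *_, o in cycle_pops}) >= 2

# Read Skew cycle over x and y, encoded as (type, from_txn, to_txn, obj);
# partition_of stands in for the database's partition placement.
cycle = {("RW", 1, 2, "x"), ("WR", 2, 1, "y")}
print(is_distributed_anomaly(cycle, {"x": "p0", "y": "p1"}))  # True
print(is_distributed_anomaly(cycle, {"x": "p0", "y": "p0"}))  # False
```

A single-object (SDA) cycle can never satisfy this predicate, which is why the distributed test cases are built from DDAs and MDAs.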
#### 4 EVALUATION
In this section, we evaluate eleven real databases with 33 designed anomaly test cases.
#### 4.1 Setup
We deployed 2 Linux machines, each with 8 cores (Intel(R) Xeon(R) Gold 6133 CPU @ 2.50GHz) and 16 GB of memory. The centralized evaluation used only one machine. We tested the distributed OceanBase, TDSQL, and CockroachDB through their cloud services. We installed unixODBC as the common driver, and some database drivers were installed via the trial-version connector from CData [1]. The tests are coded in C++. Each transaction is issued on one thread/core. The deadlock or wait_die timeout is typically set to 20 seconds, depending on the case. The source code is available on GitHub [3]. We execute transactions in parallel while using time-sleeps (e.g., 0.1 second in centralized tests) between queries to force execution sequences.
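The sleep-based sequencing can be mimicked with plain threads. The following toy (our own illustration; it logs operation names instead of issuing SQL, and the delays are arbitrary) forces the Read Skew order 푅1[푥0] 푊2[푦1] 푊2[푥1] 푅1[푦1]:

```python
# Two "sessions" as threads; per-operation sleeps stagger the issue
# times so the global order matches the target schedule.
import threading
import time

log = []
lock = threading.Lock()

def session(ops):
    for delay, name in ops:
        time.sleep(delay)              # force the intended interleaving
        with lock:
            log.append(name)

t1 = threading.Thread(target=session,
                      args=([(0.0, "R1[x0]"), (0.3, "R1[y1]")],))
t2 = threading.Thread(target=session,
                      args=([(0.1, "W2[y1]"), (0.1, "W2[x1]")],))
t1.start(); t2.start()
t1.join(); t2.join()
print(log)  # ['R1[x0]', 'W2[y1]', 'W2[x1]', 'R1[y1]']
```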
We evaluated eleven real databases, i.e., MySQL [7], MyRocks
[6], TDSQL [12], SQL Server [11], TiDB [13], Oracle [9], OceanBase
[8], Greenplum [4], PostgreSQL [10], CockroachDB [2], MongoDB
[5]. Most databases support the four standard isolation levels, i.e., Serializable (SER), Repeatable Read (RR), Read Committed (RC), and Read Uncommitted (RU). MongoDB supports only the Snapshot Isolation (SI) level. Greenplum supports the SER, RC, and RU levels. OceanBase supports two modes, i.e., MySQL mode (RR and RC supported) and Oracle mode (SER, RR, and RC supported). TiDB supports the RR and RC levels, as well as its Optimistic (OPT) level. SQL Server also supports two additional SI levels in optimistic mode, i.e., the default one (SI) and the read-committed snapshot level (RCSI). Table 4 shows their default ("★") and other supported levels. Some levels in one database perform identically, so we group them together (e.g., RC and RU in PostgreSQL). We omit MyRocks and TDSQL in most cases, as they perform the same as MySQL.
#### 4.2 Construction of Test Cases
We constructed all 33 types of data anomalies described in Table 2. Note that the SDAs and DDAs are finite, one-to-one-mapped anomalies, while an MDA denotes a set of anomalies, so we design one typical case for each MDA. Step RAT, Step WAT, and Step IAT have designed schedules with three WR, WW, and RW POPs, respectively. For example, the schedule 푆2 of the Read Skew anomaly can be executed in the following order:

푆2 = 푅1[푥0] 푊2[푥1] 푊2[푦1] 푅1[푦1] (2)

However, 푊2[푦1] may be delayed because 푊2[푥1] may be blocked by its conflict with 푅1[푥0], making the other conflict disappear. So, we may let non-conflicting operations start first to ensure all intended conflicts occur. For example, the schedule 푆3 of the Read Skew anomaly can be executed in the following order:

푆3 = 푅1[푥0] 푊2[푦1] 푊2[푥1] 푅1[푦1] (3)

In the schedule 푆3, note that 푊2[푦1] starts earlier than 푊2[푥1], as 푊2[푦1] does not conflict with 푅1[푥0]. After the execution, we ensure the occurrence of the two conflicts (푅1[푥0], 푊2[푥1]) and (푊2[푦1], 푅1[푦1]). 푆2 and 푆3 are actually equivalent schedules with the same version order <푠. We then give the schedule 푆4 of the Read Skew Committed anomaly:

푆4 = 푅1[푥0] 푊2[푦1] 푊2[푥1] 퐶2 푅1[푦1] (4)

The traditional conflict graph treats 푆3 and 푆4 as no different. However, we recognize different POPs in 푆3 and
**Table 3: PostgreSQL Evaluation by Read Skew and Read Skew Committed at the RC level and by Lost Update Committed and Step WAT at the SER level.**

Preparation
1 DROP TABLE IF EXISTS t1
2 CREATE TABLE t1 (k INT PRIMARY KEY, v INT)
3 INSERT INTO t1 VALUES (0, 0)
4 INSERT INTO t1 VALUES (1, 0)

A **Generator: Read Skew (푅1[푥0] 푊2[푦1] 푊2[푥1] 푅1[푦1])**
Q | Session 1: 푇1-SQL | Operations | Session 2: 푇2-SQL | Result
1 | Begin | | |
2 | SELECT * FROM t1 WHERE k=0 | 푅1[푥0] | | (0,0)
3 | | RW | Begin |
4 | | 푊2[푦1] | UPDATE t1 SET v=1 WHERE k=1 |
5 | | RW 푊2[푥1] | UPDATE t1 SET v=1 WHERE k=0 |
6 | SELECT * FROM t1 WHERE k=1 | 푅1[푦1] → 푅1[푦0] | | Snapshot (1,0)
7 | | 퐶2 | Commit |
8 | Commit | 퐶1 | |
**Checker: Pass (P) with consistency**

B **Generator: Read Skew Committed (푅1[푥0] 푊2[푦1] 푊2[푥1] 퐶2 푅1[푦1])**
Q | Session 1: 푇1-SQL | Operations | Session 2: 푇2-SQL | Result
1 | Begin | | |
2 | SELECT * FROM t1 WHERE k=0 | 푅1[푥0] | | (0,0)
3 | | RW | Begin |
4 | | 푊2[푦1] | UPDATE t1 SET v=1 WHERE k=1 |
5 | | WCR 푊2[푥1] | UPDATE t1 SET v=1 WHERE k=0 |
6 | | 퐶2 | Commit |
7 | SELECT * FROM t1 WHERE k=1 | 푅1[푦1] | | MVCC+RC (1,1)
8 | Commit | 퐶1 | |
**Checker: Anomaly (A) detected**

C **Generator: Lost Update Committed (푅1[푥0] 푊2[푥1] 퐶2 푊1[푥2])**
Q | Session 1: 푇1-SQL | Operations | Session 2: 푇2-SQL | Result
1 | Begin | | |
2 | SELECT * FROM t1 WHERE k=0 | 푅1[푥0] RW | | (0,0)
3 | | | Begin |
4 | | 푊2[푥1] | UPDATE t1 SET v=1 WHERE k=0 |
5 | | WCW 퐶2 | Commit |
6 | UPDATE t1 SET v=1 WHERE k=0 | 푊1[푥2] | | Abort by rules
**Checker: Rollback (R) by rules (WCW)**

D **Generator: Step WAT (푊1[푥1] 푊2[푦1] 푊3[푧1] 푊3[푦2] 푊2[푥2] 푊1[푧2])**
Q | Session 1: 푇1-SQL | Session 2: 푇2-SQL | Session 3: 푇3-SQL | Result
1 | Begin | | |
2 | 푊1[푥1]: UPDATE t1 SET v=1 WHERE k=0 | Begin | |
3 | | 푊2[푦1]: UPDATE t1 SET v=2 WHERE k=1 | Begin |
4 | | | 푊3[푧1]: UPDATE t1 SET v=3 WHERE k=2 |
5 | | WW 푊2[푥2]: UPDATE t1 SET v=2 WHERE k=0 | WW 푊3[푦2]: UPDATE t1 SET v=3 WHERE k=1 | 푊2, 푊3 waited
6 | 푊1[푧2]: UPDATE t1 SET v=1 WHERE k=2 | | | 2PL Wait deadlock
**Checker: Deadlock (D) detected**
푆4, and later our evaluation will illustrate their different behaviors under different isolation levels. Tables 3(A) and 3(B) depict the detailed preparation and execution steps, as SQL queries, for 푆3 and 푆4 on PostgreSQL at the RC level. In all our tests, the Begin command is issued together with the first operation, while the Commit command may occur in any order after the schedule unless mentioned otherwise. PostgreSQL passed the Read Skew schedule but reported an anomalous result for the Read Skew Committed schedule. Previous works (e.g., Elle [16]) often only
detected anomaly cases; in this paper, we also analyze in depth the potential anomaly cases that are prevented by databases, as shown in Table 3(A, C, and D).
To construct the distributed test cases, we let keys (e.g., 푘=0 and 푘=1 in the Read Skew) spread into different distributed partitions (e.g., by PARTITION BY RANGE). Greenplum, which by default holds a write lock on a table/segment, needs a global detector enabled to support concurrent writes. Another way is to simulate the cases with multiple tables, each table holding one row/key.
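The 푆2-to-푆3 reordering described in this section can be sketched as a small pass over the generated schedule: a same-transaction operation that does not conflict with the already-issued prefix is started first, so no intended conflict is masked by blocking. The encoding and helper names below are our own illustration, not the paper's generator code.

```python
# Operations are (txn, kind, obj) with kind "R" or "W".

def conflicts(a, b):
    """Two ops conflict if different txns touch the same object and at
    least one of them is a write."""
    return a[0] != b[0] and a[2] == b[2] and "W" in (a[1], b[1])

def reorder_nonconflicting(schedule):
    """Swap adjacent same-transaction ops (on distinct objects) so the
    one that does NOT conflict with the issued prefix runs first."""
    out = list(schedule)
    for i in range(1, len(out) - 1):
        a, b = out[i], out[i + 1]
        if a[0] == b[0] \
                and any(conflicts(a, p) for p in out[:i]) \
                and not any(conflicts(b, p) for p in out[:i]):
            out[i], out[i + 1] = b, a
    return out

S2 = [(1, "R", "x"), (2, "W", "x"), (2, "W", "y"), (1, "R", "y")]
S3 = reorder_nonconflicting(S2)   # W2[y] now runs before W2[x]
print(S3 == [(1, "R", "x"), (2, "W", "y"), (2, "W", "x"), (1, "R", "y")])  # True
```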
#### 4.3 Consistency Check in Databases
This part provides a general summary of the evaluation results. Table 4 shows the overall evaluation results of the 11 databases at different isolation levels under the 33 test cases constructed via SQL queries (except for MongoDB). The evaluation is cost-effective and reproducible, as we do not rely on time- and resource-consuming random workloads but specifically and deterministically generate representative inconsistent scenarios. The average time for each level to finish the 33 tests is around 1 minute. The original and executed schedules are available for analysis and debugging. The result behaviors are classified into two types, i.e., anomaly (A) and consistency. In an anomaly occurrence, the data anomaly is not recognized by the database, resulting in data inconsistency, meaning the executed schedule has no equivalent serializable execution (i.e., it has a POP cycle). For consistent behavior, the database either passes (P) the anomaly test case with a serializable result (no POP cycle) or rolls back transactions due to rules (R), deadlock detection (D), or a reached timeout (T).
**SER level:** All tested databases guarantee no anomalies except Oracle, OceanBase, and Greenplum. These three databases claim SER levels yet perform equivalently to the SI level. To the best of our knowledge, researchers previously discovered Oracle's inconsistency at its SER level only via the Write Skew anomaly [19]. However, we found that anomalies also happen when feeding the test cases of Write-read Skew, Write-read Skew Committed, Write Skew Committed, Step RAT, and Step IAT. These are similar anomalies, yet previous work could hardly quantify such cases. More importantly, with Coo we can build an infinite number of various-object Step RAT and Step IAT cases to reproduce anomaly scenarios, which is non-trivial with traditional tests or CC protocols.
**Weaker isolation levels:** Unlike the original isolation levels, which give only a coarse sense of a few anomalies, we can recognize and analyze many more newly found anomalies between levels, and some anomalies are hard to fit into one specific level (e.g., Lost Update Committed was aborted in PostgreSQL, as shown in Table 3(C), yet appeared in MySQL at the RR level). More POPs are allowed at weaker levels, and some anomalies are expected to appear through combinations of these allowed POPs. Roughly speaking, anomalies of RAT types have the most Pass cases, anomalies of WAT types have the most varied rollback cases, and databases exhibit the most anomalies on test cases of IAT types. We explain more details of POP behaviors and anomaly occurrences in the following.
Haixiang Li, Yuxing Chen, Xiaoyan Li
#### 4.4 Detailed Evaluation of POP Graphs
This part explains POP behaviors and data anomaly occurrences in more detail. Specifically, we discuss consistency and consistent behaviors via POPs and POP cycles. Firstly, POPs are the unit of conflict handled by CC protocols (e.g., MVCC [20] and 2PL [31]). CC protocols apply different rules to allow or forbid these POPs. Roughly speaking, MySQL/MyRocks and TiDB mainly use 2PL, and support MVCC at the RR and RC levels. SQL Server uses pure 2PL and supports MVCC at its SI level. Other databases support MVCC at all levels and use 2PL for write locks. Secondly, POP cycles are specific anomalies, and consistency is guaranteed if cycles are destroyed based on these POP behaviors.
4.4.1 **POP Behaviors.**
This part discusses the behaviors of POPs at different isolation levels. Table 5 summarizes the behaviors of the three primitive POPs, i.e., WR, WW, and RW, corresponding to our test cases of the three types RAT, WAT, and IAT. The core CC protocols used in the different databases are 2PL and MVCC; some use combined protocols.
The WR is waited on by MySQL at the SER level and by SQL Server at the SER, RR, and RC levels, and is allowed in the other cases. First, WR is indeed allowed in 2PL databases at the RU level, as they allow a read of an uncommitted write. Second, WR is allowed under MVCC (e.g., PostgreSQL at all levels and MySQL at the RR and RC levels) by reading the old committed version, transforming it into RW. For example, MySQL executed the Intermediate Read (푊1[푥1] 푅2[푥1] 푊1[푥2]) as expected at the RU level but into a non-anomaly (푊1[푥1] 푅2[푥0] 푊1[푥2]) at the RC level.
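The WR-to-RW transformation can be illustrated with a toy multi-version store. `MVObject` is our own simplified model (a list of versions with commit flags), not any database's actual implementation:

```python
# A read-committed read returns the newest *committed* version, so a
# read concurrent with an uncommitted write sees the old version:
# the WR dependency becomes RW.

class MVObject:
    def __init__(self, initial):
        # Each version is (version_no, value, committed).
        self.versions = [(0, initial, True)]

    def write(self, version_no, value):
        self.versions.append((version_no, value, False))

    def commit(self, version_no):
        self.versions = [(n, v, c or n == version_no)
                         for n, v, c in self.versions]

    def read_committed(self):
        return max((n, v) for n, v, c in self.versions if c)

x = MVObject("x0")
x.write(1, "x1")              # T1 writes x1, not yet committed
print(x.read_committed())     # (0, 'x0') -- WR turned into RW
x.commit(1)
print(x.read_committed())     # (1, 'x1')
```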
The WW is waited on by most evaluated databases at any level, except that MongoDB directly aborts it and TiDB, at its OPT level, prewrites it privately. This is very different from the ANSI SQL standard, which considers WW a Dirty Write and forbids it at any level. In practice, WW is waited on (not immediately aborted) under the 2PL Wait strategy. For example, MySQL and SQL Server passed the Full Write anomaly (푊1[푥1]푊2[푥2]푊1[푥3]), as they executed it into a non-anomaly (푊1[푥1]푊1[푥3]퐶1푊2[푥2]), transforming WW into WCW (discussed later).
The RW is allowed by most evaluated databases at any level, except that at the SER level 2PL databases wait for it and CockroachDB aborts it. Note that SQL Server, using 2PL at the RR level, still waits for RW. For example, when executing the Write Skew anomaly (푅1[푥0] 푅2[푦0]푊2[푥1]푊1[푦1]) at the SER level, 2PL databases (e.g., SQL Server) waited for each other on the two RWs, i.e., (푅1[푥0], 푊2[푥1]) and (푅2[푦0], 푊1[푦1]), yielding deadlocks. PostgreSQL allowed each RW in Write Skew but aborted it when two consecutive RWs formed, via SSI [44], at the SER level, while passing it as an anomaly at the other levels.
Unlike previous analyses that discussed only the primitive conflicts, we explain more POPs in this paper. We exclude RA, WA, and WC from most of the discussion, as they (i) exist only in a 2-transaction single-object cycle and (ii) behave similarly to RW, WW, and WW, respectively. With the Wait strategy, the second operation of a primitive POP waits for the first one to be committed, meaning WR, WW, and RW turn into WCR, WCW, and RCW, respectively. We then discuss the more detailed behaviors of WCR, WCW, and RCW.
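Mechanically, the committed variants can be told apart from the primitive POPs by checking whether the first operation's transaction commits between the two conflicting operations. The sketch below (our own encoding, not the paper's checker code) reproduces this naming:

```python
# Ops are (txn, kind, obj), kind in {"R", "W", "C"}; a commit is
# (txn, "C", None). The POP between two conflicting ops is named
# WR/WW/RW, with a "C" inserted if the first op's txn commits between.

def classify_pops(schedule):
    pops = []
    for i, (t1, k1, o1) in enumerate(schedule):
        if k1 == "C":
            continue
        for j in range(i + 1, len(schedule)):
            t2, k2, o2 = schedule[j]
            if k2 == "C" or t1 == t2 or o1 != o2 or "W" not in (k1, k2):
                continue
            committed = any(t == t1 and k == "C"
                            for t, k, _ in schedule[i + 1:j])
            pops.append((t1, t2, k1 + ("C" if committed else "") + k2))
    return pops

# Read Skew Committed: R1[x0] W2[y1] W2[x1] C2 R1[y1]
s = [(1, "R", "x"), (2, "W", "y"), (2, "W", "x"),
     (2, "C", None), (1, "R", "y")]
print(classify_pops(s))  # [(1, 2, 'RW'), (2, 1, 'WCR')]
```

Dropping the commit from the schedule yields `[(1, 2, 'RW'), (2, 1, 'WR')]`, i.e., the plain Read Skew POPs.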
**Table 4: The consistency check results of the 11 databases. Anomaly (A): data anomalies are not recognized by the database, resulting in data inconsistencies. Consistency: the database passed (P) the fed anomaly test case, or rolled back a data anomaly by Rules (R), Deadlock Detection (D), or Timeout (T) to guarantee consistency.**
The WCR occurs when a committed write is read: after the transaction performing the write commits, other transactions can read the data. For example, the Read Skew Committed (푅1[푥0]푊2[푥1]푊2[푦1]퐶2푅1[푦1]) was executed as expected by most databases (e.g., PostgreSQL, MySQL) at the RC and RU levels. The WCR is formed by 푊2[푦1]퐶2푅1[푦1] (compared to the Read Skew (푅1[푥0]푊2[푥1]푊2[푦1]푅1[푦1]), where no WCR exists). However, at the SER or RR level with snapshots enabled (e.g., PostgreSQL), which require reading the snapshot version 푦0, Read Skew Committed was executed into a non-anomaly (푅1[푥0]푊2[푥1]푊2[푦1]퐶2푅1[푦0]), transforming WCR into RW.
The WCW occurs when a write is allowed after a concurrent write has committed. The WCW is allowed in most cases but not in databases with only write locks (e.g., PostgreSQL and Oracle) at the SER, SI, and RR levels. For example, the Dirty Write (푊1[푥1]푊2[푥2]퐶1) passed in MySQL, as it was executed into a non-anomaly (푊1[푥1]퐶1푊2[푥2]), where only one WCW exists.
However, PostgreSQL aborted it at the SER and RR levels due to the WCW POP. Similar cases are Full Write and Full Write Committed.
The RCW behaves very much like RW and is allowed in all databases. For example, the Intermediate Read and the Intermediate Read Committed, having RW and RCW respectively, performed quite the same at the different levels. At SER, 2PL databases actually executed the Intermediate Read into the Intermediate Read Committed. Similar cases happened between Read Skew 2 and Read Skew 2 Committed, and between Read-write Skew 2 and Read-write Skew 2 Committed. One exception is that Oracle handled RW and RCW differently, as it passed Write Skew (with RW) but aborted Write Skew Committed (with RCW).
In summary, at the SER level, 2PL databases (e.g., MySQL, SQL Server) do not allow WW, WR, and RW due to 2PL Wait. However, they allow WCW, WCR, and RCW (as shown by the SDA cases in Table 4). Other databases (e.g., PostgreSQL and Oracle) do not allow WW at any level and do not allow WCW at the SER and RR levels,
**Table 5: Databases behaviors when meeting WW, WR, and**
**RW POPs. 2PL(wait)/2PL(abort) stands for the waiting/abort**
**of POPs, and MV(trans) stands for the transformation from**
**WR to RW by MVCC.**
POPs DBs SER RR RC RU
MySQL/TDSQL 2PL(wait) MV(trans) MV(trans) allow
SQL Server 2PL(wait) 2PL(wait) 2PL(wait) allow
SQL Server (SI) / MV(trans) MV(trans) /
TiDB / MV(trans) MV(trans) /
TiDB (OPT) / / MV(trans) /
WR Oracle MV(trans) / MV(trans) /
OceanBase (Oracle) MV(trans) MV(trans) MV(trans) /
OceanBase (MySQL) / MV(trans) MV(trans) /
Greenplum MV(trans) / MV(trans) MV(trans)
PostgreSQL MV(trans) MV(trans) MV(trans) MV(trans)
CockroachDB MV(trans) / / /
MongoDB / MV(trans) / /
MySQL/TDSQL 2PL(wait) 2PL(wait) 2PL(wait) 2PL(wait)
SQL Server 2PL(wait) 2PL(wait) 2PL(wait) 2PL(wait)
SQL Server (SI) / 2PL(wait) 2PL(wait) /
TiDB / 2PL(wait) 2PL(wait) /
TiDB (OPT) / / prewrite /
WW
Oracle 2PL(wait) / 2PL(wait) /
OceanBase (Oracle) 2PL(wait) 2PL(wait) 2PL(wait) /
OceanBase (MySQL) / 2PL(wait) 2PL(wait) /
Greenplum 2PL(wait) / 2PL(wait) 2PL(wait)
PostgreSQL 2PL(wait) 2PL(wait) 2PL(wait) 2PL(wait)
CockroachDB 2PL(wait) / / /
MongoDB / 2PL(abort) / /
MySQL/TDSQL 2PL(wait) allow allow allow
SQL Server 2PL(wait) 2PL(wait) allow allow
SQL Server (SI) / allow allow /
TiDB / allow allow /
TiDB (OPT) / allow allow /
RW
Oracle allow / allow /
OceanBase (Oracle) allow allow allow /
OceanBase (MySQL) / allow allow /
Greenplum allow / allow allow
PostgreSQL SSI(allow) allow allow allow
CockroachDB abort / / /
MongoDB / allow / /
**Table 6: Anomalies at different isolation levels.**
No POP combinations Example anomalies Anomaly types
1 RW Write Skew, Step IAT IAT
2 RW, RCW Write Skew Committed IAT
3 RW, WCW Lost Update Committed IAT
4 RW, WCR Read Skew Committed IAT
5 all but no WW Read Skew, Write-read Skew RAT, IAT
Databases SER RR RC RU
MySQL/TDSQL None 1. 2. 3. 1. 2. 3. 4. 5.
SQL server None None 1. 2. 3. 4. 5.
SQL server (SI) / 1. 2. 1. 2. 3. 4. /
TiDB / 1. 2. 3. 1. 2. 3. 4. /
TiDB (OPT) / / 1. 2. 3. /
Oracle 1. 2. / 1. 2. 3. 4. /
OceanBase (Oracle) 1. 2. 1. 2. 1. 2. 3. 4. /
OceanBase (MySQL) / / 1. 2. 3. 4. /
Greenplum 1. 2. / 1. 2. 3. 4. /
PostgreSQL None 1. 2. 1. 2. 3. 4. 1. 2. 3. 4.
CockroachDB None / / /
MongoDB (SS) / 1. 2. / /
yet allow all other POPs, while PostgreSQL (using SSI) did not allow two consecutive RWs. At weaker isolation levels, all databases
still forbid WW but gradually allow more POPs like RW and WCR.
4.4.2 **Data Anomalies Occurrence.**
This part discusses occurrences of anomalies at different isolation levels. Table 6 summarizes the expected anomaly groups at different levels. We show 5 groups of anomalies of different types, formed by different POP combinations. For example, Group 1 contains anomalies formed by any number of RW POPs; typical anomalies are Write Skew and Step IAT in IAT. We found that most databases allow Group (1,2) or Group (1,2,3) at the RR level and additionally allow Group 4 at the RC level, while the RU level allows anomalies formed by all POPs except WW. In the following, we show a more detailed evaluation of anomaly occurrences from two perspectives: (i) expected behavior, where anomalies should appear, and (ii) unexpected behavior, where anomalies should have been forbidden.
The RAT type contains at least one WR POP. (i) Based on our previous analysis, WR is allowed only at the RU level by 2PL databases (e.g., MySQL and SQL Server). At the RU level, most schedules are executed as expected and the anomalies are not prevented. In contrast, at non-RU levels, RATs mostly pass, as WR is usually turned into RW by MVCC or into WCR by 2PL Wait. For example, MySQL executed Intermediate Read (푊1[푥1]푅2[푥1]푊1[푥2]) as expected at the RU level but executed it into the non-anomalies (푊1[푥1]푊1[푥2]퐶1푅2[푥2]퐶2) (WR to WCR by 2PL Wait) at the SER level and (푊1[푥1] 푅2[푥0]푊1[푥2]퐶1퐶2) (WR to RW by MVCC) at the RR/RC levels. Interestingly, SQL Server executed Intermediate Read into one non-anomaly (푊1[푥1]푊1[푥2]퐶1푅2[푥2]퐶2) (WR to WCR by 2PL Wait) at all non-RU levels. (ii) RATs are not expected at non-RU levels, but some anomalies are still reported, as the schedules are executed into IATs. For example, at the RR level, most databases executed both Write-read Skew (푊1[푥1]푅2[푥1]푊2[푦1]푅1[푦1]) and Write-read Skew Committed (푊1[푥1]푅2[푥1]푊2[푦1]퐶2푅1[푦1]) into Write Skew (푊1[푥1] 푅2[푥0]푊2[푦1] 푅1[푦0]), except SQL Server, which did not allow RW and ended up in a deadlock. However, MySQL executed Write-read Skew into a non-anomaly (푊1[푥1]푊2[푦1] 푅2[푥0]퐶2푅1[푦1]) (due to the timing of taking the snapshot; more details in Section 4.5).
The WAT type contains at least one WW POP and no WR. (i) WW is not allowed by any database at any level. For example, anomalies consisting only of WWs, such as Full-write Skew, Full-write Skew Committed, and Step WAT, are aborted by most databases at all levels; these anomalies are often detected as deadlocks (more detailed analysis in Section 4.7). (ii) However, some cases pass. For example, Dirty Write (푊1[푥1]푊2[푥2]퐶1) and the Full-write anomalies were executed into non-anomalies (e.g., 푊1[푥1]퐶1푊2[푥2] for Dirty Write) in most cases, transforming WW into WCW. Similar cases are Lost Self Update Committed, Double-write Skew 2 Committed, Read-write Skew 1/2, and Read-write Skew 2 Committed. However, some databases (e.g., PostgreSQL and Oracle), which disallow WCW, aborted these cases at the SER, SI, OPT, and RR levels, yet could execute the Dirty Write (abort version) (푊1[푥1]푊2[푥2]퐴1) into a non-anomaly (푊1[푥1]퐴1푊2[푥2]퐶2).
The IAT type contains neither WR nor WW. (i) Most databases tolerate IATs at non-SER levels. At the RR level, most databases exhibited anomalies with RW or RCW combinations; typical anomalies are Write Skew, Step IAT, etc. At the RC and RU levels, they additionally exhibited anomalies with WCW or WCR POPs; typical anomalies are Lost Update Committed, Read Skew Committed, etc. (ii) Oracle, OceanBase, and Greenplum claim to support a SER level, yet they behave similarly to an RR- or SI-equivalent level: they eliminate the four standard anomalies but miss some anomalies in IAT. We
further discuss the behaviors of OceanBase using the Read Skew Committed, Read-write Skew 1 Committed, and Write Skew Committed anomalies, which carry RW-WCR, RW-WCW, and RW-RCW POP combinations, respectively. At the RC level, OceanBase executed these three anomaly schedules as expected, reporting anomalies. At the SER/RR level, however, OceanBase behaved quite differently: it (1) passed Read Skew Committed due to snapshot reading, transforming WCR into RW, (2) aborted Read-write Skew 1 Committed due to its WCW abort rule, and (3) reported an anomaly for Write Skew Committed, which executed as expected.
In summary, at the SER level, no anomalies occurred except for Oracle, OceanBase, and Greenplum. To the best of our knowledge, researchers had previously discovered Oracle's inconsistency at its SER level only via the Write Skew anomaly. We found that anomalies also happen with Write-read Skew (both the committed and the non-committed versions), Step RAT, and Step IAT, although they eventually execute into Write Skew. At the RR level, most databases exhibited anomalies with RW and RCW combinations (e.g., Read Skew, Write Skew, and Step RAT), except SQL Server, whose policy is as strong as at the SER level; surprisingly, SQL Server behaved identically at the SER and RR levels in our tests. At the RC level, most databases exhibited all anomalies that happened at the RR level, plus anomalies with RW and WCW/WCR combinations (e.g., Lost Update Committed and Read-write Skew 1 Committed). At the weakest RU level, all databases only avoid WW, resulting in all kinds of anomalies without WW (e.g., Read Skew and Read Skew 2); thus, most anomalies occur at the RU level. Among all types, the IATs, combining RW with other POPs, are the trickiest and have the most anomaly cases.
**Lesson learned:** (i) Databases aim at consistency by avoiding all or part of the POP cycles, and behave differently on different POPs. (ii) Different CC protocols are implemented differently across databases and across isolation levels. (iii) Developers still lack a complete understanding of the gap between the SER level and eliminating the four standard anomalies, and of the gaps between the coarse isolation levels. Our evaluation captures more insights and subtle behaviors involving POPs, CC protocols, and the coarse isolation levels.
#### 4.5 MVCC and Consistency
MVCC technology has three elements: multiple versions, snapshots, and a data visibility algorithm. Multiple versions with the read-committed rule allow the newest committed objects to be read at the RC level; this helps transform WR into RW. A snapshot, however, makes every read of a transaction consistent with exactly one committed version at the SER and RR levels; it transforms WR and WCR into RW. For example, PostgreSQL and MySQL passed Non-repeatable Read Committed (푅1[푥0]푊2[푥1]퐶2푅1[푥1]) at the RR level, as it was executed into a non-anomaly (푅1[푥0]푊2[푥1]퐶2푅1[푥0]), but reported an anomaly at the RC level as expected. A similar case is Read Skew Committed.
MVCC is sometimes implemented differently. CockroachDB also incorporates Timestamp Ordering (TO) [20] into its CC protocols. Unlike in traditional MVCC databases, where reads are not waited/blocked, a read in CockroachDB is waited if an earlier uncommitted write is found. For example, Write-read Skew Committed (푊1[푥1]푊2[푦1] 푅2[푥1]퐶2푅1[푦1]) was executed into Write Skew by traditional MVCC databases like PostgreSQL, but into a non-anomaly (푊1[푥1]푊2[푦1] 푅1[푦0]퐶1푅2[푥1]퐶2) by CockroachDB. Note that 푇1, which started earlier, can read 푦0, while 푇2, which started later, cannot read 푥0 but can read 푥1 once 푇1 commits.
A snapshot is MVCC restricted to reading only one consistent version. Most databases (e.g., PostgreSQL, Oracle, and OceanBase 2.2.50) take the snapshot at the timestamp of the first operation, while some (e.g., MySQL and OceanBase 2.2.77) take the snapshot at the first read. For example, at the RR level, PostgreSQL executed Write-read Skew Committed (푊1[푥1]푊2[푦1]푅2[푥1]퐶2푅1[푦1]) into Write Skew (푊1[푥1]푊2[푦1]푅2[푥0]퐶2푅1[푦0]), reporting an anomaly, as it takes the snapshots of 푥0 and 푦0. However, MySQL executed Write-read Skew Committed into a non-anomaly (푊1[푥1] 푊2[푦1]푅2[푥0]퐶2푅1[푦1]), as it takes the snapshots of 푥0 and 푦1 at the first reads.
**Lesson learned:** MVCC helps transform WR into RW, and snapshots transform WR and WCR into RW. Most databases (e.g., PostgreSQL) take the snapshot at the beginning of the transaction, while some (e.g., MySQL) take it at the first read.
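The two snapshot-timing policies can be contrasted with a toy model (our own simplification; `db` stands in for the latest committed state, and `Txn` is an illustrative name):

```python
# Snapshot at transaction begin (PostgreSQL-style) vs. lazily at the
# first read (MySQL-style). A snapshot is modeled as a copy of the
# committed store taken at the chosen moment.

db = {"x": "x0", "y": "y0"}

class Txn:
    def __init__(self, snap_at_begin):
        self.snapshot = dict(db) if snap_at_begin else None

    def read(self, obj):
        if self.snapshot is None:      # lazy: snapshot at first read
            self.snapshot = dict(db)
        return self.snapshot[obj]

pg_t1 = Txn(snap_at_begin=True)        # snapshot taken now
my_t1 = Txn(snap_at_begin=False)       # snapshot deferred
db["y"] = "y1"                         # T2 commits W2[y1] in between
print(pg_t1.read("y"))  # y0 -- snapshot predates T2's commit
print(my_t1.read("y"))  # y1 -- snapshot taken at this first read
```

This reproduces the Write-read Skew Committed divergence described above: the early snapshot reads 푦0, the lazy one reads 푦1.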
#### 4.6 Distributed Consistency
The above analyses are based on the centralized evaluation; this part discusses the evaluation of distributed databases. We deployed the data to be stored in different partitions/nodes. In Greenplum, as a write by default takes a lock on one table/segment, we distribute each row/key into a different table. Note that SDAs (e.g., the four standard anomalies), which involve one object, are not suitable for the distributed consistency check. We want to observe the differences arising from global CC protocols and deadlock detection. We evaluated 5 databases (i.e., MongoDB, CockroachDB, Greenplum, OceanBase, and TiDB) with DDAs and MDAs. We obtained the same results as in Table 4, meaning these databases maintain consistent behavior between centralized and distributed deployments.
We showcase the Write Skew (푅1[푥0]푅2[푦0]푊2[푥1]푊1[푦1]) anomaly occurring in a distributed scenario on Greenplum. We let objects 푥 and 푦 be stored in two tables on two partitions. The Write Skew was then executed as scheduled at the SER level, meaning an anomaly was found. Similar cases are Write Skew Committed, Write-read Skew, etc. OceanBase at the SER level, MongoDB at the SI level, and TiDB at the RR level exhibited similar anomalies in distributed scenarios.
**Lesson learned:** DDA- and MDA-type anomalies are suitable for distributed environments. CockroachDB shows the same excellent consistent behavior at the SER level as in the centralized scenarios, but OceanBase and Greenplum do not.
#### 4.7 Deadlocks
Deadlocks occur when multiple transactions wait for each other's resources; they are usually found by periodically checking wait-for graphs [41]. Most databases (e.g., PostgreSQL, CockroachDB, and Oracle) use deadlock detection only for a small portion of data anomalies: they detect deadlocks from the anomalies Full-write Skew, Full-write Skew Committed, and Step WAT, where two or three WW POPs wait on each other, as these databases hold locks only on writes. In contrast, 2PL databases (e.g., SQL Server and MySQL) heavily
detect deadlocks in all rolled-back cases, as they may hold locks on both reads and writes, making WR, WW, and RW POPs wait for each other. Table 3(D) depicts an example of Step WAT rolled back by PostgreSQL's deadlock detection. The transaction that finds the deadlock is usually aborted while the rest may continue to proceed. However, in PostgreSQL and Oracle, the transaction that finds the deadlock aborts while the rest keep waiting; by default they cannot proceed and depend on lock_timeout to terminate. OceanBase does not use any deadlock-detection technique at all; instead, it uses timeouts (e.g., 2PL Wait_die) to avoid deadlocks.
**Lesson learned:** (i) Deadlocks are caused by resource wait-for dependencies under the 2PL Wait strategy. (ii) Deadlocks are essentially special instances of data anomalies.
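The wait-for-graph check can be sketched as a cycle search; this is our own minimal illustration (real systems run it periodically and choose a victim, e.g., by cost):

```python
# waits_for maps a transaction to the transactions it is blocked on;
# a cycle in this graph means deadlock, and a transaction on the
# cycle is returned as the victim to abort.

def find_deadlock(waits_for):
    """Return a transaction on a wait-for cycle, or None."""
    def walk(t, seen):
        for u in waits_for.get(t, ()):
            if u in seen:
                return u
            victim = walk(u, seen | {u})
            if victim is not None:
                return victim
        return None
    for t in waits_for:
        victim = walk(t, {t})
        if victim is not None:
            return victim
    return None

# Step WAT from Table 3(D): T1 waits on T3, T2 on T1, T3 on T2.
print(find_deadlock({1: [3], 2: [1], 3: [2]}) is not None)  # True
```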
#### 5 RELATED WORK
In this part, we survey the related work in more detail.
**Consistency** For database transactions, there are two classical definitions of the C in ACID. First, ANSI SQL [34] holds that consistency is met when integrity constraints are not violated; second, Jim Gray [35] divides consistency into four levels, each of which excludes some data anomalies. Both are informal definitions and cannot directly and specifically guide the consistency verification of a database. Some works [38, 39] reported that many databases do not provide the consistency and isolation guarantees they claim. In fact, within the scope of databases, there is little research on the definition of consistency, let alone on the relationship between consistency and data anomalies. Adya et al. [15] define the relationship between conflict graphs and data anomalies; however, their model cannot capture some kinds of data anomalies (e.g., Dirty Read, Dirty Write, and Intermediate Read [19, 34]), because stateful information such as commit and abort cannot be modeled in the conflict graph. In contrast, this paper proposes a POP graph that can fully express a schedule with this stateful information. With the POP graph, we are able to define all data anomalies and define consistency as their absence.
**Consistency check** There are two typical methods for checking database consistency. One is the white-box method [25, 37,
42, 46, 47, 53, 55], in which users profile active transactions and
conflicts to detect non-serializable schedules. The white-box method
has a high knowledge bar and burdens users with modifying system
code. As the number of active transactions increases, the checking cost may grow exponentially, possibly degrading the performance of the original transaction processing. The other is the black-box method
[24, 48], in which users make no modification to the system
and check the results of given workloads. The Jepsen consistency check [39] (including
Elle [16], which is part of the Jepsen project)
is one of the popular tools in industry. However, these methods usually issue random workloads to discover inconsistent behaviors; they are neither exhaustive nor efficient, consuming substantial computing resources. In contrast, Coo judiciously designs a finite set of anomaly
schedules, evaluating the consistency of a database once and for all.
The evaluation is accurate (covering all types of anomalies), user-friendly
(SQL-based tests), and cost-effective (a few minutes). The tests also
apply to distributed databases: test cases (i.e., DDA and MDA,
which have more than one object) can be designed to force data to
spread in different partitions.
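As a toy illustration of what such a predefined anomaly case looks like, the sketch below runs the classic Non-repeatable Read schedule r1(x) w2(x) c2 r1(x) against a mock two-mode store. This is our own simplification in Python, not Coo's actual SQL test suite; the class, mode names, and one-step write-commit are assumptions made to keep the example short.

```python
class ToyDB:
    """Single-table mock store with two toy isolation modes."""
    def __init__(self, mode):
        self.mode = mode            # "read_committed" or "snapshot"
        self.committed = {"x": 0}   # last committed value per key
        self.snapshots = {}         # per-transaction snapshot at begin

    def begin(self, txn):
        self.snapshots[txn] = dict(self.committed)

    def read(self, txn, key):
        if self.mode == "snapshot":
            return self.snapshots[txn][key]   # reads see the begin-time state
        return self.committed[key]            # reads see the latest commit

    def write_commit(self, txn, key, value):
        self.committed[key] = value           # toy: write + commit in one step

def non_repeatable_read_case(mode):
    """Predefined schedule r1(x) w2(x) c2 r1(x); anomaly iff the reads differ."""
    db = ToyDB(mode)
    db.begin("T1"); db.begin("T2")
    first = db.read("T1", "x")
    db.write_commit("T2", "x", 42)
    second = db.read("T1", "x")
    return first != second                    # True -> Non-repeatable Read observed

print(non_repeatable_read_case("read_committed"))  # True
print(non_repeatable_read_case("snapshot"))        # False
```

Because the schedule is fixed rather than random, one run per mode suffices to decide whether the store exhibits the anomaly, which is the spirit of Coo's once-and-for-all check.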
**Data anomalies, serializability, and consistency** In recent
years, extensive research has continued to report new data anomalies; we make a thorough survey of data
anomalies and list them in Table 1. New data anomalies
are constantly reported in different scenarios, indicating that data
consistency remains challenging in many settings. Traditional knowledge offers only a shallow and imprecise understanding
of the relationship between data anomalies and consistency. Previous work relates
an acyclic conflict graph to consistency, guaranteeing consistency by guaranteeing serializable schedules [21, 22, 30, 54]. Serializability is usually achieved by strong rules that eliminate three
kinds of conflict relations (i.e., WW, WR, and RW) [30]. However,
such models cannot capture all data anomalies, such as Dirty Read and
Dirty Write. In this paper, Coo, by using the POP graph, can define
all anomalies and correlate data anomalies with inconsistency.
#### 6 CONCLUSION AND FUTURE WORK
This paper proposed Coo, which pre-checks the consistency of databases, filling a gap left by real-time and post-hoc verification solutions. We systematically defined all data anomalies and
correlated data anomalies with inconsistency. Specifically, we introduced an extended conflict graph model called the Partial Order Pair
(POP) graph, which also captures state-expressing operations. From
POP cycles, we can produce infinitely many distinct data anomalies. We
classified the data anomalies and report more than 20 new types. We evaluated the new consistency model on ten real databases. The consistency check with predefined representative anomaly cases is accurate (covering all types of anomalies), user-friendly (SQL-based tests), and
cost-effective (one-time checking in a few minutes).
Predicate cases have not been discussed in this
paper due to limited space and remain ongoing work. We believe the
model in this paper can be extended to predicate cases (e.g.,
Phantom can be constructed from Non-repeatable Read by using a predicate Select and replacing the Update with an Insert [3]).
#### REFERENCES
[[1] 2022. CData. https://www.cdata.com.](https://www.cdata.com)
[[2] 2022. CockroachDB. https://www.cockroachlabs.com.](https://www.cockroachlabs.com)
[[3] 2022. Coo: Github open sourcecode. https://github.com/Tencent/3TS/tree/coo-consistency-check.](https://github.com/Tencent/3TS/tree/coo-consistency-check)
[[4] 2022. GreenPlum. https://greenplum.org.](https://greenplum.org)
[[5] 2022. MongoDB. https://www.mongodb.com.](https://www.mongodb.com)
[[6] 2022. MyRocks. https://myrocks.io.](https://myrocks.io)
[[7] 2022. MySQL. https://www.mysql.com.](https://www.mysql.com)
[[8] 2022. OceanBase. https://www.oceanbase.com.](https://www.oceanbase.com)
[[9] 2022. Oracle. https://www.oracle.com.](https://www.oracle.com)
[[10] 2022. PostgreSQL. https://www.postgresql.org.](https://www.postgresql.org)
[[11] 2022. SQL Server. https://www.microsoft.com/en-us/sql-server.](https://www.microsoft.com/en-us/sql-server)
[[12] 2022. TDSQL. https://cloud.tencent.com/document/product/557.](https://cloud.tencent.com/document/product/557)
[[13] 2022. TiDB. https://github.com/pingcap/tidb.](https://github.com/pingcap/tidb)
[14] A. Adya, B. Liskov, and P. O’Neil. 2000. Generalized isolation level definitions. In Proceedings of 16th International Conference on Data Engineering (Cat.
No.00CB37073). 67–78.
[15] Atul Adya and Barbara H. Liskov. 1999. Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions. (1999).
[16] Peter Alvaro and Kyle Kingsbury. 2020. Elle: Inferring Isolation Anomalies
from Experimental Observations. Proc. VLDB Endow. 14, 3 (2020), 268–280.
[https://doi.org/10.5555/3430915.3442427](https://doi.org/10.5555/3430915.3442427)
[17] Peter Bailis, Alan Fekete, Ali Ghodsi, Joseph M. Hellerstein, and Ion Stoica. 2016.
Scalable Atomic Visibility with RAMP Transactions. ACM Trans. Database Syst.
[41, 3, Article 15 (July 2016), 45 pages. https://doi.org/10.1145/2909870](https://doi.org/10.1145/2909870)
-----
Coo: Consistency Check for Transactional Databases
[18] Hal Berenson, Phil Bernstein, Jim Gray, Jim Melton, Elizabeth O’Neil, and Patrick
O’Neil. 1995. A Critique of ANSI SQL Isolation Levels. In Proceedings of the 1995
ACM SIGMOD International Conference on Management of Data (San Jose, California, USA) (SIGMOD ’95). Association for Computing Machinery, New York,
[NY, USA, 1–10. https://doi.org/10.1145/223784.223785](https://doi.org/10.1145/223784.223785)
[19] Hal Berenson, Philip A. Bernstein, Jim Gray, Jim Melton, Elizabeth J. O’Neil, and
Patrick E. O’Neil. 1995. A Critique of ANSI SQL Isolation Levels. In SIGMOD
Conference. ACM Press, 1–10.
[20] Philip A. Bernstein and Nathan Goodman. 1983. Multiversion Concurrency Control - Theory and Algorithms. ACM Trans. Database Syst. 8, 4 (1983), 465–483.
[21] Philip A. Bernstein, Vassos Hadzilacos, and Nathan Goodman. 1987. Concurrency Control and Recovery in Database Systems. Addison-Wesley.
[http://research.microsoft.com/en-us/people/philbe/ccontrol.aspx](http://research.microsoft.com/en-us/people/philbe/ccontrol.aspx)
[22] P. A. Bernstein, D. W. Shipman, and W. S. Wong. 1979. Formal Aspects of Serializability in Database Concurrency Control. IEEE Transactions on Software
Engineering SE-5, 3 (1979), 203–216.
[23] Carsten Binnig, Stefan Hildenbrand, Franz Farber, Donald Kossmann, Juchang
Lee, and Norman May. 2014. Distributed snapshot isolation: global transactions
pay globally, local transactions pay locally. 23, 6 (2014), 987–1011.
[24] Ranadeep Biswas and Constantin Enea. 2019. On the complexity of checking
transactional consistency. Proc. ACM Program. Lang. 3, OOPSLA (2019), 165:1–
165:28.
[25] Lucas Brutschy, Dimitar K. Dimitrov, Peter Müller, and Martin T. Vechev. 2017.
Serializability for eventual consistency: criterion, analysis, and applications. In
POPL. ACM, 458–472.
[26] Sebastian Burckhardt, Daan Leijen, Jonathan Protzenko, and Manuel Fähndrich.
2015. Global Sequence Protocol: A Robust Abstraction for Replicated Shared
State. In 29th European Conference on Object-Oriented Programming (ECOOP
2015) (Leibniz International Proceedings in Informatics (LIPIcs)), John Tang Boyland (Ed.), Vol. 37. Schloss Dagstuhl–Leibniz-Zentrum fuer Informatik, Dagstuhl,
[Germany, 568–590. https://doi.org/10.4230/LIPIcs.ECOOP.2015.568](https://doi.org/10.4230/LIPIcs.ECOOP.2015.568)
[27] Andrea Cerone, Giovanni Bernardi, and Alexey Gotsman. 2015. A Framework for Transactional Consistency Models with Atomic Visibility. In CONCUR
(LIPIcs), Vol. 42. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 58–71.
[28] Andrea Cerone, Alexey Gotsman, and Hongseok Yang. 2017. Algebraic Laws for
Weak Consistency. (2017), 26:1–26:18.
[29] Natacha Crooks, Youer Pu, Lorenzo Alvisi, and Allen Clement. 2017. Seeing is
Believing: A Client-Centric Specification of Database Isolation. In PODC. ACM,
73–82.
[30] Ramez Elmasri and Shamkant B. Navathe. 2000. Fundamentals of Database Systems, 3rd Edition. Addison-Wesley-Longman.
[31] Kapali P. Eswaran, Jim Gray, Raymond A. Lorie, and Irving L. Traiger. 1976. The
Notions of Consistency and Predicate Locks in a Database System. Commun.
ACM 19, 11 (1976), 624–633.
[32] Alan Fekete, Elizabeth O’Neil, and Patrick O’Neil. 2004. A Read-Only Transaction Anomaly under Snapshot Isolation. SIGMOD Rec. 33, 3 (Sept. 2004), 12–14.
[https://doi.org/10.1145/1031570.1031573](https://doi.org/10.1145/1031570.1031573)
[33] Alan D. Fekete, Dimitrios Liarokapis, Elizabeth J. O’Neil, Patrick E. O’Neil, and
Dennis E. Shasha. 2005. Making snapshot isolation serializable. ACM Trans.
[Database Syst. 30, 2 (2005), 492–528. https://doi.org/10.1145/1071610.1071615](https://doi.org/10.1145/1071610.1071615)
[34] American National Standard for Information Systems – Database Language SQL. Nov
1992. ANSI X3.135-1992.
[35] Jim Gray, Raymond A. Lorie, Gianfranco R. Putzolu, and Irving L. Traiger. 1976.
Granularity of Locks and Degrees of Consistency in a Shared Data Base. In
IFIP Working Conference on Modelling in Data Base Management Systems. NorthHolland, 365–394.
[36] Jim Gray, Raymond A. Lorie, Gianfranco R. Putzolu, and Irving L. Traiger. 1976.
Granularity of Locks and Degrees of Consistency in a Shared Data Base. In Readings in database systems (3rd ed.). 365–394.
[37] Christian Hammer, Julian Dolby, Mandana Vaziri, and Frank Tip. 2008. Dynamic
detection of atomic-set-serializability violations. In ICSE. ACM, 231–240.
[38] K. Kingsbury and K. Patella. 2013-2019. Jepsen (reports).
[http://jepsen.io/analyses](http://jepsen.io/analyses)
[39] M. Kleppmann. 2014-2019. Hermitage: Testing transaction isolation levels.
[https://github.com/ept/hermitage](https://github.com/ept/hermitage)
[40] Gavin Lowe. 2017. Testing for linearizability. Concurr. Comput. Pract. Exp. 29, 4
(2017).
[41] Zhenghua Lyu, Huan Hubert Zhang, Gang Xiong, Gang Guo, Haozhou Wang,
Jinbao Chen, Asim Praveen, Yu Yang, Xiaoming Gao, Alexandra Wang, Wen Lin,
Ashwin Agrawal, Junfeng Yang, Hao Wu, Xiaoliang Li, Feng Guo, Jiang Wu,
Jesse Zhang, and Venkatesh Raghavan. 2021. Greenplum: A Hybrid Database
for Transactional and Analytical Workloads. In SIGMOD Conference. ACM, 2530–
2542.
[42] Kartik Nagar and Suresh Jagannathan. 2018. Automated Detection of Serializability Violations Under Weak Consistency. In CONCUR (LIPIcs), Vol. 118.
Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 41:1–41:18.
[43] Christos H. Papadimitriou. 1979. The serializability of concurrent database updates. J. ACM 26, 4 (1979), 631–653.
[44] Dan R. K. Ports and Kevin Grittner. 2012. Serializable Snapshot Isolation in
PostgreSQL. Proc. VLDB Endow. 5, 12 (2012), 1850–1861.
[45] Ralf Schenkel, Gerhard Weikum, N Weissenberg, and Xuequn Wu. 2000. Federated transaction management with snapshot isolation. Lecture Notes in Computer
Science (2000), 1–25.
[46] Arnab Sinha and Sharad Malik. 2010. Runtime checking of serializability in
software transactional memory. In IPDPS. IEEE, 1–12.
[47] William N. Sumner, Christian Hammer, and Julian Dolby. 2011. Marathon: Detecting Atomic-Set Serializability Violations with Conflict Graphs. In RV (Lecture Notes in Computer Science), Vol. 7186. Springer, 161–176.
[48] Cheng Tan, Changgeng Zhao, Shuai Mu, and Michael Walfish. 2020. Cobra: Making Transactional Key-Value Stores Verifiably Serializable. In OSDI. USENIX Association, 63–80.
[49] G. Weikum and G. Vossen. 2002. Concurrency Control: Notions of Correctness
for the Page Model. Transactional Information Systems (2002), 61–123.
[50] PostgreSQL Wiki. 2022. Read_Only_Transactions.
[https://wiki.postgresql.org/wiki/SSI#Read_Only_Transactions](https://wiki.postgresql.org/wiki/SSI#Read_Only_Transactions)
[51] Xiaoyong Du et al. 2017. Big data management. (2017).
[52] Chao Xie, Chunzhi Su, Cody Littley, Lorenzo Alvisi, Manos Kapritsos, and Yang
Wang. 2015. High-performance ACID via modular concurrency control. In Proceedings of the 25th Symposium on Operating Systems Principles. 279–294.
[53] Min Xu, Rastislav Bodík, and Mark D. Hill. 2005. A serializability violation detector for shared-memory server programs. In PLDI. ACM, 1–14.
[54] Maysam Yabandeh and Daniel Gómez Ferro. 2012. A critique of snapshot isolation. In EuroSys. ACM, 155–168.
[55] Kamal Zellag and Bettina Kemme. 2014. Consistency anomalies in multi-tier
architectures: automatic detection and prevention. VLDB J. 23, 1 (2014), 147–
172.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2206.14602, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/2206.14602"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-06-29T00:00:00
|
[
{
"paperId": "e85d7ae932a392cc1faa8ac16e029d7fb16423ad",
"title": "Greenplum: A Hybrid Database for Transactional and Analytical Workloads"
},
{
"paperId": "e5b238dd439d54375f6a599253ca9f0c9e54b7ef",
"title": "Elle: Inferring Isolation Anomalies from Experimental Observations"
},
{
"paperId": "613542a22dbcc04990867577f77a6d96943a1729",
"title": "On the complexity of checking transactional consistency"
},
{
"paperId": "bf373e549bf28740b877f8955e1d85a226bf3734",
"title": "Automated Detection of Serializability Violations under Weak Consistency"
},
{
"paperId": "050f5d841bc838fb4eded506d2084f4b3edf304b",
"title": "Seeing is Believing: A Client-Centric Specification of Database Isolation"
},
{
"paperId": "5fbff85af05714be32d463afcfc61850d6118276",
"title": "Testing for linearizability"
},
{
"paperId": "4a67d462b68b856e85e3ef594a32478359532cdd",
"title": "Algebraic Laws for Weak Consistency"
},
{
"paperId": "aaf69fa9213d65d4d486972314472904067afc4e",
"title": "Serializability for eventual consistency: criterion, analysis, and applications"
},
{
"paperId": "aae102355133753c4b7fe673ab33f634effeffa1",
"title": "High-performance ACID via modular concurrency control"
},
{
"paperId": "3eac8ea6a145ee9916688d2a6f5ddcc3fc925c76",
"title": "Distributed snapshot isolation: global transactions pay globally, local transactions pay locally"
},
{
"paperId": "f3d815d00930df7f2c2cd19c68de3186a2180c8e",
"title": "Scalable atomic visibility with RAMP transactions"
},
{
"paperId": "fd964a4b36a9cdb03d8c5f12fab4ecf48806a651",
"title": "Consistency anomalies in multi-tier architectures: automatic detection and prevention"
},
{
"paperId": "a8c525684cea85f7f96ab852af021d8b150bad3a",
"title": "Serializable Snapshot Isolation in PostgreSQL"
},
{
"paperId": "db6bd29338631bed534ea835e139e9a6c8ca9133",
"title": "A critique of snapshot isolation"
},
{
"paperId": "9c8695fbb02a39f22b978f8955281061fe6aa81a",
"title": "Marathon: Detecting Atomic-Set Serializability Violations with Conflict Graphs"
},
{
"paperId": "f163ecec9074f8fb056227046af6d6652e6c7c07",
"title": "Runtime checking of serializability in software transactional memory"
},
{
"paperId": "b503cfb59b1577b8a59690b9ff35ba432154058e",
"title": "Dynamic detection of atomic-set-serializability violations"
},
{
"paperId": "12da02c91ab75d7bda6ac4ddefa3fe03e9c53173",
"title": "A serializability violation detector for shared-memory server programs"
},
{
"paperId": "d6582728e30011adfe27b329c35203dfb8d1e7a8",
"title": "Making snapshot isolation serializable"
},
{
"paperId": "85e8f9101c41e4cad53ca49edc176eab0e7ea322",
"title": "A read-only transaction anomaly under snapshot isolation"
},
{
"paperId": "bf6fc223d2403306c34d664a193bb623a70d0d40",
"title": "Generalized isolation level definitions"
},
{
"paperId": "b0d7bfd07752108b53d885c2835004d49ca693c9",
"title": "Granularity of Locks and Degrees of Consistency in a Shared Data Base"
},
{
"paperId": "38c443a37914e287c42a3238ccebd7372847d5f9",
"title": "A critique of ANSI SQL isolation levels"
},
{
"paperId": "e7ab23d011e5183db78cfea48e303210f6e57e2e",
"title": "The serializability of concurrent database updates"
},
{
"paperId": "643a5ea2791f56ed58dcf50141301216de10bb9d",
"title": "Formal Aspects of Serializability in Database Concurrency Control"
},
{
"paperId": "0b9182d502e62fb7e1ebd7e01de7523005d35677",
"title": "The notions of consistency and predicate locks in a database system"
},
{
"paperId": "95dfd3e2942a92f5df42b67f1c509f638cc7b637",
"title": "Cobra: Making Transactional Key-Value Stores Verifiably Serializable"
},
{
"paperId": null,
"title": "Du el at. Xiaoyong. 2017. Big data management"
},
{
"paperId": "5563ff2972232dfa8b75548d8ceeda044e4c6f1d",
"title": "Global Sequence Protocol: A Robust Abstraction for Replicated Shared State"
},
{
"paperId": "8119a054c1a3f3b108842876aa889741ab059e90",
"title": "A Framework for Transactional Consistency Models with Atomic Visibility"
},
{
"paperId": "59451c7f9a3fca7b45bf2bae20e3f578e62a3c29",
"title": "CHAPTER THREE – Concurrency Control: Notions of Correctness for the Page Model"
},
{
"paperId": "32257d8d2b08c87e58c7b7f4b2430d58e4b51a81",
"title": "Weak Consistency: A Generalized Theory and Optimistic Implementations for Distributed Transactions"
},
{
"paperId": "afa7a833ef213cbc936bf92f47afcac70d18980e",
"title": "Fundamentals of Database Systems, 2nd Edition"
},
{
"paperId": null,
"title": "AmericanNationalStandardforInformationSystems–DatabaseLanguage"
},
{
"paperId": null,
"title": "Concurrency Control and Recovery in Database Systems"
},
{
"paperId": null,
"title": "MultiversionConcurrencyCon-trol - Theory and Algorithms"
},
{
"paperId": null,
"title": "2022. TDSQL"
}
] | 31,766
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Sociology",
"source": "s2-fos-model"
},
{
"category": "Philosophy",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0307dafff5e68569d28e904f5811dcff0bde756c
|
[
"Computer Science"
] | 0.83762
|
From Users to (Sense)Makers: On the Pivotal Role of Stigmergic Social Annotation in the Quest for Collective Sensemaking
|
0307dafff5e68569d28e904f5811dcff0bde756c
|
ACM Conference on Hypertext & Social Media
|
[
{
"authorId": "24364033",
"name": "Ronen Tamari"
},
{
"authorId": "2165314699",
"name": "D. Friedman"
},
{
"authorId": "153191957",
"name": "W. Fischer"
},
{
"authorId": "119451022",
"name": "Lauren A Hebert"
},
{
"authorId": "1805894",
"name": "Dafna Shahaf"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"HT",
"ACM Conf Hypertext Soc Media"
],
"alternate_urls": null,
"id": "ec4efbef-0722-428c-bdfa-603ece1d0829",
"issn": null,
"name": "ACM Conference on Hypertext & Social Media",
"type": "conference",
"url": "http://www.acm.org/sigweb/"
}
|
The web has become a dominant epistemic environment, influencing people’s beliefs at a global scale. However, online epistemic environments are increasingly polluted, impairing societies’ ability to coordinate effectively in the face of global crises. We argue that centralized platforms are a main source of epistemic pollution, and that healthier environments require redesigning how we collectively govern attention. Inspired by decentralization and open source software movements, we propose Open Source Attention, a socio-technical framework for “freeing” human attention from control by platforms, through a decentralized eco-system for creating, storing and querying stigmergic markers; the digital traces of human attention.
|
## From Users to (Sense)Makers: On the Pivotal Role of Stigmergic Social Annotation in the Quest for Collective Sensemaking
### RONEN TAMARI, DAOStack, Hebrew University of Jerusalem, Israel
### DANIEL A FRIEDMAN, University of California, Davis, USA
### WILLIAM FISCHER and LAUREN HEBERT, Veeo, USA
### DAFNA SHAHAF, Hebrew University of Jerusalem, Israel
The web has become a dominant epistemic environment, influencing people’s beliefs at a global scale. However, online epistemic
environments are increasingly polluted, impairing societies’ ability to coordinate effectively in the face of global crises. We argue
that centralized platforms are a main source of epistemic pollution, and that healthier environments require redesigning how we
collectively govern attention. Inspired by decentralization and open source software movements, we propose Open Source Attention,
a socio-technical framework for “freeing” human attention from control by platforms, through a decentralized eco-system for creating,
storing and querying stigmergic markers; the digital traces of human attention.
CCS Concepts: • Human-centered computing → **Social content sharing; Social tagging systems.**
**ACM Reference Format:**
Ronen Tamari, Daniel A Friedman, William Fischer, Lauren Hebert, and Dafna Shahaf. 2022. From Users to (Sense)Makers: On the Pivotal
Role of Stigmergic Social Annotation in the Quest for Collective Sensemaking. In Proceedings of the 33rd ACM Conference on Hypertext and
_[Social Media (HT ’22), June 28-July 1, 2022, Barcelona, Spain. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3511095.3536361](https://doi.org/10.1145/3511095.3536361)_
**1** **INTRODUCTION**
The web has become a dominant epistemic environment, shaping people’s beliefs and knowledge on a global scale. The
web, however, is also currently a severely polluted epistemic environment [13], due to highly centralized and opaque
information ecologies, coupled with incentive misalignment and unprecedented information overload. A small number
of major web platforms such as Google and Facebook have gained immense control over the means to search, create,
and distribute information [14]. Centralization leads to opacity, in which network data as well as algorithms for content
creation, search, and distribution are effectively hidden away from public, scientific, and ethical oversight [1].
Platform incentives are fundamentally misaligned with those necessary for healthier epistemic environments [25]. For
example, centralization and control of data are necessary for running lucrative “attention markets”, but ultimately hinder
attempts to address information overload, and undermine both user autonomy [20] as well as the open information
networks necessary for healthy democracies [12, 26]. Platforms are implicated in a host of problematic social phenomena,
including the spread of false information, behavioral changes, and societal polarization, epistemic distraction and
degradation of individual and collective sense-making capacities [14, 24]. Impending global ecological and societal
crises lend increased urgency to addressing these problems: astute collective sense- and decision-making have perhaps
never been more needed [1, 23, 24].
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not
made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party
components of this work must be honored. For all other uses, contact the owner/author(s).
© 2022 Copyright held by the owner/author(s).
Manuscript submitted to ACM
-----
Laudable recent efforts have called attention to this precarious state of affairs of collective online sensemaking [1, 12,
14]. However, while providing invaluable insights, they have largely focused on improving platforms through regulatory
action, whether internally or externally imposed. Such platform-centric approaches are an important step towards
healthier epistemic environments, but face inherent limitations (§3), and are fraught with many impediments as they
often run counter to powerful platforms’ core business models. Perhaps more crucially, platform-centric initiatives
cannot adequately account for the fundamentally distributed [9], self-organizing [8], and stigmergic [16] nature of
collective intelligence.
We argue that reflecting these considerations in practice would benefit from a more radical redesign of our epistemic
environments, centered around guiding principles of agency, transparency, interoperability, decentralization, and a
collective conceptual transition to a “maker” mindset [11]: from passive users to more active (sense)makers.
Inspired by both theoretical and practical breakthroughs of decentralization and open source software movements
contra entrenched centralized systems, we propose Open Source Attention (OSA), a conceptual framework and “call to
movement” towards decentralized, open-source, stigmergic annotation. We envision our framework as a step towards
systems for distributed governance, education, and control of collective sense-making and attention.
Hypertext and social annotation play a pivotal role in our proposed transition: in the current platform-centric ecology,
user annotations (such as likes, retweets, etc) are locked across platforms’ data siloes where they serve to optimize
_platform growth. OSA aims to empower maker-centric ecologies by employing distributed content creation and storage_
technology (e.g., Solid [19]). In this way, makers will control creation and dissemination of their annotations, which can
then be leveraged to optimize personalized human growth and learning for individuals and collectives.
**2** **DISTRIBUTED, STIGMERGIC FOUNDATIONS OF COLLECTIVE SENSE-MAKING**
Sense-making refers to processes by which agents make sense of their environment, achieved by organizing sense
data until the environment is understood well enough to enable reasonable decisions [22]. Theories of extended [6]
and stigmergic [16] cognition highlight the integral role of environment modification in sense making; agents actively
change their environment to assist internal cognitive processes (e.g., writing to-do notes) as well as indirect stigmergic
communication with others (e.g., ant pheromone trails). Stigmergy is particularly relevant for the setting of collaboration
of large-scale groups [7]. In stigmergic communication, the environment acts as a kind of distributed memory;
modifications left by others provide cybernetic feedback, driving both emergence of novel system-level behavior from
local interactions of agents, and immergence (individual interactions informed by a global state of affairs) [16]. Sense-making
is thus inherently co-created, through agents modifying their environment and reacting to changes made by others.
Fig. 1. Platforms leverage control of both data and content discovery algorithms to drive growth at the expense of users (left);
Decoupling data and algorithms incentivizes content discovery services oriented towards human-centered growth (right).
What kind of environment modifications are relevant to consider for sense-making in vast digital spaces? The
literature broadly distinguishes between two types of modifications: sematectonic stigmergy, which directly alters
the environment state (in the digital case: creating new content, such as publishing a blog post), and stigmergic
_markers, which do not directly modify content, but rather serve as signalling cues (in the digital case: likes, annotations,_
hyperlinking of text). Importantly, stigmergic markers play a central role in assessing epistemic quality of content, both
for humans [13, 16] and machines [14], due to the sheer volume of information as well as challenges in endogenous
content interpretation. Stigmergic markers may be explicitly left by users (e.g., likes) or implicitly recorded through
their behavior (e.g., link click-through data, reading time).
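As a concrete, purely hypothetical data shape, a stigmergic marker could be represented as a small maker-owned record. The field names below are our own assumptions, loosely in the spirit of the W3C Web Annotation model, and not a format proposed by this paper.

```python
from dataclasses import dataclass, field
import time

@dataclass
class StigmergicMarker:
    """A hypothetical maker-owned record of one attention trace."""
    maker: str                   # decentralized identity of the maker
    target: str                  # URL (optionally with a text-span fragment)
    kind: str                    # "like", "highlight", "tag", "annotation", ...
    body: str = ""               # annotation text, tag name, etc.
    explicit: bool = True        # False for implicit traces (read time, clicks)
    visibility: str = "private"  # "private", "public", or a named group
    created: float = field(default_factory=time.time)

# An explicit highlight the maker may later choose to share with a
# content discovery service:
m = StigmergicMarker(maker="did:example:alice",
                     target="https://example.org/post#para-3",
                     kind="highlight", body="key claim", visibility="public")
```

The `explicit` flag reflects the explicit/implicit distinction above; implicit markers would be recorded by tooling rather than authored, and so warrant stricter defaults for `visibility`.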
**3** **OPEN-SOURCING STIGMERGIC MARKERS FOR HEALTHIER EPISTEMIC ENVIRONMENTS**
Polluted epistemic environments are often framed as casualties of the “attention economy”; platforms selling user
data to advertisers and putting up ads in social media feeds, with the aim of “capturing” users’ attention and seducing
them to make yet another purchase. While “data” and “attention” are popular abstractions, the stigmergic perspective
is valuable in guiding practical redesign of epistemic environments. Stigmergic markers can be thought of as digital
traces of human attention, whose primacy as indicators of epistemic value makes them precious resources, whether for
extractive (e.g., ad-tech) or constructive (e.g., collective sense-making) purposes. In the following sections, we illuminate
the role of stigmergic markers in nourishing healthier epistemic environments.
**3.1** **From attention to intention**
Healthier epistemic environments involve moving from exploitation of attention to supporting our intentions [24]. This
transition requires two paradigm shifts. First, a mindset shift on the human side, from passive, unwitting users consuming
“unhealthy information diets” [10], to active makers, who cultivate growth-oriented intentions and are mindful of the
(stigmergic) traces they leave, as well as their role as co-creators in the larger digital and physical ecology. Realistically,
humans stand no chance of making the transition in isolation; content discovery algorithms are indispensable for
navigating vast digital landscapes, but to a large degree are controlled by platforms [14]. Accordingly, the second
shift involves re-designing our epistemic environments to support this transition by empowering makers through
human-centric content discovery. As shown in Fig. 1, current content discovery is platform-centric: platforms enjoy a
closed feedback loop consisting of both the content discovery algorithms as well as the stigmergic marker data needed
to drive algorithmic optimization towards platform growth [21]. Human-centric content discovery requires supplanting
this degenerative cycle with a more symbiotic information ecology, in which makers create and control their stigmergic
markers, and thus are empowered to share their data to content discovery services oriented towards personalized
individual or collective growth. Content moderation is an important representative example [17]: moderation is
intractable in centralized systems, due to inherent limitations of AI capabilities as well as the scale of complex human
adjudications needed. In contrast, decentralized eco-systems enable a “marketplace of filters”, where different individuals
and organizations can create and tune content moderation systems for their own needs.
**3.2** **Open Source Attention: maker-centered information ecology**
Analogously to open-source code and common domain knowledge [11], stigmergic markers can be thought of as a public
good. However, despite their unique importance, surprisingly little work has specifically targeted their decentralization
-----
(§4). Stated simply: where open source software is a movement to “free” software, OSA is similarly a movement
to “free” stigmergic markers, starting from basic hypertext primitives: emotional valence (e.g., likes), bi-directional
links, span highlighting, semantic categorization (tags, bookmarks), and textual annotation. We envision decentralized,
maker-centered ecologies, comprised of three main architectural elements (see also Fig. 1):
**Annotation tools. Enable makers to easily create markers attached to any URL or content element included therein,**
not just where platforms provide like buttons [4]. Some types of markers should themselves be mark-able, allowing for
example the option to “like” a particular annotation, or link between two annotations. Future extensions can address
implicit stigmergic markers such as read-time or click-through counts [15]. Apps recording these function as automatic
annotation tools, though their implicit nature requires extra caution with regard to consent and data privacy issues.
**Self-sovereign storage. Makers own their markers and control their visibility (private, public, etc) to other people or**
services. Identity provision is a key related service that can (but does not have to be) provided along with storage [21].
**Content discovery services. Rather than platforms’ monolithic and opaque feeds, a decentralized ecology encourages**
a market of diverse and human-centered content discovery services. For example, competing interfaces for social media
that better moderate trolls, promote thought-provoking stories, or provide customizable feed controls [17].
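To make the architecture above more concrete, the basic hypertext primitives (likes, tags, highlights, links, annotations) can be modeled as small, self-describing records that a maker owns and selectively shares. The sketch below is illustrative only; the field names and the `did:example` identity scheme are our assumptions, loosely inspired by the W3C Web Annotation data model rather than any existing OSA implementation:

```python
from dataclasses import dataclass, field, asdict
import json
import time

@dataclass
class StigmergicMarker:
    """A minimal, maker-owned marker record (illustrative sketch)."""
    maker: str                   # maker's identity, e.g. a DID or profile URL
    target: str                  # URL (or fragment) the marker is attached to
    kind: str                    # e.g. "like", "tag", "highlight", "link", "note"
    body: str = ""               # tag text, note text, or linked URL
    visibility: str = "private"  # "private", "public", or a group identifier
    created: float = field(default_factory=time.time)

    def to_json(self) -> str:
        """Serialize so the marker can live in self-sovereign storage
        and be handed to a content discovery service the maker trusts."""
        return json.dumps(asdict(self))

# A maker tags a passage of an article and chooses to publish the marker.
marker = StigmergicMarker(
    maker="did:example:alice",
    target="https://example.com/article#para-3",
    kind="tag",
    body="collective-sensemaking",
    visibility="public",
)
record = json.loads(marker.to_json())
```

Because such records are plain, portable data rather than platform-internal state, any number of competing discovery or moderation services could consume the same markers, which is the point of the ecology sketched in Fig. 1.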
**4** **DISCUSSION**
While the idea of leveraging stigmergic markers for collective sense-making has a long history [2], most contemporary
open-source and decentralization efforts have focused on sematectonic (content-creating) stigmergy, such as code, social
media, financial ledgers and executable contracts [5]. Closest to our proposal is the Solid eco-system [19] that, similarly,
targets “re-decentralizing the web” [21], and empowers individuals to control their data. Solid also features a marketplace
of services, including the dokieli decentralized annotation client for scientific research [3, 4]. While Solid and dokieli
are inspiring initial steps, they are limited with regard to content discovery services or social incentives. More broadly,
where Solid is primarily a technology, OSA proposes an ecological perspective accounting for the embeddedness of
such technologies in wider social, educational and economic contexts. For example, a key extra-technological challenge
concerns changing norms around knowledge work. Similarly to how platforms changed the culture around certain
kinds of content creation, effectively turning us all into performers, well designed social networks could help shape the
norms and prestige associated with sense-making activities. Academic Twitter demonstrates that even without direct
economic incentives, social incentives lead experts to freely share high-quality information publicly [18].
Another key question concerns scale: for global-scale sense-making, any proposal must necessarily compete with
massive, well established platforms. While recent years are seeing a resurgence of personal knowledge management apps
(PKMs) enabling content creation and annotation, knowledge tends to remain siloed at the individual level; adaptation to
_collective knowledge management_ (CKM) has been limited.¹ OSA is naturally congruent with the promising “protocols,
not platforms” approach [17]; rather than head-to-head competition between PKMs, existing PKM growth can be
bootstrapped for CKM by introducing interoperable protocols and storage for stigmergic primitives (e.g., links, tags). In
this way, data from across diverse PKM apps could be shared to contribute to collective sense-making efforts.
Many open questions remain out of scope of this short piece, which is best seen as a call to attention; success in
surmounting the formidable challenges faced today by humanity requires that “we give the right sort of attention
to the right sort of things” [24]. We have claimed that attending “to the right things” will require re-imagining our
¹ https://athensresearch.ghost.io/season-2/
From Users to (Sense)Makers HT ’22, June 28-July 1, 2022, Barcelona, Spain
socio-technological systems for governing collective attention; we hope our proposal will help galvanize action towards
this vital cause.
**ACKNOWLEDGMENTS**
We thank Zak Stein for inspiring our exploration, and we thank Nimrod Talmon and the DAOStack team for thoughtful
feedback and support. We also thank Metagov and RadicalXChange for cultivating the wonderful real-world and online
spaces that seeded this collaboration.
**REFERENCES**
[1] Joseph B. Bak-Coleman, Mark Alfano, Wolfram Barfuss, Carl T. Bergstrom, Miguel A. Centeno, Iain D. Couzin, Jonathan F. Donges, Mirta Galesic, Andrew S. Gersick, Jennifer Jacquet, Albert B. Kao, Rachel E. Moran, Pawel Romanczuk, Daniel I. Rubenstein, Kaia J. Tombak, Jay J. Van Bavel, and Elke U. Weber. 2021. Stewardship of global collective behavior. Proceedings of the National Academy of Sciences 118, 27 (2021), e2025764118. https://doi.org/10.1073/pnas.2025764118
[2] Vannevar Bush. 1945. As we may think. The Atlantic Monthly 176, 1 (1945), 101–108.
[3] Sarven Capadisli. 2020. Linked research on the decentralised Web. Ph.D. Dissertation. https://csarven.ca/linked-research-decentralised-web
[4] Sarven Capadisli, Amy Guy, Ruben Verborgh, Christoph Lange, Sören Auer, and Tim Berners-Lee. 2017. Decentralised authoring, annotations and notifications for a read-write web with dokieli. In International Conference on Web Engineering. Springer, 469–481.
[5] Fran Casino, Thomas K. Dasaklis, and Constantinos Patsakis. 2019. A systematic literature review of blockchain-based applications: Current status, classification and open issues. Telematics and Informatics 36 (2019), 55–81. https://doi.org/10.1016/j.tele.2018.11.006
[6] Andy Clark and David Chalmers. 1998. The extended mind. Analysis 58, 1 (1998), 7–19.
[7] Mark Elliott. 2006. Stigmergic collaboration: The evolution of group work: Introduction. M/C Journal 9, 2 (May 2006). https://doi.org/10.5204/mcj.2599
[8] Nigel R. Franks and J. L. Deneubourg. 1997. Self-organizing nest construction in ants: individual worker behaviour and the nest's dynamics. Animal Behaviour 54 (1997), 779–796.
[9] Todd M. Gureckis and Robert L. Goldstone. 2006. Thinking in groups. Pragmatics & Cognition 14 (2006), 293–311.
[10] C. Johnson. 2011. The Information Diet: A Case for Conscious Consumption. O'Reilly Media.
[11] Vasilis Kostakis, Vasilis Niaros, George Dafermos, and Michel Bauwens. 2015. Design global, manufacture local: Exploring the contours of an emerging productive model. Futures 73 (2015), 126–135. https://doi.org/10.1016/j.futures.2015.09.001
[12] Anastasia Kozyreva, Stephan Lewandowsky, and Ralph Hertwig. 2020. Citizens versus the Internet: Confronting digital challenges with cognitive tools. Psychological Science in the Public Interest 21, 3 (2020), 103–156. https://doi.org/10.1177/1529100620946707
[13] N. Levy. 2021. Bad Beliefs: Why They Happen to Good People. OUP Oxford.
[14] Philipp Lorenz-Spreen, Stephan Lewandowsky, Cass R. Sunstein, and Ralph Hertwig. 2020. How behavioural sciences can promote truth, autonomy and democratic discourse online. Nature Human Behaviour 4, 11 (2020), 1102–1109.
[15] Artur Sancho Marques and José Figueiredo. 2013. Stigmergic hyperlink: A new social web object. In Information Systems and Modern Society: Social Change and Global Development (2013), 260–272. https://doi.org/10.4018/978-1-4666-2922-6.ch016
[16] Leslie Marsh and Christian Onof. 2008. Stigmergic epistemology, stigmergic cognition. Cognitive Systems Research 9, 1-2 (2008), 136–149. https://doi.org/10.1016/j.cogsys.2007.06.009
[17] Mike Masnick. 2019. Protocols, not platforms. Knight First Amendment Institute (2019).
[18] Daniel S. Quintana. 2020. Twitter for Scientists [eBook edition]. https://doi.org/10.5281/ZENODO.3707741
[19] Andrei Sambra, Amy Guy, Sarven Capadisli, and Nicola Greco. 2016. Building decentralized applications for the social web. In Proceedings of the 25th International Conference Companion on World Wide Web (WWW '16 Companion). International World Wide Web Conferences Steering Committee, 1033–1034. https://doi.org/10.1145/2872518.2891060
[20] Chirag Shah and Emily M. Bender. 2022. Situating search. In ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR '22). ACM, New York, NY, USA, 221–232. https://doi.org/10.1145/3498366.3505816
[21] Ruben Verborgh. 2022. Re-decentralizing the Web, for good this time. In Linking the World's Information: A Collection of Essays on the Work of Sir Tim Berners-Lee, Oshani Seneviratne and James Hendler (Eds.). ACM. https://ruben.verborgh.org/articles/redecentralizing-the-web/
[22] K. E. Weick. 1995. Sensemaking in Organizations. SAGE Publications.
[23] Jevin D. West and Carl T. Bergstrom. 2021. Misinformation in and about science. Proceedings of the National Academy of Sciences 118, 15 (2021), e1912444117. https://doi.org/10.1073/pnas.1912444117
[24] James Williams. 2018. Stand Out of Our Light: Freedom and Resistance in the Attention Economy. Cambridge University Press.
[25] S. Zuboff. 2019. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
[26] Ethan Zuckerman. 2020. The case for digital public infrastructure. The Tech Giants, Monopoly Power, and Public Discourse: An Essay Series by the Knight Institute, Columbia University (2020). https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure
###### Aalborg Universitet
Decentralized Coordinated Cyber-Attack Detection and Mitigation Strategy in DC Microgrids Based on Artificial Neural Networks
Habibi, Mohammad Reza; Sahoo, Subham; Rivera, Sebastián; Dragicevic, Tomislav; Blaabjerg, Frede
_Published in:_
I E E E Journal of Emerging and Selected Topics in Power Electronics
_DOI (link to publication from Publisher):_
[10.1109/JESTPE.2021.3050851](https://doi.org/10.1109/JESTPE.2021.3050851)
_Publication date:_
2021
_Document Version_
Accepted author manuscript, peer reviewed version
[Link to publication from Aalborg University](https://vbn.aau.dk/en/publications/bd931384-5cdb-45ce-b317-f9bdcab25246)
_Citation for published version (APA):_
Habibi, M. R., Sahoo, S., Rivera, S., Dragicevic, T., & Blaabjerg, F. (2021). Decentralized Coordinated Cyber-Attack Detection and Mitigation Strategy in DC Microgrids Based on Artificial Neural Networks. I E E E Journal of
_Emerging and Selected Topics in Power Electronics, 9(4), 4629-4638. Article 9319658._
[https://doi.org/10.1109/JESTPE.2021.3050851](https://doi.org/10.1109/JESTPE.2021.3050851)
### Decentralized Coordinated Cyber-Attack Detection and Mitigation Strategy in DC Microgrids based on Artificial Neural Networks
###### Mohammad Reza Habibi, Student Member, IEEE, Subham Sahoo, Member, IEEE, Sebasti´an Rivera, Senior Member, IEEE, Tomislav Dragiˇcevi´c, Senior Member, IEEE, and Frede Blaabjerg, Fellow, IEEE
**_Abstract_—DC microgrids can be considered as cyber-physical systems (CPSs), and they are vulnerable to cyber-attacks. Therefore, it is highly recommended to have effective plans to detect and remove cyber-attacks in DC microgrids. This paper shows how artificial neural networks can help to detect and mitigate coordinated false data injection attacks (FDIAs) on current measurements, a type of cyber-attack, in DC microgrids. FDIAs inject false data into the system to disrupt the control application, which can make the DC microgrid shut down. The proposed method to mitigate FDIAs is a decentralized approach, and it has the capability to estimate the value of the injected false data. In addition, the proposed strategy can remove FDIAs even for unfair attacks with high magnitudes on all units at the same time. The proposed method is tested on a detailed simulated DC microgrid using the MATLAB/Simulink environment. Finally, real-time simulations by OPAL-RT on the simulated DC microgrid are implemented to evaluate the proposed strategy.**

**_Index Terms_—DC microgrid, false data injection attack (FDIA), artificial neural networks, cyber-attack mitigation.**
I. INTRODUCTION
DC microgrids are more efficient with less control complexity as compared to AC microgrids [1]–[4], and their operation can be further improved by coordination between the
sources using communication. Based on the communication
networks, two typical communication topologies exist in DC
microgrids, i.e., centralized and distributed [5]. Although centralized methods are simple to implement, their performance is impeded by the single point of failure [6]. As a result, the
reliability of operation becomes very poor for centralized
systems. On the other hand, distributed control enhances the
reliability and flexibility of operation, since the information
is being shared only between the neighbors. As a result, it
becomes robust to limited communication delays, and link
failure. Also, it can be flexible to plug-and-play capability.
However, since global information is inadequate, it is highly
vulnerable to cyber-attacks. This has been a primary concern
for autonomous systems used in mission-critical applications
M.R. Habibi, S. Sahoo, and F. Blaabjerg are with the Department of
Energy Technology, Aalborg University, Aalborg East, 9220, Denmark (emails: mre@et.aau.dk, sssa@et.aau.dk and fbl@et.aau.dk).
S. Rivera is with the Faculty of Engineering and Applied Sciences,
Universidad de los Andes, Chile (e-mail: s.rivera.i@ieee.org).
T. Dragiˇcevi´c is with the Center for Electrical Power and Energy, Department of Electrical Engineering, Technical University of Denmark, Copenhagen, Denmark, (e-mail: tomdr@elektro.dtu.dk).
S. Rivera acknowledges the support of the projects AC3E
(ANID/Basal/FB0008) and SERC (ANID/FONDAP/15110019).
such as electric ships, aircraft, and telecommunication centres
[7]–[10].
Recently, cooperative and consensus-based distributed control strategies have been applied in DC microgrid applications [7],
[11]–[13]. The objectives of cooperative control in DC microgrids are to regulate the average voltage and also proportional
sharing of the currents by using local and neighbor’s data [13],
[14]. In cooperative control strategies, since DC microgrids
only exchange information between the neighbors, the global
information is missing. As a result, cooperative DC microgrids
are highly vulnerable to cyber attacks [15]. There are various
kinds of cyber-attacks such as denial of service (DoS) attacks,
replay attacks, and FDIAs. DoS attacks aim to make the communication network unavailable, while FDIAs inject the
false data into the system to change the state of the system
and replay attacks record the readings of sensors for a given time and, after that, repeat those readings to defraud the
operator of the system [14], [16]–[20]. This paper investigates
the most prominent attack, i.e., FDIAs. Generalized FDIAs, usually recognized as stealth attacks, can inject false data into the system without any visible disturbance by deceiving the control application, leaving the operator uninformed about the ongoing attack [14], [21]. After penetration into
the control system, the attacker can cause the DC microgrid
shutdown by increasing the magnitude of attacked elements
in an unfair manner. To protect the system from such events,
it is vital to design a resilient strategy to remove the FDIA
elements in DC microgrids.
This paper introduces a method to determine the value of the
false data in cooperative DC microgrids and remove the FDIA
from the DC microgrid. The goal is to show how artificial
neural networks can be implemented as a powerful tool to
mitigate the false data in cooperative DC microgrids easily and with very high accuracy. Firstly, an artificial neural network-based estimator is designed to monitor the output current of the converters that connect the distributed energy resources (DERs) to the DC microgrid, and based on the output of the
estimator, FDIAs and also the value of the false data can
be calculated. In the next step, in regard to the calculated
value of the false data, a reference tracking approach using
a PI controller is introduced to mitigate the false data in the
attacked converter.
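As a rough sketch of this detect-then-mitigate idea: the injected false data can be estimated as the residual between the reported measurement and the neural-network estimate. The threshold, the signal values, and the direct-subtraction correction below are our illustrative assumptions; in the paper, the correction is applied through a PI reference-tracking controller rather than by direct subtraction:

```python
def detect_and_mitigate(i_reported, i_estimated, threshold=0.05):
    """Estimate the false-data element as the gap between the reported
    current and the neural-network estimate; flag an attack when the
    residual exceeds a threshold, and return a corrected measurement."""
    i_false = i_reported - i_estimated   # estimated injected false data
    attacked = abs(i_false) > threshold  # detection decision
    i_corrected = i_reported - i_false if attacked else i_reported
    return attacked, i_false, i_corrected

# An attacker adds 0.8 A of false data on top of a true 2.0 A reading,
# while the trained estimator still predicts roughly the true value.
attacked, i_false, i_corrected = detect_and_mitigate(2.8, 2.0)
```

The residual logic is what makes the approach decentralized: each converter only needs its own local measurements and its own trained estimator to compute the correction.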
The organization of the rest of the paper is as follows:
Section II introduces the basics of the feedforward neural
networks. Section III discusses the fundamental concepts of
cooperative control of DC microgrids based on the consensus theory, and also the effect of FDIAs on cooperative DC microgrids. In addition, the proposed strategy is explained in Section IV. Sections V and VI show the performance of the proposed method under offline and real-time simulations, respectively. In Section VII, a discussion of the proposed method and future work is provided. Finally, the conclusion of the paper is given in Section VIII.

Fig. 1. General architecture of a feedforward neural network with k inputs and one output.

TABLE I
PARAMETERS OF FIG. 2

| Parameter | Description |
|---|---|
| $M_{l-1}$ | Number of neurons in the $(l-1)$th layer |
| $\alpha_j^l$ | Output of the $j$th neuron from the $l$th layer |
| $b_j^l$ | Bias weight of the $j$th neuron of the $l$th layer |
| $f(\cdot)$ | Activation function of the neuron |
| $w_{rj}^l$ | Connection weight between the $r$th neuron in the $(l-1)$th layer and the $j$th neuron in the $l$th layer |
II. INTRODUCTION TO FEEDFORWARD NEURAL NETWORKS
A feedforward neural network consists of an input layer
with multiple inputs, one or more hidden layers and an output
layer. In the feedforward structure, each layer is built by
number of neurons, and data will propagate from neurons in
one layer to another. Fig. 1 shows the general structure of a
feedforward neural network.
In Fig. 1, $x_i$ and $y$ are the $i$th input and the output of the neural network, respectively. Considering the input and output layers as the first and last layers of the neural network, Fig. 2 illustrates the structure of the $j$th neuron in the $l$th ($l > 1$) layer, and Table I defines the corresponding parameters of the neural network shown in Fig. 2.
The mathematical formulation of Fig. 2 is as follows:

$$\alpha_j^l = f\Big(\Big(\sum_{i=1}^{M_{l-1}} \alpha_i^{l-1} \times w_{ij}^l\Big) + b_j^l\Big). \qquad (1)$$

Fig. 2. The structure of the $j$th neuron in the $l$th layer of the feedforward neural network, which is depicted in Fig. 1.

In this paper, a feedforward neural network is implemented to estimate the output DC current of the converters that connect the DERs to the DC microgrid. For the implementation of a
feedforward neural network, two steps are considered. In the first step, the neural network is trained offline to prepare a fine-tuned neural network, and in the second step, the well-trained feedforward neural network is used to monitor and estimate the output DC current of the converters. Firstly, a feedforward neural network with one hidden layer is trained offline and then examined to estimate the output currents for several scenarios. Based on the observed satisfactory results, and to avoid complexity, a neural network with one hidden layer is considered. The
mathematical description of a feedforward neural network with one hidden layer and one output is as follows:

$$\bar{y} = f_{out}\big(f_{hid}(X W^{hid} + b^{hid})\,W^{out} + b^{out}\big). \qquad (2)$$

Here, $f_{hid}$ and $f_{out}$ are the activation functions of the neurons in the hidden layer and the output layer, respectively. In addition, $\bar{y}$ is the output estimated by the feedforward neural network, and $X$ is the input vector of the neural network with $k$ inputs, which is defined as follows:

$$X = (x_1, x_2, \ldots, x_{k-1}, x_k). \qquad (3)$$

Furthermore, $b^{out}$ is the bias weight of the neuron in the output layer. Moreover, $W^{hid}$, $W^{out}$ and $b^{hid}$ are the weight matrix of the hidden layer, the weight vector of the output layer, and the bias vector of the hidden layer, respectively. The aim of the offline training of the feedforward neural network is to find the optimized values of $W^{hid}$, $W^{out}$, $b^{hid}$ and $b^{out}$ to have a fine-tuned neural network that estimates the output properly. To train the neural network offline, a set of input data and corresponding outputs should be prepared and used in the training process to optimize the parameters of the feedforward neural network.
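As a concrete illustration of Eqs. (1) and (2), a one-hidden-layer forward pass can be written in a few lines of plain Python. The tanh hidden activation, the linear output neuron, and the toy weights below are our assumptions for the sketch; the paper fixes the actual parameters through offline training:

```python
import math

def neuron(inputs, weights, bias, f=math.tanh):
    """Output of one neuron, matching Eq. (1):
    alpha_j = f(sum_i alpha_i * w_ij + b_j)."""
    return f(sum(a * w for a, w in zip(inputs, weights)) + bias)

def forward(x, W_hid, b_hid, w_out, b_out):
    """One-hidden-layer feedforward pass, matching Eq. (2):
    y = f_out(f_hid(X W_hid + b_hid) W_out + b_out)."""
    hidden = [neuron(x, w, b) for w, b in zip(W_hid, b_hid)]
    # Linear output neuron (identity activation), common for regression.
    return neuron(hidden, w_out, b_out, f=lambda z: z)

# Tiny example: k = 2 inputs, 3 hidden neurons, 1 output.
x = [1.0, -1.0]
W_hid = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]  # one weight row per hidden neuron
b_hid = [0.0, 0.0, 0.0]
w_out = [1.0, 1.0, 1.0]
y = forward(x, W_hid, b_hid, w_out, 0.5)      # all-zero hidden weights -> y = b_out
```

Offline training would then adjust `W_hid`, `b_hid`, `w_out` and `b_out` so that `y` tracks the measured converter output current.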
III. FDIA ON COOPERATIVE CONTROL-BASED DC MICROGRIDS
In this section, a traditional cooperative control scheme used
for DC microgrids will be introduced. Further, the effect of
FDIAs in cooperative DC microgrids will be discussed.
_R1,M_
_Unit_
_Unit 1_
_M_
_Iin1_ = _Idc1_ _Vdc1_ _of the networkCyber graph_
_Unit 1_ _Il1_ _Rest of the_ _Unit_
_Unit 2_
_cyber_ _M-1_
_network_
_Physical structure of_
_the DC microgrid_ _Rest of the physical_
_Cyber network of the_ _Network_
_DC microgrid_
Fig. 3. Cyber-physical model of the DC microgrid with M units.
_A. Cooperative Control of a DC Microgrid_
Fig. 3 shows a general cyber-physical model of a DC
microgrid, which is studied in this paper. The illustrated DC
microgrid consists of M units and each unit is a DC source that
is connected to the DC microgrid by a DC-DC converter with
equal power rating for all of the converters. Each converter
works to restore the voltage as per the reference voltage, which
is prepared by the local primary and the secondary controllers.
In addition, an undirected cyber graph is employed to transmit
the local information only between neighbors.
Also, Fig. 4 illustrates the cooperative control application
of the DC microgrids. As it can be seen in Fig. 4, two voltage
terms are added to the global voltage reference to deal the
local voltage reference and maintain the output voltage of each
converter, as follows:
_Vdc[i]_ _ref_ [=][ V][dc]ref [+ ∆][V][v] [+ ∆][V][i][.] (4)
In Fig. 4, Vdcref and Idcref represent the global reference of
voltage and current for all units, respectively. It is important to
note that Idcref = 0 for the load current sharing proportionally
between units [18]. The _V[¯]dc[i]_ [is the average voltage estimated]
for the i[th] unit and it is updated based on the following
protocol, which is named dynamic consensus [22]:
$$\bar{V}_{dc}^{i}(t) = V_{dc}^{i}(t) + \int_{0}^{t} \sum_{j \in M_i} a_{ij}\left(\bar{V}_{dc}^{j}(\tau) - \bar{V}_{dc}^{i}(\tau)\right) d\tau, \tag{5}$$

where $M_i$ is the set of neighbors of the $i$th unit. In addition,
$\bar{I}_{out}^{i}$ is updated as follows:

$$\bar{I}_{out}^{i}(k) = \sum_{j \in M_i} c_i a_{ij} \left( \frac{I_{dc}^{j}(k)}{I_{max}^{j}} - \frac{I_{dc}^{i}(k)}{I_{max}^{i}} \right). \tag{6}$$
Fig. 4. The cooperative control of the DC microgrid.
Based on the distributed consensus algorithm, the objectives of the
DC microgrid for a well-connected cyber graph will converge as
follows [23]:

$$\lim_{k \to \infty} \bar{V}_{dc}^{i}(k) = V_{dc,ref}, \quad \lim_{k \to \infty} \bar{I}_{out}^{i}(k) = 0, \quad \forall i \in M. \tag{7}$$
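As an illustration of the dynamic consensus protocol in (5), the following self-contained Python sketch (our own construction, not the authors' code; the graph, voltages, step size, and step count are arbitrary choices) propagates neighbor-only voltage estimates on a 4-unit ring and shows them converging to the network average:

```python
# Illustrative discretization of the dynamic consensus protocol (5).
def dynamic_consensus(v_local, adjacency, dt=0.01, steps=20000):
    """Per-unit average-voltage estimates using neighbor data only."""
    n = len(v_local)
    integral = [0.0] * n          # integral term of Eq. (5)
    est = list(v_local)           # initial estimate = local voltage
    for _ in range(steps):
        coupling = [sum(adjacency[i][j] * (est[j] - est[i])
                        for j in range(n)) for i in range(n)]
        integral = [integral[i] + dt * coupling[i] for i in range(n)]
        est = [v_local[i] + integral[i] for i in range(n)]
    return est

# 4-unit ring: each unit exchanges data with two neighbors only.
ring = [[0, 1, 0, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 0]]
v = [314.0, 315.5, 316.0, 314.5]      # local DC voltages (V)
estimates = dynamic_consensus(v, ring)
```

With a connected graph, all estimates converge to the true network average (315.0 V here) even though no unit ever sees more than its neighbors' estimates, consistent with (7).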
_B. Effect of FDIAs on Cooperative DC Microgrids_
In the case of attacks, the DC microgrid no longer satisfies
the objectives in (7). However, some attacks can be programmed
with more sophistication to deceive the operators by still obeying (7).
Detection of stealth attacks using voltage measurements was studied
in [14]; this paper therefore focuses on the detection and mitigation
of coordinated attacks on the current measurements. An attack on the
current sensors of the $i$th agent can be conducted as:

$$I_a^i = I_{dc}^i + \kappa_i I_f^i, \tag{8}$$
where $I_a^i$ is the output-current value of the $i$th unit reported
to the controller and $I_{dc}^i$ is the real output current of the
$i$th unit. In addition, $I_f^i$ is the false data injected into the
system by the attackers. It is important to note that $\kappa_i$ is a
binary parameter: $\kappa_i = 1$ indicates the presence of an attack
element, and vice-versa. Furthermore, the model of an FDIA on a cyber
link is as follows:

$$I_a^{ij} = I_{dc}^{j} + \kappa_i I_f^i, \quad \forall j \in M_i. \tag{9}$$
In (9), $I_a^{ij}$ is the output-current value of the $i$th unit
that is sent to the $j$th unit. A coordinated attack injects the false
data into both the sensor and the cyber link; it then appears as a
load change in the DC microgrid and still satisfies the objectives of
cooperative control, i.e., current sharing and average voltage
regulation.
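To make the attack model concrete, here is a small hypothetical sketch (all names are ours) of Eqs. (8)-(9); following the textual description, the value sent to each neighbor is the attacked unit's own falsified current. The same false datum is added to the local sensor reading and to the cyber-link values, which is what makes the attack "coordinated":

```python
# Coordinated FDIA model of Eqs. (8)-(9); variable names are ours.
def attacked_measurements(i_dc, neighbors, kappa, i_false):
    """Return (sensor value Ia^i, {j: Ia^ij} for each neighbor j)."""
    bias = kappa * i_false                        # kappa in {0, 1}
    sensor = i_dc + bias                          # Eq. (8)
    links = {j: i_dc + bias for j in neighbors}   # Eq. (9)
    return sensor, links

# Unit with true output 4.0 A, neighbors {2, 4}, +2.0 A injected:
sensor, links = attacked_measurements(4.0, [2, 4], kappa=1, i_false=2.0)
# The falsified value (6.0 A) is consistent everywhere, so the
# cooperative controller sees what looks like a legitimate load change.
```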
IV. PROPOSED METHOD
The objective of this paper is to detect and mitigate the
_coordinated_ FDIAs on the output current measurements of the
converters. As mentioned earlier, these smart attacks satisfy (7),
which makes it difficult to identify their existence just by
monitoring the cooperative control signals. As a result, an
appropriate control strategy is needed to mitigate such attacks in
cooperative DC microgrids.
In (6), $c_i$ and $I_{dc}^{i}$ are the coupling gain and the measured
output current of the $i$th converter, respectively. In addition,
$I_{max}^{i}$ denotes the maximum output current allowed for the
$i$th converter.
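The current-sharing term (6) can be sketched as follows (a toy check of our own; the currents, ratings, graph, and gains are made-up examples). When every unit's normalized current $I_{dc}^i / I_{max}^i$ is equal, the sharing error is zero, consistent with objective (7):

```python
# Per-unit current-sharing error of Eq. (6); all numbers are examples.
def sharing_error(i, i_dc, i_max, neighbors, a, c):
    return sum(c[i] * a[i][j] * (i_dc[j] / i_max[j] - i_dc[i] / i_max[i])
               for j in neighbors[i])

i_dc  = [2.0, 4.0, 2.0, 4.0]     # output currents (A)
i_max = [5.0, 10.0, 5.0, 10.0]   # ratings: sharing is proportional
nbrs  = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
a = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
c = [1.0] * 4                    # coupling gains
errors = [sharing_error(i, i_dc, i_max, nbrs, a, c) for i in range(4)]
# Every normalized current is 0.4, so each sharing error is zero.
```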
Fig. 5. The implementation of the PI based reference tracking method to
remove the attack in cooperative DC microgrids in the i[th] unit.
Fig. 6. The structure of the bidirectional boost DC-DC converter in the i[th]
unit of the cooperative DC microgrid.
The introduced strategy is based on reference tracking of the
output DC current of each converter in order to mitigate the
false data. The proposed method is a PI-controller-based
reference-tracking scheme in which the reference is produced by an
artificial neural network. In this work, a local estimator is
designed for each unit to estimate the output current of the
converter using an artificial neural network. The output of this
neural network is used as the reference of a PI controller, and the
output of the PI controller is added to the measured output current
of the converter. In the following, the implementation of the PI
controller and of the artificial neural network estimator are
discussed in more detail.
Fig. 5 shows the implementation of the local PI controller
in the $i$th unit. Based on Fig. 5, in the absence of the attack
mitigation layer and the PI controller in the $i$th unit, the value
of the converter output current that is used by the local controller
and sent to the neighbors is:

$$I_M^i(k) = I_a^i(k) = I_{dc}^i(k) + \kappa_i I_f^i, \tag{10}$$

where $I_M^i$ is the gathered value of the converter output current
in the $i$th unit. In the presence of the attack mitigation layer in
the $i$th unit, $I_M^i$ is determined as follows:

$$I_M^i(k) = I_{dc}^i(k) + \kappa_i I_f^i + \beta_i. \tag{11}$$
Fig. 7. The monitoring and implementing βi to detect and also to mitigate
the existence of the false data in the i[th] unit.
In (11), $\beta_i$ is the output of the PI controller in the $i$th
unit. The PI controller is employed to make $I_M^i$ track $I_{dc}^i$
even in the presence of attacks. The reference used in the attack
mitigation layer is the estimated output current of the $i$th unit,
denoted $\bar{I}_{dc}^i$. If $\bar{I}_{dc}^i$ were estimated exactly
by an ideal estimator, without any errors, then
$\bar{I}_{dc}^i = I_{dc}^i$, and $\beta_i$ could alternatively be
written as:

$$\beta_i = -\kappa_i I_f^i. \tag{12}$$
Based on (12), the PI controller produces an output that is added to
the gathered output-current value in order to cancel the effect of
the coordinated FDIA in the unit. $\beta_i$ is therefore a proper
index for monitoring the $i$th unit locally: a nonzero $\beta_i$
means the $i$th unit is under attack. If the $i$th unit is not under
attack, $\kappa_i$ is zero and, based on (12), $\beta_i$ is zero.
But if the $i$th unit is under attack, $\kappa_i$ is one and, based
on (12), $\beta_i$ has the same magnitude as the injected false data
but the opposite sign. Therefore, by monitoring $\beta_i$ in each
unit, the exact value of the false data can be determined whenever
the unit is under attack. Briefly,
_Remark 1:_ By monitoring $-\beta_i$, the existence of an attack
in the $i$th unit can be detected. Based on (12), if $-\beta_i$ is
nonzero, the $i$th unit is under an FDIA; if the unit is not under
attack, $-\beta_i$ is zero.

_Remark 2:_ Based on (12), if the $i$th unit is under an FDIA,
the value of the injected false data ($I_f^i$) is equal to
$-\beta_i$. Therefore, by injecting the output of the PI controller
($\beta_i$) into the system, the attack is mitigated.
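The mitigation loop behind Remarks 1-2 can be sketched as below (a minimal construction of our own; the gains, step size, and currents are arbitrary, and a perfect estimator is assumed). The PI controller drives the corrected measurement $I_M^i$ toward the estimated current, so $\beta_i$ settles at minus the injected false data, exactly as (12) predicts:

```python
# PI-based mitigation layer, Eqs. (11)-(12); a perfect estimator
# (i_dc_est equal to the true current) is assumed for clarity.
def mitigate(i_dc_est, i_attacked, kp=0.5, ki=10.0, dt=1e-3, steps=5000):
    beta, integ = 0.0, 0.0
    for _ in range(steps):
        i_m = i_attacked + beta        # Eq. (11): corrected measurement
        err = i_dc_est - i_m           # track the estimator reference
        integ += ki * err * dt
        beta = kp * err + integ        # PI output
    return beta

# True current 4 A; the attacker injects +2 A (kappa_i = 1, If^i = 2):
beta = mitigate(i_dc_est=4.0, i_attacked=6.0)
# beta settles near -2.0, so -beta recovers the injected false data.
```

The gains here are chosen only so that this discrete loop is stable; the paper studies the sensitivity to $K_p$ in Figs. 12 and 16.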
Fig. 7 shows how the proposed decentralized method detects and
mitigates the cyber-attack in the system. In the attack mitigation
layer, $\bar{I}_{dc}^i$ plays an important role: the
reference-producing layer should be reliable and
Fig. 8. Implementation of the neural network in offline and online modes in the i[th] unit in the DC microgrid.
it should produce $\bar{I}_{dc}^i$ as close to $I_{dc}^i$ as
possible, with high accuracy and small error. In this work,
exploiting the ability of artificial neural networks to learn the
mapping between the inputs and the output of a system with a high
degree of non-linearity and complexity, an artificial neural network
is implemented to estimate the output current of the converter in
each unit. A feedforward neural network is used as the estimator
and, as will be shown later, it estimates and predicts the converter
output current well. Because of these acceptable results, and to
avoid unnecessary complexity in the reference-prediction layer, the
feedforward neural network is selected as a proper candidate for
this work.
The implementation of the neural network consists of two
phases. The first, performed offline, is the training of the neural
network, which yields a fine-tuned network with proper estimation
ability; the second is the online implementation of the trained
network to estimate the converter output current and produce the
reference for the PI controller. The cooperative DC microgrid
consists of DERs, and each DER, as a DC source, is connected to the
main DC bus by a bidirectional buck-boost converter modeled as in
Fig. 6.
In this study, $I_{in}^i$ and $(V_{dc,ref}^i - V_{dc}^i)$ are
selected as the inputs of the neural network used to estimate
$I_{dc}^i$. Before its online implementation, the network must be
trained to determine the optimized connection weights between
neurons of consecutive layers, as well as the neuron bias weights,
so as to obtain a well-trained neural network. For training, a set
of input and output data of the neural network must be gathered.
It is important to note that the training is carried out offline,
before the online implementation of the network. Fig. 8 shows how
the neural network is trained offline so that it is ready for
online implementation in each unit to estimate that unit's output
current.
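The two-phase workflow can be illustrated with a deliberately simplified stand-in for the paper's network (our own toy, not the paper's estimator): a single linear neuron is trained offline by stochastic gradient descent on synthetic non-attacked samples, then frozen for online use as the PI reference:

```python
import random

# Offline phase: SGD on non-attacked (Iin, Vref - Vdc, Idc) samples.
# A linear neuron stands in for the paper's feedforward network.
def train_offline(samples, lr=0.01, epochs=200):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for i_in, v_err, i_dc in samples:
            g = (w1 * i_in + w2 * v_err + b) - i_dc   # error gradient
            w1 -= lr * g * i_in
            w2 -= lr * g * v_err
            b  -= lr * g
    return w1, w2, b

# Synthetic data with a known relation: i_dc = 0.9*i_in - 0.2*v_err.
random.seed(1)
data = [(i_in, v_err, 0.9 * i_in - 0.2 * v_err)
        for i_in, v_err in ((random.uniform(0, 5), random.uniform(-1, 1))
                            for _ in range(500))]
w1, w2, b = train_offline(data)
# Online phase: the frozen (w1, w2, b) estimate Idc for the PI loop.
```

Training recovers the generating coefficients; the same separation (train once offline, freeze for online use) is what Fig. 8 depicts for the actual 2-10-1 network.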
V. SIMULATION RESULTS
A cyber-physical microgrid with M = 4 units, as shown in Fig. 3,
is considered here. The simulation parameters are given in the
Appendix. First, a neural network with one hidden layer of 10
neurons is implemented to estimate the output current of each
converter. As will be shown later, the results with one hidden
layer are satisfactory and the network estimates the output current
of each converter precisely; therefore, to avoid additional
complexity, no network with more hidden layers is used. The network
has two inputs, so the input layer has two neurons. The number of
neurons in the output layer equals the number of network outputs;
here the only output is the estimated output current, so the output
layer has one neuron. By default, the hidden layer has ten neurons;
since the results with ten hidden neurons were satisfactory, this
number was not changed. In addition, because the structure and
parameters of the converters are the same in all units, a neural
network was trained for one unit and then deployed in the other
units. To gather the training data, the simulation model was run
for a given
duration (20 s) and data were gathered every 0.1 ms, so a set of
200,000 samples of $\{I_{in}^1, (V_{dc,ref}^1 - V_{dc}^1)\}$ as the
network input and 200,000 samples of $\{I_{dc}^1\}$ as the network
output were selected for the training phase. It is important to
note that during the data-gathering window, 14 load changes were
applied in the simulation, 11 of them in unit 1. These load changes
capture the dynamics of the converter in the gathered data, which
yields a more accurate neural-network-based estimator. In addition,
the activation functions of the hidden layer and the output layer,
namely $f_1$ and $f_2$, are sigmoid and linear activation functions,
respectively, and they are given as follows:
$$f_1(x) = \frac{2}{1 + e^{-2x}} - 1, \tag{13}$$

$$f_2(x) = x. \tag{14}$$
It is important to note that the global reference voltage in this
work is 315 V. In addition, to gather the training data, the DC
microgrid model was simulated without any attack. The rest of this
section presents the results of different scenarios based on the
proposed method.
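For reference, the activations (13)-(14) and a single forward pass of the 2-10-1 estimator can be written out as below (a sketch of ours; the weights are random placeholders, whereas in the paper they come from offline training). Note that $f_1$ in (13) is algebraically the hyperbolic tangent:

```python
import math, random

def f1(x):                      # hidden-layer activation, Eq. (13)
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0   # equals tanh(x)

def f2(x):                      # output-layer activation, Eq. (14)
    return x

def estimate_current(i_in, v_err, W1, b1, W2, b2):
    """Forward pass: inputs Iin^i and (Vdc,ref^i - Vdc^i) -> Idc^i."""
    hidden = [f1(w[0] * i_in + w[1] * v_err + b) for w, b in zip(W1, b1)]
    return f2(sum(w * h for w, h in zip(W2, hidden)) + b2)

random.seed(0)                  # placeholder weights, 2-10-1 topology
W1 = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(10)]
b1 = [random.uniform(-1, 1) for _ in range(10)]
W2 = [random.uniform(-1, 1) for _ in range(10)]
b2 = 0.0
i_hat = estimate_current(5.0, 0.3, W1, b1, W2, b2)   # scalar estimate
```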
_A. Scenario 1: Injecting false data via a coordinated attack_

In this scenario, a load change occurs in unit 1 at t = 0.5 s,
and one second later false data of value 2 start to be injected
into unit 1 as a coordinated attack. Fig. 9 shows the output
currents of the converters, and Fig. 10 illustrates the DC voltages
of all units. Based on Fig. 9, the attack is removed from the
attacked unit quickly. Furthermore, based on (12), $\beta_i$
represents the estimated value of the false data coordinately
injected into the $i$th unit, so Fig. 11 shows the estimated false
data in all units; based on the results, the proposed method
estimates the false-data value in every unit precisely. As can be
seen in Fig. 11, $-\beta_i$ is zero for i = 2, 3, 4, while at
t = 1.5 s, $-\beta_i$ for i = 1 starts to increase toward 2, which
means that unit 1 is the attacked unit, with false data of value 2.
In addition, Fig. 12 illustrates the estimated value of the false
data in the attacked unit for different values of $K_p$.
_B. Scenario 2: Wide FDIAs on all units_
In this scenario, the attacker targets all units at different times
with coordinated FDIAs. In the planned scenario, at t = 0.5 s,
1.5 s, 2.5 s, and 3.5 s, false data of value +1, -0.5, +1, and +0.5
are injected into units 1, 2, 3, and 4, respectively. Fig. 13 shows
the output currents of the DC-DC converters in the DC microgrid; it
can clearly be seen that the introduced strategy removes the
_coordinated_ FDIAs even when all units are under attack. Fig. 14
shows the output voltages of all units, and Fig. 15 illustrates the
estimated values of the false data: the proposed method successfully
detects the attacked units and reliably estimates the false-data
values. Finally, Fig. 16 shows the estimated false data for
different $K_p$.
Fig. 9. DC output currents of all units during a coordinated attack in
scenario 1.
Fig. 10. DC voltages of all units during a coordinated attack in scenario 1.
Fig. 11. Estimation of the false injected data into all units in scenario 1.
Fig. 12. Estimated value of the false injected data in the attacked unit for
different Kp in scenario 1.
Fig. 13. DC output currents of all units during wide coordinated attacks in
scenario 2.
Fig. 14. DC voltages of all units during wide coordinated attacks in scenario 2.
Fig. 15. Estimation of the false injected data into all units in scenario 2.
VI. REAL-TIME SIMULATION RESULTS
This work is verified in real-time simulation using OPAL-RT on a
detailed model of the cooperative DC microgrid, in order to evaluate
the computational burden of the proposed method. The setup consists
of an OPAL-RT target, a laptop, and a router that connects the
devices to each other. The OPAL-RT software is RT-LAB, which is
integrated with MATLAB: the MATLAB/Simulink environment is opened
from RT-LAB, which then generates the C code of the model to be run
on the real-time target. It is important to note that the sample
time in the MATLAB configuration parameters of the model is
$5 \times 10^{-5}$ s. The real-time system has three subsystems,
i.e., master, slave, and console. The plant model is implemented in
the master subsystem, the slave subsystem is used to separate the
computational section, and the scopes are located in the console
subsystem. The information of the target is given in the Appendix.
Fig. 17 shows the real-time setup based on OPAL-RT and Fig. 18
illustrates the implementation of the subsystems.

In this part, all units are simultaneously subjected to coordinated
attacks with unequal values of false data: +80, +60, +40, and +20
are injected into units 1, 2, 3, and 4, respectively. Fig. 19(a) and
Fig. 19(b) show the currents and voltages in the DC microgrid; as
can be seen, the effect of the unequal coordinated attacks is
removed from the DC microgrid. In addition, Fig. 20 shows the value
of $-\beta_i$ for i = 1, 2, 3, and 4.
VII. DISCUSSIONS AND FUTURE WORK
In this study, a method based on artificial neural networks
is introduced to detect and mitigate FDIAs in DC microgrids.
The proposed strategy has several advantages. It is a decentralized
approach; as a result, it needs no extra data transmission between
units and uses only local data. Furthermore, the proposed method can
calculate the value of the injected false data. Also, the neural
network is trained on non-attacked data only and does not need data
from the system while it is under attack. In other words, unlike
other methods, there is no need to model the attack in the training
phase in order to detect
Fig. 16. The estimated value of the false injected data to all units for different Kp in scenario 2.
Fig. 17. The real-time setup to evaluate the proposed attack detection and
mitigation strategy.
and mitigate the FDIAs. Based on the proposed strategy, the FDIAs
can be both detected and mitigated without disconnecting the
attacked converter from the DC microgrid, so the DC microgrid can
keep operating without stress even while under FDIAs. The proposed
method is successful even when all units are under unequal attacks.
In the planned future work, the proposed application will be
Fig. 18. Implementation of the master, slave, and console subsystems for
real-time simulation.
developed to detect and remove other types of attacks. Also, the
proposed strategy will be extended to more complex DC microgrids.
VIII. CONCLUSIONS
This work introduced a method based on artificial neural
networks to detect and remove the coordinated FDIAs on
(a) DC output currents.
(b) DC voltages.
Fig. 19. The values of a) DC currents, and b) DC voltages for all units during
the real-time simulation.
Fig. 20. Estimation of the false injected data for all units during the real-time
simulation.
current measurements, so as to obtain a secure cooperative control
strategy for DC microgrids. In the proposed method, a neural network
in each unit first estimates the output DC current of the converter;
based on this estimate, a PI controller is implemented to remove the
attack from the attacked unit. The proposed method is decentralized,
with no need to exchange any extra data between neighbors. The
proposed strategy can determine the value of the false data when any
of the units is under attack. Furthermore, as the results show, this
work successfully detects and removes attacks in DC microgrids even
when all units are attacked, including the case where the attacker
injects false data into all units simultaneously with large and
unequal magnitudes.
TABLE II
PARAMETERS OF THE SIMULATED DC MICROGRID
$R_{12} = 1.8\ \Omega$   $V_{dc,ref} = 315$ V and $\Delta t = 0.1$ ms
$R_{23} = 2.3\ \Omega$   $V_{in}^1 = V_{in}^2 = V_{in}^3 = V_{in}^4 = 270$ V
$R_{34} = 2.1\ \Omega$   $L_1 = L_2 = L_3 = L_4 = 5$ mH
$R_{14} = 1.3\ \Omega$   $C_1 = C_2 = C_3 = C_4 = 50$ mF
TABLE III
PARAMETERS OF THE TARGET
Version: 2.6.29.6-opalrt-6.1
Number of CPUs: 12
CPU speed: 3466 MHz
Architecture: i686
APPENDIX
The DC microgrid parameters used in the simulation are shown in
Table II. In addition, Table III gives the information of the target
for the real-time simulation.
REFERENCES
[1] T. Dragiˇcevi´c, X. Lu, J. C. Vasquez, and J. M. Guerrero, “Dc microgrids—part ii: A review of power architectures, applications, and
standardization issues,” IEEE Trans. Power Elect., vol. 31, pp. 3528–
3549, May 2016.
[2] D. Chen, L. Xu, and J. Yu, “Adaptive dc stabilizer with reduced dc fault
current for active distribution power system application,” IEEE Trans.
_Power Syst., vol. 32, pp. 1430–1439, March 2017._
[3] Z. Liu, M. Su, Y. Sun, W. Yuan, H. Han, and J. Feng, “Existence and
stability of equilibrium of dc microgrid with constant power loads,”
_IEEE Trans. Power Syst., vol. 33, pp. 6999–7010, Nov 2018._
[4] Y. Gu, W. Li, and X. He, “Passivity-based control of dc microgrid
for self-disciplined stabilization,” IEEE Trans. Power Syst., vol. 30,
pp. 2623–2632, Sep. 2015.
[5] T. Dragiˇcevi´c, X. Lu, J. C. Vasquez, and J. M. Guerrero, “Dc microgrids—part i: A review of control strategies and stabilization techniques,” IEEE Trans. Power Elect., vol. 31, pp. 4876–4891, July 2016.
[6] C. Wang, J. Duan, B. Fan, Q. Yang, and W. Liu, “Decentralized highperformance control of dc microgrids,” IEEE Trans. Smart Grid, vol. 10,
pp. 3355–3363, May 2019.
[7] V. Nasirian, S. Moayedi, A. Davoudi, and F. L. Lewis, “Distributed
cooperative control of dc microgrids,” IEEE Trans. Power Elect., vol. 30,
pp. 2288–2303, April 2015.
[8] L. Meng, Q. Shafiee, G. F. Trecate, H. Karimi, D. Fulwani, X. Lu,
and J. M. Guerrero, “Review on control of dc microgrids and multiple
microgrid clusters,” IEEE J. of Em. and Sel. Topics in Power Electron,
vol. 5, pp. 928–948, Sep. 2017.
[9] T. Morstyn, B. Hredzak, G. D. Demetriades, and V. G. Agelidis, “Unified
distributed control for dc microgrid operating modes,” IEEE Trans.
_Power Syst., vol. 31, pp. 802–812, Jan 2016._
[10] T. Wang, D. O’Neill, and H. Kamath, “Dynamic control and optimization
of distributed energy resources in a microgrid,” IEEE Trans. Smart Grid,
vol. 6, pp. 2884–2894, Nov 2015.
[11] X. Chen, M. Shi, J. Zhou, W. Zuo, Y. Chen, J. Wen, and H. He,
“Consensus-based distributed control for photovoltaic-battery units in
a dc microgrid,” IEEE Trans. Ind. Electron., vol. 66, pp. 7778–7787,
Oct 2019.
[12] L. Meng, T. Dragicevic, J. Rold´an-P´erez, J. C. Vasquez, and J. M.
Guerrero, “Modeling and sensitivity study of consensus algorithm-based
distributed hierarchical control for dc microgrids,” IEEE Trans. Smart
_Grid, vol. 7, pp. 1504–1515, May 2016._
[13] Y. Li, P. Dong, M. Liu, and G. Yang, “A distributed coordination control
based on finite-time consensus algorithm for a cluster of dc microgrids,”
_IEEE Trans. Power Syst., vol. 34, pp. 2205–2215, May 2019._
[14] S. Sahoo, S. Mishra, J. C. Peng, and T. Dragicevic, “A stealth cyber
attack detection strategy for dc microgrids,” IEEE Trans. Power Elect.,
vol. 34, pp. 8162–8174, Aug 2019.
[15] S. Abhinav, H. Modares, F. L. Lewis, and A. Davoudi, “Resilient
cooperative control of dc microgrids,” IEEE Trans. Smart Grid, vol. 10,
pp. 1083–1085, Jan 2019.
[16] Y. Liu, P. Ning, and M. K. Reiter, “False data injection attacks against
state estimation in electric power grids,” ACM Trans. Inf. Syst. Secur.,
vol. 14, pp. 13:1–13:33, 2011.
[17] M. R. Habibi, H. R. Baghaee, T. Dragiˇcevi´c, and F. Blaabjerg, “Detection
of false data injection cyber-attacks in dc microgrids based on recurrent
neural networks,” IEEE J. of Em. and Sel. Topics in Power Electron.,
pp. 1–1, 2020.
[18] S. Sahoo, J. C. Peng, D. Annavaram, S. Mishra, and T. Dragicevic, “On
detection of false data in cooperative dc microgrids-a discordant element
approach,” IEEE Trans. Ind. Elect., pp. 1–1, 2019.
[19] S. Liu, Z. Hu, X. Wang, and L. Wu, “Stochastic stability analysis and
control of secondary frequency regulation for islanded microgrids under
random denial of service attacks,” IEEE Trans. Ind. Inform., vol. 15,
pp. 4066–4075, July 2019.
[20] Y. Mo and B. Sinopoli, “Secure control against replay attacks,” in
_2009 47th Annual Allerton Conference on Communication, Control, and_
_Computing (Allerton), pp. 911–918, Sep. 2009._
[21] J. Zhao, L. Mili, and M. Wang, “A generalized false data injection attacks
against power system nonlinear state estimator and countermeasures,”
_IEEE Trans. Power Syst., vol. 33, pp. 4868–4877, Sep. 2018._
[22] M. Zhu and S. Mart´ınez, “Discrete-time dynamic average consensus,”
_Automatica, vol. 46, no. 2, pp. 322 – 329, 2010._
[23] S. Sahoo, J. C. Peng, S. Mishra, and T. Dragiˇcevi´c, “Distributed
screening of hijacking attacks in dc microgrids,” IEEE Trans. Pow. Elect,
vol. 35, no. 7, pp. 7574–7582, 2020.
**Mohammad Reza Habibi (S’19) was born in**
Tehran, Iran. He is currently working toward the
Ph.D. degree with the Department of Energy Technology, Aalborg University, Denmark. He is also
a Visiting Research Scholar with the Department
of Electrical Power Engineering and Mechatronics,
Tallinn University of Technology, Tallinn, Estonia.
His current research interests include intelligent energy systems, application of artificial intelligence
in power electronics and power systems, advanced
control of power converters, modeling and control
of energy storage systems, modeling and secure control of DC distribution
systems and microgrids, and cyber-physical systems.
**Subham Sahoo (S’16-M’18) received the B.Tech.**
& Ph.D. degree in Electrical and Electronics Engineering from VSS University of Technology, Burla,
India and Electrical Engineering at Indian Institute
of Technology, Delhi, New Delhi, India in 2014
& 2018, respectively. He has worked as a Visiting Student with the Department of Electrical and
Electronics Engineering in Cardiff University, UK in
2017. Prior to completion of his PhD, he worked as
a Research Fellow in the Department of Electrical
and Computer Engineering in National University
of Singapore. He has made significant contribution towards development of
advanced resilient control strategies in cyber-physical DC microgrids. He is
currently working as a postdoctoral researcher in the Department of Energy
Technology, Aalborg University, Denmark. He is a recipient of the Indian
National Academy of Engineering (INAE) Innovative Students Project Award
for his PhD thesis across all the institutes in India for the year 2019. He has
also won the IRD Student Start-up Award in the year 2017 to incorporate a
company named SILOV SOLUTIONS PVT. LTD. commercialized and based
on his contributions during his doctoral studies. This company is based and
incubated by Indian Institute of Technology Delhi, India. He is also active in
many expert talks as a secretary of IEEE Young Professionals Affinity Group,
Denmark. He was also one of the outstanding reviewers for IEEE Transactions
on Smart Grid in the year 2020. His research interests are control and stability
of microgrids, renewable energy integration, cyber-physical power electronic
systems and cyber security in power electronic systems.
**Sebastian Rivera (S’10-M’16-SM’20) received the**
M.Sc. degree in Electronics Engineering from Universidad Tecnica Federico Santa Maria (UTFSM),
Chile, in 2011, and the Ph.D. degree in Electrical
and Computer Engineering from Ryerson University,
Toronto, Canada, in 2015. During 2016 and 2017,
he was a Postdoctoral Fellow at the University of
Toronto, Canada, and the Advanced Center of Electrical and Electronic Engineering (AC3E), UTFSM,
respectively. Since 2018, he is an Assistant Professor
for the Faculty of Engineering and Applied Sciences,
Universidad de los Andes, Chile. He is also an Associate Researcher at the
AC3E and the Solar Energy Research Center (SERC-Chile), both centers of
excellence in Chile. His research focuses on dc distribution systems, electric
vehicle charging infrastructure, high efficiency dc-dc conversion, multilevel
converters, and renewable energy systems. Dr. Rivera was the recipient of the
Academic Gold Medal of the Governor General of Canada in 2016.
**Tomislav Dragičević (S'09-M'13-SM'17) received**
the M.Sc. and the industrial Ph.D. degrees in Electrical Engineering from the Faculty of Electrical
Engineering, Zagreb, Croatia, in 2009 and 2013,
respectively. From 2013 until 2016, he has been a
Postdoctoral research associate at Aalborg University, Denmark. From March 2016 until 2020, he has
been an Associate Professor at Aalborg University,
Denmark. From April 2020, he is a Professor at the
Technical University of Denmark. He made a guest
professor stay at Nottingham University, UK, during
spring/summer of 2018. His principal field of interest is the design and control
of DC distributions systems and microgrids and the application of advanced
modeling and control concepts to power electronic systems. He has authored
and co-authored more than 200 technical publications (more than 100 of them
are published in international journals, mostly in IEEE) in his domain of
interest, 8 book chapters, and a book in the field. He serves as Associate
Editor in the IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS,
in IEEE TRANSACTIONS ON POWER ELECTRONICS, in IEEE Emerging
and Selected Topics in Power Electronics and in IEEE Industrial Electronics
Magazine. Prof. Dragičević is a recipient of the Končar prize for the best
industrial Ph.D. thesis in Croatia, a Robert Mayer Energy Conservation award,
and from 2019 he is an Alexander von Humboldt fellow.
**Frede Blaabjerg (S’86–M’88–SM’97–F’03) was**
with ABB-Scandia, Randers, Denmark, from 1987
to 1988. From 1988 to 1992, he got a Ph.D. degree
in Electrical Engineering at Aalborg University in
1995. He became an Assistant Professor in 1992, an
Associate Professor in 1996, and a Full Professor of
power electronics and drives in 1998. From 2017
he became a Villum Investigator. He is honoris
causa at University Politehnica Timisoara (UPT),
Romania, and Tallinn Technical University (TTU)
in Estonia. His current research interests include
power electronics and its applications, such as in wind turbines, PV systems,
reliability, harmonics, and adjustable speed drives. He has published more than
600 journal papers in the fields of power electronics and its applications. He is
the co-author of four monographs and editor of ten books in power electronics
and its applications. He has received 32 IEEE Prize Paper Awards, the IEEE
PELS Distinguished Service Award in 2009, the EPE-PEMC Council Award
in 2010, the IEEE William E. Newell Power Electronics Award 2014, the
Villum Kann Rasmussen Research Award 2014, the Global Energy uPrize in
2019 and the 2020 IEEE Edison Medal. He was the Editor-in-Chief of the
IEEE TRANSACTIONS ON POWER ELECTRONICS from 2006 to 2012.
He has been a Distinguished Lecturer for the IEEE Power Electronics Society
from 2005 to 2007 and for the IEEE Industry Applications Society from 2010
to 2011 as well as 2017 to 2018. In 2019-2020 he served a President of the
IEEE Power Electronics Society. He is Vice-President of the Danish Academy
of Technical Sciences too. He is nominated in 2014-2019 by Thomson Reuters
to be between the most 250 cited researchers in Engineering in the world.
## A Blockchain Protocol for Human-in-the-Loop AI
**Nassim Dehouche[∗]**
Mahidol University International College
Mahidol University
Salaya, Thailand 73170
`nassim.deh@mahidol.edu`

**Richard Blythman**
Algovera
Dublin
Ireland
`richard@algovera.ai`

### Abstract

Intelligent human inputs are required both in the training and operation of AI
systems, and within the governance of blockchain systems and decentralized
autonomous organizations (DAOs). This paper presents a formal definition of
Human Intelligence Primitives (HIPs), and describes the design and implementation
of an Ethereum protocol for their on-chain collection, modeling, and integration in
machine learning workflows.
### 1 Introduction and Related Work
Modern Artificial Intelligence tends to focus on centralization, autonomy and competition with
humans [1]. However, the idea of augmenting human intelligence [2] and "man-computer symbiosis"
[3] was prevalent in the early days of AI and cybernetics.
Human-in-the-loop (HITL) machine learning [4] is a promising development in this regard. Intelligent
human inputs are often included in the machine learning workflow before training, in the form of
data annotation. The HITL approach extends the scope of this integration to include human-machine
interactions during training, e.g. through expert supervision [5], and post-training, e.g. in safety
audits and fine-tuning models [6].
Software applications for crowdsourcing human intelligence tasks face challenges pertaining to the
unfair compensation of labor [7], fraud [8], censorship [9], and the difficulty of vetting credentials
[10]. The latter is typified by protocols geared towards centralized crowd-labor platforms, such as
TurKit [11]. For example, human input is taken from an indistinct mass of crowd workers on Amazon's
Mechanical Turk platform, without the ability to require a certain level of expertise or credentials from
respondents. TurKit [11] introduced the useful concept of scripting human intelligence tasks within
traditional web applications, and was designed with the high cost and high latency of steps
involving humans in mind. This required engineering a crash-and-rerun approach to avoid re-executing
expensive steps.
Decentralized software deployed on public, permissionless blockchains offers natural opportunities
to tackle the aforementioned challenges. Any write instruction in a smart contract is an atomic
transaction that is immutably stored on the blockchain, and transparently accessible to any client
application. Moreover, in addition to trustless, uncensorable payment processing, blockchain software
can offer participants ownership in the system they partake in. Lastly, the emergence of standards for
identity management, such as the non-fungible token standard, allow for sophisticated access control
and have propelled the emergence of domain-expert decentralized autonomous organizations (DAOs).
DAOs are sometimes imagined as being governed by autonomous algorithms, with humans at the
margins. However, there is an increasing push towards a future of collective intelligence that promotes
harmony between humans and algorithms by optimizing for the autonomy of individuals [12]. We
_∗https://www.ndehouche.github.io_
2022 Trustworthy and Socially Responsible Machine Learning (TSRML 2022) co-located with NeurIPS 2022.
-----
believe that protocols that facilitate crowdsourcing of human intelligence and preferences are a key
component of this. This has applications in collection and annotation of training data, and AI safety.
In the following, we describe a protocol for Ethereum Virtual Machine (EVM)-compatible blockchains
that allow for the on-chain modeling of human intelligence tasks and their integration in machine
learning workflows.
### 2 Human Intelligence Primitives
A Human Intelligence Primitive (HIP) is a procedure for the collection and representation of preference structures on a finite set of n potential alternatives (i.e. comparable objects or actions)
A = {a0, . . ., an−1}. Preferences can be of one of the four following types, based on [13]:
- A choice (P1), that is a subset A′ ⊆ A of potential alternatives, typically a singleton set,
containing the preferred alternative(s).
- A ranking (P2), that is a total preorder on A, ordering alternatives by decreasing preference,
with possible ex-aequo. A particular case of ranking is preferential voting [18], in which
this preference structure is a total order on A.
- A sorting (P3), that is the assignment of each alternative in A into pre-defined classes
C = {c0, . . ., ck−1}, ordered by decreasing preference. A particular case of sorting is score
voting [19] on a discrete scale.
- A classification (P4), that is the assignment of each alternative in A into pre-defined,
unordered classes C = {c0, . . ., ck−1}.
A HIP can thus be abstractly characterized by a triplet (t, n, k), where t ∈ {P1, P2, P3, P4} is
the type of preferences sought, n ∈ {2, 3, . . ., +∞} the number of alternatives considered, and
k ∈ {1, 2, . . ., +∞} the number of classes (equal to 1 for a choice or ranking primitive).
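To make the (t, n, k) abstraction concrete, the following JavaScript sketch (illustrative encodings of our own choosing, matching the protocol's integer-array responses) writes down one valid preference structure of each type over n = 3 alternatives:

```javascript
// Illustrative encodings of the four HIP types over A = {a0, a1, a2}.
// A HIP is a triplet (t, n, k); a response is an array of unsigned integers.
const HIP_TYPES = ["CHOICE", "RANKING", "SORTING", "CLASSIFICATION"];

// P1 choice: a singleton array holding the index of the preferred alternative.
const choice = [2];                // "a2 is preferred"

// P2 ranking: position i gives the rank of alternative ai (0 = best),
// so the array is a permutation of 0..n-1.
const ranking = [1, 0, 2];         // a1 first, a0 second, a2 last

// P3 sorting: position i gives the ordered class (0 = best) of ai, with k = 2 classes.
const sorting = [0, 0, 1];         // a0 and a1 in the top class

// P4 classification: same shape as sorting, but classes are unordered labels.
const classification = [1, 0, 1];  // a0 and a2 share label 1, a1 has label 0
```

The same arrays are what `submitResponse()` would receive on-chain; only their interpretation differs by type.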
### 3 Smart Contract Architecture
We propose a smart contract implementing HIPs to incentivize and coordinate collective intelligence
by humans within DAOs. HIPs can be initiated by an Externally-Owned Account (EOA) on Ethereum,
or by a contract, through a CALL or DELEGATECALL operation. Conversely, the smart contract can
communicate synchronous events to off-chain clients (e.g. the submission of a response to a HIP),
and its output can be read asynchronously by these programs.
We consider two categories of users of the smart contract: proposers and respondents. A HIP is
recorded in a HIP object, the creation of which is initiated by a proposer address, through a function
`submitHIP()`. In this function, a proposer submits a triplet (t, n, k), and pays a fee that depends
on the type of primitive t. The type t is encoded by an enum type {CHOICE, RANKING,
SORTING, CLASSIFICATION}, while the number of alternatives n and the number of classes k are
stored in unsigned integers.
Additionally, the proposer specifies a duration, encoded as an unsigned integer number of seconds, for
taking responses. This duration is relative to the creation date of the HIP, recorded as the timestamp
of the valid block enacting its creation on the blockchain. Lastly, the HIP object records the number
of responses (individual preference structures compatible with the HIP type) recorded so far, in an
unsigned integer variable numResponses.
An individual response to a HIP is submitted by a respondent address, through a function
`submitResponse()`, specifying the address of the proposer, and the index of the HIP being
responded to, among their proposed HIPs. Respondents' access is gated by a non-fungible token (NFT),
vetting their credentials, and giving them read access to the corresponding off-chain semantic data,
and write access to record a response to a HIP in the contract. Moreover, before recording a response,
we verify that the respondent has not already voted, and that the submitted response is compatible
with the HIP type t. An individual response that passes these checks is recorded in a Response
structure, containing the address of the respondent and an array of unsigned integers, representing the
content of the response.
-----
Given the strengths of the blockchain (trustless access control and payment processing), and its
weaknesses (inability to store secrets and high cost of computation), we have made the following key
architecture choices:
- Since data are transparently stored on the blockchain, HIP objects are recorded in the abstract
form of a triplet (t, n, k), and linked with semantic data (i.e. descriptions for alternatives
and classes) that are stored off-chain.
- In order to incentivize responses, and discourage their concentration in a few HIPs, the
reward of each respondent is the fee paid by the proposer divided by the number of responses,
by the end of a HIP’s duration.
- Once recorded in the contract, responses can be eventually accessed by off-chain clients for
computationally complex processing and aggregation.
The proposed architecture is summarized in the process diagram in Figure 1.
Figure 1: Architecture of the proposed protocol
### 4 Main Data Structures
The implementation of the protocol is subject to four indexing requirements:
- Reading/Writing HIPs requires mapping proposers with the HIPs they have created. This is
implemented as a `mapping(address => HIP[])`, named `HIPs`, whose key is a proposer
address, and value is an array of HIPs submitted by this address.
- Reading/Writing Responses requires mapping HIPs with the responses they have received.
This is implemented as a double mapping, `mapping(address => mapping(uint => Response[]))`,
named `responses`, indexed by a proposer address and an integer index for
a HIP, and whose value is an array of responses submitted for it.
- Ensuring single responses requires mapping respondents and HIPs, with a boolean indicating
whether the former has submitted a response to the latter. This is implemented as a triple
mapping, `mapping(address => mapping(address => mapping(uint => bool)))`, named
`responded`, indexed by a respondent address, a proposer address and an integer index for a
HIP, and whose value is a boolean indicating the existence of a response.
-----
- Payment processing requires mapping respondents with the proposers and indices of the
HIPs they have responded to. This is implemented as a mapping, `mapping(address =>
ResponseRef[])`, named `responseRefs`, indexed by a respondent address, and whose
value is an array of objects of type `ResponseRef`, containing the address of a proposer and
the index of a HIP.
These four mappings are illustrated in Figure 2.
Figure 2: Main data structures
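As a rough off-chain analogue of Figure 2 (a hypothetical sketch of our own, not part of the contract), the four mappings can be mirrored with nested plain objects keyed by address strings and HIP indices; the `recordResponse` helper below shows how the `responded` mapping enforces single responses while `responses` and `responseRefs` are updated together:

```javascript
// Off-chain mirror of the contract's four indexing structures
// (plain objects keyed by address strings and HIP indices).
function newState() {
  return {
    HIPs: {},          // proposer -> array of HIP objects
    responses: {},     // proposer -> { hipIndex -> array of { respondent, content } }
    responded: {},     // respondent -> { proposer -> { hipIndex -> bool } }
    responseRefs: {},  // respondent -> array of { proposer, index }
  };
}

// Record a response, enforcing one response per (respondent, proposer, hipIndex)
// and keeping `responses` and `responseRefs` in sync, as submitResponse() does.
function recordResponse(state, respondent, proposer, hipIndex, content) {
  if (!state.responded[respondent]) state.responded[respondent] = {};
  if (!state.responded[respondent][proposer]) state.responded[respondent][proposer] = {};
  if (state.responded[respondent][proposer][hipIndex]) {
    throw new Error("already responded");
  }
  state.responded[respondent][proposer][hipIndex] = true;
  if (!state.responses[proposer]) state.responses[proposer] = {};
  if (!state.responses[proposer][hipIndex]) state.responses[proposer][hipIndex] = [];
  state.responses[proposer][hipIndex].push({ respondent, content });
  if (!state.responseRefs[respondent]) state.responseRefs[respondent] = [];
  state.responseRefs[respondent].push({ proposer, index: hipIndex });
}
```

The double indexing by (proposer, hipIndex) rather than a global HIP id mirrors the contract's choice to namespace HIPs per proposer.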
**4.1** **Response Verification**
Before a response, submitted in the form of an array R of unsigned integers, is recorded in the
contract, we must verify that it is valid for a given HIP, defined by a triplet (t, n, k).
- If t indicates a choice primitive, the validity requirements are that length(R) == 1 (i.e. we
only allow singleton choices[2]) and R[0] < n (i.e. the submitted choice corresponds to the
index of a possible alternative).
- If t indicates a ranking primitive, the validity requirements are that length(R) == n and
R contains unique digits between 0 and n − 1. This latter requirement is verified by a
function uniqueDigits() in O(n), which uses a local boolean array variable of size n.
Depending on the preferred storage-computation trade-offs, an alternative would be to verify
the uniqueness of the digits of R in O(n²), without the use of a local array. This is the most
computationally-intensive potential operation in the proposed protocol.
- If t indicates a sorting or classification primitive, the validity requirements are that
length(R) == n and R only contains digits between 0 and k − 1.
**4.2** **Payment Processing**
Compensating respondents to a HIP proportionally to its total number of respondents poses challenges
for payment processing. It notably does not allow for real-time incrementation of a respondent's balance.
2This requirement could be changed to an inequality to allow for larger choice subsets.
-----
This is due to the fact that any computation on the EVM must be initiated by an EOA, and it does
not allow for automated code execution. The solution we propose is to compute this balance once a
respondent requests a payment, using the requestPayment() function, so that they can bear the gas
cost of this computation.
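In other words, a respondent's claimable amount is the sum, over every closed HIP they answered, of the fee divided by the response count, computed lazily at claim time. A minimal off-chain sketch of that computation (our own function name and field names, which only loosely follow the contract's; `fees` is indexed by HIP type as in the contract):

```javascript
// Compute a respondent's claimable balance the way requestPayment() does:
// sum fee / numResponses over every HIP they responded to whose
// response window has closed.
function claimableBalance(refs, hips, fees, now) {
  let balance = 0;
  for (const { proposer, index } of refs) {
    const hip = hips[proposer][index];
    const closed = now > hip.creationDate + hip.duration;
    // Integer division, matching the EVM's uint arithmetic.
    if (closed) balance += Math.floor(fees[hip.type] / hip.numResponses);
  }
  return balance;
}
```

Note that any remainder of the integer division stays in the contract; handling that dust is left out of this sketch.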
### 5 Example Applications
A wide range of tasks requiring human intelligence can be expressed as HIPs, for example within
the training and operation of AI systems and the governance of blockchain systems and DAOs.
Surveys can be modeled as an instance of P1 [14], the collection of training data for machine-learned
ranking (MLR) as P2 [15], independent AI safety audits as P3 [6], or data annotation as P4 [16]. In
these examples, HIPs are used with a descriptive intent and a collection of individual preferences
is their intended output. Moreover, when combined with a systematic aggregation procedure for
individual preferences, HIPs can serve as primitives in processes such as plurality voting or approval
voting as instances of P1 [17], preferential voting as P2 [18], score voting as P3 [19], or rule-based
classification as P4 [20].
Following is an example, in pseudo-JavaScript, for a data annotation use case. The prefix
"contract." indicates a call to a function of the contract by an EOA or a web client, e.g. using the web3.js library [21].

- Proposer creates a classification HIP with `await contract.methods.submitHIP(CLASSIFICATION, n, 2, duration).send({from: accounts[0], value: fee});`
- After a delay corresponding to the value of the argument duration, proposer collects responses with `response = await contract.methods.getResponse(proposer, index, i).call();`
- Proposer aggregates responses off-chain using e.g. the majority rule.
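For this classification use case, the off-chain aggregation step can be as simple as a per-alternative majority vote over the collected response arrays; a minimal sketch (our own helper, not part of the protocol):

```javascript
// Aggregate classification responses by per-alternative majority rule.
// `responses` is an array of integer arrays, each of length n
// (one class label per alternative); labels range over 0..k-1.
function majorityLabels(responses, n, k) {
  const labels = [];
  for (let i = 0; i < n; i++) {
    const counts = new Array(k).fill(0);
    for (const r of responses) counts[r[i]]++;
    // Ties resolve to the lowest label index.
    labels.push(counts.indexOf(Math.max(...counts)));
  }
  return labels;
}
```

More sophisticated aggregation (e.g. weighting respondents by credentials) would also run off-chain on the same data, per the architecture choices above.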
### 6 Conclusion and Perspectives
This paper described the design and implementation of an Ethereum protocol to incentivize and
coordinate collective intelligence by humans on-chain. Experiments using the proposed protocol will
be conducted in the Algovera community, a DAO for data scientists, in order to identify new use
cases and optimize gas usage for typical real-world applications.
The detailed source code of the proposed implementation can be found in the Appendix of this paper.
### Acknowledgements
This publication has emanated from research conducted with the partial financial support of Algovera
Grants under grant number 22/AG/R1/6. The first author is grateful to the members of Algovera DAO
for fruitful discussions.
### References
[1] Siddarth, D. et al. (2021) How AI Fails Us. Technology & Democracy Discussion Paper, Harvard
Kennedy School, Carr Center for Human Rights Policy, Cambridge, Massachusetts.
[2] Ashby, W. R. (1956) An Introduction to Cybernetics. London: Chapman & Hall Ltd.
[3] Licklider, J. C. R. (1960) Man-Computer Symbiosis, IRE Transactions on Human Factors in
Electronics, 1, pp. 4–11.
[4] Xin, D. et al. (2018) Accelerating Human-in-the-loop Machine Learning: Challenges and
Opportunities, DEEM’18: Proceedings of the Second Workshop on Data Management for
End-To-End Machine Learning, June 2018, pp. 1-4.
[5] Wu, X. et al. (2022) A survey of human-in-the-loop for machine learning, Future Generation
Computer Systems, 135, pp. 364-381.
-----
[6] Falco, G. et al. (2021) Governing AI safety through independent audits, Nature Machine Intelligence, 3, pp. 566–571.
[7] Hagendorff, T. (2021) Blind spots in AI ethics, AI and Ethics, Commentary, pp 1-17.
[8] Hartvigsen, D. (2008) The Manipulation of Voting Systems, Journal of Business Ethics, 80 (1),
pp. 13–21.
[9] Ebel, C. et al. (2021) Towards intellectual freedom in an AI Ethics Global Community. AI and
Ethics, 1, pp 131-138.
[10] Halderman, J. A., Teague, V. (2015) The New South Wales iVote System: Security Failures and
Verification Flaws in a Live Online Election, Proceedings of the International Conference on
E-Voting and Identity, Lecture Notes in Computer Science, 9269, pp. 35-53.
[11] Little, G., Chilton, L. B., Goldman, M., Miller, R. C. (2009) TurKit: tools for iterative tasks
on mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation
(HCOMP ’09), Paul Bennett, Raman Chandrasekar, Max Chickering, Panos Ipeirotis, Edith Law,
Anton Mityagin, Foster Provost, and Luis von Ahn (Eds.). ACM, New York, NY, USA, pp. 29-30
[12] Nabben, K (2021) Imagining Human-Machine Futures: Blockchain-based ’Decentralized
Autonomous Organizations’, working paper, SSRN: https://ssrn.com/abstract=3953623.
[13] Roy, B. (1996) Multicriteria Methodology for Decision Aiding, Kluwer Academic Publishers,
Dordrecht (1996).
[14] Rubenfeld, G. D. (2004) Surveys: An Introduction, Respiratory Care October, 49 (10), pp.
1181-1185.
[15] Rahangdale, A., Raut, S. (2019) Machine Learning Methods for Ranking, International Journal
of Software Engineering and Knowledge Engineering, 29 (06), pp. 729-761.
[16] Paullada, A. et al. (2021) Data and its (dis)contents: A survey of dataset development and use
in machine learning research, Patterns, 2 (11), 100336.
[17] Laslier, JF. (2012). And the Loser Is. . . Plurality Voting. In: Felsenthal, D., Machover, M. (eds)
Electoral Systems. Studies in Choice and Welfare. Springer, Berlin, Heidelberg.
[18] Arrow, K. J. (1951) Alternative Approaches to the Theory of Choice in Risk-Taking Situations,
Econometrica, 19 (4), pp. 404-437.
[19] Dery, L., Tassa, T., Yanai, A. (2021) Fear not, vote truthfully: Secure Multiparty Computation
of score based rules. Expert Systems with Applications, 168, 114434.
[20] Li, X. L., Liu, B. (2014) Rule-based classification. In: Aggarwal CC (ed.) Data classification:
algorithms and applications. CRC Press, Boca Raton, pp. 121–156.
[21] Lee, W.-M. (2019) Using the web3.js APIs, In: Beginning Ethereum Smart Contracts Programming, pp. 169–198, Apress, Berkeley, CA.
### A Appendix: Source Code of the Proposed Protocol
```
// SPDX-License-Identifier: CC-BY-4.0
pragma solidity ^0.8.12;

/**
 * @title Human-augmented Intelligence contract
 * @author Nassim Dehouche
 */
import "@openzeppelin/contracts/interfaces/IERC721.sol";

contract HaAI {
    address owner;
    address tokenContract;

    // HIP types
    enum types { CHOICE, RANKING, SORTING, CLASSIFICATION }

    uint numProposers;
    address[] proposers;
    uint[] fees;

    constructor() {
        owner = msg.sender;
    }

    /**
     * @param _tokenContract is the address of the ERC-721 contract to vet
     * voters. We assume one address, one NFT, one vote.
     * Use 0xF5b2B5b042B253323cB96121ABad487C95d287ea on Kovan.
     */
    function initialize(address _tokenContract, uint[] calldata _fees) public {
        require(msg.sender == owner);
        tokenContract = _tokenContract;
        fees = _fees;
    }

    // The HIP structure
    struct HIP {
        types HIPType;
        uint numAlternatives;
        uint numClasses;
        uint creationDate;
        uint duration;
        uint numResponses;
    }

    // Mapping proposers with an array of their proposed HIPs
    mapping(address => HIP[]) public HIPs;

    // The Response struct for the content of the response
    struct Response {
        address respondent;
        uint[] response;
    }

    // The Response reference struct for payment
    struct ResponseRef {
        address proposer;
        uint index;
    }

    // Responses. The first key is the proposer address
    mapping(address => mapping(uint => Response[])) internal responses;

    // The Response boolean. The first key is the respondent address
    mapping(address => mapping(address => mapping(uint => bool))) public responded;

    // The Response reference for payment. Mapping respondents with the
    // HIPs they responded to.
    mapping(address => ResponseRef[]) public responseRefs;

    modifier onlyIfPaidEnough(types _HIPType) {
        require(msg.value == fees[uint(_HIPType)], "User did not pay the right fee for this HIP type.");
        _;
    }

    modifier onlyIfHoldsNFT(address _voter) {
        require(IERC721(tokenContract).balanceOf(_voter) > 0, "User does not hold the right NFT.");
        _;
    }

    modifier onlyIfHasNotResponded(address _proposer, uint _id) {
        require(responded[msg.sender][_proposer][_id] == false, "User has already responded.");
        _;
    }

    modifier onlyIfStillOpen(address _proposer, uint _id) {
        require(block.timestamp <= HIPs[_proposer][_id].creationDate + HIPs[_proposer][_id].duration,
            "This HIP is no longer open for responses.");
        _;
    }

    function submitHIP(types _HIPType, uint _numAlternatives, uint _numClasses, uint _duration)
        public
        payable
        onlyIfPaidEnough(_HIPType)
        returns (uint _id)
    {
        bool condition;
        if (_numAlternatives >= 2) {
            condition = true;
            if (_HIPType == types.SORTING || _HIPType == types.CLASSIFICATION) {
                condition = _numClasses >= 2;
            }
        }
        if (!condition) { revert("Trivial or invalid HIP"); }
        _id = HIPs[msg.sender].length;
        if (_id == 0) {
            numProposers++;
            proposers.push(msg.sender);
        }
        HIPs[msg.sender].push();
        HIPs[msg.sender][_id].HIPType = _HIPType;
        HIPs[msg.sender][_id].numAlternatives = _numAlternatives;
        HIPs[msg.sender][_id].numClasses = _numClasses;
        HIPs[msg.sender][_id].creationDate = block.timestamp;
        HIPs[msg.sender][_id].duration = _duration;
        return _id;
    }

    function rightDigits(uint[] calldata _response, uint _number)
        internal
        pure
        returns (bool _right)
    {
        uint i;
        _right = true;
        while (i < _response.length) {
            if (_response[i] >= _number) {
                return false;
            }
            unchecked { i++; }
        }
        return _right;
    }

    function uniqueDigits(uint[] calldata _response, uint _number)
        internal
        pure
        returns (bool _unique)
    {
        // The local array must be explicitly sized; an unsized
        // memory array has length 0 and indexing it would revert.
        bool[] memory visited = new bool[](_number);
        uint i;
        _unique = true;
        while (i < _response.length) {
            if (_response[i] >= _number || visited[_response[i]] == true) {
                return false;
            } else {
                visited[_response[i]] = true;
            }
            unchecked { i++; }
        }
        return _unique;
    }

    function submitResponse(address _proposer, uint _id, uint[] calldata _response)
        public
        onlyIfHoldsNFT(msg.sender)
        onlyIfHasNotResponded(_proposer, _id)
        onlyIfStillOpen(_proposer, _id)
        returns (uint _number)
    {
        bool condition;
        if (HIPs[_proposer][_id].HIPType == types.CHOICE) {
            condition = _response.length == 1
                && _response[0] < HIPs[_proposer][_id].numAlternatives;
        } else if (HIPs[_proposer][_id].HIPType == types.RANKING) {
            condition = _response.length == HIPs[_proposer][_id].numAlternatives
                && uniqueDigits(_response, _response.length);
        } else if (HIPs[_proposer][_id].HIPType == types.SORTING
            || HIPs[_proposer][_id].HIPType == types.CLASSIFICATION) {
            condition = _response.length == HIPs[_proposer][_id].numAlternatives
                && rightDigits(_response, HIPs[_proposer][_id].numClasses);
        }
        if (!condition) { revert("Invalid response"); }
        _number = responses[_proposer][_id].length + 1;
        HIPs[_proposer][_id].numResponses = _number;
        responses[_proposer][_id].push();
        responses[_proposer][_id][_number - 1].respondent = msg.sender;
        for (uint i = 0; i < _response.length;) {
            responses[_proposer][_id][_number - 1].response.push(_response[i]);
            unchecked { i++; }
        }
        ResponseRef memory r;
        r.proposer = _proposer;
        r.index = _id;
        responseRefs[msg.sender].push(r);
        responded[msg.sender][_proposer][_id] = true;
        return _number;
    }

    // Respondents payment function
    function requestPayment() public {
        uint _balance;
        uint _id;
        address _proposer;
        for (uint i = 0; i < responseRefs[msg.sender].length;) {
            _proposer = responseRefs[msg.sender][i].proposer;
            _id = responseRefs[msg.sender][i].index;
            if (_proposer != address(0)
                && block.timestamp > HIPs[_proposer][_id].creationDate + HIPs[_proposer][_id].duration)
            {
                responseRefs[msg.sender][i].proposer = address(0);
                _balance += fees[uint8(HIPs[_proposer][_id].HIPType)]
                    / HIPs[_proposer][_id].numResponses;
            }
            // Increment outside the conditional so skipped entries
            // do not stall the loop.
            unchecked { i++; }
        }
        (bool sent, ) = msg.sender.call{value: _balance}("");
        require(sent, "Failed to send Ether");
    }

    function getNumProposers() public view returns (uint _numProposers) {
        return numProposers;
    }

    function getFee(uint i) public view returns (uint _fee) {
        return fees[i];
    }

    function getProposer(uint i) public view returns (address _proposer) {
        return proposers[i];
    }

    function getHIPCount(address _proposer) public view returns (uint _count) {
        return HIPs[_proposer].length;
    }

    function getResponse(address _proposer, uint _indexHIP, uint _indexResponse)
        public view returns (uint[] memory _response)
    {
        return responses[_proposer][_indexHIP][_indexResponse].response;
    }

    function getBalance() public view returns (uint _balance) {
        uint _id;
        address _proposer;
        for (uint i = 0; i < responseRefs[msg.sender].length;) {
            _proposer = responseRefs[msg.sender][i].proposer;
            _id = responseRefs[msg.sender][i].index;
            if (_proposer != address(0)
                && block.timestamp > HIPs[_proposer][_id].creationDate + HIPs[_proposer][_id].duration)
            {
                _balance += fees[uint8(HIPs[_proposer][_id].HIPType)]
                    / HIPs[_proposer][_id].numResponses;
            }
            unchecked { i++; }
        }
        return _balance;
    }
}
```
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2211.10859, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/2211.10859"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-11-20T00:00:00
|
[
{
"paperId": "41ada02f5d67a2ff3a1de7d44445896f1e8008a7",
"title": "Blind spots in AI ethics"
},
{
"paperId": "102ebe229df18c8733ea1b8def56cd79996e2178",
"title": "A Survey of Human-in-the-loop for Machine Learning"
},
{
"paperId": "93a04c8661ce96f9ab972a0ede4680232627467a",
"title": "Governing AI safety through independent audits"
},
{
"paperId": "2fc582558b056924e3a6e6d5823ccd1813bb465b",
"title": "Blind Spots in AI"
},
{
"paperId": "8a58560ef6322c06b6ec080c8a36429dc4f6bef1",
"title": "Towards intellectual freedom in an AI Ethics Global Community"
},
{
"paperId": "c09f44e0088342ec618c7a2deeab1526d73b2d6b",
"title": "Data and its (dis)contents: A survey of dataset development and use in machine learning research"
},
{
"paperId": "9835e2cddb842b5f52429da8ec8d09fe5a28a0ec",
"title": "Machine Learning Methods for Ranking"
},
{
"paperId": "292887dfd96468206f841390fde22848494a5c43",
"title": "Fear not, vote truthfully: Secure Multiparty Computation of score based rules"
},
{
"paperId": "f01801c5bc6742235be910a933acd25a0236b895",
"title": "Accelerating Human-in-the-loop Machine Learning: Challenges and Opportunities"
},
{
"paperId": "dd80dd581d2b0d621eea219e61c324a7e4b58c1d",
"title": "The New South Wales iVote System: Security Failures and Verification Flaws in a Live Online Election"
},
{
"paperId": "9c06b14335d07a54c9645e1156fc3d8ae1443be2",
"title": "TurKit: Tools for iterative tasks on mechanical turk"
},
{
"paperId": "2ecfdbac795d88de952178d900e38d5dc503cb2f",
"title": "The Manipulation of Voting Systems"
},
{
"paperId": "c0b9753def41400cb4bb42bc2e2a6f7853b99f81",
"title": "Multicriteria Methodology for Decision Aiding"
},
{
"paperId": "a4ed116fb6523166a2286be55bf9c14030baad61",
"title": "Man-Computer Symbiosis"
},
{
"paperId": "979079ff368b2a32ce7546319c7837ef1793bf9b",
"title": "An Introduction to Cybernetics"
},
{
"paperId": "83710b9fbc89d7b5f6c7a228d1ec6ce56b30ea8f",
"title": "Alternative Approaches to the Theory of Choice in Risk-Taking Situations"
},
{
"paperId": "3888b7e5e44166d778bfafb0bf3e83727e204c53",
"title": "How AI Fails Us"
},
{
"paperId": "51402565d0205d97a46380090744b2fc496f7dea",
"title": "Imagining Human-Machine Futures: Blockchain-based “Decentralized Autonomous Organizations”"
},
{
"paperId": null,
"title": "Using the web3.js APIs, In: Beginning Ethereum Smart Contracts Programming"
},
{
"paperId": "b47e556446d06592ff79eef05871acc81637712a",
"title": "Rule-Based Classification"
},
{
"paperId": "528dbf73a0f1573bfcbed9eca572e228bbb16e70",
"title": "And the Loser Is… Plurality Voting"
},
{
"paperId": "30e8078d4d97bf0f6b26698ad5bb498dafd62ab7",
"title": "And the Loser Is"
},
{
"paperId": null,
"title": "Surveys: An Introduction"
}
] | 6,959
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/030ba0590f3522fe9ddfec23f6dd04a4119419e0
|
[] | 0.880842
|
Blockchain-Based Incentive Mechanism for Spectrum Sharing in IoV
|
030ba0590f3522fe9ddfec23f6dd04a4119419e0
|
Wireless Communications and Mobile Computing
|
[
{
"authorId": "49403727",
"name": "Hongning Li"
},
{
"authorId": "2116479183",
"name": "Jingyi Li"
},
{
"authorId": "2181384926",
"name": "Hongyang Zhao"
},
{
"authorId": "3098001",
"name": "Shunfan He"
},
{
"authorId": "2163813841",
"name": "Tonghui Hu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Wirel Commun Mob Comput"
],
"alternate_urls": [
"https://onlinelibrary.wiley.com/journal/15308677",
"http://www.interscience.wiley.com/jpages/1530-8669/"
],
"id": "501c1070-b5d2-4ff0-ad6f-8769a0a1e13f",
"issn": "1530-8669",
"name": "Wireless Communications and Mobile Computing",
"type": "journal",
"url": "https://www.hindawi.com/journals/wcmc/"
}
|
In this paper, we design a blockchain-based incentive mechanism for the problem of low-level participation of primary users caused by location privacy leakage during spectrum data sharing in the Internet of Vehicles (IoV). First, we propose a K-anonymous location protection scheme for multiuser cooperation, which can protect the location privacy of primary users by generalizing their location information through the construction of anonymous areas. Then, we design an incentive mechanism, which performs reporting and adjudication strategy through the transaction stored in blockchain. Simulation results indicate that the proposed scheme can effectively prevent the privacy leakage of primary users’ location and encourage them to actively participate in spectrum sharing in IoV.
|
Hindawi
Wireless Communications and Mobile Computing
Volume 2022, Article ID 6807257, 14 pages
[https://doi.org/10.1155/2022/6807257](https://doi.org/10.1155/2022/6807257)
# Research Article Blockchain-Based Incentive Mechanism for Spectrum Sharing in IoV
## Hongning Li,[1] Jingyi Li,[2] Hongyang Zhao,[3] Shunfan He,[4] and Tonghui Hu[2]
1Xidian Guangzhou Institute of Technology, Guangzhou, Guangdong 511370, China
2Xidian University, Xi'an, Shaanxi 710071, China
3CEPREI, Guangzhou, Guangdong 511370, China
4South Central University for Nationalities, Wuhan, Hubei 430073, China
Correspondence should be addressed to Hongyang Zhao; zhaohy@ceprei.com
Received 17 December 2021; Accepted 30 March 2022; Published 28 April 2022
Academic Editor: Celimuge Wu
[Copyright © 2022 Hongning Li et al. This is an open access article distributed under the Creative Commons Attribution License,](https://creativecommons.org/licenses/by/4.0/)
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In this paper, we design a blockchain-based incentive mechanism for the problem of low-level participation of primary users
caused by location privacy leakage during spectrum data sharing in the Internet of Vehicles (IoV). First, we propose a
K-anonymous location protection scheme for multiuser cooperation, which can protect the location privacy of primary users by
generalizing their location information through the construction of anonymous areas. Then, we design an incentive
mechanism, which performs reporting and adjudication strategy through the transaction stored in blockchain. Simulation
results indicate that the proposed scheme can effectively prevent the privacy leakage of primary users’ location and encourage
them to actively participate in spectrum sharing in IoV.
## 1. Introduction
With the development of information technology, 6G will
further realize the Internet of everything, establish multilevel
and full-coverage seamless connection, and serve the key
areas of multi-industry integration such as communication,
transportation, and automobile. The vehicle networking system is being developed more quickly with the new generation of information and communication technology. 6G
needs to support high-level security to meet the requirements of intelligent vehicle systems. The growing number
of vehicles has significantly increased the consumption of
spectrum resources. Spectrum resources are divided by government agencies into specific frequency bands that are allocated to licensed users. However, the existing spectrum management methods leave some frequency bands idle most of the time, so the overall utilization rate of spectrum resources is very low. Take the USA as an example: extensive investigations by the Federal Communications Commission show that spectrum usage is extremely unbalanced, with some authorized frequency bands very crowded while most of the others sit idle [1]. Therefore, how to use spectrum effectively has become an urgent problem to be solved.
The 6G white paper points out that full and efficient utilization of spectrum resources across different frequency bands can be achieved through recultivation, aggregation, and sharing, meeting the spectrum needs of the 6G era. Most existing methods for obtaining free spectrum rely on sensing by secondary users, but their accuracy can be degraded by malicious users. How to encourage primary users to actively participate in spectrum sharing and improve the accuracy of available spectrum information is an urgent problem to be solved.
An incentive mechanism can satisfy the needs of participating users through an appropriate division of benefits, which is an effective way to stimulate participation in spectrum sharing. In spectrum sharing, licensed primary users can share their bands for profit when they have no communication requirements. Other users that may use the idle bands shared by primary users are called
secondary users. To obtain spectrum information, Feng et al. propose a monetary incentive mechanism based on reverse auction to encourage secondary users to participate in spectrum sensing [2]. Li et al. adopt a pricing mechanism based on maximizing expected utility to encourage users to participate in sensing. Ying et al. [3] use cooperative spectrum sensing schemes based on evolutionary game and Stackelberg game models to improve detection performance [4]. However, most current research uses spectrum sensing to discover idle spectrum for opportunistic access, while little work addresses active sharing by primary users. Elnahas et al. [5] propose an auction mechanism with time-varying valuation information that maximizes auction revenue to encourage primary users to join the market. Literature [6] proposes increasing auction revenue in a dynamic secondary market to improve spectrum utilization. In fact, the participation of primary users can effectively improve spectrum utilization efficiency.
The effective implementation of spectrum sharing in IoV
depends on the active participation of all users in the network. How to encourage the primary users to actively participate in spectrum sharing is one of the important issues that
need to be studied in IoV.
In addition, the primary users need to submit a certain
range of location information to a third party (such as a
spectrum distribution center) in spectrum sharing in IoV.
The more precise the location provided, the more conducive
to the allocation and use of free spectrum. Untrustworthy
third parties can infer their personal sensitive information
from the primary users’ spectrum status and sharing license
information, causing hidden dangers to user privacy and
security [7], thereby reducing the primary users’ enthusiasm
for participating in spectrum sharing. Due to the lack of protection of the location information and effective incentive
mechanism for primary users in IoV, primary users have
no incentive to participate in spectrum sharing.
At present, there are many blockchain-based technologies and methods applied in privacy protection. In 2016,
Yuan et al. used blockchain technology to build a secure
and trusted distributed autonomous transportation system
for the first time [8]. Benjamin et al. proposed a distributed
storage-based vehicle networking system based on Ethereum
to achieve secure communication between vehicles [9]. In
the literature [10–12], to provide reliable reference and credible data for law enforcement agencies involved in information exchange or traffic accident evidence collection, a
distributed data storage is constructed using blockchain
technology. According to the literature [13–16], blockchain
distributed storage can enhance the reliability of data, and
users in the blockchain system use pseudonyms, which cuts
off the connection between user names and their real identities and prevents malicious nodes from obtaining users’ real
identity. In [17], blockchain and in-vehicle IoT features and
related research questions are discussed. Besides, literature [18] applies a multichannel blockchain solution. It can be seen that current research uses the blockchain to address privacy leakage, and applying blockchain to privacy protection is an effective means of preventing the location privacy leakage of primary users during spectrum sharing in IoV.
Therefore, the paper proposes an incentive mechanism
based on location privacy protection (IMLPP), which uses
the blockchain to protect primary users’ location information and encourages them to actively participate in spectrum
sharing. The incentive mechanism is designed to improve
the utilization rate of the spectrum and further solve the
problem of the shortage of spectrum resources. In the proposed mechanism, the distributed K-anonymous scheme
based on blockchain is used to generalize the location information of primary users, which ensures that even if the
opponents can obtain the spectrum allocation information,
the real location of primary users cannot be inferred. The
main contributions of the paper are as follows:
(1) We propose a blockchain-based privacy-preserving scheme that generalizes primary users' locations during spectrum sharing. In this scheme, users meeting a given requirement are selected to cooperate with primary users to construct anonymous areas. The construction of an anonymous area is stored in the blockchain as a transaction, which serves as evidence of users' behavior
(2) We propose an incentive mechanism to encourage primary users to participate in spectrum sharing. An honesty degree is introduced to measure the integrity of users. Each user in the network has an initial honesty degree, which is updated according to the user's behavior. Through deposit payment, the honesty degree evaluation algorithm, and behavior constraints enforced in the blockchain, the incentive mechanism effectively encourages primary users to participate in spectrum sharing and constrains users' behavior
## 2. Spectrum Sharing Incentive Mechanism Based on Location Privacy Protection
2.1. System Model. This paper considers the spectrum sharing model of IoV as shown in Figure 1, which includes a
fusion center and multiple vehicle users. Communication is
enabled among users and between users and the fusion center. The fusion center issues spectrum sensing tasks, calculates spectrum data, and allocates idle spectrum to
secondary users. Primary users share their idle spectrum
and protect location information by issuing request for location information protection and constructing anonymous
areas with the assistance of other users in the network.
In this paper, primary users who participate in spectrum sharing and need to protect their location information are called requesting users, and users that provide encrypted location information to help a requesting user construct an anonymous area are called cooperative users. Requesting users use the location information provided by cooperative users to construct anonymous areas that meet their privacy protection needs.
Figure 1: System model (fusion center, primary users, and secondary users; links show communication between the fusion center and users, and among users).
To construct a reliable anonymous area, users' behavior must be constrained. In this paper, we define two types of illegal behavior: a requesting user disclosing the location information of cooperative users, and a cooperative user providing a false location. The process of assisting a requesting user to construct an anonymous area is regarded as a transaction. The
requesting user ID, the cooperative user ID, and the location
information of the cooperative user are taken as transaction
bill information and then encrypted and recorded in the
blockchain (the blockchain is a private chain in IoV). This
process will generate a certain amount of virtual currency
(called mining) in the blockchain system.
2.2. Incentive Mechanism Based on Location Privacy
Protection. In this section, an incentive mechanism based
on location privacy protection (IMLPP) is proposed, which
is shown in Figure 2. The mechanism uses a blockchain-based K-anonymous scheme with cooperative users to generalize primary users' location information, and an honesty degree evaluation mechanism is designed to provide a basis for mutual selection between requesting users and cooperative users. Through deposit payment and through reporting and adjudication with the transaction bill as evidence, users' location information can be protected. On this basis, the
honesty degree and virtual currency in the blockchain are
taken as incentives for the primary users to participate in
spectrum sharing.
The IMLPP scheme comprises four components: the honesty degree mechanism, anonymous area construction, the report and adjudication strategy, and the incentive mechanism.
2.2.1. Honesty Degree Mechanism. In the honesty degree
mechanism, honesty degree is used to measure the credibility of users, as the basis for mutual choice in the transaction,
to meet the user’s personalized security requirements for
location privacy, and as the reference basis for the fusion
center to allocate spectrum. Specifically, requesting users
want the cooperative users with high honesty degree to participate in anonymous area construction to ensure the accuracy of the location provided by cooperative users.
Cooperative users also tend to cooperate with requesting
users with high honesty degree to ensure that location information is not disclosed. Secondary users with high honesty
degree will be allocated spectrum with high probability.
The honesty degree evaluation algorithm is the basis of
honesty degree update. Assuming that m0 and m1 are constant coefficients, m0 and m1 can be any positive number,
and the value of m0 and m1 has no effect on the results of
this experiment. We consider m0 = 20, m1 = 20 in this paper.
B is a Boolean variable: B = 0 if the user has illegal behavior, and B = 1 otherwise. The initial honesty degree H_0 of all users is 60, and the upper limit of the honesty degree is 200. We assume that the current honesty degree of user U_i is H_i; the honesty degree evaluation algorithm is shown in Algorithm 1.
Figure 2: IMLPP (honesty degree mechanism, anonymous area construction, report and adjudication strategy, and incentive mechanism, linking requesting users, cooperative users, and referees through reward and punishment strategies).
According to the honesty degree evaluation algorithm, the higher a user's honesty degree, the more is deducted when the user commits illegal acts, and the more slowly the honesty degree increases.
2.2.2. Anonymous Area Construction. This section gives the
detail of anonymous area construction, which uses distributed K-anonymous scheme to protect primary users’ location information. It contains spectrum sharing requests,
deposits paying, anonymous groups constructing, and anonymous areas constructing. Among them, the anonymous
group is a set of users who are willing to participate in the
construction of the anonymous area and meet the
requirements.
To illustrate, we take a primary user PU_i as an example. PU_i sends a request to the smart contract:

request = (ID_{PU_i}, H_{PU_i}, H_U, K − 1), (1)

where ID_{PU_i} is the unique identifier of PU_i in the blockchain system, H_{PU_i} is PU_i's honesty degree, H_U is the lower limit of the honesty degree of cooperative users, and K − 1 is the number of cooperative users, which lets different requesting users express different location privacy requirements.
After receiving the request, the smart contract decides whether to assist in constructing the anonymous area according to PU_i's honesty degree H_{PU_i}: when H_{PU_i} < 40, the request is rejected. Otherwise, the smart contract calculates and returns the deposit that PU_i has to pay:

D_{PU_i} = m2 / H_{PU_i}, (2)

where m2 is the income to be generated from this round of mining. Formula (2) shows that the higher the honesty degree, the lower the deposit PU_i needs to pay.
After the deposit is paid, PU_i's location protection request is broadcast in the network, and other users choose whether to participate in the anonymous area construction according to PU_i's honesty degree H_{PU_i}. To guarantee the construction of the anonymous area, this paper introduces a willingness list wish = {U_1 : H_{U_1}, U_2 : H_{U_2}, ⋯, U_i : H_{U_i}}, which records users' serial numbers and honesty degrees.

When a user U_i is willing to participate in the anonymous area construction, it sends a request to the smart contract, which puts U_i into the willingness list. If U_i's honesty degree satisfies H_{U_i} ≥ H_U, the smart contract returns the required deposit D_{U_i}, and U_i joins the anonymous group after paying it:

D_{U_i} = m2 / (K ∗ H_{U_i}). (3)

If K − 1 cooperative users join the anonymous group, the anonymous group is successfully constructed. If the construction fails because the K or H_U value is too high, the smart contract sends the willingness list to PU_i, who adjusts H_U and K accordingly and resubmits them to the smart contract to reconstruct the anonymous group.
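Formulas (2) and (3) both make the deposit inversely proportional to the honesty degree, so more trusted users lock up less currency. A minimal Python sketch (the function names are ours, not the paper's):

```python
def requester_deposit(m2: float, h_pu: float) -> float:
    """Formula (2): D_PUi = m2 / H_PUi; m2 is this round's mining income."""
    return m2 / h_pu

def cooperator_deposit(m2: float, k: int, h_u: float) -> float:
    """Formula (3): D_Ui = m2 / (K * H_Ui)."""
    return m2 / (k * h_u)

# A requester at the initial honesty degree of 60, mining income 120:
d_pu = requester_deposit(120, 60)        # 2.0
d_u = cooperator_deposit(120, 10, 60)    # 0.2 for each of the K-1 cooperators
```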
After the anonymous group is successfully constructed, all cooperative users U_i (i = 1, 2, ⋯, K − 1) in the group send PU_i location information bills Bill_{Loc_{U_i}}, which take the following form:

Bill_{Loc_{U_i}} = (ID_{U_i}, P_{PU_i}(E_{U_i}(Loc_{U_i}))), (4)

where ID_{U_i} is the cooperative user U_i's identity, Loc_{U_i} is U_i's
Input: current honesty degree H_i
Output: updated honesty degree H_i′
for each H_i do:
  if B = 0:                     // the user has illegal behavior
    H_i′ = H_i − H_i / m1
  else if B = 1 and H_i < 200:  // honest, below the upper limit
    H_i′ = H_i + m0 / H_i
    if H_i′ > 200:              // updated value exceeds the upper limit
      H_i′ = 200
  else:                         // honest and already at the upper limit
    H_i′ = 200

Algorithm 1: Honesty degree evaluation algorithm.
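Algorithm 1 can be sketched as a small Python function (a minimal sketch using the paper's settings m0 = m1 = 20, an initial value of 60, and a cap of 200; the function name is ours):

```python
H_MAX = 200      # upper limit of the honesty degree
M0, M1 = 20, 20  # the paper's constant coefficients m0, m1

def update_honesty(h: float, honest: bool) -> float:
    """One round of Algorithm 1: B = 0 (illegal) deducts h/m1;
    B = 1 (honest) adds m0/h, capped at the upper limit of 200."""
    if not honest:                 # B = 0: illegal behavior
        return h - h / M1
    if h < H_MAX:                  # B = 1, below the cap
        return min(h + M0 / h, H_MAX)
    return H_MAX                   # already at the cap

h = update_honesty(60, honest=True)   # the initial degree 60 grows by 20/60
h = update_honesty(h, honest=False)   # an illegal act then deducts h/20
```

Note the asymmetry the algorithm encodes: a user with a high honesty degree gains little per honest round (m0/H shrinks) but loses a lot on an illegal act (H/m1 grows), matching the paper's discussion.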
location information, and E_{U_i}(Loc_{U_i}) is the ciphertext of the location information encrypted with U_i's private key and PU_i's public key.

PU_i uses his private key and U_i's public key to decrypt the ciphertext E_{U_i}(Loc_{U_i}) and obtain U_i's location information Loc_{U_i}, which is used to construct the location anonymous area.
We assume PU_i's identity is ID_{PU_i} and his location information is Loc_{PU_i}. Without the location privacy protection scheme, PU_i submits the location information shown in Table 1, and the fusion center can directly obtain PU_i's location. With the scheme, PU_i submits a multilocation anonymous area to the fusion center, as shown in Table 2, and the probability that the fusion center can correctly identify the primary user's location is only 1/K.
After the anonymous area is constructed, PU_i submits it together with the spectrum sharing license to the fusion center. Then, P_{PU_i}(E_{U_i}(Loc_{U_i})), ID_{PU_i}, and ID_{U_i} are written by PU_i into the transaction bill shown in Table 3, which is broadcast throughout the network. Users in the IoV with an honesty degree greater than 60 jointly compete in the mining computation to write the transaction bill into a block and append the block to the blockchain.

Since the anonymous area is much larger than the distance a vehicle moves while the area is being constructed, the error caused by vehicle movement is ignored in this paper.
2.2.3. Report and Adjudication Strategy. For the possible
users’ illegal behaviors in this scheme, this paper proposes
a strategy for judging and punishing illegal behaviors, which
is called report and adjudication strategy. In addition, we
give the concept of referees to refer to those users who participate in adjudicating illegal behavior. Firstly, the reporting
and adjudication strategy of requesting users and cooperative users are defined as follows.
(1) Definition I (Reporting and Adjudication Strategy).

(i) Reporting and adjudication strategy a1 defines the strategy for cooperative users. When U_i discovers that his location information has been leaked by PU_i, U_i sends the smart contract a request to report PU_i together with evidence of PU_i's illegal behavior. The request is then broadcast in the network, and the first 50 responding users (the referees) carry out verification and adjudication. Referees retrieve the transaction bills in the blockchain, verify the report information against them, and decide whether to support the report based on the evidence
(ii) Reporting and adjudication strategy a2 defines the strategy for requesting users. When PU_i finds that the security of the constructed anonymous area is reduced because U_i provided false location information, PU_i uses his private key to decrypt P_{PU_i}(E_{U_i}(Loc_{U_i})) in the transaction bill and obtains E_{U_i}(Loc_{U_i}). Then, E_{U_i}(Loc_{U_i}) and related evidence (e.g., that the reported location lies in uninhabited land) are sent to the smart contract as a report, which is broadcast in the network. After verifying the report information, the referees use U_i's public key to decrypt the ciphertext E_{U_i}(Loc_{U_i}) to obtain the location information Loc_{U_i}. Finally, the referees decide whether to support the report based on Loc_{U_i} and the evidence
According to reporting and adjudication strategy, after
the user initiates a report, if there are more than 25 referees
who support the report, it will be determined that the
Table 1: Position table before generalization.

| User | Location information |
| --- | --- |
| ID_{PU_i} | Loc_{PU_i} |
Table 2: Anonymous area.

| User | Location information |
| --- | --- |
| ID_{PU_i} | Loc_{U_1}, Loc_{U_2}, Loc_{U_3}, ..., Loc_{PU_i}, ..., Loc_{U_{K−1}} |
Table 3: Transaction bill.

| User | Location information |
| --- | --- |
| ID_{PU_i} | − |
| ID_{U_1} | P_{PU_i}(E_{U_1}(Loc_{U_1})) |
| ID_{U_2} | P_{PU_i}(E_{U_2}(Loc_{U_2})) |
| ... | ... |
| ID_{U_{K−1}} | P_{PU_i}(E_{U_{K−1}}(Loc_{U_{K−1}})) |
reported user has illegal behavior; otherwise, the report will
be invalid.
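The majority rule above (50 referees, strictly more than 25 supporting votes) can be sketched as:

```python
REFEREES = 50    # the first 50 responding users adjudicate
THRESHOLD = 25   # strictly more than 25 supporting votes uphold a report

def adjudicate(votes: list) -> bool:
    """True if the reported user is ruled to have illegal behavior:
    more than 25 of the 50 referees support the report."""
    assert len(votes) == REFEREES
    return sum(votes) > THRESHOLD

assert adjudicate([True] * 26 + [False] * 24)        # 26 > 25: upheld
assert not adjudicate([True] * 25 + [False] * 25)    # exactly 25: invalid
```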
Considering that a referee may adjudicate without verification, which would distort the report result, this paper also puts forward an adjudication strategy for referees' illegal behavior.

For a referee J_i, if the adjudication is wrong for T consecutive times, J_i is judged to be an illegal user, where

T = H_{J_i} ∗ m4 + m5, (5)

where H_{J_i} is J_i's honesty degree, m4 ranges between 0 and 1, and m5 can be any positive number. In the subsequent simulation experiments, we set m4 = 0.5 and m5 = 2.

Formula (5) ties T to the referee's honesty degree: the higher the honesty degree, the more tolerant the system is and the more consecutive wrong rulings are allowed.
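Assuming Formula (5) is read as T = H_{J_i} · m4 + m5 (the garbled source admits other readings), the paper's settings give a referee at the initial honesty degree of 60 a tolerance of 32 consecutive wrong rulings; a sketch:

```python
M4, M5 = 0.5, 2   # the paper's simulation settings

def wrong_rulings_tolerated(h_j: float) -> int:
    # Formula (5) as reconstructed here: T = H_Ji * m4 + m5
    return int(h_j * M4 + M5)

def referee_is_illegal(consecutive_wrong: int, h_j: float) -> bool:
    """A referee is adjudicated illegal after T consecutive wrong rulings."""
    return consecutive_wrong >= wrong_rulings_tolerated(h_j)

t = wrong_rulings_tolerated(60)   # 32 for the initial honesty degree of 60
```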
2.2.4. Incentive Mechanism. To encourage primary users to participate in spectrum sharing, encourage all users in the network to take part in anonymous area construction and adjudication, and restrict users' behavior, this paper proposes reward and punishment mechanisms for different scenarios.

(2) Definition II (Response Rate). The ratio of the number of users responding to a primary user's anonymous area construction request to the total number of users in the network is called the response rate.
For primary users, a higher honesty degree yields a higher response rate, so the response rate can only be raised by improving honesty. For secondary users, a higher honesty degree brings higher priority in spectrum allocation. Therefore, in addition to the virtual currency in the blockchain, the honesty degree itself serves as an incentive. In this scheme, we propose a reward and punishment mechanism covering three aspects, namely income, deposit, and honesty degree, which rewards compliant users and punishes users with illegal behavior.
(2) Definition III (Reward and Punishment Strategy).
(i) Reward and punishment strategy b1 defines the revenue reward and punishment. Users who participate in anonymous area construction or spectrum sharing receive virtual currency rewards, while users with illegal behavior earn reduced income during the penalty round (set to 10 rounds in this paper).
After the transaction bill is linked up, the miners check the blockchain for penalty transaction bills recording illegal behavior by PU_i or U_i. Assume that m2 is the virtual currency generated by the miner through mining; the miner itself obtains m2/3.

(1) If no penalty transaction bill for PU_i's illegal behavior is found in the blockchain, the miner assigns PU_i virtual currency C_{PU_i}, which satisfies

C_{PU_i} = m2 / 3. (6)

(2) If no penalty transaction bill for U_i's illegal behavior is found in the blockchain, the miner assigns U_i virtual currency C_{U_i}, which satisfies

C_{U_i} = m2 / (3K). (7)

(3) When a user's illegal behavior is found and recorded in the lth block block_l, let N be the current number of blocks, C_i the income the user would earn with no illegal behavior, and C_i′ the user's actual income this time:

(a) If N − l ≤ 10, the miner assigns the user

C_i′ = C_i / 2. (8)

(b) If N − l > 10, the miner assigns the user

C_i′ = C_i. (9)
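Formulas (6)-(9) split the mining income three ways and halve the income of recently penalized users; a minimal sketch (names are ours):

```python
def miner_payouts(m2: float, k: int) -> dict:
    """Formulas (6)-(7): the miner keeps m2/3, the requesting user
    receives m2/3, and each of the K-1 cooperative users m2/(3K)."""
    return {"miner": m2 / 3, "PU": m2 / 3, "per_U": m2 / (3 * k)}

def actual_income(c: float, blocks_since_penalty: int) -> float:
    """Formulas (8)-(9): income is halved while the penalty bill lies
    within the last 10 blocks, and restored to the full amount after."""
    return c / 2 if blocks_since_penalty <= 10 else c
```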
Table 4: Simulation parameter table.

| Parameter | Meaning | Default |
| --- | --- | --- |
| N | Number of users | 10000 |
| A | Proportion of primary users | 30% |
| B | Proportion of secondary users | 50% |
| C | Proportion of attackers | 20% |
| Cycle | Number of simulation rounds | 0 ~ 200 |
| Block | Current block length | 100 |
| M | Number of transactions stored per block | 100 |
| K | Number of users participating in anonymous area construction | 2 ~ 37 |
Figure 3: Anonymous region of K = 10 (x–y scatter of the user's true position without IMLPP and the generalized anonymous points with IMLPP).
Figure 4: Anonymous region of K = 20 (same axes and legend as Figure 3).
Figure 5: Average calculation delay versus the anonymous zone construction parameter K, for primary (requesting) and secondary (cooperative) users.

(ii) Reward and punishment strategy b2 defines the honesty degree reward and punishment and the deposit punishment. The honesty degree is updated according to the honesty degree evaluation algorithm whenever users participate in anonymous area construction, share spectrum, or commit illegal behavior. In addition, the deposit paid by an illegal user is used as compensation for the privacy victim.

After the transaction bill is linked, PU_i's and U_i's honesty degrees H_i are updated according to the honesty degree evaluation algorithm:

H_i = H_i + 20 / H_i. (10)

If a user commits illegal behavior during the construction of the anonymous area, the penalty transaction bill is broadcast and the user is punished:

(1) If U_i is adjudicated to have illegal behavior, the deposit paid by U_i is used as PU_i's compensation, and U_i's honesty degree H_{U_i} is updated according to the honesty degree evaluation algorithm:

H_{U_i} = H_{U_i} − H_{U_i} / 20. (11)

(2) If PU_i is adjudicated to have illegal behavior, the deposit paid by PU_i is used as compensation, and PU_i's honesty degree H_{PU_i} is updated according to the honesty degree evaluation algorithm:

H_{PU_i} = H_{PU_i} − H_{PU_i} / 20. (12)

The following introduces the reward and punishment mechanism for referees. Assume that a referee J_i's honesty degree is H_{J_i}:

(1) If, after participating in a ruling, J_i is not determined to be a user with illegal behavior, J_i's honesty degree increases to

H_{J_i} = H_{J_i} + 20 / H_{J_i}. (13)

(2) If J_i is adjudicated to be an illegal user after participating in a ruling, J_i's honesty degree is reduced to

H_{J_i} = H_{J_i} − H_{J_i} / 20. (14)

The reward and punishment mechanisms reward the primary users who participate in spectrum sharing, the cooperative users who participate in the construction of anonymous areas, and the referees who participate in adjudication, and punish illegal users; they thus provide an incentive while effectively constraining user behavior.
## 3. Simulation Experiment and Analysis
3.1. Simulation Environment. In this section, we conduct a
simulation analysis on the proposed IMLPP scheme to verify
its impact on location privacy protection and spectrum sharing incentives in spectrum sharing in IoV. The parameter
settings of simulation environment are shown in Table 4.
3.2. Simulation Analysis and Results of Location Privacy
Protection. In this paper, a distributed K-anonymous
Figure 6: Average communication overhead versus the anonymous zone construction parameter K, for primary (requesting) and secondary (cooperative) users.

Figure 7: Time consumption for different K values.
algorithm is designed using blockchain technology, which protects the location privacy of the primary user in IoV spectrum sharing and removes the privacy threat that spectrum sharing would otherwise pose.
3.2.1. Privacy Protection of Constructing Anonymous Areas.
This part of the experiment analyzes the privacy protection
effect of the privacy protection scheme on the primary user.
The vehicle user running in the vehicle network is regarded
as a point moving in a two-dimensional plane coordinate
system, and the coordinates of the point represent the user's position. As shown in Figure 3, before the IMLPP scheme is used, the user's position is the red point A, which attackers can obtain directly. With the scheme and K = 10, the position A (3, 0) is generalized into an anonymous area composed of 10 points, so the probability that an attacker correctly identifies point A is only 1/10. When K = 20, as shown in Figure 4, that probability drops to 1/20: the larger the K value, the safer the user's location privacy. We use Java for the simulation experiments and Python to plot and analyze the resulting data.
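The 1/K guarantee can be checked empirically: an adversary who sees only the K generalized points and guesses uniformly succeeds with probability 1/K (a Monte Carlo sketch, not the paper's Java simulation):

```python
import random

def adversary_success_rate(k: int, trials: int = 20000) -> float:
    """Estimate the chance that a uniformly guessing adversary picks
    the requesting user's point out of the K points in the area."""
    rng = random.Random(0)          # fixed seed for reproducibility
    hits = 0
    for _ in range(trials):
        true_idx = rng.randrange(k)  # requester's slot in the area
        guess = rng.randrange(k)     # uniform adversary guess
        hits += guess == true_idx
    return hits / trials

rate = adversary_success_rate(10)    # approaches 1/10 for K = 10
```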
3.2.2. Influence of Parameter K on Average Computing Delay
and Communication Overhead. In this part, the calculation
delay and communication overhead of users in the process
of anonymous area construction are analyzed
experimentally.
We run simulations for different K values ranging from 2 to 19 and record each user's computing delay and communication overhead, as shown in Figures 5 and 6. The figures show that the K value affects the computational delay and communication overhead of the requesting user, while cooperative users are unaffected.
This is because when the requesting user receives the
location bill of the cooperative user, the requesting user
Figure 8: Time consumption when the number of users in the network varies, for K = 20, 30, and 40.
Figure 9: Relationship between the number of users in the network and the K value under limited time budgets (700 ms, 500 ms, and 350 ms).
needs to decrypt the location information using the public
key of the cooperative user, while the cooperative user only
needs to send the location bill to the requesting user. Therefore, with the increase of K value, the calculation delay
required by the requesting user increases, and the cooperative user will not be affected by it, as shown in Figure 5.
In addition, during anonymous area construction, as the
number of cooperative users participating in the anonymous
area construction increases, the number of location information bills that the requesting user needs to receive increases,
and the amount of information that needs to be processed
increases, while the cooperative user is not affected. Therefore, as shown in Figure 6, the communication overhead of
the requesting user increases with the value of K, while the
communication overhead of the cooperative user is not
affected by the change of the value of K.
In addition, we control the number of users in the network to be 10,000 and select different K values for simulation experiments. The K value ranges from 2 to 30, and
the generation time of the anonymous area is obtained, as
shown in Figure 7. The figure shows that, when the number of users in the network is fixed, the time for constructing the anonymous area increases with K, although a larger K protects the primary user's location privacy better. In addition, when the value of K is fixed, as shown in Figure 8, the number of users in the network is inversely proportional to the construction time, and the more users in the
Wireless Communications and Mobile Computing 11
Figure 10: The effect of blockchain length on unauthorized users (computation delay vs. blockchain length, for the scheme in [9] and IMLPP).
Figure 11: The impact of historical collaboration times on the communication overhead required by requesting users (overhead vs. historical collaboration times, for the scheme in [9] and IMLPP).
network, the less time it takes to construct the anonymous area. However, within a limited time, as shown in Figure 9, the larger the K value required by the primary user, the more users the network must contain.
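The qualitative trends in Figures 7 and 8 (construction time grows with K and falls with network density) can be illustrated with a toy expected-wait model; the rate constant and the formula itself are illustrative assumptions, not taken from the paper.

```python
def construction_time(k, n_users, c=50_000.0):
    # Toy model: the requesting user must collect location bills from k-1
    # cooperators; replies arrive at a rate proportional to the number of
    # users in the network, so the expected wait grows with k and falls
    # as the network gets denser.
    return c * (k - 1) / n_users

# Fixed population: larger k means a longer construction time (Figure 7).
assert construction_time(30, 10_000) > construction_time(20, 10_000)
# Fixed k: denser networks finish faster (Figure 8).
assert construction_time(20, 1_400) < construction_time(20, 200)
```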
3.2.3. The Influence of Blockchain Length. In this scheme, after receiving an anonymous area construction request sent by an authorized user, a cooperative user only needs to decide whether to participate in spectrum sharing according to the requester's integrity, so the length of the blockchain does not affect unauthorized users. In the scheme of [19], by contrast, in order to verify whether there is location privacy leakage or fraudulent behavior in the history of the requesting user, the collaborating user needs to download and query the transaction bills stored in the entire blockchain. Therefore, as shown in Figure 10, in the scheme of [19] the computing delay required by users in the anonymous area construction process increases with the length of the distributed anonymous area cooperative construction blockchain, whereas the length of the blockchain does not affect this scheme. This scheme can therefore reduce the computational overhead well.
3.2.4. The Impact of Historical Collaboration Times. In
scheme [19], the user’s ID will be used as an index to retrieve
all historical transaction bills containing the ID in the blockchain system, so that each user in the network can trace the
historical behavior of requesting users and cooperative users.
As shown in Figure 11, as the number of times that the requesting user has participated in the construction of anonymous areas as a collaborator increases, the number of transaction bill numbers that the requesting user needs to provide also increases, and so the communication
Figure 12: Enthusiasm of primary users to participate in spectrum sharing (positive rate vs. experimental rounds, with IMLPP and without incentives).
Figure 13: Response rates of primary users with different honesty degrees (response rate vs. experimental rounds, for honesty degrees H = 100, 80, and 50).
overhead also increases. By contrast, the integrity evaluation algorithm used in this paper means that the user does not need to provide transaction bill numbers, so the number of historical cooperations does not affect the communication overhead required to construct the user's anonymous area. This scheme can therefore reduce the communication overhead very well.
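The contrast between full-chain scanning and a single integrity lookup can be sketched as follows; the dictionary-based honesty table and the bills-per-block constant are illustrative assumptions, not details from either scheme.

```python
def verification_cost_scan(chain_length, bills_per_block=10):
    # Scheme in [19]: the collaborating user downloads and queries every
    # transaction bill stored on the chain, so cost grows with its length.
    return chain_length * bills_per_block

def verification_cost_integrity(honesty_table, user_id):
    # IMLPP: participation is decided from a single stored integrity value,
    # so the cost does not depend on how long the blockchain has become.
    return 1 if user_id in honesty_table else 0

honesty = {"alice": 80, "bob": 100}
# Scanning cost rises tenfold with chain length; the integrity lookup
# stays constant for every user.
assert verification_cost_scan(100) > verification_cost_scan(10)
assert verification_cost_integrity(honesty, "alice") == \
       verification_cost_integrity(honesty, "bob") == 1
```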
3.3. Simulation Results and Analysis of the Spectrum Sharing Incentive. This part of the experiment analyzes the effect of the incentive mechanism on primary users. In an environment without an incentive mechanism, primary users fall into three types: (1) those who always actively share their idle spectrum, (2) those who sometimes share their idle spectrum, and (3) those who do not participate in spectrum sharing. The initial proportions of users with idle spectrum are set to 20% for class I, 60% for class II, and 20% for class III. Assuming that all primary users in the current network have idle spectrum, the proportion of primary users willing to participate in spectrum sharing, relative to the total number of primary users in the network, is taken as the positive rate of spectrum sharing. As shown in Figure 12, in the absence of incentives, only the first and second types of primary users participate in spectrum sharing, and owing to the low enthusiasm of the second type, the positive rate of spectrum sharing is between 0.3 and 0.5. Under the incentive mechanism, the second and third types of primary users also actively participate in spectrum sharing to obtain virtual currency
rewards and improve their honesty degree, so the positive rate of spectrum sharing rises to between 0.7 and 0.9.
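The two reported bands can be reproduced with a simple expected-value calculation; the per-class participation probabilities are illustrative assumptions chosen to fall inside those bands, not values from the paper.

```python
def positive_rate(share_probs, proportions=(0.2, 0.6, 0.2)):
    # Expected fraction of primary users (all assumed to hold idle spectrum)
    # that participate, given each class's probability of sharing.
    return sum(p * s for p, s in zip(proportions, share_probs))

# Without incentives: class I always shares, class II sometimes, class III never.
no_incentive = positive_rate((1.0, 0.4, 0.0))    # 0.44, inside the 0.3-0.5 band
# With IMLPP rewards, classes II and III also participate actively.
with_incentive = positive_rate((1.0, 0.9, 0.6))  # 0.86, inside the 0.7-0.9 band
assert 0.3 <= no_incentive <= 0.5 < 0.7 <= with_incentive <= 0.9
```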
As shown in Figure 13, the histogram shows, from left to right, the response rates of primary users with honesty degrees of 100, 80, and 50 in the location privacy protection scheme. The higher the honesty degree of a primary user, the higher its response rate. This is because users with a higher honesty degree are more credible, so more users are willing to participate in protecting their location privacy.
## 4. Concluding Remarks
This paper proposes an incentive mechanism called IMLPP,
which uses a blockchain-based K-anonymity scheme to construct a K-anonymity area that meets the needs of the primary user to protect their location information in
spectrum sharing. On this basis, honesty degree and virtual
currency are used to motivate users. The proposed scheme
can effectively generalize primary users’ location information, meet their personalized privacy protection needs, and
encourage them to actively participate in spectrum sharing.
In addition, both requesting users and cooperative users need to pay a deposit, which constrains users' behavior.
## Data Availability
The data of secure computation protocols and algorithms
used to support the findings of this study are available from
the corresponding author upon request.
## Additional Points
This is an open access article distributed under the Creative
Commons Attribution License, which permits unrestricted
use, distribution, and reproduction in any medium, provided
the original work is properly cited (Copyright © 2021 Hongning Li et al.).
## Conflicts of Interest
The authors declare that they have no conflicts of interest.
## Acknowledgments
This work is partly supported by the National Key Research
and Development Program of China under Grant
2021YFB2700600 and 2019YFC0118800, the National Natural Science Foundation of China under Grant 62132013 and
61903384, the Key Research and Development Programs of
Shaanxi under Grant 2021ZDLGY06-03, and High-Level
Innovation Research Institute Project under Grant
2021B0909050008.
## References
[1] M. Rajendran and M. Duraisamy, “Distributed coalition formation game for enhancing cooperative spectrum sensing in
cognitive radio ad hoc networks,” IET Networks, vol. 9, no. 1,
pp. 12–22, 2020.
[2] F. Jingyu, Y. Jinwen, Z. Ruitong, and Z. Wenbo, “Internet of
things spectrum sharing incentive mechanism against location
privacy leakage,” Computer Research and Development, vol. 57,
no. 10, pp. 2209–2220, 2020.
[3] L. Xiaohui, Z. Qi, and W. Xianbin, “Privacy-aware crowdsourced spectrum sensing and multi-user sharing mechanism
in dynamic spectrum access networks,” IEEE Access, vol. 7,
pp. 32971–32988, 2019.
[4] Y. Xuhang, S. Roy, and R. Poovendran, “Pricing mechanisms for crowd-sensed spatial-statistics-based radio mapping,” IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 2, pp. 242–254, 2017.
[5] O. Elnahas, M. Elsabrouty, O. Muta, and H. Furukawa, “Game theoretic approaches for cooperative spectrum sensing in energy-harvesting cognitive radio networks,” IEEE Access, vol. 6, pp. 11086–11100, 2018.
[6] Y. Changyan, J. Cai, and G. Zhang, “Spectrum auction for differential secondary wireless service provisioning with time-dependent valuation information,” IEEE Transactions on Wireless Communications, vol. 16, no. 1, pp. 206–220, 2017.
[7] X. Dong, T. Zhang, D. Lu, G. Li, Y. Shen, and J. Ma, “Preserving geo-indistinguishability of the primary user in dynamic spectrum sharing,” IEEE Transactions on Vehicular Technology, vol. 68, no. 9, pp. 8881–8892, 2019.
[8] Y. Yuan and F. Y. Wang, “Towards blockchain based intelligent transportation systems,” in 2016 IEEE 19th international
conference on intelligent transportation systems (ITSC),
pp. 2663–2668, Rio de Janeiro, Brazil, 2016.
[9] B. Leiding, P. Memarmoshrefi, and D. Hogrefe, “Self-managed
and blockchain based vehicular ad-hoc networks,” in Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, pp. 137–140,
Heidelberg, Germany, 2016.
[10] Z. Yang, K. Yang, L. Lei, K. Zheng, and V. C. Leung, “Blockchain
based decentralized trust management in vehicular networks,”
IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1495–1505, 2019.
[11] M. Cebe, E. Ergin, K. Akkaya, H. Aksu, and S. Uluagac, “Block
4forensic: an integrated lightweight blockchain framework for
forensics applications of connected vehicles,” IEEE Communications Magazine, vol. 56, no. 10, pp. 50–57, 2018.
[12] M. Li, L. Zhu, and X. Lin, “Efficient and privacy preserving carpooling using blockchain assisted vehicular fog computing,” IEEE
Internet of Things Journal, vol. 6, no. 3, pp. 4573–4584, 2019.
[13] L. Liu, M. Zhao, M. Yu, M. A. Jan, D. Lan, and A. Taherkordi,
“Mobility-Aware Multi-Hop Task Offloading for Autonomous
Driving in Vehicular Edge Computing and Networks,” IEEE
Transactions on Intelligent Transportation Systems, pp. 1–14,
2022.
[14] C. Tan, X. Li, T. H. Luan, B. Gu, Y. Qu, and L. Gao, “Digital
twin based remote resource sharing in internet of vehicles
using consortium blockchain,” in 2021 IEEE 94th Vehicular
Technology Conference (VTC2021-Fall), pp. 1–6, Norman,
OK, USA, 2021.
[15] Y. Qu, S. R. Pokhrel, S. Garg, L. Gao, and Y. Xiang, “A blockchained federated learning framework for cognitive computing in industry 4.0 networks,” IEEE Transactions on
Industrial Informatics, vol. 17, no. 4, pp. 2964–2973, 2021.
[16] L. Liu, J. Feng, Q. Pei et al., “Blockchain-enabled secure data
sharing scheme in mobile-edge computing: an asynchronous
advantage actor–critic learning approach,” IEEE Internet of
Things Journal, vol. 8, no. 4, pp. 2342–2353, 2021.
[17] C. Peng, C. Wu, L. Gao, J. Zhang, K. L. Alvin Yau, and Y. Ji,
“Blockchain for vehicular internet of things: recent advances
and open issues,” Sensors, vol. 20, no. 18, p. 5079, 2020.
[18] L. Gao, C. Wu, T. Yoshinaga, X. Chen, and Y. Ji, Multi-channel
blockchain scheme for internet of vehicles, 2021.
[19] L. Hai, L. Xinghua, L. Bin et al., “A distributed K-anonymous
location privacy protection scheme based on blockchain,”
Journal of Computer Science, vol. 42, no. 5, pp. 942–960, 2019.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2022/6807257?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2022/6807257, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GOLD",
"url": "https://downloads.hindawi.com/journals/wcmc/2022/6807257.pdf"
}
| 2,022
|
[] | true
| 2022-04-28T00:00:00
|
[
{
"paperId": "c795828f5390d4208bc70b8a110082a7fc3884fd",
"title": "Mobility-Aware Multi-Hop Task Offloading for Autonomous Driving in Vehicular Edge Computing and Networks"
},
{
"paperId": "9d34b2633d34f1268a0fff3f3060ea494da5abb0",
"title": "Digital Twin Based Remote Resource Sharing in Internet of Vehicles using Consortium Blockchain"
},
{
"paperId": "a208331a249f082c6b19076754dad9cb129e12aa",
"title": "A Blockchained Federated Learning Framework for Cognitive Computing in Industry 4.0 Networks"
},
{
"paperId": "9dad453f4b0273354074e1735b3c73fb829f25f3",
"title": "Blockchain-Enabled Secure Data Sharing Scheme in Mobile-Edge Computing: An Asynchronous Advantage Actor–Critic Learning Approach"
},
{
"paperId": "774c925e6ed19c8b7199129c22a64b1c08449782",
"title": "Blockchain for Vehicular Internet of Things: Recent Advances and Open Issues"
},
{
"paperId": "5e981774575866bee36a6b102225be1e645745a5",
"title": "Distributed coalition formation game for enhancing cooperative spectrum sensing in cognitive radio ad hoc networks"
},
{
"paperId": "b833ed0e07d3bf041d2684956384034e5bce59fe",
"title": "Preserving Geo-Indistinguishability of the Primary User in Dynamic Spectrum Sharing"
},
{
"paperId": "de7c67097a91ca5ac2b76ce7f07c3bb0d6c96f64",
"title": "Efficient and Privacy-Preserving Carpooling Using Blockchain-Assisted Vehicular Fog Computing"
},
{
"paperId": "2a902fc7630b77daa2873f72b9cc6c3619677a10",
"title": "Blockchain-Based Decentralized Trust Management in Vehicular Networks"
},
{
"paperId": "e65877b3001a500a4b8c9544512b2889609092e8",
"title": "Privacy-Aware Crowdsourced Spectrum Sensing and Multi-User Sharing Mechanism in Dynamic Spectrum Access Networks"
},
{
"paperId": "26de56bb5ae0898a240474043bc9873c42359fd2",
"title": "Game Theoretic Approaches for Cooperative Spectrum Sensing in Energy-Harvesting Cognitive Radio Networks"
},
{
"paperId": "1ce725fdabe981f000419fde78156b56add29e66",
"title": "Block4Forensic: An Integrated Lightweight Blockchain Framework for Forensics Applications of Connected Vehicles"
},
{
"paperId": "b072c1a95af0e86f6eb5061a6997c65ce1cf77e5",
"title": "Pricing Mechanisms for Crowd-Sensed Spatial-Statistics-Based Radio Mapping"
},
{
"paperId": "3222d1e74b171cfd84516e4652c0efafb804c95c",
"title": "Towards blockchain-based intelligent transportation systems"
},
{
"paperId": "171dfd113c84f86138699e73e6819a5144832199",
"title": "Self-managed and blockchain-based vehicular ad-hoc networks"
},
{
"paperId": null,
"title": "Ji,Multi-channel blockchain scheme for internet"
},
{
"paperId": null,
"title": "Internet of things spectrum sharing incentive mechanism against location privacy leakage"
},
{
"paperId": null,
"title": "A distributed K-anonymous location privacy protection scheme based on blockchain"
},
{
"paperId": "13c7312096289f5d4066802aa5bfe1ca01d439d9",
"title": "Spectrum Auction for Differential Secondary Wireless Service Provisioning With Time-Dependent Valuation Information"
}
] | 11,194
|
en
|
[
{
"category": "Engineering",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/030c90b8714e0f37fe43afdcb779381dacf739cd
|
[
"Engineering"
] | 0.886125
|
Optimal Planning and Operation of Smart Grids with Electric Vehicle Interconnection
|
030c90b8714e0f37fe43afdcb779381dacf739cd
|
[
{
"authorId": "145865308",
"name": "M. Stadler"
},
{
"authorId": "2773017",
"name": "C. Marnay"
},
{
"authorId": "32396317",
"name": "M. Kloess"
},
{
"authorId": "146778734",
"name": "G. Cardoso"
},
{
"authorId": "3766413",
"name": "G. Mendes"
},
{
"authorId": "18980069",
"name": "A. Siddiqui"
},
{
"authorId": "2111334611",
"name": "Ratnesh K. Sharma"
},
{
"authorId": "3176899",
"name": "O. Mégel"
},
{
"authorId": "143693292",
"name": "J. Lai"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
## Lawrence Berkeley National Laboratory
##### Title
###### Optimal Planning and Operation of Smart Grids with Electric Vehicle Interconnection
##### Permalink
###### https://escholarship.org/uc/item/6j02f15t
##### Author
###### Stadler, Michael
##### Publication Date
###### 2012-04-01
Peer reviewed
# ERNEST ORLANDO LAWRENCE BERKELEY NATIONAL LABORATORY
Optimal Planning and Operation of Smart Grids
with Electric Vehicle Interconnection
## Michael Stadler, Chris Marnay, Maximilian Kloess, Gonçalo Cardoso, Gonçalo Mendes, Afzal Siddiqui, Ratnesh Sharma, Olivier Mégel, and Judy Lai
Environmental Energy Technologies Division
#### January 2, 2012
to be published in the Journal of Energy Engineering, American Society of Civil Engineers (ASCE), Special Issue: Challenges and opportunities in the 21st century energy infrastructure, ISSN 0733-9402 / e-ISSN - 1943-7897
###### http://eetd.lbl.gov/EA/EMP/emp-pubs.html
##### The work described in this paper was funded by the Office of Electricity Delivery and Energy Reliability, Distributed Energy Program of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 and by NEC Laboratories America Inc. We also want to thank Professor Dr. Tomás Gómez and Ilan Momber for their very valuable contributions to previous versions of DER-CAM.
##### Disclaimer
This document was prepared as an account of work sponsored by the United States Government. While this document is believed to contain correct information, neither the United States Government nor any agency thereof, nor The Regents of the University of California, nor any of their employees, makes any warranty, express or implied, or assumes any legal responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by its trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof, or The Regents of the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, or The Regents of the University of California.
Ernest Orlando Lawrence Berkeley National Laboratory is an equal opportunity employer.
### Optimal Planning and Operation of Smart Grids with Electric Vehicle Interconnection
###### M. Stadler[1], C. Marnay[2], M. Kloess[3], G. Cardoso[4 ], G. Mendes[5], A. Siddiqui[6], R. Sharma[7], O. Mégel[8], J. Lai[9]
**Abstract**
Connection of electric storage technologies to smart grids will have substantial implications for building energy systems. Local storage will enable demand response. When connected to buildings, mobile storage devices such as electric vehicles (EVs) compete with conventional stationary sources at the building. EVs can change the financial as well as environmental attractiveness of on-site generation (e.g. PV or fuel cells). In order to examine the impact of EVs on building energy costs and CO2 emissions, a distributed energy resources adoption problem is formulated as a mixed-integer linear program that minimizes annual building energy costs or CO2 emissions, and is solved for 2020 technology assumptions. The mixed-integer linear program is applied to a set of 139 different commercial buildings in California, and example results as well as the aggregated economic and environmental benefits are reported. Special constraints for the available PV, solar thermal, and EV parking lots at the commercial buildings are considered. The research shows that EV batteries can be used to reduce utility-related energy costs at the smart grid or commercial building through arbitrage of energy between buildings with different tariffs. However, putting more emphasis on CO2 emissions makes stationary storage more attractive, and stationary storage capacities increase while the attractiveness of EVs decreases. The limited availability of EVs at the commercial building decreases the attractiveness of EVs, and if PV is chosen by the optimization, it is mostly used to charge the stationary storage at the commercial building rather than the EVs connected to the building.
**Keywords**
carbon emissions, combined heat and power, commercial buildings, distributed energy resources, distributed generation,
electric vehicle, load shifting, microgrid, optimization, smart grid, storage technologies
**1.** **Introduction**
Several papers analyze the impact of renewable energy sources and EVs on the power grid and electricity prices. For example,
Sioshansi and Denholm, 2009 look into the possibility of providing ancillary services and storage capabilities to the power grid
by utilizing plug-in hybrid electric vehicles (PHEVs). Wang et al., 2010 model the impact on electricity prices due to
additional power grid loads from EVs. Since buildings are the link between the power system and the EVs, this work uses a
building centric approach and looks into the cost and CO2 benefits for buildings adopting distributed energy resources (DER).
Furthermore, there are many DERs in a building that will be influenced by EV batteries. Stationary storage in buildings is also attracting more research attention, which can create competition between mobile and stationary storage. On the other hand, when mobile storage is no longer suitable for EV usage, it can be recycled and used as stationary storage in buildings, where the battery specifications can be relaxed. This 2nd life of EV batteries is attracting the attention of researchers and might also create opportunities for EV batteries (see also TSRC). All these options and interactions of DER in buildings require an integrated approach for analyzing the benefits of EVs connected to buildings.
This paper focuses on the analysis of the optimal interaction of electric vehicles (EVs) with commercial smartgrids/microgrids,
which may include photovoltaic (PV), solar thermal, stationary batteries, thermal storage, and combined heat and power (CHP)
systems with and without absorption chillers. A microgrid is a group of interconnected loads and DER within clearly defined
electrical boundaries that acts as a single controllable entity with respect to the grid. A microgrid can connect and disconnect
1 Ernest Orlando Lawrence Berkeley National Laboratory, One Cyclotron Road, MS: 90-1121, Berkeley, California 94720,
USA and Center for Energy and Innovative Technologies, Austria, MStadler@lbl.gov
2 Ernest Orlando Lawrence Berkeley National Laboratory, USA, ChrisMarnay@lbl.gov
3 Ernest Orlando Lawrence Berkeley National Laboratory, USA and Vienna University of Technology, Austria,
Kloess@eeg.tuwien.ac.at
4 Instituto Superior Técnico - MIT Portugal Program, Portugal, Goncalo.cardoso@ist.utl.pt,
5 Instituto Superior Técnico - MIT Portugal Program, Portugal, Goncalo.P.Mendes@ist.utl.pt
6 University College London, UK and Stockholm University, Sweden, Afzal@stats.ac.ucl.uk
7 NEC Laboratories America Inc., USA, Ratnesh@sv.nec-labs.com
8 Ecole Polytechnique Fédérale de Lausanne, Switzerland, Olivier.Megel@epfl.ch
9 Ernest Orlando Lawrence Berkeley National Laboratory, USA, JLai@lbl.gov
from the grid to enable it to operate in both grid-connected or island mode. An overview of microgrids can be found in
Hatziargyriou et al., 2007.
In previous work, Berkeley Lab has developed the Distributed Energy Resources Customer Adoption Model (DER-CAM) with
its mathematical formulation documented in Siddiqui et al., 2005 and Stadler et al., 2008. Its optimization techniques find both
the combination of equipment and its operation over a typical year that minimizes the site’s total energy bill or carbon dioxide
(CO2) emissions, typically for electricity plus natural gas purchases, as well as amortized equipment purchases. It outputs the
optimal distributed generation (DG) and storage adoption combination and an hourly operating schedule, as well as the
resulting costs, fuel consumption, and CO2 emissions. DER-CAM always takes the perspective of the building owner or
operator since it is a customer adoption model and does not optimize the benefits of utilities or the society directly. However,
the results can be aggregated to a state level as shown below, which allows estimating changes in the state’s or the commercial
sector’s CO2 emissions.
Berkeley Lab has access to the California End-Use Survey (CEUS), which holds roughly 2700 building load profiles for the
commercial sector in California (see CEUS). These hourly load profiles are needed to make optimal decisions on the operation
of the DG equipment, which influences the optimal DG investment capacities since DER-CAM considers amortized
investment and operation costs. Berkeley Lab compiled a database of 139 representative building load profiles for buildings
with peak loads between 100 kW and 5 MW, and buildings in this size range account for roughly 35% of total statewide
commercial sector electric sales (Stadler et al., 2009). The 139 load profiles are made up of the following building types in
different sizes: hospitals, colleges, schools, restaurants, warehouses, retail stores, groceries, offices, and hotels/motels.
Mobile storage can directly contribute to tariff-driven demand response in these commercial buildings. By using EVs
connected to the buildings for energy management, the buildings could arbitrage their costs. However, since the car battery
lifetime is reduced due to the increased energy transfer, a model that also reimburses car owners for the degradation is
required. In general, the link between a microgrid and an EV can create a win-win situation, wherein the microgrid can reduce
utility costs by load shifting, while the EV owner receives revenue that partially offsets his/her expensive mobile storage
investment. Previous work done for certain types of buildings shows that the economic impact for the car owner is limited
relative to the costs of mobile storage for the site analyzed, i.e., the economic benefits from EV connections are modest
(Momber et al., 2010 and Mendes et al., 2011). However, that work does not consider all possible DER technologies in
buildings nor does it track the CO2 savings from mobile storage connected to buildings.
This paper will specifically focus on the new EV equations in DER-CAM, e.g. EV specific electric balance equation or CO2
emissions from EV electricity exchange, and assess the impact of EVs connected to different types of commercial buildings in
2020. The 139 buildings are grouped in different climate zones in California and within the three major utility service
territories of Pacific Gas & Electric (PG&E), Southern California Edison (SCE), and San Diego Gas & Electric (SDG&E).
Please note that this paper does not model the impact on electricity prices due to additional power grid loads from EVs and the
assumed tariffs for the three used service territories are assumed to be static. For impacts on marginal energy prices please
refer to (Wang et al., 2010). Furthermore, this work uses an area constraint for the maximum possible PV and solar thermal
adoption as well as for the available EV parking space. This constraint has a significant impact on the DER adoption and
operation and can drive up building energy costs.
The structure of this paper is as follows:
- Section 2 describes the Distributed Energy Resources Customer Adoption Model (DER-CAM)
- Section 3 discusses how EVs are modeled in DER-CAM
- Section 4 presents the data used for the analyses performed here
- Section 5 provides the results and discusses the impact on mobile and stationary storage adoption
- Section 6 summarizes the paper, discusses its limitations, and provides directions for future research in this area.
**2.** **DER-CAM**
DER-CAM is a mixed-integer linear program (MILP) written and executed in the General Algebraic Modeling System
(GAMS). Its objective is typically to minimize the annual costs or CO2 emissions for providing energy services to the modeled
site, including utility electricity and natural gas purchases, plus amortized capital and maintenance costs for any DG
investments. Other objectives, such as carbon or energy minimization, or a combination are also possible. The approach is
fully technology-neutral and can include energy purchases, on-site conversion, both electrical and thermal on-site renewable
harvesting. Furthermore, this approach considers the simultaneity of results. For example, building cooling technologies are
chosen such that the results reflect the benefit of electricity demand displacement by heat-activated cooling, which lowers
building peak load and, therefore, the on-site generation requirement, and also has a disproportionate benefit on bills because
of demand charges and time-of-use (TOU) energy charges. Site-specific inputs to the model are end-use energy loads, detailed
electricity and natural gas tariffs, and DG investment options. In general these load profiles can be simulated and gathered
from building simulation tools (EnergyPlus) or taken from building information systems in the case of existing buildings.
Figure 1 shows a high-level schematic of the possible building energy flows modeled in DER-CAM. For this we use Sankey
diagrams, which show in a graphical way how loads can be met by different resources at given efficiencies (Schmidt, 2006).
Thus, a Sankey diagram provides a full view of possible resources that can be considered within the optimization.
Available energy inputs to the site are solar radiation, utility electricity, and utility natural gas. The location-specific solar
radiation will impact the adoption of PV and solar thermal technologies. Previous work has shown that the utility electricity
prices and utility natural gas prices are a main driver for natural-gas-fired distributed technologies. The gross margin of a gas-fired power plant from selling a unit of electricity (the spark spread) determines the attractiveness of the plant. In the case of TOU tariffs, the spark spread increases dramatically during the expensive (normally midday) hours, which increases the attractiveness of gas-fired technologies.
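The spark-spread argument can be made concrete with a small calculation; the prices and the 30% electrical efficiency below are illustrative assumptions, not figures from the paper.

```python
def spark_spread(elec_price, gas_price, elec_efficiency):
    # Gross margin per kWh generated on site: the avoided utility purchase
    # minus the natural gas burned to produce that kWh of electricity.
    return elec_price - gas_price / elec_efficiency

# Illustrative $/kWh prices and a CHP unit with 30% electrical efficiency.
gas = 0.04
off_peak = spark_spread(elec_price=0.10, gas_price=gas, elec_efficiency=0.30)
on_peak = spark_spread(elec_price=0.30, gas_price=gas, elec_efficiency=0.30)
# The TOU peak widens the spread, making on-site gas-fired generation
# attractive exactly during the expensive midday hours.
assert on_peak > 0 > off_peak
```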
DER-CAM solves the mixed integer linear problem over a given time horizon, e.g., a year, and selects the economically or
environmental optimal combination of utility electricity purchase, on-site generation, storage and cooling equipment required
to meet the site’s end-use loads at each time step. In other words, DER-CAM looks into the optimal combination/adoption and
operation of technologies to supply the services specified on the right hand side of Figure 1. All the different arrows in Figure
1 represent energy flows, and DER-CAM optimizes these energy flows to minimize costs or CO2 emissions. Black arrows
represent natural gas or any bio-fuel, light grey represents electricity, and darker grey heat and waste heat, which can be stored
and/or used to supply the heat loads or cooling loads via absorption cooling.
The outputs of DER-CAM include the optimal DG/storage adoption and an hourly operating schedule, as well as the resulting
costs, fuel consumption, and CO2 emissions. The approach does not consider EVs in isolation but rather alongside the rest of
the DER equipment. All available technologies compete and collaborate, and simultaneous results are derived. In this way, it
can be shown that PV and stationary electric storage can compete in certain situations. If the focus of the optimization is on
cost minimization and a TOU rate with high costs during noon hours is used, then it can be demonstrated that stationary
electric storage will be discharged at the same time when the PV system is operational (Stadler et al., 2009b). The on-site fuel
use and carbon savings are, therefore, quite accurately estimated and can deviate significantly from simple estimates. Also, the
optimal pattern of utility electricity purchase is accurately delivered. Finding likely solutions to this complex problem for
multiple buildings would be impossible using simple analysis, e.g. using assumed equipment operating schedules and capacity
factors. Because CEUS buildings each represent a certain segment of the commercial building sector, results from typical
buildings can readily be scaled up to the state level in order to provide policymaking insights.
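As a rough illustration of the adoption-plus-operation logic (this is not DER-CAM itself, which is a full MILP implemented in GAMS), the sketch below enumerates a single discrete investment decision and evaluates each option under its cost-optimal dispatch; all prices, loads, and sizes are made up.

```python
def toy_adoption(load_peak, load_offpeak, peak_price, offpeak_price,
                 battery_kwh, battery_annual_cost, efficiency=0.9):
    """Minimal sketch of the DER-CAM idea: evaluate each discrete adoption
    option together with its cost-optimal operation and keep the cheapest.
    Loads are annual kWh per TOU period; prices in $/kWh."""
    # Option 1: no investment, buy everything at the posted TOU prices.
    cost_none = load_peak * peak_price + load_offpeak * offpeak_price
    # Option 2: install a battery and shift as much peak load as it allows
    # to off-peak hours, paying the round-trip-efficiency penalty plus the
    # amortized capital cost of the equipment.
    shifted = min(battery_kwh, load_peak)
    cost_batt = ((load_peak - shifted) * peak_price
                 + (load_offpeak + shifted / efficiency) * offpeak_price
                 + battery_annual_cost)
    return min((cost_none, "no storage"), (cost_batt, "battery"))

# A wide TOU spread makes the battery pay for itself ...
cost, choice = toy_adoption(100, 200, 0.30, 0.10, 50, 5)
assert choice == "battery"
# ... while a high amortized capital cost flips the decision.
cost, choice = toy_adoption(100, 200, 0.30, 0.10, 50, 20)
assert choice == "no storage"
```

This mirrors the point made above: investment and operation must be decided simultaneously, since the value of a given capacity depends on how it is dispatched against the tariff.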
**3.** **EV Approach**
Once EVs are connected to commercial buildings, electricity from their batteries can be transferred to and from the sites. The
building energy management system (EMS) can use this additional battery capacity to lower its energy bill and/or carbon
footprint. Whenever possible, economically attractive energy from a renewable energy source or CHP system at the building
could be used to offset EV charging at home. In this paper, DER-CAM is used to find the optimal charging and discharging
schedule for the EV batteries. Decision variables are, therefore, the activity levels of all available energy sources so that energy
loads are met, as well as the optimal installed capacity, making it a three-level assignment problem: energy loads, supply
scheduling, and installed capacity. Included in these variables are utility energy purchases, local energy production, and EV
interactions, which are the focus of this paper. It is assumed that the EV owner will receive compensation for battery
degradation caused by the commercial building EMS and is reimbursed for the amount of electricity charged at home and later
fed into the commercial building (see equations 1 & 5). On the other hand, if the EV is charged by electricity originating from
the commercial building, then the car owner needs to pay the commercial building for the electricity.
C_bat = E_EV ∗ CL ∗ RC_bat (1)

C_bat    EV battery degradation annual costs caused by the commercial building, $
E_EV     total annual electricity exchange through the EV battery, caused by the commercial building, kWh
CL       capacity loss factor, dimensionless
RC_bat   replacement cost of the EV battery, $/kWh
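Equation 1 amounts to a one-line calculation; the sketch below uses invented numbers (2,000 kWh of annual cycling, a 1e-4 capacity loss factor, and a $300/kWh replacement cost) purely for illustration:

```python
def battery_degradation_cost(e_ev_kwh, capacity_loss_factor, replacement_cost_per_kwh):
    """Equation 1: C_bat = E_EV * CL * RC_bat."""
    return e_ev_kwh * capacity_loss_factor * replacement_cost_per_kwh

# Invented example: 2,000 kWh cycled annually through the EV battery,
# capacity loss factor of 1e-4, $300/kWh battery replacement cost.
annual_cost = battery_degradation_cost(2000.0, 1e-4, 300.0)  # ~ $60/year
```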
The monetary losses attributable to charging and discharging as well as the decay will be covered by the commercial building.
However, since this work also reports on the environmental impact of EVs connected to commercial buildings, the modeling of
the marginal CO2 emissions is important. The marginal CO2 emissions when the EVs are plugged in at residential buildings for
charging are tracked as this is necessary to be able to calculate the proper CO2 changes in the commercial buildings (see
equations 6 & 7). Considering the abstract state of charge (SOC) pattern (solid black line) for an EV connected to an office building in Figure 2, it is obvious that the commercial building benefits from energy (area A) whose carbon footprint relates to times when the EV is not connected to the commercial building. Since the state of charge at disconnection
(SOCout) is less than at connection (SOCin), a net energy transfer to the commercial building takes place and that energy might
have a different carbon content since it originates from other sources at different times. Therefore, tracking the CO2 emissions
and different cases is an important feature within DER-CAM. This becomes even more complicated if the EVs are connected
to different buildings during a certain period of time[10].
The high-level formulation used in DER-CAM follows the standard linear programming approach:
min f = c⊺x (2)
s. t.
Ax ≤ b
L ≤ x ≤ U
where:
c cost coefficient vector
x decision variable vector
A constraint coefficient matrix
b constraint coefficient vector
L decision variable lower boundary vector
U decision variable upper boundary vector
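As a toy illustration of this standard form, the sketch below solves a two-variable LP by brute-force enumeration of the feasible region's vertices. This is a teaching device with invented numbers, not how DER-CAM's MILP is actually solved:

```python
from itertools import combinations

def solve_lp_2d(c, A, b, L, U):
    """Minimize c.x subject to A x <= b and L <= x <= U (two variables),
    by enumerating feasible intersections of constraint boundaries."""
    # Fold the variable bounds into extra inequality rows [a1, a2, rhs].
    rows = [[A[k][0], A[k][1], b[k]] for k in range(len(A))]
    rows += [[1.0, 0.0, U[0]], [-1.0, 0.0, -L[0]],
             [0.0, 1.0, U[1]], [0.0, -1.0, -L[1]]]
    best = None
    for r1, r2 in combinations(rows, 2):
        det = r1[0] * r2[1] - r2[0] * r1[1]
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        # Cramer's rule for the intersection of the two boundary lines.
        x1 = (r1[2] * r2[1] - r2[2] * r1[1]) / det
        x2 = (r1[0] * r2[2] - r2[0] * r1[2]) / det
        if all(row[0] * x1 + row[1] * x2 <= row[2] + 1e-9 for row in rows):
            f = c[0] * x1 + c[1] * x2
            if best is None or f < best[0]:
                best = (f, (x1, x2))
    return best

# Toy problem: maximize x1 + 2*x2 (i.e. minimize -x1 - 2*x2)
# subject to x1 + x2 <= 4 and 0 <= x1, x2 <= 3.
f_opt, x_opt = solve_lp_2d([-1.0, -2.0], [[1.0, 1.0]], [4.0], [0.0, 0.0], [3.0, 3.0])
```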
This translates to DER-CAM in the simplified[11] mathematical formulation explained below, where an emphasis is given to the EV-specific formulation. Please refer to Figure 3 for the representative MILP solved by DER-CAM.
_3.1_ _Input Parameters_
_a._ _Indices_
_m_ month index (1,2,… 12)
_h_ hour index (1,2,… 24)
_b._ _Market data_
C^{fix}_m             fixed electricity costs, $
CO2^{EV-home}_{m,h}   macrogrid CO2 emission during the home charging period, kgCO2/kWh.
These are the CO2 emissions of energy transferred to the commercial
building; CO2^{EV-home}_{m,h} is calculated based on the emissions when the
EV is connected to the residential building.
_c._ _EV parameters_
c        EV battery capacity, kWh
cr       EV battery maximum charge rate, dimensionless
dr       EV battery maximum discharge rate, dimensionless
p_EV     EV electricity exchange price, $/kWh.
Set to the residential charging rate for EVs
SOC_max  EV battery maximum state of charge, dimensionless
SOC_min  EV battery minimum state of charge, dimensionless
η_c      EV battery charging efficiency, dimensionless
η_dc     EV battery discharging efficiency, dimensionless
φ        electricity storage loss factor for the EV battery, dimensionless
10 Multiple building connections are not considered in this work.
11 The full DER-CAM code consists of roughly 5600 lines of code for equations, parameters, and data sets. Please note that the full detailed mathematical
formulation of DER-CAM is roughly 17 pages.
_d._ _Customer loads_
D^B_{m,h}  electricity demand from the building, kWh
_3.2_ _Decision Variables_
_a._ _Costs_
Ctotal total annual energy cost of the commercial building, $
Celec electricity costs, $
CDER distributed energy resources costs (amortized capital costs of investments), $.
Cfuel fuel costs, $
CDR demand response costs for other non-storage technologies, $
Cbat EV battery degradation costs, $
Cvar m,h variable electricity costs (energy and demand charges), $
CEV m,h EV electricity costs, $
_b._ _CO2 emissions_
CO2_total total annual CO2 emissions, kgCO2
CO2_elec CO2 emissions from electricity consumption, kgCO2
CO2_fuel CO2 emissions from DG fuel burning, kgCO2
CO2_EV CO2 emissions from EV electricity exchange, kgCO2
_c._ _Electricity exchange with the microgrid/building_
S^U_{m,h}      electricity supplied by the utility, kWh
S^{DER}_{m,h}  electricity supplied by distributed energy resources, kWh
S^{St}_{m,h}   electricity supplied by local/stationary storage, kWh
V_{m,h}        electricity sales, $
_d._ _Electricity exchange with EVs_
D^{EV}_{m,h}   electricity demand from EVs, kWh
D^{St}_{m,h}   electricity demand from local/stationary storage, kWh
E^{c→r}_{m,h}  electricity flow from car to residential building, E^{c→r} ≤ 0, kWh
E^{r→c}_{m,h}  electricity flow from residential building to car, kWh
ES^{EV}_{m,h}  electricity stored in EVs, kWh
i_{m,h}        EV storage input, kWh
o_{m,h}        EV storage output, kWh
S^{EV}_{m,h}   electricity supplied by EVs, kWh
_3.3_ _Objective Function – cost minimization_
The most commonly used goal function in DER-CAM is total energy cost minimization. This includes electricity
related costs, amortized capital costs of DER equipment, fuel costs, demand response measure costs, EV battery
degradation costs, and sales.
min C_total = C_elec + C_DER + C_fuel + C_DR + C_bat − ∑_m ∑_h V_{m,h} (3)[12]
C_elec = ∑_m ∑_h (C^{fix}_m + C^{var}_{m,h} + C^{EV}_{m,h}) (4)
C^{EV}_{m,h} = p_EV ∗ (E^{r→c}_{m,h} + E^{c→r}_{m,h} ∗ η_dc) (5)
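The EV cost term of equation 5 can be sketched as below, using the sign convention from the decision variable list (E^{c→r} ≤ 0); the price and flow values are invented:

```python
def ev_electricity_cost(p_ev, e_r_to_c, e_c_to_r, eta_dc):
    """Equation 5: C^{EV} = p_EV * (E^{r->c} + E^{c->r} * eta_dc),
    with the car-to-building flow e_c_to_r <= 0 by convention."""
    return p_ev * (e_r_to_c + e_c_to_r * eta_dc)

# Invented hour: 10 kWh charged at home (r->c), 2 kWh fed back (c->r, negative),
# at a 6 cents/kWh residential rate and 95% discharge efficiency.
hourly_cost = ev_electricity_cost(0.06, 10.0, -2.0, 0.95)
```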
_3.4_ _Objective Function – CO2 minimization_
As mentioned previously, a second objective function is also available to DER-CAM. In this case, the objective
becomes minimizing total CO2 emissions, which includes emissions linked to utility electricity and fuel usage,
but also to the CO2 emissions associated with the use of electricity from EVs and their charging at different time
periods.
min CO2_total = CO2_elec + CO2_fuel + CO2_EV (6)[13]
12 Please note that only the EV relevant variables of equation 3 are shown in more detail. For Cbat please refer to equation 1.
13 Please note that only the EV relevant variables of equation 6 are shown in more detail.
CO2_EV = ∑_m ∑_h (E^{r→c}_{m,h} + E^{c→r}_{m,h} ∗ η_dc) ∗ CO2^{EV-home}_{m,h} (7)
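The CO2 accounting of equation 7 can be sketched with invented hourly flows and home-charging marginal emission factors:

```python
def ev_co2(e_r_to_c, e_c_to_r, eta_dc, co2_home):
    """Equation 7: sum over hours of (E^{r->c} + E^{c->r} * eta_dc) * CO2^{EV-home},
    where e_c_to_r entries are <= 0 by convention."""
    return sum((rc + cr_ * eta_dc) * factor
               for rc, cr_, factor in zip(e_r_to_c, e_c_to_r, co2_home))

# Two invented hours: 4 kWh charged at home at 0.3 kgCO2/kWh, then 2 kWh
# fed back (negative flow) against a 0.5 kgCO2/kWh marginal factor.
net_kg = ev_co2([4.0, 0.0], [0.0, -2.0], 0.95, [0.3, 0.5])
```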
_3.5_ _Constraints_
_a._ _Balance equations_
This includes electric, heating and cooling balance equations, but we focus on the electric balance (equation 8), as
this relates to the EV interactions. Another relevant example is the EV battery specific electric balance equation
(equation 9).
S^U_{m,h} + S^{DER}_{m,h} + S^{St}_{m,h} + S^{EV}_{m,h} = D^B_{m,h} + D^{EV}_{m,h} + D^{St}_{m,h} + V_{m,h} (8)
ES^{EV}_{m,h} = ES^{EV}_{m,h−1} ∗ (1 − φ) + i_{m,h} − o_{m,h} (9)
_b._ _Operational constraints_
Operational constraints are applied to all technologies involved in DER-CAM, and are used, for instance to
model technology behavior. Highlighted here are the net input and output electric flows from EVs (equations 10
&11), as well as capacity related constraints (equations 12, 13 &14).
S^{EV}_{m,h} = o_{m,h} ∗ η_dc (10)
D^{EV}_{m,h} = i_{m,h} / η_c (11)
c ∗ SOC_min ≤ ES^{EV}_{m,h} ≤ c ∗ SOC_max (12)
i_{m,h} ≤ c ∗ cr (13)
o_{m,h} ≤ c ∗ dr (14)
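A minimal forward-stepping sketch of how the EV storage balance and operational constraints interact hour by hour; the parameter values are illustrative assumptions, and the real model treats these as simultaneous MILP constraints rather than simulating forward:

```python
# Invented parameters (the paper's technical parameters may differ):
c_kwh = 16.0                # battery capacity c, kWh
phi = 0.001                 # hourly storage loss factor
eta_c, eta_dc = 0.95, 0.95  # charge / discharge efficiencies
cr = dr = 0.3               # max charge / discharge rate, fraction of capacity
soc_min, soc_max = 0.2, 1.0

def step(es_prev, i_h, o_h):
    """Advance ES^{EV} by one hour (eq. 9), checking eqs. 12-14."""
    assert i_h <= c_kwh * cr and o_h <= c_kwh * dr    # eqs. 13 & 14
    es = es_prev * (1 - phi) + i_h - o_h              # eq. 9
    assert c_kwh * soc_min <= es <= c_kwh * soc_max   # eq. 12
    return es

es = 8.0                         # initial stored energy, kWh
es = step(es, i_h=4.0, o_h=0.0)  # charging hour: building supplies i/eta_c (eq. 11)
es = step(es, i_h=0.0, o_h=3.0)  # discharging hour: building receives o*eta_dc (eq. 10)
```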
**4.** **Input Data, Technology Specification, and Parameters**
The starting point for the hourly load profiles used within DER-CAM is the CEUS database, which contains 2790 premises in
total. DER are very common at industrial buildings with electric peak loads above 5 MW, but mostly overlooked for
commercial buildings with loads below 5 MW. Thus, the focus here is on mid-sized buildings, between 100 kW and 5 MW
electric peak load, and the assumption that DER will not be attractive for <100 kW buildings. This assumption results in the
consideration of 35% of the total commercial electric demand in the service territories of PG&E, SCE, and SDG&E.
As is typical for Californian utilities, the electricity tariff has a fixed charge plus TOU pricing for both energy and power
(demand) charges. The latter are proportional to the maximum rate of consumption (kW), regardless of the duration or
frequency of such consumption over the billing period. Demand charges are assessed monthly and may be for all hours of the
month or assessed only during certain periods, e.g. on-, mid-, or off-peak, or be assessed at the highest monthly hour of peak
system-wide consumption. For example, for buildings with electric peak loads above 500 kW in PG&E’s service territory, the
E-19 TOU tariff is used as the 2020 estimate. This tariff is used for the PG&E school example in the next section. The E-19
consists of a seasonal demand charge between $13.51/kW (summer) and $1.04/kW (winter), the TOU tariff varies between
$0.16/kWh (on-peak) and $0.09/kWh (off-peak) in the summer months (May-Oct). Winter months show only $0.01/kWh
difference between mid-peak and off-peak hours. Summer on-peak is defined from 12:00-18:00 on weekdays. All details of E-19 can be found at (PG&E E-19 tariff). It is assumed that in PG&E and SCE service territory the EVs can be charged at home
at night for 6c/kWh (PG&E E-9 tariff) and in the SDG&E for 14c/kWh. All used commercial utility tariffs for this paper can
be found at (Stadler et al., 2009). The demand charge in $/kW/month as well as the on-peak energy costs are a significant
determinant of technology choice and sizing of DG and electric storage system installations as can be seen in the next section.
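A back-of-envelope sketch of how a TOU energy charge and a monthly demand charge combine, loosely following the E-19 structure described above; the rates, the 21 billed weekdays, and the flat load profile are simplifying assumptions, not the actual tariff:

```python
ON_PEAK_HOURS = range(12, 18)  # summer weekday on-peak window, 12:00-18:00

def monthly_bill(hourly_kw, weekdays=21, demand_charge=13.51,
                 on_peak_rate=0.16, off_peak_rate=0.09):
    """Energy charge over one representative weekday times `weekdays`,
    plus a $/kW demand charge on the monthly peak (weekends ignored)."""
    energy = weekdays * sum(
        (on_peak_rate if h in ON_PEAK_HOURS else off_peak_rate) * kw
        for h, kw in enumerate(hourly_kw))
    demand = demand_charge * max(hourly_kw)
    return energy + demand

flat_load_bill = monthly_bill([100.0] * 24)  # flat 100 kW load all day
```

Note how the demand charge rewards shaving the single highest hour, which is exactly what discharging EV or stationary storage during on-peak hours accomplishes.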
As described in previous sections, DER-CAM finds the optimal combination of technologies in order to reach the objective,
defined in the specific runs. The available investment options comprise technologies for distributed generation of electricity,
heating and cooling energy, as well as storage technologies. DER-CAM distinguishes between discrete and continuous
technologies to improve the optimization speed of DER-CAM: the former can only be picked in discrete sizes, whereas the
latter may be selected in any size. However, discrete technologies allow modeling of economies of scale in a better way than
continuous ones, and therefore, some important technologies, e.g. CHP are considered as discrete ones. For discrete
technologies please refer to Table 1 and for continuous ones to Table 2.
In DER-CAM, there are two types of internal combustion engines (ICE) and fuel cells (FC) available – with and without heat
exchangers (HX) (see Table 1). HX can enable waste heat utilization for hot water usage and absorption cooling, thereby
allowing total energy conversion efficiencies of up to 80%. Their technical specifications and costs are based on historic data
and our own estimates (Goldstein et al., 2004, Firestone, 2004, and SGIP, 2008). The continuous technologies available in
DER-CAM at this point are PV, solar thermal collectors, absorption chiller systems as well as thermal and electric storage, and
EV batteries. Costs of continuous technologies available in 2020 are derived from various sources and are displayed in Table
2. For storage technologies, the economic performance and, hence, the adoption by the building EMS is also affected by some
key technical parameters (see Table 3). First, there is the charging and discharging efficiency of the storage. For both electric
and thermal storage, a charging and discharging efficiency of 90% is assumed, thus representing a technology status likely to
become standard in 2020. Another important parameter is the decay of the storage systems, which defines their degradation
due to usage. Finally, there is the maximum charging and discharging rate, which is a key input for the building energy
management system, since it determines the maximum energy flow that the storage can provide to the building at every time
step.
For the mobile storage systems, it is assumed that Li-Ion batteries with a capacity of 16 kWh are used. This is roughly the battery size of current EVs or plug-in hybrid vehicles and is used as a proxy for vehicle batteries connected to the commercial building
(GreenCarCongress). For mobile storage systems, a charging/discharging efficiency of 95% is assumed, a value likely to be the
standard in 2020, given the dynamic progress in this field. Battery decay is an important parameter for mobile storage as well,
since it defines the degradation cost that has to be covered by the commercial building when using mobile storage capacities
(see section “EV Approach”). Table 5 shows the assumed times when vehicles are connected to the different building types
and can be used by the EMS in principle. This, of course, neglects the stochastic nature of the driving patterns. However,
sensitivity results show that the main results for the charging and discharging strategies for mobile storage, derived from this
deterministic work will basically hold under consideration of uncertain driving patterns. Driving patterns only change the connection periods to the buildings, not the main driver of the charging cycles - the electricity prices. Finally, Table 6
shows the area constraint used for PV, solar thermal, and EV parking space. Based on the CEUS database, the average floor
space was taken as an estimate for the maximum area available for these technologies. Since no detailed building information
can be collected from CEUS, no other information is available.
The marginal carbon emissions of the macrogrid for 2020 are taken from Mahone et al. 2008.
**5.** **DER-CAM Results**
Results for cost minimization, CO2 minimization, and multi-objective optimization for two selected buildings of the CEUS
building stock are shown in this section. A large school in the San Francisco Bay Area with 3340 m² floor space and 550 kW electric peak load as well as a healthcare facility in San Diego with 3260 m² floor space and 400 kW electric peak load are
selected. These two examples are used to demonstrate how mobile storage capacity is adopted in commercial buildings
considering an area constraint for PV, solar thermal, and EV adoption, and how it interacts with buildings’ DG output and
stationary storage. At the end of this section, we show the aggregated results on CO2 savings, number of EVs used, and
capacity of PV, as well as other DG for the state of California, considering the building types and climate zones from CEUS.
DER-CAM allows optimization of the weighted building energy costs and CO2 emissions at the same time by using a multi-objective approach (see equation 15). Increasing ω places more focus on CO2 emission reduction, and this approach allows showing the trade-off between costs and CO2 emissions[14] in a building.
min (1 − ω) ∗ C_total/RefCost + ω ∗ CO2_total/RefCO2Em (15)

where:
ω          weight factor [0..1]
RefCO2Em   reference parameter to make the equation unitless
RefCost    reference parameter to make the equation unitless
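The effect of the weight in equation 15 can be illustrated with a toy sketch; the two candidate outcomes and the reference values are invented, and only the relative trade-off matters:

```python
def weighted_objective(cost, co2, w, ref_cost, ref_co2):
    """Equation 15: (1 - w) * Cost/RefCost + w * CO2/RefCO2Em."""
    return (1.0 - w) * cost / ref_cost + w * co2 / ref_co2

# Two hypothetical outcomes: cheap-but-dirty vs. expensive-but-clean.
plans = {"cheap": (100_000.0, 900.0), "clean": (130_000.0, 500.0)}

def best_plan(w, ref_cost=100_000.0, ref_co2=900.0):
    """Pick the plan minimizing the weighted objective for a given w."""
    return min(plans, key=lambda p: weighted_objective(*plans[p], w, ref_cost, ref_co2))
```

Sweeping w from 0 to 1 and recording cost and CO2 of each chosen plan traces out the kind of multi-objective frontier shown for cases S1 to S4 below.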
By analyzing the case of minimal costs (ω=0) and four further cases with increasing ω (S1 to S4), we approximated the multi-objective frontier of the school building and the healthcare facility in two different parts of California. The principal
connection periods of EVs to the commercial buildings differ for each building type and are shown in Table 5. In both the
school and healthcare buildings, it is assumed that the EVs connect to the commercial buildings at 8 AM and disconnect at 6
PM. During that time, the building EMS can manage the mobile storage in combination with other DER technologies, and
14 Please note that DER-CAM tracks the CO2 emissions transferred to the commercial building by mobile storage.
different optimization strategies can apply. From 6 PM to 8 AM, the EVs are disconnected from the commercial buildings and
are subject to driving and charging/discharging at the residential building. Both scenarios are subject to very different EV
charging tariffs at the residential buildings. In the San Francisco Bay Area, EVs can be charged for 6 cents/kWh compared to 14 cents/kWh in San Diego. This difference in price will influence the overall level of EV adoption, but still, general insights
can be derived from these two cases.
Figures 4 and 5 show that total energy costs can be reduced by using EVs in the building (see do-nothing vs. min cost in Figures 4 and 5), but more focus on CO2 emission reduction results in fewer EVs connected to the building (mobile storage curve in Figures 4 and 5). Despite the major difference in electricity tariff rates, both cases follow a similar pattern, with increasing stationary storage capacities combined with decreasing numbers of EVs connected to buildings. The space constraints impact
the results dramatically as evidenced by the nearly vertical multi-objective frontier from S2 and S1 in Figures 4 and 5,
respectively. The maximum area available for PV and solar thermal is 3340 m² for the large school building and 3260 m² for the healthcare facility. Also, the parking space for EVs is constrained to 3340 m² and 3260 m², respectively. Another finding
from the optimization runs shown in Tables 7 and 8 is the importance of natural gas fired fuel cell systems with CHP. Due to the heat requirement as well as the area constraint, efficient fuel cell systems, which allow total efficiencies of up to 80%,
will be used during times when solar thermal or PV cannot be selected. For more detailed results for all optimization cases,
please refer to Tables 7 and 8.
The major cost reduction strategy derived from the DER-CAM optimization is to charge EVs with cheap electricity at home
and provide that energy during connection times to the commercial building (Figure 6 and 9). The higher residential EV
charging rates in San Diego, however, reduce the connected numbers of EV in Figure 5. Figures 6 to 8 show the optimal
diurnal electric pattern for different optimization cases for the large school building in the San Francisco Bay Area. Figure 6
clearly shows that EVs will be used to minimize utility related energy and demand charges, since the mobile storage will be
discharged during expensive mid- and on-peak hours (9 AM to 6 PM). No other DER technologies will be adopted at the
school.
Figure 7 illustrates the electric pattern for the school building with a multi-objective function for point S2. In this case,
considerable PV of 352 kW and stationary storage capacity of 2068 kWh is installed. The connected mobile storage is
practically negligible (14 kWh, or one vehicle), as is the transferred electricity. There is a significant difference between summer and winter days in how stationary batteries are used. In summer, they are
excessive PV power and discharged at the beginning of the evening before CHP is activated (see Figure 7). In winter, they are
charged during night hours with excessive CHP capacity and discharged in the morning hours before sufficient PV power is
available (see Figure 8). In this case, stationary storage plays an important role for the electricity supply of the building
especially in winter days.
Figure 9 shows the electric pattern for the San Diego healthcare facility on a summer day with cost minimization
(corresponding to the point min. cost, w=0 in Figure 5). In this case, the electricity for the building is mainly supplied by DG
and by the utility. During peak hours, energy transfer from mobile storage is used to cover marginal demand. In the cost
minimization case, there is no PV installed and no stationary battery capacity. One reason for this is the way capital costs of
storage systems are considered within DER-CAM. Stationary storage is owned by the building, and therefore, the annualized
capital costs for stationary storage will be considered in the optimization. In contrast, mobile storage is owned by the car
owner, and therefore, no major capital cost reimbursements are assumed – the cars are simply around and utilized. However,
this also means that stationary storage has considerable disadvantages in a pure cost minimization strategy.
Figure 10 depicts the S1 case from Figure 5. In this case, PV is used to cover large parts of the total demand during day hours,
thereby replacing CHP generation and consumption from the utility. During peak hours, energy from EVs is used to cover
some demand. In the afternoon, EVs are used to balance supply and demand when DG/CHP is activated, and they absorb
excess electricity. Later, when demand decreases and CHP is shut down again due to must-take PV generation and a 50% minimum capacity[15] constraint on CHP, they feed electricity back to the building. Stationary batteries are charged in the
morning and are discharged in the late afternoon, when they compensate for the reduction in supply as EVs leave the building. Figure 10 also shows that waste heat utilization and absorption cooling reduce the electricity demand during
expensive day hours and contributes to cost reductions (see cooling offset at the top of Figure 10).
15 To limit non-linear effects, the adopted discrete technologies need to be shut down at a minimum capacity of 50% of the nameplate capacity.
With increasing priority to CO2 reduction, as assumed in S2 (Figures 11 and 12), the full PV potential of the building is
exploited. In summer, PV can cover almost the entire demand between 10 AM and 2 PM. Electricity from EVs is transferred to
the building during shoulder hours (9-10 AM and 2-4 PM). In winter days, the total load of the building is considerably lower
mainly because of lower cooling demand. This is why excess supply from PV can be used to transfer electricity to the
stationary storage around midday to be used in afternoon and evening hours. In the afternoon, PV is used to charge EVs (see
Figure 12).
Summing up the results for the two buildings, analyzed in detail with respect to EVs, we have seen that the use of mobile
storage capacity from EVs is driven by the objective of cost minimization rather than efficiency improvement (Figures 6 and
9). The availability of EV storage capacity to the building is also strongly dependent on the tariff for home charging of EVs.
The lower the residential charging rate, the more EV users provide energy to the commercial building during the day. This
effect is clearly shown in Figures 6 and 9. For Figure 6, a home charging rate of 6 cents/kWh and for Figure 9 14 cents/kWh is
assumed, and this reduces the mobile storage SOC considerably in Figure 9 compared to Figure 6.
In most cases, EVs are charged at the residential building, and only some cases show that renewable energy is transferred from
the commercial building to the residential building. EVs are always used to reduce the demand charges and energy-related
costs at peak or shoulder hours when PV or other DG/CHP is not fully available. Also, we have seen that all cases with
increasing focus on CO2 emissions show increasing capacities for stationary storage, and this makes the case for considering the second life of mobile storage, i.e. re-using EV batteries in buildings after they have been decommissioned from EV usage due to tighter performance requirements in EVs.
Finally, we show the aggregated results for California. Table 9 shows the results for CEUS building stock with electric peak
loads between 100 kW and 5 MW assuming a CO2 minimization strategy. When assuming a full CO2 minimization strategy (ω=1), a maximum cost increase boundary needs to be imposed. Without such a cost constraint, the optimization algorithm could adopt equipment of any size, which would create very unrealistic adoption patterns as well as high investment costs. For the aggregated results shown in Table 9, a cost increase constraint of 30% was used, which is considered a realistic increase that customers can accept by 2020.
The considered commercial buildings can reduce their CO2 emissions by roughly 37% by adopting DER. To achieve this reduction, roughly 15 GWh of stationary storage needs to be adopted. The utilized mobile storage is roughly 12.5 GWh, which underlines the importance of considering the second life of mobile storage in the form of stationary storage. The 4.55 GW of adopted PV
are used to charge the stationary storage and not to charge the mobile storage (see also the diurnal electric patterns above).
Finally, Table 9 also shows that CHP plays an important role in CO2 minimization strategies and 3.5 GW of CHP systems will
be adopted.
**6.** **Conclusions**
The emergence of smart grids and EVs provides opportunities for transitioning towards a more energy efficient, less costly,
and greener energy system. However, deployment of these resources by commercial microgrids requires decision support that
simultaneously treats investment and operations. Furthermore, there is likely to be a tradeoff between costs and CO2 emissions
barring more substantial policy reforms. In order to illustrate the benefits and challenges of the incorporation of EVs into a
microgrid, we model the decisions of various types of California users in different geographical regions for the year 2020.
Via a MILP, we find that the use of mobile energy storage provided by EVs in commercial buildings is driven more by cost
reduction objectives than by CO2-reduction/efficiency improvement objectives. Under pure cost minimization, EVs are mainly
used to transfer low-cost electricity from the residential building to the commercial building to avoid high demand and energy
charges during expensive day hours. By contrast, with CO2 minimization strategies, EVs are used to reduce the utility demand
charges and energy-related costs at peak or shoulder hours when PV or CHP is not fully available. Here, the use of stationary
storage is more attractive compared to EV storage, because stationary storage is available at the commercial building for 24
hours a day and readily accessible for energy management. In particular, stationary storage can shift PV supply during the day
to off-peak hours, when the building would otherwise be supplied by more carbon-intensive electricity from the utility. To
benefit from the stationary storage and PV CO2 reduction potential, stationary storage should receive a major focus in R&D funding and policymaking. To be able to use mobile storage in a second life, special focus needs to be put on the process of recycling mobile storage into buildings, since this creates larger CO2 savings. Finally, we find that the number of connected EVs
varies widely depending on the residential charging rate and possibility of arbitrage.
Although the analysis presented here attempts to model a cost- or CO2-minimizing decision maker, it is limited by several
assumptions and simplifications. First, it assumes a given pattern of EV arrival and departure, which is only a rough
approximation of reality. Second, electricity prices are subject to uncertainty, but here they are assumed to be deterministic. In
general, a stochastic model of the investment and operational decisions would better capture the risks and tradeoffs faced by a
typical decision maker. Third, the model does not consider investment timing or subsequent upgrades to installed technology
based on changing market conditions. Again, these features could be incorporated into a real options or stochastic
programming framework. Fourth, the model assumes that in spite of the arbitrage, the energy tariffs remain unchanged. In
reality, the utility is likely to respond in the long run to such forces, which would necessitate a game-theoretic model and
change the incentives of the decision maker.
**Acknowledgment**
The work described in this paper was funded by the Office of Electricity Delivery and Energy Reliability, Distributed Energy
Program of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231 and by NEC Laboratories America Inc.
We also want to thank Professor Dr. Tomás Gómez and Ilan Momber for their very valuable contributions to previous versions
of DER-CAM.
**References**
CEUS, California Commercial End-Use Survey database, ITRON. Available online at: http://capabilities.itron.com/ceusweb/.
Electricity Storage Association, Morgan Hill, CA, USA
(http://www.electricitystorage.org/tech/technologies_comparisons_capitalcost.htm).
EnergyPlus. Available online at:
http://apps1.eere.energy.gov/buildings/energyplus/.
EPRI-DOE Handbook of Energy Storage for Transmission and Distribution Applications (2003). EPRI, Palo Alto, CA, and the
U.S. Department of Energy, Washington, DC: 2003. 1001834.
Firestone, R. (2004). “Distributed Energy Resources Customer Adoption Model Technology Data,” Berkeley Lab, Berkeley,
CA, USA Case Study, Jan. 2004 (available at http://der.lbl.gov).
Goldstein, L., Hedman, B., Knowles, D., Friedman, S. I., Woods, R., and Schweizer, T. (2003). “Gas-Fired Distributed Energy
Resource Characterizations,” National Renewable Energy Resource Laboratory, Golden, CO, USA Rep. TP-620-34783, Nov.
2003.
GreenCarCongress. Available online at:
http://www.greencarcongress.com/2010/10/chevy-volt-delivers-novel-two-motor-four-mode-extended-range-electric-drivesystem-seamless-driver-e.html#more.
Hatziargyriou, N., Asano, H., Iravani, R., and Marnay, C. (2007). “Microgrids, An Overview of Ongoing Research,
Development, and Demonstration Projects,” IEEE Power & Energy Magazine, July/August 2007.
Mahone, A., Price, S., and Morrow, W. (2008). “Developing a Greenhouse Gas Tool for Buildings in California: Methodology
and Use,” Energy and Environmental Economics, Inc., September 10, 2008 and PLEXOS Production Simulation Dispatch
Model.
Marnay, C., Venkataramanan, G., Stadler, M., Siddiqui, A., Firestone, R., Chandran, B. (2008). “Optimal Technology
Selection and Operation of Microgrids in Commercial Buildings,” IEEE Transactions on Power Systems, Volume 23, Issue 3,
page 975-982, August 2008, ISSN 0885-8950.
Mechanical Cost Data 31st Annual Edition (2008). HVAC, Controls, 2008.
Mendes, G., Stadler, M., Marnay, C., Ioakimidis, C. (2011). “Modeling of Plug-in Electric Vehicle Interactions with a School
Building using DER-CAM,” Poster presented at MIT Transportation Showcase, Boston, USA, 2011.
Momber, I., Gómez, T., Venkataramanan, G., Stadler, M., Beer, S., Lai, J., Marnay, C., and Battaglia, V. (2010). “Plug-in
Electric Vehicle Interactions with a Small Office Building: An Economic Analysis using DER-CAM”, IEEE PES 2010
General Meeting, Power System Analysis and Computing and Economics, July 25th-29th, Minnesota, USA, 2010, LBNL-3555E.
PG&E E-19 tariff. Available online at:
http://www.PG&E.com/tariffs/tm2/pdf/ELEC_SCHEDS_E-19.pdf.
PG&E E-9 tariff. Available online at:
http://www.PG&E.com/tariffs/tm2/pdf/ELEC_SCHEDS_E-9.pdf.
Schmidt, M. (2006): “Der Einsatz von Sankey-Diagrammen im Stoffstrommanagement.“ Beiträge der Hochschule Pforzheim.
Nr. 124. University Pforzheim, Germany.
SGIP (2008). Statewide Self-Generation Incentive Program Statistics, California Center for Sustainable Energy,
http://www.sdenergy.org/ContentPage.asp?ContentID=279&SectionID=276&SectionTarget=35, updated December 2008.
Siddiqui, A., Marnay, C., Edwards J., Firestone, R., Ghosh, S., and Stadler, M. (2005). “Effects of a CarbonTax on Microgrid
Combined Heat and Power Adoption,” Journal of Energy Engineering, American Society of Civil Engineers (ASCE), Special
Issue: Quantitative Models for Energy Systems, vol. 131, Number 1, pp. 2-25, April 2005, ISSN 0733-9402.
Sioshansi, R., Denholm, P. (2009): “The Value of Plug-In Hybrid Electric Vehicles as Grid Resources,” The Energy Journal,
Volume: 31, Issue: 3, Pages: 1-16, ISSN: 01956574.
Stevens, J.W., Corey, G.P. (1996). “A Study of Lead-Acid Battery Efficiency Near Top-of-Charge and the Impact on PV
System Design,” Photovoltaic Specialists Conference, 1996, Conference Record of the Twenty Fifth IEEE, Washington, DC,
USA: 1485-1488.
-----
Stadler, M., Marnay, C., Siddiqui, A., Lai, J., Coffey, B., and Aki, H. (2008). “Effect of Heat and Electricity Storage and
Reliability on Microgrid Viability: A Study of Commercial Buildings in California and New York States,” Report number
LBNL - 1334E, December 2008.
Stadler, M., Marnay, C., Cardoso, G., Lipman, T., Mégel, O., Ganguly, S., Siddiqui, A., and Lai, J. (2009). “The CO2
Abatement Potential of California’s Mid-Sized Commercial Buildings,” California Energy Commission, Public Interest Energy
Research Program, CEC-500-07-043, 500-99-013, LBNL-3024E, December 2009.
Stadler, M., Marnay, C., Siddiqui, A., Lai, J., and Aki, H. (2009b). “Integrated building energy systems design considering
storage technologies,” ECEEE 2009 Summer Study, 1–6 June 2009, La Colle sur Loup, Côte d'Azur, France, ISBN 978-91633-4454-1 and LBNL-1752E.
Symons, P.C., Butler, P.C. (2001). “Introduction to Advanced Batteries for Emerging Applications,” Sandia National Lab
Report SAND2001-2022P, Sandia National Laboratory, Albuquerque, NM, USA (available at
http://infoserve.sandia.gov/sand_doc/2001/012022p.pdf).
TSRC, “Plug-In Electric Vehicle Battery Second Life,” Transportation Sustainability Research Center University of California
(TSRC), University of California Berkeley. Available at: http://tsrc.berkeley.edu/PlugInElectricVehicleBatterySecondLife.
Wang, L., Lin, A., Chen, Y. (2010). “Potential Impact of Recharging Plug-in Hybrid Electric Vehicles on Locational Marginal
Prices,” Naval Research Logistics (NRL), Volume 57, Issue 8, pages 686–700, December 2010, Online ISSN: 1520-6750.
-----
**Acronyms**
CHP combined heat and power
CEUS California End-Use Survey
DER distributed energy resources
DER-CAM Distributed Energy Resources Customer Adoption Model
DG distributed generation
EMS energy management system
EV electric vehicle
FC fuel cells
HX heat exchanger
ICE internal combustion engines
MILP mixed-integer linear program
PHEV plug-in hybrid electric vehicles
PG&E Pacific Gas & Electric
PV photovoltaics
SCE Southern California Edison
SDG&E San Diego Gas & Electric
TOU time-of-use
-----
Figure 1. High level schematic of DER-CAM (Stadler et al., 2009)
[Figure 2 chart: hourly EV state of charge (0–0.9) over 24 hours, showing charging and discharging bands, the state of charge on arrival (SOCin) and departure (SOCout), the period connected to the office building, the periods not connected (with possible charging at the residential building), and shaded "Area A".]
Figure 2. Hypothetical charging/discharging at a commercial (office) building, SOCin means mobile storage state of charge at
the time when the EV connects to the building, SOCout means state of charge at the time when the EV disconnects from the
building.
-----
**MINIMIZE**
**_Annual energy cost:_**
```
energy purchase cost
+ amortized DER technology capital cost
+ annual O&M cost
```
**SUBJECT TO**
**_Energy balance:_**
```
- Energy purchased + energy generated exceeds demand
```
**_Operational constraints:_**
```
- Generators, chillers, etc. must operate within
installed limits
- Heat recovered is limited by generated waste heat
```
**_Regulatory constraints:_**
```
- Minimum efficiency requirements
- Maximum emission limits
```
**_Investment constraints:_**
```
- Payback period is constrained
```
**_Storage constraints:_**
```
- Electricity stored is limited by battery size
- Heat storage is limited by reservoir size
```
Figure 3. Representative MILP solved by DER-CAM
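The cost-minimization structure sketched in Figure 3 can be illustrated with a deliberately tiny, self-contained enumeration over discrete generator installations. This is a sketch under invented assumptions — the technology names, capacities, prices, and the flat-load simplification are all hypothetical, and it is not DER-CAM itself, which solves a full mixed-integer program:

```python
from itertools import product

# Illustrative sketch only -- hypothetical numbers, not DER-CAM: choose how many
# discrete generator units to install so that annual energy cost
# (purchase + amortized capital + O&M) is minimized for a flat electric load.
TECHS = [
    # (name, capacity kW, annualized capital $/unit, O&M $/kWh)
    ("ICE-60", 60, 16000, 0.02),
    ("FC-100", 100, 28000, 0.03),
]
DEMAND_KW, HOURS, GRID_PRICE = 120, 8760, 0.15  # flat load, $/kWh utility price

def annual_cost(counts):
    """Annual cost of a candidate installation; cheapest-O&M units run first."""
    capital = sum(n * cap for n, (_, _, cap, _) in zip(counts, TECHS))
    remaining, om_cost = DEMAND_KW, 0.0
    for n, (_, kw, _, om) in sorted(zip(counts, TECHS), key=lambda x: x[1][3]):
        run = min(remaining, n * kw)        # dispatch in O&M merit order
        om_cost += run * HOURS * om
        remaining -= run
    purchase = remaining * HOURS * GRID_PRICE
    return capital + om_cost + purchase

best = min(product(range(3), repeat=len(TECHS)), key=annual_cost)
print(best, round(annual_cost(best)))
```

Brute force works here only because the candidate set is tiny; the real model's coupling of investment decisions to hourly operation across day types is what requires a MILP solver.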
Figure 4. Results, multi-objective frontier for the large school building in the San Francisco Bay Area (PG&E service territory)
and storage capacity
-----
Figure 5. Results, multi-objective frontier for the healthcare facility in San Diego (SDG&E service territory) and storage
capacity
Figure 6. Diurnal electric pattern at cost-minimization on a July work day, large school in the San Francisco Bay Area (PG&E
service territory)
-----
Figure 7. Diurnal electric pattern for point S2 on a July work day, large school in the San Francisco Bay Area (PG&E service
territory)
Figure 8. Diurnal electric pattern for point S2 on a January work day, large school in the San Francisco Bay Area (PG&E
service territory)
-----
Figure 9. Diurnal electric pattern on a July work day for minimal costs for the healthcare facility in San Diego (SDG&E
service territory)
Figure 10. Diurnal electric pattern for point S1 on a July work day for the healthcare facility in San Diego (SDG&E service
territory)
-----
Figure 11. Diurnal electric pattern for point S2 on a July work day for the healthcare facility in San Diego (SDG&E service
territory)
Figure 12. Diurnal electric pattern for point S2 on a January work day for the healthcare facility in San Diego (SDG&E service
territory)
-----
Table1. Available discrete technologies[16] in 2020 (Goldstein et al., 2003), (Firestone, 2004), (SGIP, 2008)
| | ICE S | ICE M | FC S | FC M |
|---|---|---|---|---|
| capacity (kW) | 60 | 250 | 100 | 250 |
| installed cost ($/kW) | 2721 | 1482 | 2382 | 1909 |
| installed cost w/HX ($/kW) | 3580 | 2180 | 2770 | 2220 |
| maintenance cost ($/kWh) | 0.02 | 0.01 | 0.03 | 0.03 |
| electrical efficiency[17] (%) | 29 | 30 | 36 | 36 |
| heat to power ratio (if w/HX) | 1.73 | 1.48 | 1.00 | 1.00 |
| lifetime (years) | 20 | 20 | 10 | 10 |
Table 2. Available continuous DER technologies in 2020 (Firestone, 2004), (SGIP, 2008), (EPRI-DOE, 2003), (Mechanical
Cost Data 31st Annual Edition, 2008), (Stevens and Corey, 1996), (Symons and Butler, 2001), (Electricity Storage
Association)
| | ES | TS | AC | ST | PV |
|---|---|---|---|---|---|
| capital cost ($) | 295 | 10000 | 93911 | 0 | 3851 |
| variable cost ($/kW, or $/kWh when referring to storage) | 193 | 100 | 685 | 500 | 3237 |
| maintenance cost ($/kWh) | 0 | 0 | 1.88 | 0.50 | 0.25 |
| lifetime (years) | 5 | 17 | 20 | 15 | 20 |
_ES – stationary electrical storage, TS – thermal storage, AC - absorption cooling, ST-solar thermal, PV-Photovoltaics_
Table 3. Assumed stationary energy storage parameters (Stevens and Corey, 1996), (Symons and Butler, 2001)
| | ES | TS |
|---|---|---|
| charging efficiency | 0.9 | 0.9 |
| discharging efficiency | 0.9 | 0.9 |
| decay | 0.001 | 0.01 |
| maximum charge rate | 0.1 | 0.25 |
| maximum discharge rate | 0.25 | 0.25 |
| minimum state of charge | 0.3 | 0 |
_Notes: all parameters are dimensionless; ES – stationary electrical storage, TS – thermal storage._
Table 4. EV battery specifications
| charging efficiency | 0.95 |
|---|---|
| discharging efficiency | 0.95 |
| battery hourly decay (related to stored electricity) | 0.001 |
| capacity | 16 kWh |
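The storage parameters in Tables 3 and 4 enter the model through an hourly state-of-charge balance. The following is a minimal sketch of that bookkeeping using Table 3's stationary electric storage column; the 100 kWh capacity and the clipping rules are illustrative assumptions for this sketch only, not DER-CAM code:

```python
# Hourly state-of-charge (SOC) update using Table 3's stationary electric
# storage parameters; CAP_KWH and the clipping behaviour are assumptions
# made for this sketch only.
CAP_KWH = 100.0
ETA_C, ETA_D = 0.9, 0.9    # charging / discharging efficiency
DECAY = 0.001              # hourly self-discharge (fraction of stored energy)
MAX_C, MAX_D = 0.1, 0.25   # max charge / discharge rate (fraction of capacity)
MIN_SOC = 0.3              # minimum state of charge (fraction of capacity)

def step(soc_kwh, charge_kwh=0.0, discharge_kwh=0.0):
    """Advance SOC by one hour, enforcing rate and state-of-charge limits."""
    charge = min(charge_kwh, MAX_C * CAP_KWH)
    discharge = min(discharge_kwh, MAX_D * CAP_KWH)
    soc = soc_kwh * (1 - DECAY) + ETA_C * charge - discharge / ETA_D
    return max(MIN_SOC * CAP_KWH, min(soc, CAP_KWH))

soc = step(50.0, charge_kwh=20.0)   # a request above the 10 kWh/h limit is clipped
```

Note that two 0.9 efficiency legs mean only about 81% of charged energy is recovered, and decay slowly drains idle storage — both effects the optimization weighs against tariff arbitrage.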
Table 5. Principle EV connection periods for different building types[18]
| building type | building connection period | EV owners |
|---|---|---|
| Hotel | 19h-8h | guests |
| Office | 9h-18h | employees |
| School/College | 8h-18h | employees |
| Retail | 9h-18h | employees/customers |
| Restaurant | 18h-21h | employees/customers |
| Warehouse | 8h-18h | employees |
| Grocery | 9h-18h | employees/customers |
| Healthcare | 8h-18h | employees |
16 DER-CAM distinguishes between discrete and continuous technologies. Discrete technologies can only be picked in discrete sizes and continuous ones in any size. The usage of continuous technologies increases the optimization performance and reduces the run time. Gas turbines and microturbines are also available, but they were never selected in the optimization and are therefore not shown here.
17 Higher heating value.
18 For clarity, some of the formal building types were aggregated (e.g., large and small offices).
-----
Table 6. Area constraints for PV, solar thermal, and EVs (CEUS and own calculations)
| building type | area constraint A (m²) |
|---|---|
| Hotel | 3600 |
| Small Office | 175 |
| Warehouse | 1390 |
| School | 3340 |
| Retail | 800 |
| Restaurant | 300 |
| Refrigerated Warehouse | 5560 |
| Large Office | 16200 |
| Healthcare | 3260 |
| Grocery | 540 |
| College | 5600 |
Table 7. Detailed optimization results for large school building
|Col1|do- nothing (DN)|min cost|S1|S2|S3|S4|
|---|---|---|---|---|---|---|
|equipment|||||||
|internal combustion CHP (kW)|||250|60|120|420|
|fuel cell CHP (kW)||||100|350|350|
|abs. Chiller (kW in terms of electricity)||||106|142|113|
|solar thermal collector (kW)||91|308|779|961|779|
|PV (kW)|||193|352|315|352|
|stationary electric storage (kWh)|||790|2068|1769|2068|
|mobile electric storage (kWh)||3563|3563|14|91|53|
|thermal storage (kWh)|||767|2932|3063|2932|
|annual building costs (k$)|||||||
|electricity|269.24|212.38|90.21|65.98|36.07|40.81|
|NG|73.74|67.88|94.36|94.33|115.90|109.77|
|onsite DG technologies (amortized costs)||15.02|179.02|367.91|451.71|552.57|
|total|342.97|295.28|363.58|528.22|603.68|703.15|
|% savings compared to do-nothing||13.90|-6.01|-54.01|-76.02|-105.02|
|annual utility consumption (GWh)|||||||
|electricity|1.74|1.39|0.58|0.36|0.15|0.17|
|NG|1.74|1.60|2.23|2.24|2.76|2.62|
|annual building carbon emissions (t/a)|||||||
|emissions|1203.92|1203.79|833.18|586.99|575.33|559.94|
|% savings compared to do-nothing||0.01|30.79|51.24|52.21|53.49|
-----
Table 8. Detailed optimization results for healthcare facility
| |do-nothing (DN)|min cost|S1|S2|S3|S4|
|---|---|---|---|---|---|---|
|equipment|||||||
|internal combustion CHP (kW)||250||180|180|180|
|fuel cell CHP (kW)|||250|550|300|500|
|abs. chiller (kW in terms of electricity)|||67||64|43|
|solar thermal collector (kW)|||792|280|323|305|
|PV (kW)|||337|441|433|436|
|stationary electric storage (kWh)|||281|1061|986|1017|
|mobile electric storage (kWh)||929|405|293|95|170|
|thermal storage (kWh)|||0|440|15582|23353|
|annual building costs (k$)|||||||
|electricity|336.07|83.70|19.37|33.48|42.41|39.97|
|NG|62.74|173.57|118.77|106.00|105.90|110.37|
|onsite DG technologies (amortized costs)||70.70|284.62|474.72|553.64|667.26|
|total|398.81|327.97|422.76|614.20|701.94|817.60|
|% savings compared to do-nothing||17.76|-6.01|-54.01|-76.01|-105.01|
|annual utility consumption (GWh)|||||||
|electricity|2.33|0.58|0.06|0.19|0.18|0.10|
|NG|2.13|5.91|4.04|3.61|3.60|3.76|
|annual building carbon emissions (t/a)|||||||
|emissions|1574.39|1389.53|767.38|748.68|741.65|732.06|
|% savings compared to do-nothing||11.74|51.26|52.45|52.89|53.50|

Table 9. Aggregated results for CO2 minimization
|energy cost savings of buildings compared to do-nothing*|[%]|-30.00|
|---|---|---|
|CO2 emission reduction of buildings compared to do-nothing|[%]|37.13|
|number of cars the energy management system (EMS) would like to utilize|[million cars]|0.78|
|mobile storage capacity|[GWh]|12.45|
|PV in buildings|[GW]|4.55|
|stationary storage|[GWh]|14.71|
|combined heat and power (CHP) and other distributed generation (DG)|[GW]|3.50|
*) the average maximum cost increase due to CO2 minimization was set to 30% and is constrained within DER-CAM
-----
Open-access version (license: other-oa): https://escholarship.org/content/qt6j02f15t/qt6j02f15t.pdf?t=md86y1; DOI: https://doi.org/10.1061/(ASCE)EY.1943-7897.0000070; published 2012-11-09.
## Journal of Advanced Health Informatics Research (JAHIR)
### Vol. 1, No. 1, April 2023, pp. 10-15 DOI: https://doi.org/10.59247/jahir.v1i1.14
# Private Blockchain in the Field of Health Services
Purwono [1], Khoirun Nisa [2], Sony Kartika Wibisono [3], Bala Putra Dewa [4]
_Department of Informatics, Universitas Harapan Bangsa, Purwokerto, 53182, Indonesia_
**ARTICLE INFO**

**Article history:** Received December 18, 2022; Revised January 08, 2023; Published January 16, 2023

**Keywords:** Blockchain; Healthcare; Hyperledger; Private; Patient

**ABSTRACT**

Blockchain is a technology that is quite popular and has been adopted in various fields in recent years. This technology has caught the attention of researchers in the health sector because of its innovation, which is considered capable of providing the necessary guarantees for the safe processing, sharing, and management of sensitive patient data. There are many problems with falsifying reports and withholding important information from patients, which is considered medical fraud. Hyperledger, a type of private Blockchain, is very suitable for healthcare applications. A private blockchain is a restricted type of blockchain network created by an entity. This type of network is limited to those with access permissions. In addition, private blockchains usually use a centralized verification system and are controlled by the network's creators. Hyperledger Fabric is one example of a permissioned blockchain that can play a role in implementing patient-centric, interoperable healthcare systems.

This work is licensed under a Creative Commons Attribution-Share Alike 4.0 license.

**Corresponding Author:** Purwono, Universitas Harapan Bangsa, Jl. Raden Patah No.100, Purwokerto, Indonesia, 53182. [Email: purwono@uhb.ac.id](mailto:purwono@uhb.ac.id)
**1.** **INTRODUCTION**
Blockchain is a technology that is quite popular and has been adopted in various fields in recent years [1].
This technology has caught the attention of researchers in the health sector because of its innovation which is
considered capable of providing the necessary guarantees for the safe processing, sharing and management of
sensitive patient data [2]. This is in line with the need for security guarantees for health data that are considered
sensitive [3]. Various types of sensitive data contained in the Electronic Health Record (EHR) are regarded as one of the privacy issues that make patients reluctant to share their data [4]. In practice, one hospital's system is not necessarily compatible with another's, which has profound implications for patients, especially for medical record data [5].
Currently, health service data is spread over various systems that have different architectures. There are
also many problems of falsifying reports and withholding important information from patients, which are
considered medical fraud [6]. In traditional healthcare systems, when patients wish to share their data with other parties, such as hospitals or research institutes, they have to go through manual approval processes, which are highly inefficient for care providers to coordinate, especially when patients move geographically without knowing in advance where they will receive treatment [7].
Blockchain comes with a distributed database that forms a data blockchain as a decentralized data storage
and processing solution [8] capable of solving various centralized system problems [9]. This technology is
present as a solution that offers data security and privacy. Blockchain enables business process innovation in
healthcare [10]. Blockchain can be used as a medium that can reduce the impact of health service challenges,
Journal homepage: https://ejournal.ptti.web.id/index.php/jahir/ [Email: jahir@ptti.web.id](mailto:jahir@ptti.web.id)
-----
ISSN: 2985-6124 Journal of Advanced Health Informatics Research (JAHIR)
Vol. 1, No. 1, April 2023, pp. 10-15
Page | 11
such as health data that is difficult to understand, use and share because its non-standard nature makes it
difficult to disseminate to health networks [11].
Various types of research focus on the application of Blockchain in the healthcare sector. For example,
research conducted by Amponsah [12], who tested new fraud detection and prevention methods for healthcare claims processing using machine learning and blockchain technology. Comparative experimental
results show that the tool with the best performance achieves a classification accuracy of 97.96% and a
sensitivity of 98.09%. This means that the proposed system enhances the ability of blockchain smart contracts
to detect fraud with an accuracy of 97.96%. A similar study was carried out by Cerchione [13], who designed a distributed electronic health record ecosystem; deploying such distributed networks potentially yields benefits in terms of clinical outcomes (e.g., quality improvement, reduction of medical errors), organizational products (e.g., financial, operational benefits), and organizational outcomes (e.g., increased ability to conduct research, improved population health, cost reduction). Karmakar [14] created an agent-free insurance system using Blockchain for Healthcare 4.0; the proposed model was implemented on an Ethereum test network and compared empirically with other state-of-the-art models, outperforming them in terms of service integrity, latency, and cost.
There are several types of blockchains, including private, public, and consortium [15]. Public blockchains have been popular since the arrival of Bitcoin in 2008, which introduced the concept of a distributed ledger that has caught the attention of researchers because it is considered a revolutionary technology after the internet [8]. Public blockchains are accessible to anyone, and anyone can participate in a consensus process to determine what blocks can be added to the chain [16].
network created by an entity. This type of network is limited to those with access permissions. In addition,
private blockchains usually use a centralized verification system and are controlled by the network's creators
[17]. Based on the importance of patient data in healthcare, we summarize the use of private Blockchain by
leveraging the Hyperledger Fabric platform. This platform has also previously been researched utilizing
representative tests to assess the security criteria that support the Blockchain regarding data confidentiality,
privacy, and access control. Experimental evaluations reveal the promising benefits of private blockchain
technology in terms of security, regulatory compliance, compatibility, flexibility, and scalability [3].
**2.** **BLOCKCHAIN APPLICATIONS IN HEALTH SERVICES**
This section will discuss what health applications can be implemented with blockchain technology.
Blockchain will play an essential role in transforming the healthcare sector. Blockchain enhances healthcare
organizations to provide adequate patient care and high-quality healthcare facilities [18].
**2.1.** **Data Security**
As part of blockchain technology, consensus protocols significantly impact the safety and security of
blockchain systems [19]. The blockchain system uses a consensus algorithm to build trust and properly store
block transactions [20]. Blockchain-protected networks provide an advantage over older approaches to
securing health information. Data cannot be modified or deleted once added to the Blockchain. Even if the data
needs to be updated, a new record includes all previous entries. Additionally, each form is accessible via a
unique private key controlled by the patient. Since a hash represents each document, verifying modifications
to the original hash ensures the highest levels of transparency and verification.
**2.2.** **Health Insurance Claims**
Blockchain can also be adapted to health insurance claims [21]. When all data is appropriately connected
to the Blockchain network, the processing time will be accelerated, the risk of fraud will be reduced, and time
and money will be more efficient [14]. This further allows insurance claims to be processed in real-time.
**2.3.** **Supply Chain**
By tracking medical supplies in real time from the manufacturer onward, and by minimizing the danger of human error in recording transactions, Blockchain integration with an organization's supply chain can increase productivity and quality control [22]. It can also account for a supply chain's labour costs and carbon emissions. Organ transplantation is another use case of Blockchain-based supply
chains in healthcare that is becoming very popular. Blockchain technology offers a distributed, secure and
transparent approach to exchanging information in the supply chain [23].
Private Blockchain in the Field of Health Services
(Purwono et al.)
-----
**2.4.** **Medical Research**
Medical research can only be successful if the data is high quality and readily available. The proprietary
rights granted to patients on the Blockchain can be used for research purposes only if the information is subject
to sufficient consent. This will enable research institutions to collect open data to advance clinical research and
public health reporting. In short, blockchain qualities such as decentralization, data sources, reliability and
smart contract support are ideal for advancing the modern healthcare system. The Hyperledger Fabric
healthcare system takes it one step further by introducing modularity to the ecosystem.
While first-generation blockchain frameworks, such as Bitcoin, were designed primarily to facilitate
cryptocurrency transactions [24], newer blockchain-based applications have also become available for
healthcare use [25]. A different blockchain framework, Hyperledger, addresses the technical requirements of healthcare businesses, which must take a variety of factors into account when developing healthcare applications [26]. For example, the privacy of patients and their data is one of the most important requirements in the
creation of Hyperledger healthcare applications [27]. While standard blockchain frameworks demand full
transparency, the European General Data Protection Regulation (GDPR) regulates public access to that
information [8]. Apart from that, transaction scalability is another important requirement of the healthcare
industry that an ideal blockchain infrastructure must meet. Transaction validation and consensus protocols are
critical in determining the scalability of transactions in healthcare applications.
**3.** **HYPERLEDGER AS A HEALTHCARE BLOCKCHAIN PLATFORM**
**3.1.** **Hyperledger**
Hyperledger is an open-source umbrella organization with several open-source projects. Where these
projects are used to build Blockchain technology. Hyperledger is directly fostered by the Linux Foundation
and has support from companies such as IBM and Intel to SAP Ariba. Hyperledger Fabric is a modular
blockchain project regulated by The Linux Foundation, a consortium that promotes decentralized innovation
[28].
Hyperledger has many frameworks and tools often used to build Blockchain networks [29]. Each of these
frameworks and tools has a specific function, but they can also work together when implementing a Blockchain network. Examples of Hyperledger frameworks are Hyperledger Fabric, Hyperledger Sawtooth, Hyperledger Burrow, Hyperledger Indy and Hyperledger Iroha. As for the tools used,
among others, namely Hyperledger Explorer, Hyperledger Cello, Hyperledger Avalon, Hyperledger Cactus,
and Hyperledger Caliper.
Hyperledger Fabric is one example of a permissioned blockchain that can play a role in implementing
patient-centric, interoperable healthcare systems. It is an open-source distributed ledger technology (DLT)
platform that supports strong security and privacy features [26]. Because Hyperledger Fabric is licensed and
provides smart contract (chain code) support, it is becoming popular for many applications in multiple domains.
The Fabric enables participants in the consortium to develop and deploy applications using the Blockchain
[27]. Hyperledger Fabric has a modular design and architecture and therefore has a high degree of flexibility
and extensibility [30]. Horizontally, Hyperledger Fabric can be divided into four components: identity management, ledger management, transaction management, and smart contracts; vertically, it can be divided into five components: member management, consensus services, chaincode services, security, and cryptographic services.
The difference between Hyperledger and other platforms, such as Bitcoin and Ethereum, is that
Hyperledger is widely used in building private/permissioned blockchain networks. Meanwhile, Ethereum and
Bitcoin are more public blockchains. Because it is commonly used in making private blockchain networks,
users/participants in the Hyperledger platform are also more controlled and supervised [31].
**3.2.** **Hyperledger Healthcare System**
Given developers' need for a complete toolkit that can rapidly implement multiple privacy and security standards, the Hyperledger platform is a good fit for healthcare applications [32]. In addition, it has
complete control over smart contracts that can be executed in multiple computer languages, including Node.js
and Javascript [33]. Smart contract technology is a computerized transaction protocol that independently
executes the contents of an agreement and aims to conclude an agreement or agreement between several parties
[34].
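As a language-agnostic illustration of that idea — production Fabric chaincode is written in languages such as Node.js, Go, or Java, and the claim rules and names below are invented for this sketch — a smart contract encodes the agreement's terms as code, so every party computes the same settlement deterministically with no manual approval step:

```python
# Toy "smart contract" for an insurance claim (illustrative only -- not Fabric
# chaincode): the agreement's terms are code, so settlement needs no manual
# approval and every peer computes the same result.
class ClaimContract:
    def __init__(self, insured, deductible):
        self.insured = insured
        self.deductible = deductible
        self.state = "OPEN"

    def settle(self, claimed_amount, diagnosis_verified):
        """Deterministic settlement executed identically by every peer."""
        if self.state != "OPEN":
            raise ValueError("contract already settled")
        if not diagnosis_verified or claimed_amount <= self.deductible:
            self.state = "REJECTED"
            return 0
        self.state = "PAID"
        return claimed_amount - self.deductible

claim = ClaimContract("P-001", deductible=200)
payout = claim.settle(1200, diagnosis_verified=True)  # 1000; state becomes "PAID"
```

On a real permissioned network, the analogous function would run on every endorsing peer and the agreed result would be committed to the ledger.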
-----
While Bitcoin and Ethereum can complete seven and fifteen transactions per second, respectively [35],
Hyperledger outperforms the competition with transaction speeds of up to 3000 transactions per second [36].
This technology does not use cryptocurrency as a motivator, which is certainly different from public
blockchains such as bitcoin or Ethereum. Another advantage is that it has high transaction throughput and low
transaction fees. Hyperledger Fabric is the most comprehensive blockchain framework accessible compared to
other blockchain frameworks.
Implementing blockchain technology in health management systems makes transactions traceable, creating transparent data integrity and security [37]. The following are some implementations of Hyperledger Fabric in the health sector [38]:
1. Axuall is a digital network for verifying identity, credentials, and authenticity in real time using
the Sovrin Network and Hyperledger Indy. The Axuall network is currently in pilot with Hyr
Medical and their 650+ physician network in addition to two other health systems. Physicians'
time is better spent practising medicine than filling out redundant, repetitive credentialing
paperwork consisting of unchanging information. Using Axuall's digital credentialing network,
physicians will be able to present fully compliant credential sets to participating healthcare
systems and medical groups they are affiliated with or applying to. Utilizing the cryptographic
constructs from Hyperledger Indy, healthcare organizations will be able to verify the validity of
a physician's credentials – spanning medical education, training, licensing, board certification,
work history, competency evaluations, sanctions, and adverse events – ensuring compliance with
industry standards, regulatory mandates, and health system bylaws.
2. LedgerDomain joined forces with other industry leaders like Pfizer, IQVIA, UPS, Merck, UCLA
Health, GSK, Thermo Fisher, and Biogen to build out a pilot on Hyperledger Fabric called
KitChain. Scoped and developed over the course of two years, KitChain aims to demonstrate a
robust collaborative model for managing the pharmaceutical clinical supply chain, creating an
immutable record for shipment and event tracking without the need to resort to paperwork and
manual transcription. KitChain has two major components: a front-end mobile application and
a back-end blockchain server. The backend was implemented in Golang and used Hyperledger
Fabric, the LedgerDomain Selvedge blockchain app platform, and LedgerDomain's DocuSeal
framework, encompassing smart contracts and application logic. As such, the pilot has a fully
functioning, highly secure blockchain backend.
3. MELLODDY, a drug discovery project, uses Amazon Web Services technologies to execute Machine
Learning algorithms from academic partners on a large scale. The data never leaves the owner's
infrastructure, and only non-sensitive models are exchanged. A central dispatcher allows each
partner to share a common model to be consolidated collectively. To provide full traceability of
the operations, the platform is based on a private blockchain and uses Substrate, a software
framework for orchestrating distributed machine learning tasks in a secure way. Substrate is
based on Hyperledger Fabric. MELLODDY is designed to prevent the leaking of proprietary
information from one data set to another or from one model to another while simultaneously
boosting the predictive performance and applicability domain of the models by leveraging all
available data.
4. Medicalchain was one of the first healthcare blockchain companies to join the Hyperledger
community, signing on as a member in 2017. The company's ethos is to empower patients to
have access to their medical records. Providing patients with direct access to their data unlocks
the barriers we face in healthcare today, such as patient choice and interoperability issues. A
doctor-led team based in the UK, Medicalchain trialled the first telemedicine consultation using
blockchain technology. The company's first blockchain-based product to market, MyClinic.com,
makes it easy to schedule appointments, review medical reports and request further
investigations or assistance using an Android and iOS app. Now the company is set to focus on
scalability with the view to onboarding clinics and patients locally, nationally, and
internationally.
5. SecureKey launched its innovative and in-demand network to Canadian consumers in early
2019. Verified.Me is a blockchain-based digital identity network built upon Hyperledger Fabric
1.2 that lets consumers stay in control of their information by choosing when to share
information and with whom, reducing unnecessary oversharing of personal information. Sun
Life Financial has signed on as an early adopter and the first North American (health) insurer,
Private Blockchain in the Field of Health Services
(Purwono et al.)
-----
**Page | 14** Journal of Advanced Health Informatics Research (JAHIR)
Vol. 1, No. 1, April 2023, pp. 10-15
ISSN: 2985-6124
making it easier for their clients to do business with the company. Dynacare, one of Canada's
largest and most respected health and wellness solutions providers, has joined the Verified.Me
network. Dynacare's participation will make it easier for Canadians to verify their identities and
gain safer and faster access to their health information.
**4. CONCLUSION**
Blockchain is a technology that is increasingly in demand in the health sector, as evidenced by the growing
number of researchers taking advantage of it. Given developers' need for a complete toolkit that can rapidly
implement multiple privacy and security standards, the Hyperledger platform is a good fit for healthcare
applications. Hyperledger supports smart contracts that can be written in several general-purpose languages,
including Go and JavaScript (Node.js). The main difference between Hyperledger and platforms such as Bitcoin
and Ethereum is that Hyperledger is primarily used to build private/permissioned blockchain networks, whereas
Ethereum and Bitcoin are public blockchains. Because participation is permissioned, users/participants in a
Hyperledger network can be more tightly controlled and supervised.
**REFERENCES**
[1] Purwono, A. Ma’arif, W. Rahmaniar, Q. M. ul Haq, D. Herjuno, and M. Naseer, “Blockchain Technology,” J. Ilm.
_Tek. Elektro Komput. dan Inform., vol. 8, no. 2, pp. 199–205, 2022, doi: 10.26555/jiteki.v8i2.24327._
[2] V. Merlo, G. Pio, F. Giusto, and M. Bilancia, "On the exploitation of the blockchain technology in the healthcare
sector: A systematic review," _Expert_ _Syst._ _Appl.,_ vol. 213, p. 118897, 2023, doi:
https://doi.org/10.1016/j.eswa.2022.118897.
[3] M. Antwi, A. Adnane, F. Ahmad, R. Hussain, M. Habib ur Rehman, and C. A. Kerrache, "The case of
HyperLedger Fabric as a blockchain solution for healthcare applications," Blockchain Res. Appl., vol. 2, no. 1, p.
100012, 2021, doi: https://doi.org/10.1016/j.bcra.2021.100012.
[4] A. Hajian, V. R. Prybutok, and H.-C. Chang, "An empirical study for blockchain-based information sharing
systems in electronic health records: A mediation perspective," Comput. Human Behav., vol. 138, p. 107471, 2023,
doi: https://doi.org/10.1016/j.chb.2022.107471.
[5] G. Al-Sumaidaee, R. Alkhudary, Z. Zilic, and A. Swidan, "Performance analysis of a private blockchain network
built on Hyperledger Fabric for healthcare," _Inf. Process. Manag., vol. 60, no. 2, p. 103160, 2023, doi:_
https://doi.org/10.1016/j.ipm.2022.103160.
[6] I. Riadi, T. Ahmad, R. Sarno, P. Purwono, and A. Ma’arif, “Developing Data Integrity in an Electronic Health
Record System using Blockchain and InterPlanetary File System (Case Study: COVID-19 Data),” Emerg. Sci. J.,
vol. 4, no. Special issue, pp. 190–206, 2020, doi: 10.28991/esj-2021-SP1-013.
[7] A. Dubovitskaya, Z. Xu, S. Ryu, M. Schumacher, and F. Wang, "Secure and Trustable Electronic Medical Records
Sharing using Blockchain," AMIA ... Annu. Symp. proceedings. AMIA Symp., vol. 2017, Aug. 2017.
[8] R. Belen-Saglam, E. Altuncu, Y. Lu, and S. Li, "A systematic literature review of the tension between the GDPR
and public blockchain systems," _Blockchain_ _Res._ _Appl.,_ p. 100129, 2023, doi:
https://doi.org/10.1016/j.bcra.2023.100129.
[9] R. Yang _et al., "Public and private blockchain in construction business process and information integration,"_
_Autom. Constr., vol. 118, p. 103276, 2020, doi: https://doi.org/10.1016/j.autcon.2020.103276._
[10] D. Aloini, E. Benevento, A. Stefanini, and P. Zerbino, "Transforming healthcare ecosystems through blockchain:
Opportunities and capabilities for business process innovation," _Technovation, vol. 119, p. 102557, 2023, doi:_
https://doi.org/10.1016/j.technovation.2022.102557.
[11] I. Erol, A. Oztel, C. Searcy, and İ. T. Medeni, "Selecting the most suitable blockchain platform: A case study on
the healthcare industry using a novel rough MCDM framework," _Technol. Forecast. Soc. Change, vol. 186, p._
122132, 2023, doi: https://doi.org/10.1016/j.techfore.2022.122132.
[12] A. A. Amponsah, A. F. Adekoya, and B. A. Weyori, "A novel fraud detection and prevention method for healthcare
claim processing using machine learning and blockchain technology," _Decis. Anal. J., vol. 4, p. 100122, 2022,_
doi: https://doi.org/10.1016/j.dajour.2022.100122.
[13] R. Cerchione, P. Centobelli, E. Riccio, S. Abbate, and E. Oropallo, "Blockchain's coming to hospital to digitalize
healthcare services: Designing a distributed electronic health record ecosystem," Technovation, p. 102480, 2022,
doi: https://doi.org/10.1016/j.technovation.2022.102480.
[14] A. Karmakar, P. Ghosh, P. S. Banerjee, and D. De, "ChainSure: Agent Free Insurance System using Blockchain
for Healthcare 4.0," Intell. Syst. with Appl., p. 200177, 2023, doi: https://doi.org/10.1016/j.iswa.2023.200177.
[15] B. Bera, A. K. Das, and A. K. Sutrala, "Private blockchain-based access control mechanism for unauthorized UAV
detection and mitigation in Internet of Drones environment," Comput. Commun., vol. 166, pp. 91–109, 2021, doi:
https://doi.org/10.1016/j.comcom.2020.12.005.
[16] H. Tang, Y. Shi, and P. Dong, "Public blockchain evaluation using entropy and TOPSIS," Expert Syst. Appl., vol.
117, pp. 204–210, 2019, doi: https://doi.org/10.1016/j.eswa.2018.09.048.
[17] T. Ncube, N. Dlodlo, and A. Terzoli, Private Blockchain Networks: A Solution for Data Privacy. 2020.
[18] A. Haleem, M. Javaid, R. P. Singh, R. Suman, and S. Rab, "Blockchain technology applications in healthcare: An
overview," Int. J. Intell. Networks, vol. 2, pp. 130–139, 2021, doi: https://doi.org/10.1016/j.ijin.2021.09.005.
[19] Q. Bao, B. Li, T. Hu, and X. Sun, "A survey of blockchain consensus safety and security: State-of-the-art,
challenges, and future work," _J._ _Syst._ _Softw.,_ vol. 196, p. 111555, 2023, doi:
https://doi.org/10.1016/j.jss.2022.111555.
[20] H. Guo and X. Yu, "A survey on blockchain technology and its security," Blockchain Res. Appl., vol. 3, no. 2, p.
100067, 2022, doi: https://doi.org/10.1016/j.bcra.2022.100067.
[21] A. A. Amponsah, A. F. Adekoya, and B. A. Weyori, "Improving the Financial Security of National Health
Insurance using Cloud-Based Blockchain Technology Application," Int. J. Inf. Manag. Data Insights, vol. 2, no.
1, p. 100081, 2022, doi: https://doi.org/10.1016/j.jjimei.2022.100081.
[22] J. S. Jadhav and J. Deshmukh, "A review study of the blockchain-based healthcare supply chain," _Soc. Sci._
_Humanit. Open, vol. 6, no. 1, p. 100328, 2022, doi: https://doi.org/10.1016/j.ssaho.2022.100328._
[23] I. A. Omar, M. Debe, R. Jayaraman, K. Salah, M. Omar, and J. Arshad, "Blockchain-based Supply Chain
Traceability for COVID-19 personal protective equipment," Comput. Ind. Eng., vol. 167, p. 107995, 2022, doi:
https://doi.org/10.1016/j.cie.2022.107995.
[24] A. Brauneis, R. Mestel, R. Riordan, and E. Theissen, "Bitcoin unchained: Determinants of cryptocurrency
exchange liquidity," _J._ _Empir._ _Financ.,_ vol. 69, pp. 106–122, 2022, doi:
https://doi.org/10.1016/j.jempfin.2022.08.004.
[25] T. M. Ghazal, M. K. Hasan, S. N. H. S. Abdullah, K. A. A. Bakar, and H. Al Hamadi, "Private blockchain-based
encryption framework using computational intelligence approach," Egypt. Informatics J., vol. 23, no. 4, pp. 69–
75, 2022, doi: https://doi.org/10.1016/j.eij.2022.06.007.
[26] M. Kumar and S. Chand, "MedHypChain: A patient-centered interoperability hyperledger-based medical
healthcare system: Regulation in COVID-19 pandemic," J. Netw. Comput. Appl., vol. 179, p. 102975, 2021, doi:
https://doi.org/10.1016/j.jnca.2021.102975.
[27] X. Zhao, S. Wang, Y. Zhang, and Y. Wang, "Attribute-based access control scheme for data sharing on
hyperledger fabric," J. Inf. Secur. Appl., vol. 67, p. 103182, 2022, doi: https://doi.org/10.1016/j.jisa.2022.103182.
[28] D. Ravi, S. Ramachandran, R. Vignesh, V. R. Falmari, and M. Brindha, "Privacy preserving transparent supply
chain management through Hyperledger Fabric," _Blockchain Res. Appl., vol. 3, no. 2, p. 100072, 2022, doi:_
https://doi.org/10.1016/j.bcra.2022.100072.
[29] H. Foundation, An Overview of Hyperledger Foundation. 2021.
[30] N. Lu, Y. Zhang, W. Shi, S. Kumari, and K.-K. R. Choo, "A secure and scalable data integrity auditing scheme
based on hyperledger fabric," _Comput._ _Secur.,_ vol. 92, p. 101741, 2020, doi:
https://doi.org/10.1016/j.cose.2020.101741.
[31] R. M. Stulz, "Public versus private equity," 2020. doi: 10.1093/oxrep/graa003.
[32] Q. Wang and S. Qin, "A hyperledger fabric-based system framework for healthcare data management," Appl. Sci.,
vol. 11, no. 24, 2021, doi: 10.3390/app112411693.
[33] Hyperledger, "Hyperledger Architecture, Volume II," 2018. doi: 10.1016/0378-1119(82)90151-2.
[34] E. S. Negara, A. N. Hidayanto, R. Andryani, and R. Syaputra, "Survey of smart contract framework and its
application," Inf., vol. 12, no. 7, pp. 1–10, 2021, doi: 10.3390/info12070257.
[35] M. Schäffer, M. di Angelo, and G. Salzer, “Performance and Scalability of Private Ethereum Blockchains,” Lect.
_Notes Bus. Inf. Process., vol. 361, pp. 103–118, 2019, doi: 10.1007/978-3-030-30429-4_8._
[36] C. Gorenflo, S. Lee, L. Golab, and S. Keshav, "FastFabric: Scaling Hyperledger Fabric to 20,000 Transactions per
Second," in 2019 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), 2019, pp. 455–463,
doi: 10.1109/BLOC.2019.8751452.
[37] A. Jain and D. S. Jat, "Implementation of Blockchain Enabled Healthcare System using Hyperledger Fabric," ACM
_Int. Conf. Proceeding Ser., pp. 37–47, 2021, doi: 10.1145/3484824.3484914._
[38] Hyperledger, "Five Healthcare Projects Powered by Hyperledger You May Not Know About," 2020.
https://www.hyperledger.org/blog/2020/01/29/five-healthcare-projects-powered-by-hyperledger-you-may-not-know-about.
# P4BFT: Hardware-Accelerated Byzantine-Resilient Network Control Plane
### Ermin Sakic[∗†], Nemanja Deric[∗], Endri Goshi[∗], Wolfgang Kellerer[∗]
_∗Technical University Munich, Germany, † Siemens AG, Germany_
E-Mail:[∗]{ermin.sakic, nemanja.deric, endri.goshi, wolfgang.kellerer}@tum.de, _[†]_ ermin.sakic@siemens.com
**Abstract—** Byzantine Fault Tolerance (BFT) enables correct operation of distributed, i.e., replicated applications in the face of malicious take-over and faulty/buggy individual instances. Recently, BFT designs have gained traction in the context of Software Defined Networking (SDN). In SDN, controller replicas are distributed and their state replicated for high availability purposes. Malicious controller replicas, however, may destabilize the control plane and manipulate the data plane, thus motivating the BFT requirement. Nonetheless, deploying BFT in practice comes at a disadvantage of increased traffic load stemming from replicated controllers, as well as a requirement for proprietary switch functionalities, thus putting strain on switches' control plane where particular BFT actions must be executed in software. P4BFT leverages an optimal strategy to decrease the total amount of messages transmitted to switches that are the configuration targets of SDN controllers. It does so by means of message comparison and deduction of correct messages in the determined optimal locations in the data plane. In terms of the incurred control plane load, our P4-based data plane extensions outperform the existing solutions by ∼33.2% and ∼40.2% on average, in random 128-switch and Fat-Tree/Internet2 topologies, respectively. To validate the correctness and performance gains of P4BFT, we deploy bmv2 and Netronome Agilio SmartNIC-based topologies. The advantages of P4BFT can thus be reproduced both with software switches and "commodity" P4-enabled hardware. A hardware-accelerated controller packet comparison procedure results in an average 96.4% decrease in processing delay per request compared to existing software approaches.
I. INTRODUCTION

State-of-the-art failure-tolerant SDN controllers base their state distribution on crash-tolerant consensus approaches. Such approaches comprise single-leader operation, where the leader replica decides on the ordering of client updates. After confirming the update with the follower majority, the leader triggers the cluster-wide commit operation and acknowledges the update with the requesting client. The RAFT algorithm [1] realizes this approach, and is implemented in OpenDaylight [2] and ONOS [3]. RAFT is, however, unable to distinguish malicious / incorrect from correct controller decisions, and can easily be manipulated by an adversary in possession of the leader replica [4]. Recently, Byzantine Fault Tolerance (BFT)-enabled controllers were proposed for the purpose of enabling correct consensus in scenarios where a subset of controllers is faulty due to a malicious adversary or internal bugs [5]–[7]. In BFT-enabled SDN, multiple controllers act as replicated state machines and hence process incoming client requests individually. Thus, with BFT, each controller of a single administrative domain transmits an output of its computation to the target switch. The outputs of the controllers are then collected by trusted configuration targets (e.g., switches) and compared for payload matching for the purpose of correct message identification.

In in-band [8] deployments, where application flows share the same infrastructure as the control flows, the traffic arriving from controller replicas imposes a non-negligible overhead [9]. Similarly, comparing and processing controller messages in the switches' software-based control plane causes additional delays and CPU load [7], leading to longer reconfigurations. Moreover, the comparison of control packets is implemented as a proprietary, non-standardized switch function, and is thus unsupported in off-the-shelf devices.

In this work, we investigate the benefits of offloading the procedure of comparison of controller outputs, required for correct BFT operation, to carefully selected network switches. By minimizing the distance between the processing nodes and controller clusters / individual controller instances, we decrease the network load imposed by BFT operation. P4BFT's P4-enabled pipeline is in charge of controller packet collection, correct packet identification and its forwarding to the destination nodes, thus minimizing accesses to the switches' software control plane and effectively outperforming the existing software-based solutions.

II. BACKGROUND AND PROBLEM STATEMENT

BFT has recently been investigated in the context of the distributed SDN control plane [5]–[7], [10]. In [5], [6], 3FM + 1 controller replicas are required to tolerate up to FM Byzantine failures. MORPH [7] requires 2FM + FA + 1 replicas in order to tolerate up to FM Byzantine and FA availability-induced failures. The presented models assume the deployment of SDN controllers as a set of replicated state machines, where clients submit inputs to the controllers, which process them in isolation and subsequently send the computed outputs to the target destination (i.e., reconfiguration messages to destination switches). They assume trusted platform execution and a mechanism in the destination switch capable of comparison of the controller messages and deduction of the correct message. Namely, after receiving FM + 1 matching payloads, the observed message is regarded as correct and the contained configuration is applied.

The presented models are sub-optimal in a few regards. First, they assume the collection and processing of controller messages exclusively in the receiver nodes (configuration targets). Propagation of each controller message can carry a large system footprint in large-scale in-band controlled networks, thus imposing a non-negligible load on the data plane. Second, neither of the models details the overhead of the message comparison procedure in the target switches. The realizations presented in [5]–[7], [10] implement the packet comparison procedure solely in software. The non-deterministic / varied latency imposed by software switching may, however, be limiting in specific use cases, such as failure scenarios in critical infrastructure networks [11] or 5G scenarios [12]. This motivates a hardware-accelerated BFT design that minimizes the processing delays.
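The replica-count bounds and the FM + 1 matching rule above can be sketched in a few lines of Python (a simplified illustration of the quorum logic only, not the actual P4 pipeline):

```python
from collections import Counter

def morph_replicas(f_m: int, f_a: int) -> int:
    # MORPH [7]: 2*F_M + F_A + 1 replicas tolerate up to F_M Byzantine
    # and F_A availability-induced failures.
    return 2 * f_m + f_a + 1

def deduce_correct(payloads, f_m: int):
    # A payload is regarded as correct once F_M + 1 matching copies
    # have been received from controller replicas.
    for payload, count in Counter(payloads).items():
        if count >= f_m + 1:
            return payload
    return None  # not enough matching messages yet
```

With f_m = 1, for instance, MORPH requires three replicas (f_a = 0), and any two matching payloads suffice to identify the correct configuration.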
_A. Our contribution_
We introduce and experimentally validate the P4BFT design, which builds upon [5]–[7] and adds further optimizations:
• It allows for the collection of controllers' packets and their comparison in processing nodes, as well as for relaying the deduced correct packets to their destinations;
• It selects the optimal processing nodes at per-destination-switch granularity. The proposed objective minimizes the control plane load and reconfiguration time, while considering constraints related to the switches' processing capacity and the upper-bound reconfiguration delay;
• It executes in software, e.g., in the P4 switch behavioral model (bmv2[1]), or on physical hardware, e.g., a Netronome SmartNIC[2]. Correctness, processing time and deployment flexibility are validated on both platforms.
We present the evaluation results of P4BFT for well-known
and randomized network topologies and varied controller and
cluster sizes and their placements. To the best of our knowledge, this is the first implementation of a BFT-enabled solution
on a hardware platform, allowing for accelerated packet processing and low-latency malicious controller detection time.
**Paper Structure: Related work is presented in Section III.**
Section IV details the P4BFT co-design of the control and data
plane as well as the optimization procedure. Section V presents
the evaluation methodology and discusses the empirically
measured performance of P4BFT in software- and hardwarebased data planes. Section VI concludes this paper.
III. RELATED WORK
_1) BFT variations in SDN context: In the context of central-_
ized network control, BFT is still a relatively novel area of research. Reference solutions [5]–[7], assume the comparison of
configuration messages, transmitted by the controller replicas,
in the switch destined as the configuration target. With P4BFT,
we investigate the flexibility advantages of message processing
in any node capable of message collection and processing,
thus allowing for a footprint minimization. [6] and [7] discuss
the strategy for minimization of no. of matching messages
required to deduce correct controller decisions, which we
adopt in this work as well. [10] discusses the benefit of disaggregation of BFT consensus groups in the SDN control plane
into multiple controller cluster partitions, thus enabling higher
scalability than possible with [6] and [7]. While compatible
with [10], our work focuses on scalability enhancements and
footprint minimization by means of data-plane reconfiguration
for realizing more efficient packet comparison.
[1] P4 Software Switch - https://github.com/p4lang/behavioral-model
[2] Netronome Agilio CX 2x10GbE SmartNIC Product Brief - https://www.netronome.com/media/documents/PB_Agilio_CX_2x10GbE.pdf
_2) Data Plane-accelerated Service Execution: Recently,_
Dang et al. [13] have portrayed the benefits of offloading
coordination services for reaching consensus to the data plane,
on the example of a Paxos implementation in P4 language. In
this paper, we investigate if a similar claim can be transferred
to BFT algorithms in SDN context. In the same spirit, in
[14], end-hosts partially offload the log replication and log
commitment operations of RAFT consensus algorithm to
neighboring P4 devices, thus accelerating the overall commit
time. In the context of in-network computation, Sapio et al.
[15] discuss the benefit of data aggregation offloading to
constrained network devices for the purpose of data reduction
and minimization of workers’ computation time.
IV. SYSTEM MODEL AND DESIGN
_A. P4BFT System Model_
We consider a typical SDN architecture allowing for flexible
function execution on the networking switches for the purpose of BFT system operation. The flexibility of in-network
function execution is bounded by the limitation of the data
plane programming interface (i.e., the P416 [16] language
specification in the case of P4BFT). The control plane communication between the switches and controllers and in-between
the controllers is realized using an in-band control channel [8].
In order to prevent faulty replicas from impersonating correct
replicas, controllers authenticate each message using Message
Authentication Codes (assuming pre-shared symmetric keys
for each pair) [17]. Similarly, switches that are in charge of
message comparison and message propagation to the configuration targets must be capable of signature generation using
the processed payload and their secret key.
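As a rough illustration of this per-pair authentication, using Python's standard hmac module (the concrete MAC construction and key management in [17] may differ):

```python
import hashlib
import hmac

def sign(payload: bytes, pair_key: bytes) -> bytes:
    # MAC over the payload using the pre-shared symmetric key of a
    # (sender, receiver) pair, preventing replica impersonation.
    return hmac.new(pair_key, payload, hashlib.sha256).digest()

def verify(payload: bytes, tag: bytes, pair_key: bytes) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(sign(payload, pair_key), tag)
```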
In P4BFT, controllers calculate their decisions in isolation from each other, and transmit them to the destination switch. Control packets are intercepted by the processing nodes (i.e., processing switches) responsible for decisions destined for the target switch. In order to collect and
compare control packets, we assume packet header fields
that include the client_request_id, controller_id,
destination_switch_id (e.g., MAC/IP address), the
payload (controller-decided configuration) and the optional
signature field (denoting if a packet has already been
processed by a processing node). Clients must include the
client_request_id field in their controller requests.
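The header fields listed above, and the collect-and-compare behaviour of a processing node, can be mocked up as follows (a Python sketch of the logic only — in P4BFT this runs in the switch pipeline; the field names are those given in the text, the class names are illustrative):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass(frozen=True)
class ControlPacket:
    client_request_id: int
    controller_id: int
    destination_switch_id: str  # e.g., MAC/IP address of the target
    payload: bytes              # controller-decided configuration
    signature: bytes = b""      # set once a processing node handled it

class ProcessingNode:
    def __init__(self, f_m: int):
        self.f_m = f_m
        self._seen = defaultdict(list)  # (request, target) -> payloads

    def intercept(self, pkt: ControlPacket):
        """Buffer a controller packet; return the correct payload once
        F_M + 1 matching copies for the same request have arrived."""
        key = (pkt.client_request_id, pkt.destination_switch_id)
        self._seen[key].append(pkt.payload)
        if self._seen[key].count(pkt.payload) >= self.f_m + 1:
            return pkt.payload  # forward one correct copy to the target
        return None
```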
Apart from distinguishing correct from malicious/incorrect
messages, P4BFT allows for identification and exclusion of
_faulty controller replicas. P4BFT’s architectural model as-_
sumes three entities, each with a distinguished role:
**1) Network controllers enforce forwarding plane configu-**
rations based on internal decision making. For simplification,
each controller replica of an administrative domain serves
each client request. Each correct replica maintains internal
state information (e.g., resource reservations) matching to
that of other correct instances. In the case of a controller
with diverged state, i.e., as a result of corrupted operation
or a malicious adversary take-over, the incorrect controllers’
computation outputs may differentiate from the correct ones.
-----
**2) P4-enabled switches forward the control and application**
packets. Depending on the output of Reassigner’s optimization
step, a switch may be assigned the processing node role, i.e.,
become in charge of comparing outputs computed by different
controllers, destined for itself or other configuration targets.
A processing node compares messages sent out by different
controllers and distinguishes the correct ones. On identification
of a faulty controller, it declares the faulty replica to the
Reassigner. In contrast to [5]–[7], P4BFT enables control
packet comparison for packets destined for remote targets.
**3) Reassigner is responsible for two tasks:**
_Task 1: It dynamically reassigns the controller-switch con-_
nections based on the events collected from the detection
mechanism of the switches, i.e., upon their detection, it
excludes faulty controllers from the assignment procedure. It
furthermore ensures that a minimum number of required controllers, necessary to tolerate a number of availability failures
_FA and malicious failures FM_, are loaded and associated with
each switch. This task is also discussed in [6], [7].
_Task 2: It maps a processing node, in charge of controller_
messages’ comparison, to each destination switch. Based on
the result of this optimization, switches gain the responsibility
of control packets processing. The output of the optimization
procedure is the Processing Table, necessary to identify the
switches responsible for comparison of controller messages.
Additionally, the Reassigner computes the Forwarding Tables,
necessary for forwarding of controller messages to processing
nodes and reconfiguration targets. Given the no. of controllers
and the user-configurable parameter of max. tolerated Byzantine failures FM, Reassigner reports to processing nodes the
no. of necessary matching messages that must be collected
prior to marking a controller message as correct.
_B. Finding the Optimal Processing Nodes_
The optimization methodology allows for minimization of
the experienced switch reconfiguration delay, as well as the
decrease of the total network load introduced by the exchanged
controller packets. When a switch is assigned the processing
node role for itself or another target switch, it collects the
control packets destined for the target switch and deduces the
correct payload on-the-fly, it next forwards a single packet
copy containing the correct controller message to the destination switch. Consider Fig. 1a). If control packet comparison
is done only at the target switch (as in prior works), a request
for S4 creates a total footprint of FC = 13 packets in the data
plane (the sum of Cluster 1 and Cluster 2 utilizations of 4 and
9, respectively). In contrast, if the processing is executed in S3
(as depicted in Fig. 1b)), the total experienced footprint can
be decreased to FC = 11. Therefore, in order to minimize the
total control plane footprint, we identify an optimal processing
node for each target switch, based on a given topology,
placement of controllers and the processing nodes’ capacity
constraints. If we additionally extend the optimization to a
multi-objective formulation by considering the delay metric,
the total traversed critical path between the controller furthest
away from the configuration target would equal FD = 3 in
the worst case (ref. Fig. 1c)), i.e., 3 hops assuming a delay
weight of 1 per hop. Additionally, this assignment also has the
minimized communication overhead of FC = 11.
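This footprint comparison can be reproduced with a few lines of Python (a hypothetical toy topology, with a greedy per-target search standing in for the ILP and ignoring the capacity and delay constraints):

```python
def footprint(i, k, hops, clusters):
    # Communication footprint if switch i processes messages destined
    # for target k: one deduced packet i -> k, plus |C^j| controller
    # packets from every controller-attachment switch j to i.
    return hops[i][k] + sum(n * hops[j][i] for j, n in clusters.items())

def best_processing_node(k, hops, clusters):
    # Pick the switch minimizing the footprint for target k.
    return min(hops, key=lambda i: footprint(i, k, hops, clusters))
```

On a 3-switch line S1-S2-S3 with three controllers attached at S1, for example, processing at S1 for target S3 yields a footprint of 2, versus 6 when processing at the target itself.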
TABLE I
PARAMETERS USED IN THE MODEL

| Symbol | Description |
|---|---|
| V : {S1, S2, ..., Sn}, n ∈ Z+ | Set of all switch nodes in the topology. |
| C : {C1, C2, ..., Cn}, n ∈ Z+ | Set of all controllers connected to the topology. |
| D : {di,j,k, ∀i, j, k ∈ V} | Set of delay values for the path from i to k, passing through j. |
| H : {hi,j, ∀i, j ∈ V} | Number of hops on the shortest path from i to j. |
| Q : {qi, ∀i ∈ V} | Set of switches' processing capacities. |
| C^j ⊆ C | Set of controllers connected to node j. |
| M ⊆ V | Set of switches connected to at least one controller. |
| T | Maximum tolerated delay value. |
| x(i, k) | Binary variable that equals 1 if i is the processing node for k. |
We describe the processing node mapping problem using
an integer linear programming (ILP) formulation. Table I
summarizes the notation used.
**Communication overhead minimization objective min-**
imizes the global imposed communication footprint in the
control plane. Each controller replica generates an individual
message sent to the processing node i, that subsequently
collects all remaining necessary messages and forwards a
resulting single correct message to the configuration target k:
$$M_F = \min \sum_{k \in V} \sum_{i \in V} \Big( 1 \cdot h_{i,k} \cdot x(i,k) + \sum_{j \in M} |C^j| \cdot h_{j,i} \cdot x(i,k) \Big) \quad (1)$$
**Configuration delay minimization objective minimizes the**
_worst-case delay imposed on the critical path used for for-_
warding configuration messages from a controller associated
with node j, to the potential processing node i and finally to
the configuration target node k:
$$M_D = \min \max_{k \in V} \sum_{i \in V} x(i,k) \cdot \max_{\forall j \in M} (d_{j,i,k}) \quad (2)$$
**Bi-objective optimization minimizes the weighted sum of**
the two objectives, w1 and w2 being the associated weights:
$$\min \; w_1 \cdot M_F + w_2 \cdot M_D \quad (3)$$
**Processing capacity constraint: Sum of messages requir-**
ing processing on i, for each configuration target k assigned
to i, must be kept at or below i’s processing capacity qi:
Subject to: $$\sum_{k \in V} x(i,k) \cdot |C| \leq q_i, \quad \forall i \in V \quad (4)$$
**Maximum delay constraint: For each configuration target**
_k, the delay imposed by the controller packet forwarding_
to node i, responsible for collection and packet comparison
procedure and forwarding of the correct message to the target
node k, does not exceed an upper bound T :
Subject to: $$\sum_{i \in V} x(i,k) \cdot \max_{\forall j \in M} (d_{j,i,k}) \leq T, \quad \forall k \in V \quad (5)$$
**Single assignment constraint: For each configuration tar-**
get k, there exists exactly one processing node i:
Subject to: $$\sum_{i \in V} x(i,k) = 1, \quad \forall k \in V \quad (6)$$
_Note: The assignment of controller-switch connections for_
the purpose of control and reconfiguration is adapted from
existing formulations [7], [10] and is thus not detailed here.
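For small instances, the formulation can be cross-checked by exhaustive search. The sketch below is an illustrative brute-force analogue (the paper itself solves the ILP with Gurobi): it enumerates one processing node per target (Eq. 6), discards assignments violating the capacity (Eq. 4) and delay (Eq. 5) constraints, and minimizes the weighted sum of Eq. 3, assuming unit edge weights so that d_{j,i,k} = h_{j,i} + h_{i,k}. The topology and controller attachment points are hypothetical:

```python
from collections import deque
from itertools import product

def hops(adj, src):
    """BFS hop counts from src to every reachable node."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def solve(adj, attach, targets, cap, T, w1=1, w2=1):
    """Brute-force analogue of the ILP: pick one processing node per
    target (Eq. 6), respect capacity (Eq. 4) and maximum delay (Eq. 5),
    minimize w1*M_F + w2*M_D (Eqs. 1-3)."""
    h = {v: hops(adj, v) for v in adj}
    n_ctrl = sum(attach.values())  # |C|
    best = None
    for assign in product(adj, repeat=len(targets)):
        x = dict(zip(targets, assign))
        # Eq. 4: node i processes |C| messages per target assigned to it
        if any(sum(n_ctrl for k in x if x[k] == i) > cap[i] for i in adj):
            continue
        # d_{j,i,k} = h_{j,i} + h_{i,k} with unit weights; Eq. 5
        delay = {k: max(h[j][x[k]] + h[x[k]][k] for j in attach) for k in x}
        if any(d > T for d in delay.values()):
            continue
        mf = sum(h[x[k]][k] + sum(n * h[j][x[k]] for j, n in attach.items())
                 for k in x)                      # Eq. 1
        md = max(delay.values())                  # Eq. 2
        obj = w1 * mf + w2 * md                   # Eq. 3
        if best is None or obj < best[0]:
            best = (obj, mf, md, x)
    return best

# Hypothetical example: S1-S2-S3 line, S4 behind S2, S5 behind S3;
# 2 replicas at S1, 3 at S3; single configuration target S4.
adj = {"S1": ["S2"], "S2": ["S1", "S3", "S4"],
       "S3": ["S2", "S5"], "S4": ["S2"], "S5": ["S3"]}
attach = {"S1": 2, "S3": 3}
print(solve(adj, attach, ["S4"], {v: 100 for v in adj}, T=10))
```

On this toy instance the optimum selects S2 as processing node with M_F = 6 and M_D = 2, mirroring the paper's observation that a node between the clusters and the target minimizes both objectives.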
[Figure 1 diagrams, three panels: (a) Case I: FC = 13; FD = 3 hops; (b) Case II: FC = 11; FD = 5 hops; (c) Case III: FC = 11; FD = 3 hops]
Fig. 1. For brevity we depict the control flows destined only for configuration target S4. The orange and red blocks represent an exemplary cluster separation
of 5 controllers into groups of 2 and 3 controllers, respectively. The green dashed block highlights the processing node responsible for comparing the controller
messages destined for S4. Figure (a) presents the unoptimized case as per [5]–[7], where S4 collects and processes control messages destined for itself,
thus resulting in a control plane load of FC = 13 and a delay on critical path (marked with blue labels) of FD = 3 hops (assuming edge weights of 1).
By optimizing for the total communication overhead, the total FC can be decreased to 11, as portrayed in Figure (b). Contrary to (a), in (b) processing of
packets destined for S4 is offloaded to the processing node S3. However, additional delay is incurred by the traversal of path S1-S2-S3-S2-S4 for the control
messages sourced in Cluster 1. Multi-objective optimization according to P4BFT, which aims to minimize both the communication overhead and control plane
delay, instead selects S2 as the optimal processing node (ref. Figure (c)), thus minimizing both FC and FD.
_C. P4 Switch and Reassigner Control Flow_
_Processing node data plane: Switches declared to process_
controller messages for a particular target (i.e., for itself, or for
another switch) initially collect the control payloads stemming
from different controllers. Each processing node maintains
counters for the number of observed and matching packets
for a particular (re-)configuration request identifier. After sufficient matching packets are collected for a particular payload
(more specifically, hash of the payload), the processing node
_signs a message using its private key and forwards one copy_
of the correct packet to its own control plane for required
software processing (i.e., identification of the correct message
and potentially malicious controllers), and the second copy
on the port leading to the configuration target. To distinguish
processed from unprocessed packets in destination switches,
processing nodes refer to the trailing signature field.
_Processing node control plane: After determining the correct packet, the processing node identifies any incorrect controller replicas (i.e., replicas whose output hashes diverge
from the deduced correct hash) and subsequently notifies the
Reassigner of the discrepancy. Alternatively, the switch applies
the configuration message if it is the configuration target itself.
The switch then proceeds to clear its registers associated with
the processed message hash so to free the memory for future
requests.
_Reassigner control flow: At network bootstrapping time, or_
on occurrence of any of the following events: i) a detected
malicious controller; ii) a failed controller replica; or iii) a
switch/link failure; Reassigner reconfigures the processing and
forwarding tables of the switches, as well as the number of
required matching messages to detect the correct message.
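The processing node's per-request counting logic can be modeled in a few lines. The following is a software sketch of the pipeline behavior only (not P4 code; the hash function and data structures are illustrative):

```python
import hashlib

class ProcessingNode:
    """Counts controller packets per (request, payload-hash) pair and
    declares a payload correct once F_M + 1 matching copies have been
    observed, F_M being the number of tolerated Byzantine controllers."""

    def __init__(self, f_m):
        self.f_m = f_m
        self.votes = {}  # (request_id, payload_hash) -> set of controller ids

    def on_packet(self, request_id, controller_id, payload):
        """Returns the payload once it is confirmed correct, else None."""
        h = hashlib.sha256(payload).hexdigest()
        voters = self.votes.setdefault((request_id, h), set())
        voters.add(controller_id)
        # F_M + 1 matching packets suffice to deduce the correct message
        return payload if len(voters) == self.f_m + 1 else None

    def divergent(self, request_id, correct_payload):
        """Controllers whose output hash diverges from the correct one;
        these are the replicas reported to the Reassigner."""
        h_ok = hashlib.sha256(correct_payload).hexdigest()
        bad = set()
        for (rid, h), voters in self.votes.items():
            if rid == request_id and h != h_ok:
                bad |= voters
        return bad

node = ProcessingNode(f_m=2)  # tolerate 2 Byzantine controllers
for cid, payload in [("C1", b"bad"), ("C2", b"good"), ("C3", b"bad"),
                     ("C4", b"good"), ("C5", b"good")]:
    result = node.on_packet(7, cid, payload)
print(result)                     # correct message deduced on the 3rd match
print(node.divergent(7, b"good"))  # replicas to report to the Reassigner
```

With F_M = 2, the third matching copy of `b"good"` confirms the payload, and C1 and C3 are flagged as divergent, mirroring the detection step described above.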
_D. P4 Tables Design_
Switches maintain Tables and Registers that define the
method of processing incoming packets. Reassigner populates
the switches’ Tables and Registers so that the selection of
processing nodes for controller messages is optimal w.r.t. a
set of given constraints, i.e., so that the total message overhead or latency experienced in the control plane is
minimized (according to the optimization procedure in Section
IV-B). The Reassigner thus modifies the elements whenever
a controller is identified as incorrect and is hence excluded
from consideration, resulting in a different optimization result.
P4BFT leverages four P4 tables:
1) Processing Table: It holds identifiers of the switches
whose packets must be processed by the switch hosting
this table. Incoming packets are matched based on the
destination switch’s ID. In the case of a table hit, the
hosting switch processes the packets as a processing node.
Otherwise, the packet is matched against the Process-Forwarding Table.
2) Process-Forwarding Table: Declares which egress port
the packets should be sent out on for further processing.
If an unprocessed control packet is not to be processed
locally, the switch will forward the packet towards the
correct processing node, based on forwarding entries
maintained in this table.
3) L2-Forwarding Table: After the processing node has
processed the incoming control packets destined for the
destination switch, the last step is forwarding the correctly
deduced packet towards it. Information on how to reach
the destination switches is maintained in this table. In contrast
to forwarding to a processing node, here the packet is
forwarded to the destination switch itself.
4) Hash Table with associated registers: Processing a set
of controller packets for a particular request identifier
requires evaluating and counting the number of occurrences of packets containing the matching payload. To
uniquely identify the decision of the controller, a hash
value is generated on the payload during processing. The
counting of incoming packets is done by updating the
corresponding binary values in the register vectors, with
respective layout depicted in Table II.
On each arriving unprocessed packet, the processing node
computes a previously seen or i-th initially observed hash
h_i^{request_id} over the acquired payload. Subsequently, it sets the
TABLE II
HASH TABLE LAYOUT
| Msg Hash | Request ID 1 | ... | Request ID K |
|---|---|---|---|
| h_0 | b^{h_0}_{C1} b^{h_0}_{C2} ... b^{h_0}_{CN} | ... | b^{h_0}_{C1} b^{h_0}_{C2} ... b^{h_0}_{CN} |
| ... | ... | ... | ... |
| h_{F_M} | b^{h_{F_M}}_{C1} b^{h_{F_M}}_{C2} ... b^{h_{F_M}}_{CN} | ... | b^{h_{F_M}}_{C1} b^{h_{F_M}}_{C2} ... b^{h_{F_M}}_{CN} |
denotes the sum of packet footprint for control flows destined
to each destination switch of the network topology as per
Sec. IV-B and Fig. 1. P4BFT outperforms the state-of-the-art, as each of the presented works assumes an uninterrupted
control flow from each controller instance to the destination
switches. P4BFT, on the other hand, aggregates control packets
in the processing nodes that, subsequently to collecting the
control packets, forward a single correct message towards the
destination, thus decreasing the control plane load.
binary flag to 1 for source controller controller_id, in
the i-th register row at column [client_request_id · |C|
+ controller_id], where |C| represents the total number of deployed controllers. Each time a client request is fully processed, the binary entries associated with the corresponding
client_request_id are reset to zero. To detect a malicious controller, the controller IDs associated with hashes
distinguished as incorrect are reported to the Reassigner.
_Note:_ To tolerate F_M Byzantine failures, a maximum of
F_M + 1 unique hashes for a single request identifier may be
detected, hence the corresponding F_M + 1 pre-allocated table
rows in Table II.
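The register addressing of Table II can be mimicked in software as follows. This is an illustrative model only (not the P4 register implementation): F_M + 1 rows of K · |C| binary flags, with the column index computed as client_request_id · |C| + controller_id:

```python
class HashRegisters:
    """Toy model of Table II: F_M + 1 rows (one per distinct hash that
    may appear for a request) of K * |C| binary flags; a packet from
    controller c for request r sets column r * |C| + c in the row
    allocated to its payload hash."""

    def __init__(self, f_m, num_controllers, num_requests):
        self.C = num_controllers
        self.rows = [[0] * (num_requests * num_controllers)
                     for _ in range(f_m + 1)]
        self.row_of = {}  # (request_id, payload_hash) -> row index

    def record(self, request_id, controller_id, payload_hash):
        key = (request_id, payload_hash)
        if key not in self.row_of:
            # allocate the next free row for this request's i-th distinct hash
            used = {r for (rid, _), r in self.row_of.items()
                    if rid == request_id}
            self.row_of[key] = min(set(range(len(self.rows))) - used)
        self.rows[self.row_of[key]][request_id * self.C + controller_id] = 1

    def matching(self, request_id, payload_hash):
        """Number of controllers that sent this payload for this request."""
        row = self.rows[self.row_of[(request_id, payload_hash)]]
        return sum(row[request_id * self.C + c] for c in range(self.C))

    def reset(self, request_id):
        """Clear the request's flags once it is fully processed."""
        for row in self.rows:
            for c in range(request_id * self.C, (request_id + 1) * self.C):
                row[c] = 0
        self.row_of = {k: r for k, r in self.row_of.items()
                       if k[0] != request_id}

# F_M = 1, three controllers, two request slots (hypothetical sizes).
regs = HashRegisters(f_m=1, num_controllers=3, num_requests=2)
regs.record(1, 0, "hA")
regs.record(1, 2, "hA")
regs.record(1, 1, "hB")
print(regs.matching(1, "hA"), regs.matching(1, "hB"))  # -> 2 1
```

Resetting a request's columns after full processing corresponds to the register-clearing step that frees memory for future requests.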
V. EVALUATION, RESULTS AND DISCUSSION
_A. Evaluation Methodology_
We next evaluate the following metrics using P4BFT and
state-of-the-art [5]–[7] designs: i) control plane load; ii) imposed processing delay in the software and hardware P4BFT
nodes; iii) end-to-end switch reconfiguration delay; and iv)
ILP solution time. We execute the measurements for random
controller placements and diverse data plane topologies: i)
random topologies with fixed average node degree; ii) reference Internet2 [18]; and iii) data-center Fat-Tree (k = 4). We
also vary and depict the impact of no. of switches, controller
instances, and disjoint controller clusters. To compute paths
between controllers and switches and between processing and
destination switches, Reassigner leverages the Constrained
Shortest Path First (CSPF) algorithm. For brevity, as an input
to the optimization procedure in Reassigner, we assume edge
weights of 1. The objective function used in processing node
selection is Eq. 3, parametrized with (w1, w2) = (1, 1).
P4BFT implementation is a combination of P416 and P4
Runtime code, compiled for software and physical execution
on P4 software switch bmv2 (master check-out, December
2018) and a Netronome Agilio SmartNIC device with the corresponding firmware compiled using SDK 6.1-Preview,
respectively. Apache Thrift and gRPC are used for population
of registers and table entries in bmv2, respectively. Thrift is
used for both table and registers population for the Netronome
_SmartNIC, due to the current SDK release not fully supporting_
the P4 Runtime. HTTP REST is used in exchange between
P4 switch control plane and the Reassigner. The Reassigner
and network controller replicas are implemented as Python
applications.
_B. Communication Overhead Advantage_
Figure 2 depicts the packet load improvement in P4BFT
over the existing reference solutions [5]–[7] for randomly
generated topologies with an average node degree of 4. The
footprint improvement is defined as 1 − F_C^{P4BFT} / F_C^{SoA}, where F_C
Fig. 2. Packet load improvement of P4BFT over the reference works [5]–
[7] for 5000 randomly generated network topologies per scenario, with
7 controllers distributed into 3 disjoint and randomly placed clusters. In
addition to the 100% coverage where each node may be considered a P4BFT
processing node, we include scenarios where only the random [1, 25%, 50%,
75%] nodes of all available nodes in the infrastructure are P4BFT-enabled.
Thus, even in the topologies with limited programmable data plane resources,
i.e., in brownfield-scenarios involving OpenFlow/NETCONF+YANG non-P4
configuration targets, P4BFT offers substantial advantages over existing SoA.
Fig. 3 (a) and (b) portray the footprint improvement scaling
with the number of controllers and disjoint clusters. P4BFT’s
footprint efficiency generally benefits from the higher number
of controller instances. Controller clusters, on the other hand,
aggregate replicas behind the same edge switch. Thus, with the
higher number of disjoint clusters, the degree of aggregation
and the total footprint improvement decreases.
_C. Processing and Reconfiguration Delay_
Fig. 4 depicts the processing delay incurred in the processing node for a single client request. The delay corresponds
to the P4 pipeline execution time spent on identification of a
correct controller message, comprising the i) hash computation
over controller messages; ii) incrementing the counters for
the computed hash; iii) signing the correct packet and; iv)
propagating it to the correct egress port. When using the
P4-enabled SmartNIC, P4BFT decreases the processing time
compared to bmv2 software target by two orders of magnitude.
Fig. 5 depicts the total reconfiguration delay imposed in
SoA and P4BFT designs for (w1, w2) = (1, 1) (ref. Eq.
3). It considers the time difference between issuing a switch
reconfiguration request, until the correct controller message
is determined and applied in the destination. Related works
process the reconfiguration messages stemming from controller replicas in the destination target, with their control flows
Fig. 3. The impact of (a) controllers and; (b) disjoint controller clusters on the
control plane load footprint in Internet2 and Fat-Tree (k = 4) topologies for
5000 randomized controller placements each. (a) randomizes the placement
but fixes the no. of disjoint clusters to 3; (b) randomizes the no. of disjoint
clusters between [1, 7, 13, 17] but fixes the no. of controllers to 17. The
resulting footprint improvement scales with the number of controllers but is
inversely proportional to the number of disjoint clusters.
Fig. 5. CDFs of time taken to configure randomly selected switches in
SoA and P4BFT environments for Internet2 topology, 10 random controller
placements for 5 replicas and 1700 individual requests per placement. SoA
works [5]-[7] collect, compare and apply the controllers’ reconfiguration
messages in the destination switch thus effectively minimizing the reconfiguration delay at all times. P4BFT, on the other hand, may occasionally
favor footprint minimization over the incurred reconfiguration delay and thus
impose a longer critical path, leading to slower reconfigurations. On average,
however, P4BFT imposes comparable reconfiguration delays at a much higher
footprint improvement (depicted blue), mean being 38%, best and worst cases
at 60% and 19.3%, respectively, for evaluated placements.
space, depending on the weight prioritization in Eq. 3, either
the (26.0, 3.0) or the (28.0, 2.0) solution can be considered optimal.
Comparable works implicitly minimize the incurred reconfiguration delay but fail to consider the control plane load. Hence,
they prefer the (30.0, 2.0) solution (encircled in red).
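The frontier itself is easy to extract from the enumerated (footprint, delay) pairs. A minimal sketch, using the three solution values quoted above:

```python
def pareto_front(points):
    """Non-dominated (footprint, delay) pairs: a point survives unless
    another point is no worse in both coordinates and differs from it."""
    def dominates(q, p):
        return q[0] <= p[0] and q[1] <= p[1] and q != p
    return sorted(p for p in points
                  if not any(dominates(q, p) for q in points))

solutions = [(26.0, 3.0), (28.0, 2.0), (30.0, 2.0)]
print(pareto_front(solutions))  # (30.0, 2.0) is dominated by (28.0, 2.0)
```

Only (26.0, 3.0) and (28.0, 2.0) are Pareto-optimal, which is why the weighted-sum objective of Eq. 3 selects one of them depending on (w1, w2), while delay-only approaches may settle for the dominated point.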
Fig. 6. Pareto Frontier of P4BFT’s solution space for the topology presented
in Fig. 1. The comparable works tend to minimize the incurred reconfiguration
delay, but ignore the imposed control plane load. [5]–[7] hence select
(30.0, 2.0) as the optimal solution (encircled in red) while P4BFT selects
(26.0, 3.0) or (28.0, 2.0) thus minimizing the total overhead as per Eq. 3.
[Figure 4 plot: CDFs of P4BFT switch processing delay [µs], comparing the Netronome Agilio CX 10GbE SmartNIC against the bmv2 P4 software switch]
Fig. 4. The CDF of processing delays imposed in a P4BFT’s processing node
for a scenario including 5 controller instances. 3 correct packets and thus 3
P4 pipeline executions are necessary to confirm the payload correctness when
tolerating 2 Byzantine controller failures.
traversing shortest paths in all cases. On average, P4BFT’s
reconfiguration delay is comparable with related works, the
overall control plane footprint being substantially improved.
_D. Optimization procedure in Reassigner_
_1) Impact of optimization objectives: Figure 6 depicts the_
Pareto frontier of optimal processing node assignments w.r.t.
the objectives presented in Section IV-B: the total control plane
footprint (minimized as per Eq. 1) and the reconfiguration
delay (minimized as per Eq. 2). From the total solution
_2) ILP solution time - impact of topology, number of_
_controllers and disjoint clusters: The solution time for the_
optimization procedure considering random topologies with
average network degree of 4 and a fixed no. of randomly
placed controllers is depicted in Fig. 7 (a). The solution
time scales with the number of switches, peaking at 420 ms for
large 128-switch topologies. The reassignment procedure is
executed only on rare events: during network bootstrapping, on
detection of a malicious or failed controller, and following a switch
or link failure. Thus, we consider the observed solution time
short and viable for online mapping. Fig. 7 (b) depicts the ILP
solution time scaling with the number of active controllers.
The lower the number of active controllers, the shorter the
solution time. In "Fixed Clusters" case, each controller is
placed in its disjoint cluster (worst-case for the optimization).
Fig. 7. (a) depicts the impact of network topology size on the ILP solution
time for random topologies. (b) depicts the impact of controller number
and cluster disjointness in the case of Internet2 topology. The results are
averaged over 5000 per-scenario iterations. The higher the cluster aggregation
of controllers, the lower the ILP solution time. "Fixed Clusters" considers
the worst-case, where each controller is randomly placed, but disjointly
w.r.t. the other controller instances. Clearly, the ILP solution time scales
with the amount of deployed switches and controllers. We used Gurobi 8.1
optimization framework configured to execute multiple solvers on multiple
threads simultaneously, and have chosen the ones that finish first.
The "Random Clusters" case considers a typical clustering
scenario, where a maximum of [1..3] clusters are deployed,
each comprising a uniform number of controller instances. The
higher the cluster aggregation, the lower the ILP solution time.
VI. CONCLUSION
P4BFT introduces a switch control-plane/data-plane co-design, capable of malicious controller identification while
simultaneously minimizing the control plane footprint. By
merging the control channels in P4-enabled processing nodes,
the use of P4BFT results in a lowered control plane footprint,
compared to existing designs. In a hardware-based data plane,
by offloading packet processing from general purpose CPU
to the data-plane NPU, it additionally leads to a decrease
in request processing time. Given the low solution time, the
presented ILP formulation is viable for on-line execution.
While we focused on an SDN scenario here, future works
should consider the conceptual transfer of P4BFT to other
application domains, including stateful web applications and
critical industrial control systems.
REFERENCES
[1] H. Howard, M. Schwarzkopf, A. Madhavapeddy, and J. Crowcroft, “Raft
refloated: Do we have consensus?” ACM SIGOPS Operating Systems
_Review, vol. 49, no. 1, 2015._
[2] J. Medved, R. Varga, A. Tkacik, and K. Gray, “OpenDaylight: Towards
a model-driven SDN controller architecture,” in Proceedings of IEEE
_International Symposium on a World of Wireless, Mobile and Multimedia_
_Networks 2014._ IEEE, 2014, pp. 1–6.
[3] P. Berde, M. Gerola, J. Hart, Y. Higuchi, M. Kobayashi, T. Koide,
B. Lantz, B. O’Connor, P. Radoslavov, W. Snow et al., “ONOS: Towards
an open, distributed SDN OS,” in Proceedings of the third workshop on
_Hot topics in software defined networking._ ACM, 2014, pp. 1–6.
[4] C. Copeland et al., “Tangaroa: A Byzantine Fault Tolerant Raft,”
http://www.scs.stanford.edu/14au-cs244b/labs/projects/copeland_zhong.pdf, [Accessed March-2019].
[5] H. Li, P. Li, S. Guo, and A. Nayak, “Byzantine-resilient secure software-defined networks with multiple controllers in cloud,” IEEE Transactions
_on Cloud Computing, vol. 2, no. 4, pp. 436–447, 2014._
[6] P. M. Mohan, T. Truong-Huu, and M. Gurusamy, “Primary-backup
controller mapping for Byzantine fault tolerance in software defined
networks,” in GLOBECOM 2017 - 2017 IEEE Global Communications
_Conference._ IEEE, 2017, pp. 1–7.
[7] E. Sakic, N. Ðeri´c, and W. Kellerer, “MORPH: An adaptive framework
for efficient and Byzantine fault-tolerant SDN control plane,” IEEE
_Journal on Selected Areas in Communications, vol. 36, no. 10, pp. 2158–_
2174, 2018.
[8] L. Schiff, S. Schmid, and P. Kuznetsov, “In-band synchronization for
distributed SDN control planes,” ACM SIGCOMM Computer Communication Review, vol. 46, no. 1, pp. 37–43, 2016.
[9] A. S. Muqaddas, A. Bianco, P. Giaccone, and G. Maier, “Inter-controller
traffic in ONOS clusters for SDN networks,” in 2016 IEEE International
_Conference on Communications (ICC)._ IEEE, 2016, pp. 1–6.
[10] E. Sakic and W. Kellerer, “BFT protocols for heterogeneous resource allocations in distributed SDN control plane,” in 2019 IEEE International
_Conference on Communications (IEEE ICC’19), Shanghai, P.R. China,_
2019.
[11] E. Sakic and W. Kellerer, “Response time and availability study of
RAFT consensus in distributed SDN control plane,” IEEE Transactions
_on Network and Service Management, vol. 15, no. 1, 2018._
[12] University of Surrey - 5G Innovation Centre, “5G Whitepaper: The Flat
Distributed Cloud (FDC) 5G Architecture Revolution,” 2016.
[13] H. T. Dang, M. Canini, F. Pedone, and R. Soulé, “Paxos made switch-y,”
_ACM SIGCOMM Computer Communication Review, vol. 46, no. 2, pp._
18–24, 2016.
[14] Y. Zhang, B. Han, Z.-L. Zhang, and V. Gopalakrishnan, “Network-assisted Raft consensus algorithm,” in Proceedings of the SIGCOMM
_Posters and Demos._ ACM, 2017, pp. 94–96.
[15] A. Sapio, I. Abdelaziz, A. Aldilaijan, M. Canini, and P. Kalnis, “In-network computation is a dumb idea whose time has come,” in Proceed_ings of the 16th ACM Workshop on Hot Topics in Networks._ ACM,
2017, pp. 150–156.
[16] P. Bosshart, D. Daly, G. Gibb, M. Izzard, N. McKeown, J. Rexford,
C. Schlesinger, D. Talayco, A. Vahdat, G. Varghese et al., “P4: Programming protocol-independent packet processors,” ACM SIGCOMM
_Computer Communication Review, vol. 44, no. 3, pp. 87–95, 2014._
[17] M. Eischer and T. Distler, “Scalable byzantine fault tolerance on
heterogeneous servers,” in 2017 13th European Dependable Computing
_Conference (EDCC)._ IEEE, 2017, pp. 34–41.
[18] Internet2 Consortium, “Internet2 Network Infrastructure Topology,”
https://www.internet2.edu/media_files/422, [Accessed March-2019].
ACKNOWLEDGMENT
This work has received funding from European Commission’s H2020 research and innovation programme under grant
agreement no. 780315 SEMIoTICS and from the German
Research Foundation (DFG) under the grant number KE
1863/8-1. We are grateful to Cristian Bermudez Serna, Dr.
Johannes Riedl, and the anonymous reviewers for their useful
feedback and comments.
# Union and Intersection Types for Secure Protocol Implementations
Michael Backes^{1,2}, Cătălin Hriţcu^1, and Matteo Maffei^1
1 Saarland University
2 Max Planck Institute for Software Systems (MPI-SWS)
**Abstract.** We present a new type system for verifying the security of cryptographic protocol implementations. The type system combines prior work on refinement types, with union, intersection, and polymorphic types, and with the
novel ability to reason statically about the disjointness of types. The increased
expressivity enables the analysis of important protocol classes that were previously out of scope for the type-based analyses of protocol implementations. In
particular, our types can statically characterize: (i) more usages of asymmetric
cryptography, such as signatures of private data and encryptions of authenticated
data; (ii) authenticity and integrity properties achieved by showing knowledge of
secret data; (iii) applications based on zero-knowledge proofs. The type system
comes with a mechanized proof of correctness and an efficient type-checker.
## 1 Introduction
Modern applications are mostly distributed and they rely on complex cryptographic protocols to transmit data over potentially insecure networks (e.g., e-banking, e-commerce,
social networks, and mobile applications). Protocol designers struggle to keep pace
with the variety of possible security vulnerabilities, which have affected early authentication protocols like Needham-Schroeder [26, 37], carefully designed de facto standards like SSL and PKCS [17, 45], and even widely deployed products like Microsoft
Passport [30] and Kerberos [19]. Even if the underlying cryptographic protocols are
properly designed, security vulnerabilities may still arise due to flaws in the implementation. Manual security analyses of cryptographic protocols, and even more so protocol
implementations, are extremely difficult and error-prone. Therefore, it is important to
devise automated analysis techniques that can provide security guarantees for protocol
implementations and, more generally, for the source code of distributed applications.
An effective approach for analyzing protocol implementations is to rely on software verification techniques, such as model checking and type theory, and to adapt
them to the security problem. Type systems, in particular, proved successful in the automated analysis of both cryptographic protocol models [1, 2, 31] and protocol implementations [12,14]. Type systems provide security proofs for an unbounded number of
runs. Furthermore, the analysis is modular and has a predictable termination behavior.
Finally, type systems were designed from the beginning to efficiently deal with programming language features such as data structures, recursion, state, and higher-order
functions: consequently, type systems are more efficient and scale better than many
S. Mödersheim and C. Palamidessi (Eds.): TOSCA 2011, LNCS 6993, pp. 1–28, 2012.
© Springer-Verlag Berlin Heidelberg 2012
2 M. Backes, C. Hri¸tcu, and M. Maffei
state-of-the-art protocol verifiers (e.g., ProVerif [16] used as a back end by fs2pv [15])
in the analysis of source code [14].
Despite these promising features, the type-based analysis of the source code of modern distributed applications is still an open issue. The first problem is that many of
these applications (e.g., trusted computing [18], electronic voting [22], and social networks [9]) rely on complex cryptographic schemes, such as zero-knowledge proofs.
Although the automated verification of protocols based on some of these schemes is
possible in process calculi for abstract protocol specifications, which provide convenient mechanisms to symbolically abstract these schemes (e.g., flexible equational theories), this is not the case for standard programming languages, where one needs to
encode these abstractions using the primitives provided by the language. These primitives were, however, not designed for abstractly representing cryptographic primitives,
which makes providing encodings that are suitable for automatic analysis and capture
all potential usages of cryptographic schemes a challenging task. The second, somewhat
similar, problem is that some interesting security properties are obtained by specific
cryptographic patterns that are difficult to encode in type systems for programming languages. For instance, authenticity and integrity properties can be achieved by showing
the knowledge of secret data, as in the Needham-Schroeder-Lowe public-key protocol [37] that relies on the exchange of secret nonces to authenticate the participants
or as in most authentication protocols based on zero-knowledge proofs (e.g., Direct
Anonymous Attestation [18] and Civitas [22]).
**1.1** **Contributions**
This paper presents a new type system for the verification of the source code of protocol implementations. The underlying type theory combines refinement types [12] with
union, intersection, and polymorphic types. Additionally, we introduce a novel relation
for statically reasoning about the disjointness of types. This expressive type system extends the scope of existing type-based analyses of protocol implementations [12, 14]
to important protocol classes that were not covered so far. In particular, our types statically characterize: (i) more usages of asymmetric cryptography, such as signatures
of private data and encryptions of authenticated data; (ii) authenticity and integrity
properties achieved by showing knowledge of secret data; (iii) applications based on
zero-knowledge proofs.
Protocols are implemented in RCF∀∧∨ [12], a concurrent lambda-calculus, and
cryptographic primitives are considered fully reliable building blocks and represented
symbolically using a sealing mechanism [12, 35, 43]. In addition to hashes, symmetric cryptography, public-key encryption, and digital signatures, our approach supports
zero-knowledge proofs. Since the realization of zero-knowledge proofs changes according to the statement to be proven, we provide a tool that, given a statement, automatically generates a symbolic implementation of the corresponding zero-knowledge
primitive.
Our type-based analysis is automated, modular, efficient, and provides security
proofs for an unbounded number of sessions. We have implemented a type-checker that
performed very well in our experiments: it type-checks all our symbolic libraries and
samples totaling more than 1500 LOC in around 12 seconds on a normal laptop. The
Union and Intersection Types for Secure Protocol Implementations 3
type-checker features a user-friendly graphical interface for examining typing derivations. The tool-chain we have developed additionally contains an automatic code generator for zero-knowledge proofs, an interpreter, and a visual debugger.
We have formalized the type system, and all the important parts of the soundness
proof in the Coq proof assistant. The formalization and the implementation are available
online [7].
**1.2** **Related Work**
Our type system extends the refinement type system by Bengtson et al. [12] with union,
intersection, and polymorphic types. We also encode a novel type Private, which is used
to characterize data that are not known to the attacker. A crucial property is that the set
of values of type Private is disjoint from the set of values of type Un, which is the
type of the messages known to the attacker. This property allows us to prune typing
derivations following equality tests between values of type Private and values of type
Un. This technique was first proposed by Abadi and Blanchet in their seminal work
on secrecy types for asymmetric cryptography [1], but later disappeared in the more
advanced type systems for authorization policies. Our extension is necessary to deal
with protocols based on zero-knowledge proofs and to verify integrity and authenticity
properties obtained by showing knowledge of secret data (e.g., the Needham-Schroeder-Lowe public-key protocol). In addition, our extension removes the restrictions that the
type system proposed in [12] poses on the usage of standard cryptographic primitives.
For instance, if a key is used to sign a secret message, then the corresponding verification key cannot be made public. These limitations were preventing the analysis of
many interesting cryptographic applications, such as the Direct Anonymous Attestation
protocol [18], which involves digital signatures on secret TPM identifiers.
In recent parallel work, Bhargavan et al. [14] have developed an additional crypto
graphic library for a simplified version of the type system proposed in [12]. This library
does not rely on sealing but on datatype constructors and inductive logical invariants
that allow for reasoning about symmetric and asymmetric cryptography, hybrid encryption, and different forms of nested cryptography. The aforementioned logical invariants
are, however, fairly complex and have to be proven manually. Moreover, these logical invariants are global, which means that adding new cryptographic primitives could
require reproving the previously established invariants. Therefore, extending a symbolic
cryptographic library in the style of [14] to new primitives requires expertise and a
considerable human effort. In contrast, extending our sealing-based library does not involve any additional proof: one just has to find a well-typed encoding of the desired
cryptographic primitive, which is relatively easy[1].
The main simplification Bhargavan et al. [14] propose over [12] is the removal of
the kinding relation, which classifies types as public or tainted, and allows values of
public types to also be given any tainted type by subsumption. While this simplification removes the last security-specific part of the type system, therefore making it more
1 A master’s student encoded the sophisticated cryptographic schemes used in the Civitas [22]
electronic voting protocol (i.e., distributed decryption, plaintext equivalence tests, homomorphic encryptions, mix nets, and a variety of zero-knowledge proofs) in about three weeks [29].
standard, this change also requires attackers to be well-typed with respect to a carefully
constructed attacker interface. In contrast, by retaining the kinding relation from [12]
we also retain the property that all attackers are well-typed with respect to our type system (this property is usually called opponent typability). Despite these disadvantages,
Bhargavan et al. [14] manage to solve some of the problems we address in this paper,
without relying on union and intersection types, but instead using the logical connectives inside the refinement types. It would be interesting future work to try to combine
the advantages of both approaches in a unified framework.
Backes et al. [11] have recently established a semantic correspondence for asymmetric cryptography between a library based on sealing and one based on constructors,
showing that both libraries enjoy computational soundness guarantees.
Backes et al. [8] proposed a type system for statically analyzing security protocols
based on zero-knowledge proofs in the setting of the Spi calculus. Zero-knowledge
proofs are modeled using constructors and destructors. In an extension of this type system [6], union and intersection types are used to infer precise type information about the
secret witnesses of zero-knowledge proofs. This is captured in a separate relation called
statement verification, which is fairly complex and tailored to zero-knowledge proofs.
In contrast, in our paper we encode zero-knowledge proofs symbolically using standard
programming language primitives, and we type-check them using general typing rules.
Goubault-Larrecq and Parrennes developed a static analysis technique [32] based on
pointer analysis and clause resolution for cryptographic protocols implemented in C.
The analysis is limited to secrecy properties, it deals only with standard cryptographic
primitives, and it does not offer scalability since the number of generated clauses is very
high even on small protocol examples.
Chaki and Datta have proposed a technique [21] based on software model checking
for the automated verification of protocols implemented in C. The analysis provides
security guarantees for a bounded number of sessions and is effective in discovering
attacks. It was used to check secrecy and authentication properties of the SSL handshake
protocol for configurations of up to three servers and three clients. The analysis only
deals with standard cryptographic primitives, and offers only limited scalability.
Bhargavan et al. proposed a technique [15] for the verification of F# protocol implementations by automatically extracting ProVerif models [16]. The technique was
successfully used to verify implementations of real-world cryptographic protocols such
as TLS [13]. The analysis, however, is not compositional and is significantly less scalable than type-checking [14]. Furthermore, the considered fragment of F# is restrictive:
it does not include higher-order functions, and it allows only for a very limited usage of
recursion and state.
The more technical discussion about the related work on union and intersection types
is postponed to §8.
**1.3** **Outline**
The remainder of the paper is structured as follows. §2 gives an intuitive overview
of our type system and exemplifies the most important concepts on a simple authentication protocol. §3 introduces the syntax of RCF∀∧∨, the language supported by our
type-checker. §4 presents the type system. §5 and §6 show how our type system can
be used to obtain an expressive characterization of asymmetric cryptography and zeroknowledge proofs, respectively. §7 describes our implementation and experiments. §8
discusses some related work on union and intersection types. §9 concludes and gives
some interesting research directions. We refer to the long version for the results of our
Coq formalization, a more technical presentation of our encoding for zero-knowledge
proofs, and other details [7].
## 2 Our Type System at Work
Before giving the details of the calculus and the type system, we illustrate the main
concepts of our static analysis technique on the Needham-Schroeder-Lowe public-key
protocol [37] (NSL), which could not be analyzed with previous refinement type systems for protocol implementations [12, 14]. For convenience, throughout this section
we use some syntactic sugar that is supported by our type-checker and can be obtained
from the core calculus presented in §3 by standard encodings [12].
**2.1** **Protocol Description and Security Annotations**
The Needham-Schroeder-Lowe protocol is depicted below:
    A                                          B
                     {B, nB}kA+
        <--------------------------------------
    assume authr(A, B, nB, nA)
                     {A, nB, nA}kB+
        -------------------------------------->
                                               assert authr(A, B, nB, nA)
                                               assume authi(B, A, nB, nA)
                     {nA}kA+
        <--------------------------------------
    assert authi(B, A, nB, nA)
The goal of this protocol is to allow A and B to authenticate with each other and to
exchange two fresh nonces, which are meant to be private and be later used to construct
a session key. B creates a fresh nonce nB and encrypts it together with his own identifier
with A’s public key. A decrypts the ciphertext with her private key. At this point of
the protocol, A does not know whether the ciphertext comes from B or from the
opponent as the encryption key used to create the ciphertext is public. A continues the
protocol by creating a fresh nonce nA and encrypting this nonce together with nB and
her own identifier with B’s public key. B decrypts the ciphertext and, although the
encryption key used to create the ciphertext is public, if the nonce he received matches
the one he has sent to A then B does indeed know that the ciphertext comes from A,
since the nonce nB is private and only A has access to it. Finally, B encrypts the nonce
_nA received from A with A’s public key, and sends it back to A. After decrypting the_
ciphertext and checking the nonce, A knows that the ciphertext comes from B as the
nonce nA is private and only B has access to it.
Following [12], we decorate the code with assumptions and assertions. Intuitively,
assumptions introduce new hypotheses, while assertions declare formulas that should
logically follow from the previously introduced hypotheses. A program is safe if in
all program runs the assertions are entailed by the assumptions. The assumptions and
assertions of the NSL protocol capture the standard mutual authentication property.
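The checks that drive this property can be exercised in a small executable sketch. The following is illustrative Python rather than RCF∀∧∨: encryption is elided, and all function names are our own, so only the identity and nonce comparisons that establish authentication are modelled.

```python
# Executable sketch of one honest NSL run (illustrative, no real crypto).
import itertools

_fresh = itertools.count(1)

def mk_nonce():
    # stands in for mkPriv: a fresh value the attacker cannot guess
    return next(_fresh)

def run(a, b):
    nb = mk_nonce()                     # B's nonce
    na = mk_nonce()                     # A's nonce

    m1 = ("Msg1", b, nb)                # B -> A : {B, nB}kA+

    # A checks the claimed responder identity before answering
    tag, yb, xnb = m1
    assert tag == "Msg1" and yb == b
    m2 = ("Msg2", a, xnb, na)           # A -> B : {A, nB, nA}kB+

    # B accepts only if his own nonce comes back
    tag, ya, ynb, yna = m2
    assert tag == "Msg2" and ya == a and ynb == nb
    m3 = ("Msg3", yna)                  # B -> A : {nA}kA+

    # A accepts only if her nonce comes back
    tag, zna = m3
    assert tag == "Msg3" and zna == na
    return True

assert run("Alice", "Bob")
```

A real analysis must of course consider an active attacker interleaving sessions; the sketch only replays one honest run to make the nonce checks concrete.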
**2.2** **Types for Cryptography**
Before illustrating how we can type-check this protocol, let us introduce the typed interface of our library for public-key cryptography. Intuitively, since encryption keys are
public, they can be used by honest principals to encrypt data as specified by the protocol, or by the attacker to encrypt arbitrary data. This intuitive reasoning is captured by
the following typed interface:
encrypt : ∀α. PubKey⟨α⟩ → α ∨ Un → Un
decrypt : ∀α. Un → PrivKey⟨α⟩ → α ∨ Un
Like many of the functions in our cryptographic library, the encrypt and decrypt functions are polymorphic. Their code is type-checked only once and given a universal
type. The type variable α stands in this case for the type of the payload that is encrypted, and can be instantiated with an arbitrary type when the functions are used.
Type Un describes those values that may be known to the opponent, i.e., data that
may come from or be sent to the opponent. The type PubKey⟨α⟩ describes public keys.
Since the opponent has access to the public key and to the encryption function, the
type system has to take into account that the library may be used by honest principals
to encrypt data of type α or by the opponent to encrypt data of type Un. The encrypt
function takes as input a public key of type PubKey⟨α⟩, a message of type α ∨ Un, and
returns a ciphertext of type Un. The decrypt function takes as input a ciphertext of type
Un, a private key of type PrivKey⟨α⟩, and returns a payload of type α ∨ Un. Without
union types, the type of the payload is constrained to be Un or a supertype thereof [12],
which severely limits the expressiveness of the type system and prevents the analysis of
a number of protocols, including this very simple example.
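The sealing mechanism that underlies this interface can be sketched as follows. This is a hypothetical Python rendering, not the paper's library: a key pair is modelled as a shared table, the public key is the capability to add entries (encrypt), and only the private-key holder can look entries up (decrypt).

```python
# Hypothetical sketch of a sealing-based symbolic encoding of
# public-key encryption.
import itertools

class KeyPair:
    def __init__(self):
        self._table = {}
        self._fresh = itertools.count()

    def public_key(self):
        # exporting encrypt (but not decrypt) models a public key
        return self.encrypt

    def encrypt(self, payload):
        handle = next(self._fresh)      # the opaque "ciphertext"
        self._table[handle] = payload
        return handle

    def decrypt(self, ciphertext):
        return self._table[ciphertext]

kp = KeyPair()
enc = kp.public_key()                   # anyone may hold this
c = enc("nA")                           # honest or attacker encryption
assert kp.decrypt(c) == "nA"
```

The ciphertext handle carries no information about the payload, and since both honest principals and the attacker may call encrypt, decryption must yield type α ∨ Un: the table may hold protocol payloads of type α as well as attacker-chosen data of type Un.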
**2.3** **Type-Checking the NSL Protocol**
We first introduce the type definitions[2] for the content of the three ciphertexts:
msg1 = (Un ∗ Private)
msg2[xB] = (xA : Un ∗ xnB : Private ∨ Un ∗ {xnA : Private | authr(xA, xB, xnB, xnA)})
msg3 = {xnA : Private | ∃xA, xB, xnB. authr(xA, xB, xnB, xnA) ∧ authi(xB, xA, xnB, xnA)}
The first ciphertext contains a pair composed of a public identifier of type Un and a
nonce of type Private. Type Private describes values that are not known to the attacker:
the set of values of type Un is disjoint from the set of values of type Private. Type
msg2[xB] is a combination of two dependent pair types and one refinement type. This
type describes a triple composed of an identifier xA of type Un, a first nonce xnB
of type Private ∨ Un, and a second nonce xnA of type Private such that the predicate authr(xA, xB, xnB, xnA) is entailed by the assumptions in the system (A assumes
2 Type definitions are syntactic sugar, and are inlined by the type-checker.
**Table 1. NSL Initiator Code and Responder Code**
init = λxB : Un. λxA : Un.
  λkB : PrivKey⟨payload[xB]⟩. λpkA : PubKey⟨payload[xA]⟩. λch : Ch(Un).
  let nB = mkPriv() in
  let p1 = Msg1 (xB, nB) in
  let m1 = encrypt⟨payload[xA]⟩ pkA p1 in
  send⟨Un⟩ ch m1;
  let z = recv⟨Un⟩ ch in
  let x = decrypt⟨payload[xB]⟩ kB z in
  case x1 = x : payload[xB] ∨ Un in
  match x1 with Msg2 x2 ⇒
    let (yA, ynB, ynA) = x2 in
    if yA = xA then
    if ynB = nB then
      assert authr(xA, xB, ynB, ynA);
      assume authi(xB, xA, ynB, ynA);
      let p3 = Msg3 ynA in
      let m3 = encrypt⟨payload[xA]⟩ pkA p3 in
      send⟨Un⟩ ch m3

resp = λxA : Un. λxB : Un.
  λpkB : PubKey⟨payload[xB]⟩. λkA : PrivKey⟨payload[xA]⟩. λch : Ch(Un).
  let m1 = recv⟨Un⟩ ch in
  let x1 = decrypt⟨payload[xA]⟩ kA m1 in
  case y1 = x1 : payload[xA] ∨ Un in
  match y1 with Msg1 z1 ⇒
    let (yB, xnB) = z1 in
    if yB = xB then
      let nA = mkPriv() in
      assume authr(xA, xB, xnB, nA);
      let p2 = Msg2 (xA, xnB, nA) in
      let m2 = encrypt⟨payload[xB]⟩ pkB p2 in
      send⟨Un⟩ ch m2;
      let m3 = recv⟨Un⟩ ch in
      let x3 = decrypt⟨payload[xA]⟩ kA m3 in
      case y3 = x3 : payload[xA] ∨ Un in
      match y3 with Msg3 ynA ⇒
        if ynA = nA then
          assert authi(xB, xA, xnB, nA)
authr(A, B, nB, nA) before creating the second ciphertext). The free occurrence of xB
is bound in the type definition. Notice that xnB is given type Private ∨ Un since A does
not know whether the nonce received in the first ciphertext comes from B or from the
opponent. Type msg3 is a refinement type describing a nonce xnA of type Private such
that the formula ∃xA, xB, xnB. authr(xA, xB, xnB, xnA) ∧ authi(xB, xA, xnB, xnA)
is entailed by the assumptions in the system. Indeed, before creating the third ciphertext, B has asserted authr(A, B, nB, nA) and assumed authi(B, A, nB, nA). Since the
payload of the third message only contains xnA we existentially quantify the other
variables. The overall type of the payload is obtained by combining the three previous
types:
payload[x] = Msg1 of msg1 | Msg2 of msg2[x] | Msg3 of msg3
The type of A’s public key is defined as PubKey⟨payload[A]⟩ and the type of B’s public
key is defined as PubKey⟨payload[B]⟩.
The code of the initiator (B in our diagram) and the code of the responder (A) abstract over the principal’s identity, and they are type-checked independently of each other.
Since library functions such as encrypt, decrypt, send and so on are polymorphic,
they are instantiated with concrete types in the code (e.g., the encryptions in the initiator’s code are instantiated with type payload[xA] since they take as argument xA’s
public key). The initiator creates a fresh private nonce by means of the function mkPriv.
The nonce is encrypted together with B’s identifier and sent on the network. The message x obtained by decrypting the second ciphertext is given type payload[xB] ∨ Un,
which reflects the fact that B does not know whether the first ciphertext comes from
A or from the attacker. Since we cannot statically predict which of the two types is the
right one, we have to type-check the continuation code twice, once under the assumption that x has type payload[xB] and once assuming that x has type Un. This is realized
by the expression case x1 = x : payload[xB] ∨ Un in . . ..
If x has type payload[xB], then its components are given strong types: yA is
given type Un, ynB is given type Private ∨ Un, and ynA is given the refinement
type {ynA : Private | authr(xA, xB, ynB, ynA)}. This refinement type ensures that
authr(xA, xB, ynB, ynA) will be entailed at run-time by the assumptions in the system
and thus justifies the assertion assert authr(xA, xB, ynB, ynA). Finally, the assumption
assume authi(xB, xA, ynB, ynA) allows us to give ynA type msg3 and thus to type-
check the final encryption.
If x has type Un then yA, ynB, and ynA are also given type Un. The following
equality check between the value ynB of type Un and the nonce nB of type Private
makes type-checking the remaining code superfluous: since the set of values of type
Un is disjoint from the set of values of type Private, it cannot be that the equality test
succeeds. So type-checking the initiator’s code succeeds.
Type-checking the responder’s code is similar. The code contains two case expressions to deal with the union types introduced by the two decryptions. In particular, the
code after the second decryption has to be type-checked under the assumption that the
variable ynA has type msg3 and under the assumption that ynA has type Un.
In the former case, the assertion assert authi(xB, xA, xnB, nA) is justified by the
previously assumed formula authr(xA, xB, xnB, nA), the formula in the above refinement type, and the following global assumption, stating that there cannot be two different assumptions authr(xA, xB, xnB, xnA) and authr(x′A, x′B, xnB, x′nA) with the same
nonce xnB.

assume ∀xA, xB, x′A, x′B, xnA, x′nA, xnB.
  authr(xA, xB, xnB, xnA) ∧ authr(x′A, x′B, xnB, x′nA) ⇒ xA = x′A ∧ xB = x′B ∧ xnA = x′nA
This assumption is justified by the fact that the predicate authr is assumed only in the
responder’s code, immediately after the creation of a fresh nonce xnB .
If ynA is given type Un then type-checking the following code succeeds because the
equality check between ynA and the value nA of type Private cannot succeed.
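The freshness intuition behind this pruning can be sketched concretely. The following illustrative Python (not the formal model; all names are ours) models a Private value as a generative token the attacker has never seen, so any equality test against attacker-supplied data fails and the guarded branch is dead code.

```python
# Sketch of the freshness property behind type Private.
import itertools

_fresh = itertools.count()

class Priv:
    """An opaque, generative token standing in for a Private value."""
    def __init__(self):
        self._id = next(_fresh)
    def __eq__(self, other):
        return isinstance(other, Priv) and self._id == other._id
    def __hash__(self):
        return hash(self._id)

nA = Priv()
# anything the attacker can send: public data, or private values it
# generated itself, but never nA
attacker_values = ["guess", 42, Priv()]
assert all(v != nA for v in attacker_values)
```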
The functions init and resp take private keys as input, so they are not available to the
attacker. We provide two public functions that capture the capabilities of the attacker.
**Attacker’s Interface for NSL**
createPrincipal = λx : Un.
let k = mkPrivKey⟨payload[x]⟩ () in addToDB x k; getPubKey⟨payload[x]⟩ k
startNSL = λ(role : Un)(xA : Un)(xB : Un)(c : Un).
let kA = getFromDB xA in let pkA = getPubKey⟨payload[xA]⟩ kA in
let kB = getFromDB xB in let pkB = getPubKey⟨payload[xB]⟩ kB in
match role with inl _ ⇒ (init xA xB kA pkB c)
| inr _ ⇒ (resp xB xA pkA kB c)
We allow the attacker to create arbitrarily many new principals using the
createPrincipal function. This generates a new encryption key-pair, stores it in a private
database, and then returns the corresponding public key to the attacker. The second function, startNSL, allows the attacker to start an arbitrary number of sessions of the protocol,
between principals of his choice. When calling startNSL, the attacker chooses whether
he wants to start an initiator or a responder, the principals to be involved in the session,
and the channel on which the communication occurs. One principal can be involved in
many sessions simultaneously, in which it may play different roles.
The two functions above express the capabilities of the attacker for verification pur
poses, and would not be exposed in a production setting. However, they can also be
useful for testing and debugging the code of the protocol: for instance we can execute a
protocol run using the following code.
**Test Setup for NSL**
createPrincipal “Alice”; createPrincipal “Bob”;
let c = mkChan⟨Un⟩ () in
(startNSL (inl ()) “Alice” “Bob” c) ↱ (startNSL (inr ()) “Alice” “Bob” c)
Since the code of the NSL protocol is well-typed, the soundness result of the type
system ensures that in all program runs the assertions are entailed by the assumptions,
i.e., the code is safe when executed by an arbitrary attacker. In addition, the two nonces
are given type Private and thus they are not revealed to the opponent.
## 3 The RCF∀∧∨ Calculus
The Refined Concurrent FPC (RCF) [12] is a simple programming language extending
the Fixed Point Calculus with refinement types and concurrency [4]. This core calculus
is expressive enough to encode a considerable fragment of an ML-like programming
language [12]. In this paper, we further increase the expressivity of the calculus by
adding intersection types [40], union types [39], and parametric polymorphism. We call
the extended calculus RCF∀∧∨ and describe it in this and the following section.
We start by presenting the surface syntax of RCF∀∧∨, which is a subset of the syntax
supported by our type-checker. In the surface syntax of RCF∀∧∨ variables are named,
which makes programs human-readable. The surface syntax also contains explicit typing annotations that guide type-checking. It is given semantics by translation (i.e., type
erasure) into a core implicitly-typed calculus, which we have formalized in Coq [7].
The syntax comprises the four mutually-inductively-defined sets of values, types, expressions, and formulas. We mark with star (*) the constructs that are completely new
with respect to RCF [12].
**Surface syntax of RCF∀∧∨ values**
_x, y, z_ variable
_h ::= inl | inr_ constructor for sum types
_M, N ::=_ value
_x_ variable
() unit
_λx : T. A_ function (scope of x is A)
(M, N ) pair
_h M_ value of sum type
foldμα. T M recursive value
_Λα. A_ type abstraction* (scope of α is A)
for ᾱ in T̄ ; Ū . M value of intersection type* (scope of ᾱ = α1, .., αn is M )
The set of values is composed of variables, the unit value, functions, pairs, and in
troduction forms for disjoint union, recursive, polymorphic, and intersection types.
**Surface syntax of RCF∀∧∨ types**
_α, β_ type variable
_T, U, V ::=_ type
unit unit type
_x : T →_ _U_ dependent function type (x bound in U )
_x : T ∗_ _U_ dependent pair type (x bound in U )
_T + U_ disjoint sum type
_μα. T_ iso-recursive type (α bound in T )
_α_ type variable
_{x : T | C}_ refinement type (x bound in C)
_T ∧_ _U_ intersection type*
_T ∨_ _U_ union type*
_⊤_ top type*
_∀α. T_ polymorphic type* (α bound in T )
The unit value () is given type unit. Functions λx : T. A taking as input values of
type T and returning values of type U are given the dependent type x : T → U, where
the result type U can depend on the input value x. Pairs are given dependent types of the
form x : T ∗ U, where the type U of the second component of the pair can depend on the
value x of the first component. If U does not depend on x, then we use the abbreviations
T → U and T ∗ U. The sum type T + U describes values inl(M) where M is of type
T and values inr(N) where N is of type U. The iso-recursive type μα. T is the type of
all values foldμα. T M where M is of type T {μα. T/α}. We use refinement types [12]
to associate logical formulas to messages. The refinement type {x : T | C} describes
values M of type T for which the formula C{M/x} is entailed by the current typing
environment. A value is given the intersection type T ∧ U if it has both type T and
type U. A value is given a union type T ∨ U if it has type T or if it has type U, but
we do not necessarily know what its precise type is. The top type ⊤ is a supertype of all
the other types, and contains all well-typed values. The universal type ∀α. T describes
polymorphic values Λα. A such that A{U/α} is of type T {U/α} for all types U.
**Surface syntax of RCF∀∧∨ expressions**
_a, b_ name
_A, B ::=_ expression
_M_ value
_M N_ function application
_M_ _⟨T ⟩_ type instantiation*
let x = A in B let (scope of x is B)
let (x, y) = M in A pair split (scope of x, y is A)
match M with inl x ⇒ _A | inr y ⇒_ _B_ pattern matching (scope of x is A, of y is B)
unfoldμα. T M use recursive value
case x = M : T ∨ _U in A_ elimination of union types* (scope of x is A)
if M = N as x then A else B equality check with type cast* (scope of x is A)
(νa ↕ _T )A_ restriction (scope of a is A)
_A ↱_ _B_ fork off parallel expression
_a!M_ send M on channel a
_a?_ receive on channel a
assume C add formula C to global log
assert C formula C must hold
The syntax of expressions is mostly standard [12, 39]. A type instantiation M⟨T⟩
specializes a polymorphic value M with the concrete type T. The elimination form for
union types case x = M : T ∨ U in A substitutes the value M in A. The conditional
if M = N as x then A else B checks if M is syntactically equal to N; if this is the
case, it substitutes x with the common value. Syntactic equality is defined up to alpha-renaming of binders and the erasure of typing annotations and constructs such as for.
During type-checking the variable x is given the intersection of the types of M and
N. When the variable x is not necessary we omit the as clause, as we did in §2. The
restriction (νa ↕ T )A generates a globally fresh channel a that can only be used in A
to convey values of type T. The expression A ↱ B evaluates A and B in parallel, and
returns the result of B (the result of A is discarded). The expression a!M outputs M on
channel a and returns the unit value (). Expression a? blocks until some message M is
available on channel a, removes M from the channel, and then returns M. Expression
assume C adds the logical formula C to a global log. The assertion assert C returns ()
when triggered. If at this point C is entailed by the multiset S of formulas in the global
log, written S |= C, we say the assertion succeeds; otherwise, we say the assertion
_fails._
Intuitively, an expression A is safe if, once it is translated into Formal-RCF∀∧∨, all
assertions succeed in all evaluations. When reasoning about implementations of cryptographic protocols, we are interested in the safety of programs executed in parallel with
an arbitrary attacker. This property is called robust safety and is statically enforced by
our type system from §4.
We consider a variant of first-order logic with equality as the authorization logic.
We assume that RCF∀∧∨ values are the terms of this logic, and equality M = N is
interpreted as syntactic equality between values.
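The assume/assert semantics just described can be sketched in a few lines. This is illustrative Python, not the calculus: assume appends a formula to a global log, and an assertion succeeds iff the formula is entailed by the log, where entailment is approximated here by membership (enough for ground formulas).

```python
# Tiny sketch of the assume/assert semantics on a global formula log.
log = []

def assume(formula):
    log.append(formula)

def entailed(formula):
    return formula in log               # stand-in for S |= C

def assert_(formula):
    if not entailed(formula):
        raise AssertionError("assertion failed: " + formula)

assume("authr(A, B, nB, nA)")
assert_("authr(A, B, nB, nA)")          # succeeds: entailed by the log
```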
## 4 Type System
This section presents our type system for enforcing authorization policies on RCF∀∧∨
code. This extends the type system proposed by Bengtson et al. [12] with union, intersection, and polymorphic types. Additionally, we encode a new type Private, which is
used to characterize data that are not known to the attacker, and introduce a novel relation for statically reasoning about the disjointness of types. In the following we explain
the typing judgements and present the most important typing rules.
**4.1** **Typing Environment and Entailment**
A typing environment E is a list of bindings for variables (x : T ), type variables (α or
α :: k), names (a ↕ T, where the name a stands for a channel conveying values of type
T ), and formulas (bindings of the form {C}). An environment is well-formed (E ⊢ ⋄)
if all variables, names, and type variables are defined before use, and no duplicate
definitions exist. A type T is well-formed in environment E (written E ⊢ T ) if all its
free variables, names, and type variables are defined in E.
A crucial judgment in the type system is E _C, which states that the formula C_
_⊢_
is derivable from the formulas in E. Intuitively, our type system ensures that whenever
_E_ _C we have that C is logically entailed by the global formula log at execution time._
_⊢_
This judgment is used for instance when type-checking assert C using (Exp Assert):
type-checking succeeds only if C is entailed in the current typing environment.
**4.2** **Subtyping and Kinding**
Intuitively, all data sent to and received from an untrusted channel have type Un, since such channels are considered under the complete control of the adversary. However, a system in which only data of type Un can be communicated over the untrusted network would be too restrictive, e.g., a value of type {x : Un | Ok(x)} could not be sent over the network. We therefore consider a subtyping relation on types, which allows a term of a subtype to be used in all contexts that require a term of a supertype. This preorder is most often used to compare types with type Un. In particular, we allow values having type T that is a subtype of Un, denoted T <: Un, to be sent over the untrusted network, and we say that the type T has kind public in this case. Similarly, we allow values of type Un that are received from the untrusted network to be used as values of type U, provided that Un <: U, and in this case we say that type U has kind tainted. We outline some important rules for kinding and subtyping (let k range over pub and tnt).
**Kinding and subtyping for refinement types**

(Kind Refine Pub)
  E ⊢ {x : T | C}    E ⊢ T :: pub
  ───────────────────────────────
  E ⊢ {x : T | C} :: pub

(Kind Refine Tnt)
  E ⊢ T :: tnt    E, x : T ⊢ C
  ────────────────────────────
  E ⊢ {x : T | C} :: tnt

(Sub Refine Left)
  E ⊢ {x : T | C}    E ⊢ T <: T′
  ──────────────────────────────
  E ⊢ {x : T | C} <: T′

(Sub Refine Right)
  E ⊢ T <: T′    E, x : T ⊢ C
  ───────────────────────────
  E ⊢ T <: {x : T′ | C}
The refinement type {x : T | C} is a subtype of T. This allows us to discard logical formulas when they are not needed. For instance, a value of type {x : Un | Ok(x)} can be sent on a channel of type Un. Conversely, the type T is a subtype of {x : T | C} only if ∀x. C is entailed in the current typing environment, so by subtyping we can only add universally valid formulas.
**Kinding for pair and function types**

(Kind Pair)
  E ⊢ T :: k    E, x : T ⊢ U :: k
  ───────────────────────────────
  E ⊢ (x : T ∗ U) :: k

(Kind Fun)
  E ⊢ T :: k̄    E, x : T ⊢ U :: k
  ───────────────────────────────
  E ⊢ (x : T → U) :: k

(where k̄ denotes the kind opposite to k, i.e., the argument of a public function must be tainted and vice versa)
Union and Intersection Types for Secure Protocol Implementations
A pair type T ∗ U is public (or tainted) only if both T and U are public (respectively tainted). On the other hand, a function type T → U is public only if the return type U is public (otherwise λx:unit. Msecret would be public) and the argument type T is tainted (otherwise λk : PrivKey⟨Private⟩. let x = encrypt⟨Private⟩ k Msecret in apub!x would be public).
**Kinding and subtyping for union and intersection types (*)**

(Kind And Pub 1)
  E ⊢ T1 :: pub    E ⊢ T2
  ───────────────────────
  E ⊢ T1 ∧ T2 :: pub

(Kind And Pub 2)
  E ⊢ T1    E ⊢ T2 :: pub
  ───────────────────────
  E ⊢ T1 ∧ T2 :: pub

(Kind And Tnt)
  E ⊢ T1 :: tnt    E ⊢ T2 :: tnt
  ──────────────────────────────
  E ⊢ T1 ∧ T2 :: tnt

(Kind Or Pub)
  E ⊢ T1 :: pub    E ⊢ T2 :: pub
  ──────────────────────────────
  E ⊢ T1 ∨ T2 :: pub

(Kind Or Tnt 1)
  E ⊢ T1 :: tnt    E ⊢ T2
  ───────────────────────
  E ⊢ T1 ∨ T2 :: tnt

(Kind Or Tnt 2)
  E ⊢ T1    E ⊢ T2 :: tnt
  ───────────────────────
  E ⊢ T1 ∨ T2 :: tnt

(Sub And LB 1)
  E ⊢ T1 <: U    E ⊢ T2
  ─────────────────────
  E ⊢ T1 ∧ T2 <: U

(Sub And LB 2)
  E ⊢ T1    E ⊢ T2 <: U
  ─────────────────────
  E ⊢ T1 ∧ T2 <: U

(Sub And Greatest)
  E ⊢ T′ <: T1    E ⊢ T′ <: T2
  ────────────────────────────
  E ⊢ T′ <: T1 ∧ T2

(Sub Or Least)
  E ⊢ T1 <: U    E ⊢ T2 <: U
  ──────────────────────────
  E ⊢ T1 ∨ T2 <: U

(Sub Or UB 1)
  E ⊢ T <: U1    E ⊢ U2
  ─────────────────────
  E ⊢ T <: U1 ∨ U2

(Sub Or UB 2)
  E ⊢ U1    E ⊢ T <: U2
  ─────────────────────
  E ⊢ T <: U1 ∨ U2
The intersection type T1 ∧ T2 can intuitively be seen as a greatest lower bound³ of the types T1 and T2. Rules (Sub And LB 1) and (Sub And LB 2) ensure that T1 ∧ T2 is a lower bound: by using reflexivity in the premise we obtain that T1 ∧ T2 <: T1 and T1 ∧ T2 <: T2. Rule (Sub And Greatest) ensures that T1 ∧ T2 is greater than any other lower bound: if T′ is another lower bound of T1 and T2 then T′ is a subtype of T1 ∧ T2. As far as kinding is concerned, the type T1 ∧ T2 is public if T1 is public or T2 is public, and it is tainted if both T1 and T2 are tainted.

The union type T1 ∨ T2 intuitively corresponds to a least upper bound of T1 and T2. The rules for union types are exactly the dual of the ones for intersection types.
Our type system has no distributivity rules between union and intersection types and the primitive type constructors. Some distributivity rules are derivable from the primitive rules above: for instance we can prove that T → (U1 ∧ U2) is a subtype of (T → U1) ∧ (T → U2), but not the other way around. In fact, adding a subtyping rule in the other direction would be unsound [24], since in our system functions can have side-effects and such distributivity rules would allow circumventing the value restriction on the introduction of intersection types (see §4.4 and §8).
**Kinding and subtyping rules for universal types**

(Kind Univ*)
  E, α ⊢ T :: k
  ────────────────
  E ⊢ ∀α. T :: k

(Sub Univ*)
  E, α ⊢ T <: U
  ──────────────────────
  E ⊢ ∀α. T <: ∀α. U
³ The subtyping relation of RCF is not anti-symmetric, so least and greatest elements are not necessarily unique.
Finally, the rule for subtyping polymorphic types (Sub Univ*) is simple: the type ∀α. T is a subtype of ∀α. U if T is a subtype of U. Similarly, ∀α. T has kind k if T has kind k in an environment extended with a binding for α. Note that α can be substituted by any type, so we cannot assume anything about α when checking that T :: k and T <: U, respectively.
**Kinding and subtyping rules for recursive types**

(Sub Refl*)
  E ⊢ T
  ───────────
  E ⊢ T <: T

(Kind Rec)
  E, α :: k ⊢ T :: k
  ──────────────────
  E ⊢ (μα. T) :: k

(Sub Pos Rec*)
  E, α ⊢ T <: U    α only occurs positively in T and U
  ────────────────────────────────────────────────────
  E ⊢ μα. T <: μα. U
The rule (Sub Pos Rec*) for subtyping recursive types is new, and differs significantly from Cardelli's Amber rule [5, 20], which is used by the original RCF:

**Cardelli's Amber rule (used by the original RCF)**

(Sub Rec)
  E, α <: α′ ⊢ T <: T′    α ≠ α′    α ∉ ftv(T′)    α′ ∉ ftv(T)
  ────────────────────────────────────────────────────────────
  E ⊢ μα. T <: μα′. T′
The soundness of the Amber rule (Sub Rec) is hard to prove syntactically [12]; in particular, proving the transitivity of subtyping in the presence of the Amber rule requires a complicated inductive argument, which only works for "executable" environments (see [12]), as well as spurious restrictions on the usage of type variables in the rules (Sub Refl*), (Kind And Pub 1), (Kind And Pub 2), (Kind Or Tnt 1), (Kind Or Tnt 2), (Sub And LB 1), (Sub And LB 2), (Sub Or UB 1), (Sub Or UB 2). We use the simpler (Sub Pos Rec*) rule, which is much easier to prove sound and requires no restrictions on the other rules. It resembles (Sub Univ*), our rule for subtyping universal types, with the additional restriction that the recursive variable is not allowed to appear in a contravariant position (such as α → T). While this positivity restriction is crucial for the soundness of the (Sub Pos Rec*) rule, it does not pose any problem in practice, where most of the time only positive recursive types [38, 44] are used. Moreover, this positivity restriction only affects subtyping, so programs involving negative occurrences of recursion variables that do not involve subtyping can still be properly type-checked (e.g., we can still type-check the encodings of fixpoint combinators on expressions [12]).
**4.3** **Encoding Types Un and Private in RCF**
In RCF [12] the type Un is in fact not primitive. By the (Sub Pub Tnt) rule that relates
kinding and subtyping, any type that is both public and tainted is equivalent to Un. Since
type unit is both public and tainted, Un is actually encoded as unit.
**The (Sub Pub Tnt) rule and kinding for type unit**

(Sub Pub Tnt)
  E ⊢ T :: pub    E ⊢ U :: tnt
  ────────────────────────────
  E ⊢ T <: U

(Kind Unit)
  E ⊢ ⋄
  ──────────────
  E ⊢ unit :: k
The (Sub Pub Tnt) rule equates many of the types in the system. For instance, in RCF all the following types are equivalent: Un, Un → Un, Un ∗ Un, Un + Un, μα. Un, and ∀α. Un. As a consequence it is hard to come up with RCF types that do not share any values with type Un, a property we want for our Private type. Perhaps unintuitively, it is not enough that a type is not public and not tainted to make it disjoint from Un. A final observation is that, in RCF∀∧∨, in an inconsistent environment (E ⊢ false) all types are equivalent and all values inhabit all types. This means that Private being disjoint from Un is relative to the formulas in the environment.
**Encoding type Private**

  {C} ≜ {x : unit | C}   where x ∉ free(C)
  PrivateC ≜ {f : {C} → Un | ∃x. f = λy : {C}. assert C; x}
  Private ≜ Privatefalse

We therefore encode a more general type PrivateC, read "private unless C". The values in this type are not known to the attacker, unless the formula C is entailed by the environment. Intuitively, if the attacker knew a value of this type, then he could call it (values of type PrivateC have to be functions), which would exercise the assert C and invalidate the safety of the system, unless C can be derived from the formula log. Type PrivateC resembles a singleton type, in that it contains only values of a very specific form. We use an existential quantifier over values to ensure that there are infinitely many values of this type. The type Private is obtained as Privatefalse.
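The "private unless C" intuition can be sketched operationally: a private value is a callable whose invocation asserts C against the global log, so an attacker who obtained and called it while C is underivable would violate safety. The names below are ours, and entailment is again approximated by membership.

```python
# Illustrative sketch of Private_C: calling the value asserts C.
# mk_private("false") models Private: it can never be called safely.
log: set = set()

def assume(c: str) -> None:
    log.add(c)

class AssertionViolated(Exception):
    """Raised when an embedded `assert C` fails against the log."""

def mk_private(c: str):
    """Return a value of 'type Private_c': invoking it asserts c."""
    def value() -> None:
        if c not in log:
            raise AssertionViolated(c)
    return value
```

Calling `mk_private("false")()` always raises, mirroring the fact that no environment short of an inconsistent one entails false; after `assume("C")`, a `mk_private("C")` value can be called safely.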
**4.4** **Typing Values and Expressions**
The main judgments of the type system we consider are E ⊢ M : T, which states that value M has type T, and E ⊢ A : T, stating that expression A returns a value of type T. These two judgements are mutually inductively defined, and the most important typing rules are reported below. Most of them are standard, so we focus the explanation only on the rules that are new with respect to [12].
**Selected rules for typing values**   (E ⊢ M : T)

(Val Lam)
  E, x : T ⊢ A : U
  ────────────────────────────
  E ⊢ λx : T. A : (x : T → U)

(Val TLam*)
  E, α ⊢ A : T
  ─────────────────────
  E ⊢ Λα. A : ∀α. T

(Val Refine)
  E ⊢ M : T    E ⊢ C{M/x}
  ────────────────────────
  E ⊢ M : {x : T | C}

(Val And*)
  E ⊢ M : T    E ⊢ M : U
  ──────────────────────
  E ⊢ M : T ∧ U

(Val For 1*)
  E ⊢ M{T̃/α̃} : V
  ──────────────────────────
  E ⊢ for α̃ in T̃; Ũ. M : V

(Val For 2*)
  E ⊢ M{Ũ/α̃} : V
  ──────────────────────────
  E ⊢ for α̃ in T̃; Ũ. M : V
Rule (Val And*) allows us to give value M an intersection type T ∧ U, if we can give M both type T and type U. As discovered by Davies and Pfenning [24], the value restriction is crucial for the soundness of this introduction rule in the presence of side-effects (also see §8). Also, unrelated to the value restriction, this rule is not very useful on its own: since we are in a calculus with typing annotations, it is hard to give one annotated value two different types. For instance, if we want to give the identity function type (Private → Private) ∧ (Un → Un), we need to annotate the argument with type Private (i.e., λx:Private. x) in order to give it type Private → Private, but then we cannot give this value type Un → Un. Following Pierce [39, 40] and Reynolds [41] we use the for construct to explicitly alternate type annotations. For instance, the identity function of type (Private → Private) ∧ (Un → Un) can be written as (for α in Private; Un. λx:α. x). By rule (Val For 1*) we can give this value type Private → Private if we can give value λx:Private. x the same type, which is trivial. Similarly, by (Val For 2*) we can give the for value type Un → Un, so by (Val And*) we can also give it the desired intersection type.
**Selected rules for typing expressions**   (E ⊢ A : T)

(Exp Assert)
  E ⊢ C
  ──────────────────────
  E ⊢ assert C : unit

(Exp Appl)
  E ⊢ M : (x : T → U)    E ⊢ N : T
  ────────────────────────────────
  E ⊢ M N : U{N/x}

(Exp Inst*)
  E ⊢ M : ∀α. U
  ───────────────────
  E ⊢ M⟨T⟩ : U{T/α}

(Exp If*)
  E ⊢ M : T1    E ⊢ N : T2    ⊢ NonDisj T1 T2 ⇝ C
  E, x : T1 ∧ T2, {x = M ∧ M = N ∧ C} ⊢ A : U    E, {M ≠ N} ⊢ B : U
  ─────────────────────────────────────────────────────────────────
  E ⊢ if M = N as x then A else B : U

(Exp Case*)
  E ⊢ M : T1 ∨ T2    E, x : T1 ⊢ A : U    E, x : T2 ⊢ A : U
  ─────────────────────────────────────────────────────────
  E ⊢ case x = M : T1 ∨ T2 in A : U

(Exp Subsum)
  E ⊢ A : T    E ⊢ T <: T′
  ────────────────────────
  E ⊢ A : T′
Union types are introduced by subtyping (T1 is a subtype of T1 ∨ T2 for any T2), and eliminated by a case x = M : T1 ∨ T2 in A expression [39] using the (Exp Case*) rule⁴. Given a value M of type T1 ∨ T2, we do not know whether M is of type T1 or of type T2, so we have to type-check A under each of these assumptions. This is useful when type-checking code interacting with the attacker. For instance, suppose that a party receives a value encrypted with a public key that is used by honest parties to encrypt messages of type T (as in the protocol from §2). After decryption, the obtained plaintext is given type T ∨ Un since it might come from an honest party as well as from the attacker. We thus have to type-check the remaining code twice, once under the assumption that x is of type T, and once assuming that x is of type Un.
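At the level of code, this decrypt-then-case-split pattern amounts to handling a tagged value whose two alternatives mirror the two premises of (Exp Case*). The tags and names below are illustrative choices of ours, not the paper's syntax.

```python
# Sketch of (Exp Case*): a plaintext of type T ∨ Un is modelled as a tagged
# object; the continuation must handle both branches.
from typing import Union

class Honest:               # plaintext of the protocol type T
    def __init__(self, payload: int):
        self.payload = payload

class Attacker:             # arbitrary attacker-supplied data, type Un
    def __init__(self, data: str):
        self.data = data

def process(plaintext: Union[Honest, Attacker]) -> str:
    # Case analysis: each branch of T ∨ Un is checked separately.
    if isinstance(plaintext, Honest):
        return f"trusted payload: {plaintext.payload}"
    else:
        return f"untrusted data: {plaintext.data}"
```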
The rule (Exp If*) exploits intersection types for strengthening the type of the values tested for equality in the conditional if M = N as x then A else B. If M is of type T1 and N is of type T2, then we type-check A under the assumption that x = M ∧ M = N, and x is of type T1 ∧ T2. This corresponds to a type-cast that is always safe, since the conditional succeeds only if M is syntactically equal to N, in which case the common value has indeed both the type of M and the type of N. This is useful for type-checking the symbolic implementations of digital signatures (see §5.2) and zero-knowledge (see §6). Additionally, if the equality test of the conditional succeeds then the types T1 and T2 are not disjoint. However, certain types such as Un and Private have common values only if the environment is inconsistent (i.e., E ⊢ false). Therefore, when comparing values of disjoint types it is safe to add false to the environment when type-checking A, which makes checking A always succeed. Intuitively, if T1 and T2 are disjoint the

⁴ As pointed out by Dunfield and Pfenning [28], eliminating union types for expressions that are not in evaluation contexts is unsound in the presence of non-determinism (this is further discussed in §8).
conditional cannot succeed, so the expression A will not be executed. This idea has
been applied in [1] for verifying secrecy properties of nonce handshakes, but later disappeared in the more advanced type systems for authorization policies.
**Non-disjointness of types (*)**   (⊢ NonDisj T U ⇝ C)

(ND True)
  ⊢ NonDisj T1 T2 ⇝ true

(ND Sym)
  ⊢ NonDisj T2 T1 ⇝ C
  ────────────────────
  ⊢ NonDisj T1 T2 ⇝ C

(ND Private Un)
  fv(C) = ∅
  ───────────────────────────
  ⊢ NonDisj PrivateC Un ⇝ C

(ND Refine)
  ⊢ NonDisj T1 T2 ⇝ C
  ───────────────────────────────
  ⊢ NonDisj {x : T1 | C1} T2 ⇝ C

(ND Rec)
  ⊢ NonDisj T{μα. T/α} U{μβ. U/β} ⇝ C
  ───────────────────────────────────
  ⊢ NonDisj (μα. T) (μβ. U) ⇝ C

(ND Pair)
  ⊢ NonDisj T1 U1 ⇝ C1    ⊢ NonDisj T2 U2 ⇝ C2
  ─────────────────────────────────────────────
  ⊢ NonDisj (T1 ∗ T2) (U1 ∗ U2) ⇝ C1 ∧ C2

(ND Sum)
  ⊢ NonDisj T1 U1 ⇝ C1    ⊢ NonDisj T2 U2 ⇝ C2
  ─────────────────────────────────────────────
  ⊢ NonDisj (T1 + T2) (U1 + U2) ⇝ C1 ∨ C2

(ND And)
  ⊢ NonDisj T1 U ⇝ C1    ⊢ NonDisj T2 U ⇝ C2
  ───────────────────────────────────────────
  ⊢ NonDisj (T1 ∧ T2) U ⇝ C1 ∧ C2

(ND Or)
  ⊢ NonDisj T1 U ⇝ C1    ⊢ NonDisj T2 U ⇝ C2
  ───────────────────────────────────────────
  ⊢ NonDisj (T1 ∨ T2) U ⇝ C1 ∨ C2
We take this idea a lot further: we inductively define a ternary relation, which relates two types with a logical formula. If ⊢ NonDisj T1 T2 ⇝ C holds, then any environment E in which T1 and T2 have a common value has to entail the condition C (i.e., E ⊢ C). The base case of this relation is ⊢ NonDisj PrivateC Un ⇝ C, in particular ⊢ NonDisj Private Un ⇝ false. We call two types provably disjoint if ⊢ NonDisj T1 T2 ⇝ C for some formula C that logically entails false, so Private and Un are provably disjoint. Intuitively, two provably disjoint types have common values only in an inconsistent environment.

The other inductive rules lift the NonDisj relation to refinement, pair, sum, recursive, union, and intersection types. We explain two of them in terms of provable disjointness. In order to show that two (non-dependent) pair types (T1 ∗ T2) and (U1 ∗ U2) are provably disjoint, we apply rule (ND Pair) and we need to show that T1 and U1 are provably disjoint, or that T2 and U2 are provably disjoint (a conjunction is false if at least one of the conjuncts is false). On the other hand, in order to show that two sum types (T1 + T2) and (U1 + U2) are disjoint using (ND Sum) we need to show both that T1 and U1 are disjoint and that T2 and U2 are disjoint.

To illustrate the expressivity of this definition we consider a type for binary trees: tree⟨α⟩ ≜ μβ. α + (α ∗ β ∗ β). Each node in the tree is either a leaf or has two children, and both kinds of nodes store some information of type α. We can show that tree⟨Private⟩ and tree⟨Un⟩ are provably disjoint. By (ND Rec) we need to show that the unfolded types Private + (Private ∗ tree⟨Private⟩ ∗ tree⟨Private⟩) and Un + (Un ∗ tree⟨Un⟩ ∗ tree⟨Un⟩) are disjoint. By (ND Sum) we need to show both that Private and Un are disjoint, which is immediate by (ND Private Un), and that the pair types (Private ∗ tree⟨Private⟩ ∗ tree⟨Private⟩) and (Un ∗ tree⟨Un⟩ ∗ tree⟨Un⟩) are disjoint.
For the latter, by (ND Pair) it suffices to show that the types of the first components of the pair are disjoint, which follows again by (ND Private Un).

We have proved in Coq that our type system enforces robust safety; for details we refer to the long version [7].
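A fragment of the NonDisj relation can be sketched as a function that computes the formula C under which two types may share a value, together with a decision procedure for whether the computed formula entails false (i.e., whether the types are provably disjoint). The type representation and all names are ours, and only the base, pair, and sum cases are covered.

```python
# Sketch of NonDisj as a formula-computing function; formulas are nested
# tuples. private("false") plays the role of the type Private.
TRUE, FALSE = ("true",), ("false",)
UN = ("un",)

def private(c):               # the type Private_C
    return ("private", c)

def pair(t, u):
    return ("pair", t, u)

def sum_(t, u):
    return ("sum", t, u)

def non_disj(t, u):
    # (ND Sym) is built in by trying both argument orders for the base case.
    for a, b in ((t, u), (u, t)):
        if a[0] == "private" and b == UN:          # (ND Private Un)
            return FALSE if a[1] == "false" else ("atom", a[1])
    if t[0] == "pair" and u[0] == "pair":          # (ND Pair): conjunction
        return ("and", non_disj(t[1], u[1]), non_disj(t[2], u[2]))
    if t[0] == "sum" and u[0] == "sum":            # (ND Sum): disjunction
        return ("or", non_disj(t[1], u[1]), non_disj(t[2], u[2]))
    return TRUE                                    # (ND True): no information

def entails_false(f):
    """Provable disjointness: does the computed formula entail false?"""
    if f == FALSE:
        return True
    if f[0] == "and":          # a conjunction is false if one conjunct is
        return entails_false(f[1]) or entails_false(f[2])
    if f[0] == "or":           # a disjunction is false only if both are
        return entails_false(f[1]) and entails_false(f[2])
    return False
```

As in the pair/sum discussion above, `pair(Private, Un)` versus `pair(Un, Un)` is provably disjoint because one component pair is, while the corresponding sum types are not.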
## 5 Implementation of Symbolic Cryptography
In contrast to process calculi for cryptographic protocols [4, 3], RCF∀∧∨ does not have any built-in construct to model cryptography. Cryptographic primitives are instead encoded using a dynamic sealing mechanism [35], which is based on standard RCF∀∧∨ constructs. The resulting symbolic cryptographic libraries are type-checked using the regular typing rules. The main advantage is that adding a new primitive to the library does not involve changes in the calculus or in the soundness proofs: one just has to find a well-typed encoding of the desired cryptographic primitive. In addition, Backes et al. have recently shown [11] that sealing-based libraries for asymmetric cryptography are computationally sound and semantically equivalent to the more traditional Dolev-Yao libraries based on datatype constructors. §5.1 overviews the dynamic sealing mechanism used in [12] to encode symbolic cryptography, while §5.2 and §5.3 show how our expressive type system can be used to improve this encoding and extend the class of supported protocols.
**5.1** **Dynamic Sealing**
The notion of dynamic sealing was initially introduced by Morris [35] as a protection mechanism for programs. Later, Sumii and Pierce [43] studied the semantics of dynamic sealing in a λ-calculus, observing a close correspondence with symmetric encryption. In RCF [12] seals are encoded using pairs, functions, references and lists. A seal is a pair of a sealing function and an unsealing function, having type:

  Seal⟨T⟩ = (T → Un) ∗ (Un → T).

The sealing function takes as input a value M of type T and returns a fresh value N of type Un, after adding the pair (M, N) to a secret list that is stored in a reference. The unsealing function takes as input a value N of type Un, scans the list in search of a pair (M, N), and returns M. Only the sealing function and the unsealing function can access this secret list. In RCF, each key pair is (symbolically) implemented by means of a seal. In the case of public-key cryptography, for instance, the sealing function is used for encrypting, the unsealing function is used for decrypting, and the sealed value N represents the ciphertext.
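The seal/unseal pair sharing a hidden table can be sketched directly. Fresh integer handles stand in for opaque values of type Un; all names are our own.

```python
# Minimal sketch of dynamic sealing: mk_seal returns a sealing and an
# unsealing function closed over a hidden table (the "secret list").
import itertools

def mk_seal():
    table = {}                     # hidden in the closure: only seal/unseal see it
    fresh = itertools.count(1)

    def seal(m):
        handle = next(fresh)       # a fresh public handle N for payload M
        table[handle] = m
        return handle

    def unseal(handle):
        try:
            return table[handle]   # recover M from the pair (M, N)
        except KeyError:
            raise ValueError("unsealing failed") from None

    return seal, unseal
```

Each call to `mk_seal()` yields an independent key pair: handles sealed under one seal cannot be unsealed under another, mirroring the per-seal secret list.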
Let us take a look at the type Seal⟨T⟩. If T is neither public nor tainted, as is usually the case for symmetric-key cryptography, neither the sealing function nor the unsealing function is public, meaning that the symmetric key is kept secret. If T is tainted but not public, as is usually the case for public-key encryption, the sealing function is public but the unsealing function is not, meaning that the encryption key may be given to the adversary but the decryption key is kept secret. If T is public but not tainted, as is typically the case for digital signatures, the sealing function is not public
and the unsealing function is public, meaning that the signing key is kept secret but the verification key may be given to the adversary.

Although this unified interpretation of cryptography as sealing and unsealing functions is conceptually appealing, it actually exhibits some undesired side-effects when modeling asymmetric cryptography. If the type of a signed message is not public, then the verification key is not public either and cannot be given to the adversary. This is unrealistic, since in most cases verification keys are public even if the message to be signed is not (as in DAA, see §6.1). Moreover, if the type of a message encrypted with a public key is not tainted, then the public key is not public and cannot be given to the adversary. This may be problematic, for instance, when modeling authentication protocols based on public keys such as the NSL protocol (see §2), where the type of the encrypted messages is neither public nor tainted.
**5.2** **Digital Signatures**
In this section, we focus on digital signatures and show how union and intersection types can be used to solve the aforementioned problems. The signing key consists of the seal itself and is given type SigKey⟨T⟩ ≜ Seal⟨T⟩, as in the original RCF library [12]. The verification key, instead, is encoded as a function that (i) takes the signature x and the signed message t as input; (ii) calls the unsealing function to retrieve the message y bound to x in the secret list; and (iii) returns y if y is equal to t and fails otherwise. In this encoding, the verifier has to know the signed message in order to verify the signature. This is reasonable as, for efficiency reasons, one usually signs a hash of the message as opposed to the message in plain.
**Symbolic implementation of signing-verification key pair**

  mkSigPair : ∀α. unit → SigKey⟨α⟩ ∗ VerKey⟨α⟩
  mkSigPair = Λα. λu : unit.
    let k = mkSeal⟨α⟩ in
    let (seal, unseal) = k in
    let vk = λx : Un. for β in ⊤; Un. λt : β.
      if t = (unseal x) as z then z else failwith "verification failed"
    in (k, vk)
The type VerKey⟨T⟩ of a verification key is defined as Un → ((x : ⊤ → {y : T | x = y}) ∧ (Un → Un)). The verification key takes the signature of type Un as first argument. The second part of this type is an intersection of two types: the type x : ⊤ → {y : T | x = y} is used to type-check honest callers: the signed message x has any type (top type) and the message y returned by the unsealing function has the stronger type T, which means that the unsealing function casts the type of the signed message from ⊤ down to T. This is safe since the sealing function is not public and can only be used to sign messages of type T. The type Un → Un makes VerKey⟨T⟩ always public⁵. Hence, in contrast to [12], we can reason about protocols where the signing key is used to sign private messages while the verification key is public (e.g., in DAA [18]).

⁵ A type of the form Un → (T1 ∧ T2) is public if T1 or T2 is public, and in our case T2 = Un → Un is public.
Finally, we present the typed interface of the functions to create and check signatures:

  sign : ∀α. (xsk : SigKey⟨α⟩ → α → Un) ∧ Un
  check : ∀α. (xvk : VerKey⟨α⟩ → Un → ⊤ → α) ∧ Un

We type-check sign and check twice, to give them intersection types whose right-hand side is Un. While making these functions available to the adversary is not necessary (the attacker can directly use the signing and verification keys to which he has access), this is convenient for the encoding of zero-knowledge we describe in §6 (dishonest verifier cases).
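The sealing-based signature encoding described above can be sketched as follows: the signing function seals the message, and verification unseals the signature and checks equality with the message being verified. This builds on a `mk_seal` like the sketch in §5.1; all names are ours.

```python
# Sketch of the sealing-based signature library: sign = seal, and verify
# checks that the unsealed payload equals the claimed message.
import itertools

def mk_seal():
    table, fresh = {}, itertools.count(1)
    def seal(m):
        h = next(fresh); table[h] = m; return h
    def unseal(h):
        return table[h]
    return seal, unseal

def mk_sig_pair():
    seal, unseal = mk_seal()

    def sign(m):
        return seal(m)             # the "signature" is the sealed handle

    def verify(signature, m):
        y = unseal(signature)      # retrieve the message bound to the signature
        if y == m:
            return y
        raise ValueError("verification failed")

    return sign, verify
```

As in the encoding, the verifier must already know the message: `verify` succeeds only when the unsealed payload and the supplied message coincide.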
**5.3** **Public-Key Encryption**
For public-key encryption we simply use a seal of type Seal⟨T ∨ Un⟩, i.e., PrivKey⟨T⟩ ≜ Seal⟨T ∨ Un⟩ and PubKey⟨T⟩ ≜ (T ∨ Un) → Un. This allows us to obtain the types described in §2.2. In contrast to [12], the encryption key is always public, even if the type T of the encrypted message is not tainted⁶.
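The same sealing sketch carries over to public-key encryption: plaintexts carry a tag marking them as honest (type T) or attacker-supplied (type Un), the sealing function plays the role of the public encryption key, and the unsealing function is the secret decryption key. Tags and names are illustrative.

```python
# Sketch of sealing-based public-key encryption over plaintexts of type
# T ∨ Un, represented as tagged pairs ("honest", v) / ("attacker", s).
import itertools

def mk_key_pair():
    table, fresh = {}, itertools.count(1)

    def encrypt(tagged_plaintext):      # public: may be handed to the adversary
        c = next(fresh)
        table[c] = tagged_plaintext
        return c                         # the ciphertext is the sealed handle

    def decrypt(ciphertext):             # secret: yields a value of type T ∨ Un
        return table[ciphertext]

    return encrypt, decrypt
```

After decryption the recipient must case-split on the tag, exactly the T ∨ Un elimination discussed for (Exp Case*) in §4.4.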
## 6 Encoding of Zero-Knowledge
This section describes how we automatically generate the symbolic implementation of non-interactive zero-knowledge proofs, starting from a high-level specification. Intuitively, this implementation resembles an oracle that provides three operations: one for creating zero-knowledge proofs, one for verifying such proofs, and one for obtaining the public values used to create the proofs. Some of the values used to create a zero-knowledge proof are revealed by the proof to the verifier and to any eavesdropper, while the others (which we call witnesses) are kept secret. A zero-knowledge proof does not reveal any information about these witnesses, other than the validity of the statement being proved.
**6.1** **Illustrative Example: Simplified DAA**
We are going to illustrate our technique on a simplified version⁷ of the Direct Anonymous Attestation (DAA) protocol [18]. The goal of the DAA protocol is to enable the TPM to sign arbitrary messages and to send them to an entity called the verifier in such a way that the verifier will only learn that a valid TPM signed that message, but without revealing the TPM's identity. The DAA protocol is composed of two sub-protocols: the join protocol and the DAA-signing protocol. The join protocol allows a TPM to obtain a certificate xcert from an entity called the issuer. This certificate is just a signature on the TPM's secret identifier xf. The DAA-signing protocol enables a TPM to authenticate a message ym by proving to the verifier the knowledge of a valid certificate, but without revealing the TPM's identifier or the certificate. In this section, we focus on the DAA-signing protocol and we assume that the TPM has already completed the join
⁶ A type of the form (T1 ∨ T2) → Un is public if T1 or T2 is tainted, and in our case T2 = Un is tainted.
⁷ The long version describes the general code generation routine in more detail [7].
protocol and received the certificate from the issuer. In the DAA-signing protocol the TPM sends to the verifier a zero-knowledge proof.

  TPM                                              Verifier
    assume Send(xf, ym)
         ── zkdaa(xf, xcert, yvki, ym) ──►
                                                     assert Authenticate(ym)

The TPM proves the knowledge of a certificate xcert of its identifier xf that can be verified with the verification key yvki of the issuer. Note that although the payload message ym does not occur in the statement, the proof guarantees non-malleability, so an attacker cannot change ym without redoing the proof. Before sending the zero-knowledge proof, the TPM assumes Send(xf, ym). After verifying the zero-knowledge proof, the verifier asserts Authenticate(ym). The authorization policy we consider for the DAA-sign protocol is

  assume ∀xf, xcert, ym. Send(xf, ym) ∧ OkTPM(xf) ⇒ Authenticate(ym)

where the predicate OkTPM(xf) is assumed by the issuer before signing xf.
**6.2** **High-Level Specification**
Our high-level specification of non-interactive zero-knowledge proofs is similar in spirit to the symbolic representation of zero-knowledge proofs in a process calculus [10, 8]. For a specification the user needs to provide: (1) variables representing the witnesses and public values of the proof, (2) a Boolean formula over these variables representing the statement of the proof, (3) types for the variables, and, if desired, (4) a promise, i.e., a logical formula that is conveyed by the proof only if the prover is honest.

**High-level specification of simplified DAA**

  zkdef daa =
    witness   = [xf : Tvki, xcert : Un]
    matched   = [yvki : VerKey⟨Tvki⟩]
    public    = [ym : Un]
    statement = [xf = check⟨Tvki⟩ yvki xcert xf]
    promise   = [Send(xf, ym)]
    where Tvki = {zf : Private | OkTPM(zf)}
**Variables.** The variables xf and xcert stand for witnesses. The value of yvki is matched against the signature verification key of the issuer, which is already known to the verifier of the zero-knowledge proof. The payload message ym is returned to the verifier of the proof, so it is public.

**Statement.** The statement conveyed by a zero-knowledge proof is in general a positive Boolean formula over equality checks. In our simplified DAA example this is just xf = check⟨Tvki⟩ yvki xcert xf.
**Types.** The user also needs to provide types for the variables. The DAA-sign protocol does not preserve the secrecy of the signed message, so ym has type Un. On the other hand, the TPM identifier xf is given a secret and untainted type Tvki = {zf : Private | OkTPM(zf)}. This type ensures that xf is not known to the attacker and that the predicate OkTPM(xf) holds. The verification key of the issuer is used to check signed messages of type Tvki, so it is given type VerKey⟨Tvki⟩. Finally, the certificate xcert is a signature, so it has type Un. Even though it has type Un, making the certificate a public value would break the anonymity of the user, since the verifier could then always distinguish whether two consecutive requests come from the same user or not.

**Promise.** The user can additionally specify a promise: an arbitrary authorization-logic formula that holds in the typing environment of the prover. If the statement is strong enough to identify the prover as an honest (type-checked) protocol participant (signature proofs of knowledge such as DAA-signing have this property), then the promise can be safely transmitted to the typing environment of the verifier. In the DAA example we have the promise Send(xf, ym), since this predicate holds in the typing environment of an honest TPM.
**6.3** **Automatic Code Generation**
We automatically generate both a typed interface and a symbolic implementation for
the oracle corresponding to a zero-knowledge specification.
**Generated typed interface for simplified DAA**
createdaa : Tdaa ∨ Un → Un publicdaa : Un → Un
verifydaa : Un → ((yvki : VerKey⟨Tvki _⟩→_ _Udaa_ ) ∧ Un → Un)
where Tdaa = yvki : VerKey⟨Tvki _⟩∗_ _ym : Un ∗_ _xf : Tvki ∗_ _xcert : Un ∗{Send(xf_ _, ym)}_
and Udaa = {ym : Un | ∃xf _, xcert_ _. OkTPM(xf_ ) ∧ Send(xf _, ym)}_
The generated interface for DAA contains three functions that share a hidden seal
of type Tdaa ∨ Un. The function createdaa is used to create zero-knowledge proofs. It
takes as argument a tuple containing values for all variables of the proof, or an argument
of type Un if it is called by the adversary. In case a protocol participant calls this
function, we check that the values have the specified types. Additionally, we check that
the promise Send(xf, ym) holds in the typing environment of the prover. The returned
zero-knowledge proof is given type Un so that it can be sent over the public network.
The function publicdaa is used to read the public values of a proof, so it takes as
input the sealed proof of type Un and returns ym, also at type Un.
The function verifydaa is used for verifying zero-knowledge proofs. Because of the
second part of the intersection type, this function can be called by the attacker, in which
case it returns a value of type Un. When called by a protocol participant, however, it
takes as argument a candidate zero-knowledge proof of type Un and the verification
key of the issuer with type VerKey⟨Tvki⟩. On successful verification, verifydaa returns
ym, the only public variable, but with a stronger type than in publicdaa. The function
guarantees that the formula ∃xf, xcert. OkTPM(xf) ∧ Send(xf, ym) holds, where
the witnesses are existentially quantified. The first conjunct, OkTPM(xf), guarantees
Union and Intersection Types for Secure Protocol Implementations 23
that if verification succeeds then the statement indeed holds, no matter what the origin
of the proof is. This predicate is automatically extracted from the return type of the
check⟨Tvki⟩ function (see §5.2). The second conjunct, Send(xf, ym), is the promise of
the proof.
The generated implementation for this interface creates a fresh seal kdaa for values
of type Tdaa ∨ Un. The sealing function of kdaa is directly used to implement the
createdaa function. The unsealing function of kdaa is used to implement the publicdaa
and verifydaa functions. The implementation of publicdaa is very simple: since the
zero-knowledge proof is just a sealed value, publicdaa unseals it and returns ym. The
witnesses are discarded, and the validity of the statement is not checked.
The implementation of the verifydaa function is more interesting. This function takes
as input a candidate zero-knowledge proof z of type Un and a value for the matched
variable yvki. Since the type of verifydaa contains an intersection type, we use a for
construct to introduce this intersection type. If the proof is verified by the attacker, we
can assume that yvki has type Un and need to give the return value type Un. On the
other hand, if the proof is verified by a protocol participant, we can assume that yvki has
the type VerKey⟨Tvki⟩. In general, it is the strong types of the matched values that allow
us to guarantee the strong types of the returned public values, as well as the promise.
**Generated symbolic implementation for simplified DAA**

verifydaa = λz : Un.
  for α in Un; VerKey⟨Tvki⟩. λy′vki : α.
    let z′ = (snd kdaa) z in                        (1)
    case z′′ = z′ : Un ∨ Tdaa in                    (2)
    let (yvki, ym, xf, xcert, _) = z′′ in           (3)
    if yvki = y′vki as y′′vki then                  (4)
      if xf = check⟨Tvki⟩ y′′vki xcert then ym      (5)
      else failwith “statement not valid”
    else failwith “yvki does not match”
The generated verifydaa function performs the following five steps: (1) it unseals z
using snd kdaa and obtains z′; (2) since z′ has a union type, it does case analysis on it,
and assigns its value to z′′; (3) it splits the tuple z′′ into the public values (yvki and ym)
and the witnesses (xf and xcert); (4) it tests whether the matched variable yvki is equal
to the argument y′vki and, in case of success, assigns the value to the variable y′′vki;
since y′′vki has a stronger type than y′vki and yvki, we use this new variable to stand
for yvki in the following; (5) it tests whether the statement is true by applying the
check⟨Tvki⟩ function and checking the result for equality with the value of xf. In general,
this last step is slightly complicated by the fact that the statement can contain conjunctions
and disjunctions, so we use decision trees. For the DAA example, however, the decision
tree has a trivial structure with only one node.
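To make the sealing-based semantics concrete, the following is a minimal executable sketch in Python rather than the authors' F# code. The seal table, the triple encoding of signatures in check, and the helper mk_seal are our own simplifications; the type checks and promise checks performed by the generated code, as well as the for/case machinery, are omitted.

```python
import itertools

def mk_seal():
    # A fresh seal, modeled symbolically as a private table from opaque
    # handles to payloads: seal stores a payload and returns a handle
    # (a public value of type Un); unseal recovers the payload.
    table, counter = {}, itertools.count()
    def seal(payload):
        handle = ("handle", next(counter))
        table[handle] = payload
        return handle
    def unseal(handle):
        return table[handle]  # fails for handles this seal did not produce
    return seal, unseal

def check(vk, cert):
    # Symbolic signature verification: a certificate is modeled as a
    # triple ("sig", vk, value); verification returns the signed value.
    tag, signed_vk, value = cert
    if tag != "sig" or signed_vk != vk:
        raise ValueError("invalid signature")
    return value

seal_daa, unseal_daa = mk_seal()

def createdaa(y_vki, y_m, x_f, x_cert):
    # A zero-knowledge proof is just the sealed tuple of the public
    # values (y_vki, y_m) and the witnesses (x_f, x_cert).
    return seal_daa((y_vki, y_m, x_f, x_cert))

def publicdaa(z):
    # Unseal and return the public value y_m; the witnesses are
    # discarded and the validity of the statement is not checked.
    _y_vki, y_m, _x_f, _x_cert = unseal_daa(z)
    return y_m

def verifydaa(z, y_vki_arg):
    y_vki, y_m, x_f, x_cert = unseal_daa(z)       # steps (1)-(3)
    if y_vki != y_vki_arg:                        # step (4)
        raise ValueError("yvki does not match")
    if x_f != check(y_vki_arg, x_cert):           # step (5)
        raise ValueError("statement not valid")
    return y_m
```

For instance, with a key vk and a witness f carried by a certificate ("sig", vk, f), both publicdaa and verifydaa return the public message, while verifying the same proof against a different key raises an exception.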
Since the automatically generated implementation of zero-knowledge proofs relies
on types and formulas provided by the user, which may both be wrong, the generated implementation is not guaranteed to fulfill its interface. We use our type-checker
to check whether this is indeed the case. If type-checking the generated code against
its interface succeeds, then this code can be safely used in protocol implementations.
Note that because of the for and case constructs the body of verifydaa is type-checked
24 M. Backes, C. Hriţcu, and M. Maffei
four times, corresponding to the following four scenarios: honest prover / honest verifier,
honest prover / dishonest verifier, dishonest prover / honest verifier, and dishonest
prover / dishonest verifier. In DAA the most interesting case is dishonest prover / honest
verifier, when z′′ and hence xf are given type Un, while the result of the signature
verification is of type Tvki. Since ⊢ NonDisj {zf : Private | OkTPM(zf)} Un ⇝ false
by rules (ND Refine) and (ND Private Un), false is added to the environment in which
ym is type-checked. The variable ym has type Un in this environment, but since this
environment is inconsistent ym can also be given type Udaa.
## 7 Implementation
We have implemented a complete tool-chain for RCF∀∧∨: it includes a type-checker
for the type system described in §4, the automatic code generator for zero-knowledge
described in §6, an interpreter, and a visual debugger.
The type-checker supports an extended syntax with respect to the one from §3, including
a simple module system, algebraic data types, recursive functions, type definitions, and
mutable references. We use first-order logic with equality as the authorization logic, and
the type-checker invokes the Z3 SMT solver [25] to discharge proof obligations. The
type-checker performed very well in our experiments: it type-checks all our symbolic
libraries and samples, totaling more than 1.5kLOC, in around 12 seconds on a normal
laptop. The type-checker produces an XML log file containing the complete type
derivation in case of success, and a partial derivation that leads to the typing error in
case of failure. This log can be inspected using our visualizer to easily detect and fix
flaws in the protocol implementation. The type-checker also performs very limited
type inference: it can infer the instantiation of some polymorphic functions from the
types of the arguments; however, the user has to provide all the other typing annotations.
We would like to improve the amount of type inference in the future (see §9 for a
discussion).
The type-checker, the code generator for zero-knowledge, and the interpreter are
command-line tools implemented in F#, while the GUIs of the visual debugger and the
visualizer for type derivations are specified using WPF (Windows Presentation Foundation). The type-checker consists of around 2.5kLOC, while the whole tool-chain has
over 5kLOC. All the tools and samples are available at [7].
## 8 Related Work on Unions and Intersections
The for construct for explicitly alternating type annotations was introduced by Pierce
[39,40] as a generalization of an idea Reynolds [41] used in Forsythe for giving intersection
types to annotated lambda abstractions of the form λx:τ1..τn. e. In a Church-style
system, however, the for construct does not have a clear operational semantics.
Compagnoni [23] gives an operational semantics to function application expressions
of the form ((for α in T; U. λx:V. e1) e2) by pushing the application inside the for,
i.e., this expression reduces in one step to (for α in T; U. ((λx:V. e1) e2)). It is unclear
whether this can be generalized to anything other than function applications. Moreover,
this reduction rule does not respect the value restriction for the introduction of intersection
types (our rule (Val And*) in §4). As discovered by Davies and Pfenning [24], the value
restriction on intersection introduction is crucial for soundness in the presence of
side-effects. The counterexample they give is in fact very similar to the one used to
illustrate the unsoundness of ML, in the absence of the value restriction, due to the
interaction of polymorphism with side-effects [33]. Moreover, Davies and Pfenning [24]
observed that some standard distributivity laws of subtyping are unsound in a setting
with side-effects, since they basically allow one to circumvent the value restriction. We
obtain all the benefits of the for construct in RCF∀∧∨, but erase it completely when
translating values into Formal-RCF∀∧∨, and use the value restriction on both levels to
ensure soundness.
The case construct for eliminating union types was introduced by Pierce [39] as a
way to make type-checking more efficient, by asking the programmer to annotate the
position in the code where union elimination should occur. Dunfield and Pfenning [28]
later pointed out that unrestricted elimination of union types is unsound in the presence
of non-determinism. This observation is crucial for us, since our calculus, as opposed
to the one studied by Dunfield and Pfenning, is in fact non-deterministic. They propose
an evaluation context restriction that recovers soundness, but this is not enough to make
type-checking efficient. In recent work, Dunfield [27] shows that carefully transforming
programs into let-normal form improves efficiency. This is encouraging, since our
expressions are already in let-normal form, so we can hope to replace the case construct
by a normal let in the future and still preserve efficient type-checking.
## 9 Conclusions and Future Work
We have presented a new type system that combines refinement types with union types,
intersection types, and polymorphic types. A novelty of the type system is its ability to
reason statically about the disjointness of types. This extends the scope of the existing
type-based analyses of protocol implementations to important classes of cryptographic
protocols that were not covered so far, including protocols based on zero-knowledge
proofs. Our type system comes with a mechanized proof of correctness and an efficient
implementation [7].
As future work, we plan to investigate the automated generation of concrete cryptographic implementations of zero-knowledge proofs, and thus to complement the generation of symbolic implementations considered in this paper. Also, we intend to apply
our framework to analyze implementations of more complex protocols, such as the Civitas electronic voting system [22].
The type-checker we implemented was very efficient in our experiments; however, the
amount of typing annotations it requires is at the moment quite high. This issue
is more pronounced in our symbolic cryptography library, where intersection and union
types are pervasive. It is less of a problem in the code that links against these libraries,
and in the case of zero-knowledge even the code in the library is automatically
generated together with all the necessary annotations. In the future we would like to
perform more type inference, perhaps leveraging some of the recent progress on type
inference for refinement types [42,34]. The good news is that intersection and union
types can be very useful when devising precise type inference algorithms [8,36].
**Acknowledgments.** We thank Cédric Fournet, Andy Gordon, Jan Schwinghammer, and
Pierre-Yves Strub for the constructive discussions. Thorsten Tarrach implemented the
original F5 prototype. Stefan Lorenz helped us with the cryptographic implementation
of the DAA protocol. Joshua Dunfield and Kim Pecina commented on a draft. Cătălin
Hriţcu is supported by a fellowship from Microsoft Research and the International Max
Planck Research School for Computer Science. Matteo Maffei is partially supported by
the initiative for excellence of the German federal government, by the DFG Emmy Noether
program, and by the MIUR project “SOFT”.
## References
1. Abadi, M., Blanchet, B.: Secrecy types for asymmetric communication. Theoretical Computer Science 298(3), 387–415 (2003)
2. Abadi, M., Blanchet, B.: Analyzing security protocols with secrecy types and logic programs.
Journal of the ACM 52(1), 102–146 (2005)
3. Abadi, M., Fournet, C.: Mobile values, new names, and secure communication. In: Proc. 28th
Symposium on Principles of Programming Languages (POPL), pp. 104–115. ACM Press,
New York (2001)
4. Abadi, M., Gordon, A.D.: A calculus for cryptographic protocols: The spi calculus. Information and Computation 148(1), 1–70 (1999)
5. Amadio, R.M., Cardelli, L.: Subtyping recursive types. ACM Transactions on Programming
Languages and Systems (TOPLAS) 15(4), 575–631 (1993)
6. Backes, M., Grochulla, M.P., Hriţcu, C., Maffei, M.: Achieving security despite compromise
using zero-knowledge. In: 22nd IEEE Symposium on Computer Security Foundations (CSF
2009). IEEE Computer Society Press, Los Alamitos (July 2009)
7. Backes, M., Hriţcu, C., Maffei, M.: Union and intersection types for secure protocol implementations. Long version, formalization and implementation, http://www.infsec.cs.uni-sb.de/projects/F5/
8. Backes, M., Hriţcu, C., Maffei, M.: Type-checking zero-knowledge. In: 15th ACM Conference on Computer and Communications Security (CCS 2008), pp. 357–370. ACM Press,
New York (2008)
9. Backes, M., Maffei, M., Pecina, K.: A security API for distributed social networks. In: 18th
Annual Network & Distributed System Security Symposium (NDSS 2011), pp. 35–51. Internet Society, San Diego (2011)
10. Backes, M., Maffei, M., Unruh, D.: Zero-knowledge in the applied pi-calculus and automated
verification of the direct anonymous attestation protocol. In: Proc. of 29th IEEE Symposium
on Security and Privacy, pp. 202–215. IEEE Computer Society Press, Los Alamitos (2008)
11. Backes, M., Maffei, M., Unruh, D.: Computationally sound verification of source code. In:
Proc. 17th ACM Conference on Computer and Communications Security (CCS), pp. 387–
398. ACM Press, New York (2010)
12. Bengtson, J., Bhargavan, K., Fournet, C., Gordon, A.D., Maffeis, S.: Refinement types
for secure implementations. In: Proc. 21st IEEE Symposium on Computer Security Foundations (CSF), pp. 17–32. IEEE Computer Society Press, Los Alamitos (2008). Long
version appeared as MSR-TR-2008-118; the November 2010 revision that fixes the problems we pointed out is http://research.microsoft.com/en-us/um/people/adg/Publications/MSR-TR-2008-118-SP2.pdf
13. Bhargavan, K., Corin, R., Fournet, C., Zălinescu, E.: Cryptographically verified implementations for TLS. In: 15th ACM Conference on Computer and Communications Security (CCS
2008), pp. 459–468. ACM Press, New York (2008)
14. Bhargavan, K., Fournet, C., Gordon, A.D.: Modular verification of security protocol code by
typing. In: Proc. 37th Symposium on Principles of Programming Languages (POPL 2010),
pp. 445–456 (2010)
15. Bhargavan, K., Fournet, C., Gordon, A.D., Tse, S.: Verified interoperable implementations of
security protocols. In: Proc. 19th IEEE Computer Security Foundations Workshop (CSFW),
pp. 139–152. IEEE Computer Society Press, Los Alamitos (2006)
16. Blanchet, B.: An efficient cryptographic protocol verifier based on Prolog rules. In: Proc.
14th IEEE Computer Security Foundations Workshop (CSFW), pp. 82–96. IEEE Computer
Society Press, Los Alamitos (2001)
17. Bleichenbacher, D.: Chosen ciphertext attacks against protocols based on the RSA encryption
standard PKCS. In: Krawczyk, H. (ed.) CRYPTO 1998. LNCS, vol. 1462, pp. 1–12. Springer,
Heidelberg (1998)
18. Brickell, E., Camenisch, J., Chen, L.: Direct anonymous attestation. In: Proc. 11th ACM
Conference on Computer and Communications Security, pp. 132–145. ACM Press, New
York (2004)
19. Butler, F., Cervesato, I., Jaggard, A.D., Scedrov, A., Walstad, C.: Formal analysis of Kerberos
5. Theoretical Computer Science 367(1), 57–87 (2006)
20. Cardelli, L.: Type systems. In: The Computer Science and Engineering Handbook, pp. 2208–
2236 (1997)
21. Chaki, S., Datta, A.: ASPIER: An automated framework for verifying security protocol implementations. Technical report, CMU CyLab (October 2008)
22. Clarkson, M.R., Chong, S., Myers, A.C.: Civitas: A secure voting system. In: Proc. 29th
IEEE Symposium on Security and Privacy, pp. 354–368. IEEE Computer Society Press, Los
Alamitos (2008)
23. Compagnoni, A.B.: Subject reduction and minimal types for higher order subtyping. Technical Report ECS-LFCS-97-363, LFCS, University of Edinburgh (August 1997)
24. Davies, R., Pfenning, F.: Intersection types and computational effects. In: Proc. International
Conference on Functional Programming (ICFP 2000), pp. 198–208 (2000)
25. de Moura, L., Bjørner, N.: Z3: An efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J.
(eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008)
26. Denning, D.E., Sacco, G.M.: Timestamps in key distribution protocols. Communications of
the ACM 24(8), 533–536 (1981)
27. Dunfield, J.: Untangling typechecking of intersections and unions. In: Workshop on Intersection Types and Related Systems (ITRS) (July 2010)
28. Dunfield, J., Pfenning, F.: Tridirectional typechecking. In: Proc. 31th Symposium on Principles of Programming Languages (POPL 2004), pp. 281–292. ACM Press, New York (2004)
29. Eigner, F.: Type-based verification of electronic voting systems. Master’s thesis, Saarland
University (2009)
30. Fisher, D.: Millions of .Net Passport accounts put at risk. eWeek (May 2003) (Flaw detected
by Muhammad Faisal Rauf Danka)
31. Fournet, C., Gordon, A.D., Maffeis, S.: A type discipline for authorization in distributed
systems. In: Proc. 20th IEEE Symposium on Computer Security Foundations (CSF), pp. 31–
45. IEEE Computer Society Press, Los Alamitos (2007)
32. Goubault-Larrecq, J., Parrennes, F.: Cryptographic protocol analysis on real C code. In:
Cousot, R. (ed.) VMCAI 2005. LNCS, vol. 3385, pp. 363–379. Springer, Heidelberg (2005)
33. Harper, B., Lillibridge, M.: ML with callcc is unsound. Post to TYPES mailing list
(July 8, 1991), archived at http://www.seas.upenn.edu/~sweirich/types/archive/1991/msg00034.html
34. Jhala, R., Majumdar, R., Rybalchenko, A.: HMC: Verifying functional programs using abstract interpreters. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806,
pp. 470–485. Springer, Heidelberg (2011), http://arxiv.org/abs/1004.2884v2
35. Morris Jr., J.H.: Protection in programming languages. Communications of the ACM 16(1),
15–21 (1973)
36. Kobayashi, N.: Types and higher-order recursion schemes for verification of higher-order
programs. In: Proc. 36th Symposium on Principles of Programming Languages (POPL
2009), pp. 416–428 (2009)
37. Lowe, G.: Breaking and fixing the Needham-Schroeder public-key protocol using FDR. In:
Margaria, T., Steffen, B. (eds.) TACAS 1996. LNCS, vol. 1055, pp. 147–166. Springer, Heidelberg (1996)
38. Mendler, N.P.: Inductive types and type constraints in the second-order lambda calculus.
Annals of Pure and Applied Logic 51(1-2), 159–172 (1991)
39. Pierce, B.C.: Programming with intersection types, union types, and polymorphism. Technical Report CMU-CS-91-106, Carnegie Mellon University (1991)
40. Pierce, B.C.: Intersection types and bounded polymorphism. Mathematical Structures in
Computer Science 7(2), 129–193 (1997)
41. Reynolds, J.C.: Design of the programming language Forsythe. Technical Report CMU-CS-96-146, Carnegie Mellon University (June 1996); Reprinted in O’Hearn, Tennent: ALGOL-like Languages, vol. 1, pp. 173–233. Birkhäuser, Basel (1997)
42. Rondon, P.M., Kawaguchi, M., Jhala, R.: Liquid types. In: Proc. ACM SIGPLAN 2008 Conference on Programming Language Design and Implementation (PLDI 2008), pp. 159–169
(2008)
43. Sumii, E., Pierce, B.C.: A bisimulation for dynamic sealing. Theoretical Computer Science 375(1-3), 169–192 (2007)
44. Urzyczyn, P.: Positive recursive type assignment. In: Hájek, P., Wiedermann, J. (eds.) MFCS
1995. LNCS, vol. 969, pp. 382–391. Springer, Heidelberg (1995)
45. Wagner, D., Schneier, B.: Analysis of the SSL 3.0 protocol. In: Proc. 2nd USENIX Workshop
on Electronic Commerce, pp. 29–40 (1996)
# Self-Stabilized Fast Gossiping Algorithms
STEFAN DULMAN and ERIC PAUWELS, CWI
In this article, we explore the topic of extending aggregate computation in distributed
networks with self-stabilizing properties to withstand network dynamics. Existing research
suggests that fast gossiping algorithms, based on the properties of order statistics
applied to families of exponential random variables, are a viable solution for computing
functions of the values stored in the network. We focus on the specific case in which
network changes and failures occur in batches, with a minimum frequency on the order of
the diameter of the network. Our contribution consists of two self-stabilizing mechanisms
that allow fast gossiping algorithms to be applied to dynamic networks with a minor
increase in resource usage. The resulting algorithms can be deployed in networks exhibiting
churn, node stop-failures and resets, and random topological changes. The theoretical
results are verified with simulations on synthetic data, showcasing properties desirable
for large-scale network designers, such as scalability, lack of single points of failure, and
anonymity.
Categories and Subject Descriptors: C.2.1 [Network Architecture and Design]: Distributed Networks
General Terms: Algorithms, Design, Performance
Additional Key Words and Phrases: Self-stabilization, distributed network, gossiping algorithm
**ACM Reference Format:**
Stefan Dulman and Eric Pauwels. 2015. Self-stabilized fast gossiping algorithms. ACM Trans. Auton. Adapt.
Syst. 10, 4, Article 29 (December 2015), 20 pages.
[DOI: http://dx.doi.org/10.1145/2816819](http://dx.doi.org/10.1145/2816819)
**1. INTRODUCTION**
Advances in electronics, telecommunication, and user interface design have led to
recent deployments of myriads of networked embedded platforms around us. Smartphones, wireless sensor networks, swarms of robotic devices (drones, intelligent cars,
etc.) are just a few examples of systems being increasingly found in our environment.
Making these multinode systems “intelligent” and able to autonomously adapt to internal and external changes is of acute interest with immediate societal impact. The
bottleneck in such designs is usually their distributed control: due to their sheer size,
networks are subject to emergent behavior that more often than not disrupts their functionality. Examples abound in smart energy grids, large-scale infrastructure, internet
of things, and smart cities applications.
Traditional centralized control fails to work beyond a certain network size, for reasons
including limited bandwidth and real-time constraints violated mainly by continuous
changes in network topology. However, in many cases an adequate control strategy
relies on some form of measurement of global parameters of the network (in other
words, estimating whether the network is doing the “right thing”) and adopting corrective
This work was partly funded by the Rijksdienst voor Ondernemend Nederland grant TKISG01002 SG-BEMS.
Authors’ addresses: S. Dulman and E. Pauwels, Science Park 123, 1098 XG Amsterdam, The Netherlands;
emails: {stefan.dulman, eric.pauwels}@cwi.nl.
© 2015 ACM 1556-4665/2015/12-ART29 $15.00
29:2 S. Dulman and E. Pauwels
measures if not. Our work focuses on employing gossip algorithms [Boyd et al. 2005] to
achieve distributed state estimation for large-scale networks, thus providing the basic
measurement building block for distributed control.
As the name suggests, gossip algorithms attempt to compute aggregate (i.e., global)
values for important network parameters by relying exclusively on local exchanges of
information. Put differently, network nodes only communicate with their neighbors,
but nevertheless manage to efficiently compute a reliable value for global network
parameters (e.g., the network-wide average, maximum value, various statistical
parameters, etc.). Such an approach obviates the need to establish a central control authority:
since the resulting estimates diffuse across the network, every node will harbor the
appropriate value. Furthermore, our work extends these algorithms to be self-stabilizing
in the sense that changes in the network topology (e.g., resulting from rearrangements
of nodes) are rapidly and automatically reflected in the final result. A vast body of
literature exists around the concept of epidemic algorithms (such as gossiping [Boyd et al.
2005]); unfortunately, probably due to the lack of centralized control, their adoption
in practice is very limited at best (peer-to-peer applications being notable exceptions).
“Traditional gossiping” [Boyd et al. 2005] is a slowly converging protocol. We focus
on recent developments, such as the work of Shah [2009], which proposes trade-offs
leading to very fast converging algorithms. The authors exploit a property of order
statistics applied to exponential random variables to achieve convergence in O(D log N)
timesteps instead of the “traditional” O(D² log N) timesteps (D being the diameter of
the network and N the number of nodes). In the following, we refer to this class
of epidemic algorithms as fast gossiping algorithms.
Dynamics (i.e., changes in network topology such as nodes leaving the network,
stop-failing, resetting, etc.) heavily affect the results of gossiping algorithms, leading
to solutions employing rounds of communication that need to be synchronized [Jelasity
et al. 2005]. The reasoning for this is that during short intervals of time the networks
will experience very few modifications, so the shorter the rounds, the better the estimates. Drawing on our previous work [Pruteanu and Dulman 2012], we make use
of synchronization-free approaches, in which simple mechanisms are employed to add
self-stabilization properties to gossiping algorithms. In this article, we show how simple
timer mechanisms allow “removal” of old values from the network and trigger the
computed aggregate to change, reflecting changes in the network-stored values. Our
solution is limited to scenarios in which changes occur in batches, with a period larger
than the time needed for the network to stabilize.
As an example, let us consider the situation of a large crowd of people taking part in
a city-wide public event. People will carry smartphones and, more often than not, the
network infrastructure will be unavailable due to the large traffic incurred. The number
of people in the event is a measure of interest to several parties, including the emergency
services that accompany such an event. We achieve this computation by using the ad hoc
communication capabilities of smartphones (leading to a multihop mesh network) and
by employing a simple aggregate function of the values in the network (i.e., the sum,
with each smartphone holding the value 1). In this scenario, dynamics are represented
by a multitude of events: people randomly joining and leaving the event, a continuously
changing network topology, communication failures, etc. Self-stabilizing fast gossiping
algorithms provide an elegant solution in this case: not only is the value of interest
computed quickly and reliably on all nodes despite the dynamics, but the convergence
time depends mainly on the diameter of the network and not on the number of
participants. In other words, the size of the area in which the event takes place influences
the convergence speed. As we show in Section 4, the convergence speed remains almost
constant even when varying the number of nodes across several orders of magnitude.
Self-Stabilized Fast Gossiping Algorithms 29:3
Various decentralized solutions exist in the literature; they tend to rely on assumptions
invalidated by the previous example. A few examples include network-wide
synchronization [Jelasity et al. 2005], averaging of estimates of network parameters
running in parallel [Bicocchi et al. 2010], or simply tracking nodes joining or leaving the
network. In line with our previous work [Pruteanu and Dulman 2012; Iyer et al. 2011],
we advocate solutions that do not require tracking of individual nodes and are built
into the basic mechanism rather than added as layers of complexity above it. For
example, we introduce in Section 3.1 a simple timer mechanism that allows “removal”
of values belonging to nodes that have already left the network, without the need for a
tracking mechanism or even for unique identifiers for the nodes. Furthermore, in
Section 3.2 we show how to achieve a significant speed improvement by altering the
timer expiry function, moving from a linear to an exponential behavior.
The article is structured as follows. Section 2 introduces fast gossiping algorithms
and provides references to the relevant literature. Section 3 presents the main
contributions of the article: two self-stabilization extensions and a theoretical
characterization of their convergence time. The algorithms are evaluated numerically and
analyzed in Section 4. We provide information on related work on aggregate computation
in Section 5 and conclude the article with Section 6.
**2. PRELIMINARIES—FAST GOSSIPING ALGORITHMS**
For the scope of this article, we target geometric random graphs (i.e., mesh networks),
where nodes can communicate mainly with their direct neighbors. From the perspective
of the communication model, we assume that time is discrete. During one timestep each
node will pick and communicate with a random neighbor. Major changes, disruptions,
and updates in the network occur just once in a large number of timesteps. We will
make use of the concept of synchronized time rounds and ask the nodes to update their
local data at the beginning of the rounds. The bootstrap problem and round-based time
models have received a lot of attention in the literature [Jelasity et al. 2005; Bicocchi et al. 2010;
Pruteanu and Dulman 2012] and are not the focus of this article; loose constraints allow
for algorithms like the one presented in Werner-Allen et al. [2005].
We make no assumptions with respect to nodes stop-failing or new nodes joining the
network. The mechanism described in Section 3 can accommodate these cases and the
computation results will adapt themselves to such changes.
Our contribution is an extension of the basic primitive for computing sums in
a distributed network via the fast gossiping mechanism presented in Mosk-Aoyama and
Shah [2008]. That algorithm uses a property of order statistics applied to a series of N exponential random variables with
parameters λi, i ∈ (1, N), which leads to the sum of the parameters λi (Table I). The
algorithm resembles gossiping algorithms [Jelasity et al. 2005] but differs in a number
of important points.
Essentially, it trades communication for convergence speed. By relying on the
propagation of an extreme value (here, the minimum), which is locally computable, it achieves the fastest possible convergence in a distributed network,
O(D log N) timesteps (with D being the diameter of the network). This speed is significant compared to the original gossiping algorithms, which converged in O(D² log N)
timesteps [Boyd et al. 2005].
The price paid is an increase in exchanged message size of O(δ⁻²), where δ is a parameter
defining the precision of the final result. If λ is the ground-truth result, the algorithm
offers an estimate in the interval [(1 − δ)λ, (1 + δ)λ] with an error ϵ = O(1/polynomial(N)).
Zooming into the details of the algorithm (see Algorithm 1), each node i holds a
_positive variable λi from which it generates an exponential random variable vector_
**v. At each timestep, each node chooses a random neighbor and they exchange their**
29:4 S. Dulman and E. Pauwels
Table I. Notations Used in the Article

Symbol          Meaning
D               Diameter of the network
N               Number of nodes in the network
T               Maximum value for the time-to-live field
M               Maximum number of samples in a vector
C               Decay constant for time-to-live (by default 0.5)
δ               Precision of the estimation
ϵ               Probability that the estimation falls within a certain range
i, i1, i2       Indexes for the nodes in the network
j               Index for the samples in a vector of size M
λi              Local variable for node i
vi              Vector of exponentially distributed random values on node i
vi⁰             Original vector of random values on node i
u               Vector containing the minimum values from all vi
τi              Vector of time-to-live values on node i
n+              Number of nodes holding an old value
n−              Number of nodes holding a negative value
n0              Number of nodes holding a new value
ψ               Average time-to-live value on negative value nodes
k               Integer index, k ≥ 1
ak1, ak2, ...   Real coefficients
σ1              Initial sum of values in the network
σ2              Intermediate sum of values in the network
σ3              Final sum of values in the network
**ALGORITHM 1: Mosk-Aoyama-Shah Algorithm**
 1  /* λi - local parameter for node i */
 2  /* m - number of samples in the random vectors */
 3  /* n - number of nodes in the network */
 4  /* S - set of received value vectors in the last timestep */
 5  /* vlocal - local value vector */
 6  /* v0 - original value vector */
 7  /* initialization - run once */
 8  if v0 is uninitialized then
 9      for j = 1 to m do
10          v0[j] = random number from exponential distribution with parameter λi
11      vlocal = v0
12  /* periodic update - run every timestep */
13  for each vector v in S do
14      for j = 1 to m do
15          if vlocal[j] > v[j] then
16              vlocal[j] = v[j]
17  Broadcast vlocal
18  /* estimation of sum of λ-s */
19  Σ_{i=1}^{n} λi ≈ m / Σ_{j=1}^{m} v[j]
values, both keeping the smallest value on each position. In other words, after two
nodes i1 and i2, holding the values vi1 and vi2, communicate, each of them will hold a
vector v = v′i1 = v′i2 with the property that v[j] = min(vi1[j], vi2[j]) ∀ j ∈ (1, M). Thus,
the smallest value on each position in the vectors propagates fast in the network,
in O(D log N) timesteps via this push-pull gossiping mechanism (see Shah [2009],
Section 3.2.2.4, p. 32).

Fig. 1. Value removal mechanism wave-alike propagation (geometric random graph with 100,000 nodes;
diameter 40; random values).
The authors of Shah [2009] further show that, after all vectors in the network converge to the minimum-holding vector u, the sum of the λi values in the network is
approximated by the maximum likelihood estimator Σ_{i=1}^{N} λi ≈ m / Σ_{j=1}^{m} u[j]
(see Shah [2009], Property 5.1, p. 72).
Furthermore, the precision of the result is independent of the network size. The
message size depends only on the precision δ, O(δ⁻²) (see Shah [2009], Theorem 5.1,
p. 74 and Section 5.2.5.4, p. 75).
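To make the estimator concrete, the following Python sketch (simulation-only; the node count, rate range, and value of m are arbitrary choices, not taken from the paper) draws the per-node exponential samples, forms the network-wide minimum vector u directly instead of via gossip, and recovers the sum of the rates as m/Σ u[j]:

```python
import random

def estimate_sum(lambdas, m, rng):
    """Estimate sum(lambdas) via the order-statistics property:

    each node i draws m samples from Exp(lambda_i); the position-wise
    minimum u[j] over all nodes is Exp(sum(lambdas))-distributed, so
    m / sum(u) estimates the sum of the rates.
    """
    u = [min(rng.expovariate(lam) for lam in lambdas) for _ in range(m)]
    return m / sum(u)

rng = random.Random(42)
lambdas = [rng.uniform(0.5, 2.0) for _ in range(100)]
true_sum = sum(lambdas)
estimate = estimate_sum(lambdas, m=5000, rng=rng)
print(true_sum, estimate)  # the estimate lands close to the true sum
```

The relative error shrinks as 1/√m, which mirrors the O(δ⁻²) message-size cost: halving δ quadruples the number of samples exchanged.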
**3. SELF-STABILIZATION EXTENSIONS**
In this section, we extend the minimum value propagation mechanism presented
in Section 2 to account for dynamics in the network. Specifically, we add a time-to-live
field to each value: an integer that decreases with time and marks the
age of the current value. This mechanism takes care of nodes leaving the network,
stop-crashing, or resetting, without the need to track each node. The time-to-live
mechanism also works in the failure case in which a group of nodes changes their
values at a given moment. Furthermore, we extend the time-to-live expiry mechanism
to achieve active value removal in O(D log N + log T) timesteps. In other words, if a
certain minimum value belonging to a node that changed its variable λi has propagated
through the network, we mark it as "expired" and ensure that its associated time-to-live
value expires (reaches 0) within O(D log N + log T) timesteps. Intuitively, in a multihop
mesh network, the value removal mechanism takes the shape of a wave that replaces
the expired value in the network (see Figure 1).
**3.1. Passive Mechanism: Counter-Based Self-Stabilization**
We extend the algorithm presented in Mosk-Aoyama and Shah [2008] by adding to
each node i a new vector τi holding a time-to-live counter for each value. This new
vector is initialized with a default value T, larger than the convergence time of the
original algorithm (choosing a proper value is explained next).
The values in τi decrease by 1 every time slot, with one exception: the node generating
the minimum vi[j] on position j ∈ (1, M) sets τi[j] to T (see Algorithm 3,
**ALGORITHM 2: PropagateMinVal(v, τ)**
 1  /* v, τ - received value and time-to-live */
 2  /* vlocal, τlocal - local value and time-to-live */
 3  /* create temporary variables */
 4  (vm, vM) ← (min(v, vlocal), max(v, vlocal))
 5  (τm, τM) ← corresponding (τ, τlocal) to (vm, vM)
 6  /* update logic */
 7  if vm == vM then
 8      if vm < 0 then /* equal negative values */
 9          τm ← C·τm
10      else /* equal positive values */
11          min(τm, τM) ← max(τm, τM) − 1
12  else
13      if vm < 0 then /* at least one negative value */
14          if vm == −vM then
15              (τm, τM) ← (T, T)
16          else
17              (τm, τM) ← (C·τm, C·τM)
18      else /* two different positive values */
19          τM ← τm − 1
20  /* update local variables */
21  (v, vlocal) ← (vm, vm)
22  (τ, τlocal) ← corresponding (τm, τM)
line 9). In the absence of any other dynamics, all properties proved in Shah
[2009] remain unchanged as the output of our approach is identical to the original
algorithm.
The main reason for adding the time-to-live field is to account for nodes leaving
the network or nodes that fail-stop. In this way we avoid complicated mechanisms
in which nodes need to keep track of neighbors. An interesting side consequence is
that this mechanism does not require node identifiers; thus, applications built on top
of it preserve the privacy of the nodes in the network.
The intuition behind the counter-based mechanism is that a node i0 generating
the network-wide minimum on position j ∈ (1, M) will always advertise it with the
accompanying time-to-live set to the maximum T. Any other node i will adopt the
minimum value vi0[j] as vi[j] and have a value τi[j] decreasing with the distance from
the minimum-setting node i0. T is chosen to be larger than the maximum number of
gossiping steps it takes the minimum to reach any node in the network.
In a gossiping step between two nodes i1 and i2, if vi1[j] = vi2[j], then the largest
of τi1[j] and τi2[j] will propagate (Algorithm 2, line 11). This means that τi[j]
on all nodes i will be strictly positive for as long as the node is online. If the node
that generated the minimum value on position j goes offline, all the associated
τi[j] values in the network will steadily decrease (Algorithm 3, line 11) until they
reach 0 and the minimum will be replaced by the next smallest value in the network
(Algorithm 3, lines 12–14). Hence, it takes T timesteps for the network to "forget" the
value on position j.
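A toy, synchronous sketch of the passive mechanism on a path network follows (illustration only: the paper's model is randomized push-pull gossip, and the topology, values, and T below are arbitrary). The owner of the minimum keeps refreshing its timer, other nodes carry timers that decrease with distance, and once the owner leaves, the stale minimum is forgotten within T rounds and the next smallest value takes over.

```python
T = 50  # must exceed the time the minimum needs to reach every node

def step(nodes, alive):
    """One synchronous round on a path: read both neighbors, then age timers."""
    n = len(nodes)
    out = []
    for i in range(n):
        if not alive[i]:
            out.append(nodes[i])
            continue
        val, ttl, own = nodes[i]
        for j in (i - 1, i + 1):                 # path neighbors
            if 0 <= j < n and alive[j]:
                nval, nttl, _ = nodes[j]
                if nval < val or (nval == val and nttl > ttl):
                    val, ttl = nval, nttl        # adopt smaller value / fresher timer
        if val == own:
            ttl = T                              # owners keep reinforcing their value
        else:
            ttl -= 1                             # foreign values age ...
            if ttl <= 0:
                val, ttl = own, T                # ... and are eventually forgotten
        out.append((val, ttl, own))
    return out

# ten nodes on a path; node 0 owns the network-wide minimum 1.0
nodes = [(float(i + 1), T, float(i + 1)) for i in range(10)]
alive = [True] * 10
for _ in range(20):
    nodes = step(nodes, alive)
assert all(v == 1.0 for v, _, _ in nodes)        # everyone holds the minimum

alive[0] = False                                 # the minimum's owner fail-stops
for _ in range(T + 30):
    nodes = step(nodes, alive)
print([nodes[i][0] for i in range(10) if alive[i]])  # all revert to the next minimum, 2.0
```

Setting `alive[0] = False` models a fail-stop of the minimum's owner; note that no node identifiers or neighbor tracking are needed, only the (value, time-to-live) pairs.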
**3.2. Active Mechanism: Value Removal Algorithm**
The second self-stabilizing mechanism targets nodes that change their values at runtime
but remain in the network and are therefore able to actively remove old versions of their
values that may have propagated through the network. Assume a node i changes its value
Table II. Value Propagation

Propagation   Ordering                  Previous   Intermediate   Final
none          u[k] < vi[k] < v′i[k]     u[k]       u[k]           u[k]
none          u[k] < v′i[k] < vi[k]     u[k]       u[k]           u[k]
slow          vi[k] < u[k] < v′i[k]     vi[k]      vi[k]          u[k]
slow          vi[k] < v′i[k] < u[k]     vi[k]      vi[k]          v′i[k]
fast          v′i[k] < u[k] < vi[k]     u[k]       v′i[k]         v′i[k]
fast          v′i[k] < vi[k] < u[k]     vi[k]      v′i[k]         v′i[k]
**ALGORITHM 3: ComputeSum(v, τ)**
 1  /* v0 - original random samples vector on this node */
 2  /* v, τ - received value and time-to-live vectors */
 3  /* update all elements in the data vector */
 4  for j = 1 to length(v) do
 5      PropagateMinVal(v[j], τ[j])
 6  /* time-to-live update - do once every timeslot */
 7  for j = 1 to length(v) do
 8      if v[j] == v0[j] then /* reinforce a minimum */
 9          τ[j] ← T
10      else
11          τ[j] ← τ[j] − 1 /* decrease time-to-live */
12          if τ[j] <= 0 then /* value expired */
13              v[j] ← v0[j]
14              τ[j] ← T
15  /* estimate the sum of elements */
16  s ← 0
17  for j = 1 to length(v) do
18      s ← s + abs(v[j])
19  return length(v)/s
λi to λ′i at some time t. This change will trigger a regeneration of its original samples
from the exponential random variable, from vi to v′i. Let j be an index with j ∈ (1, M). Let
u be the vector containing the minimum values in the network if node i did not
exist. In order to understand the change happening when transitioning from λi to λ′i,
we need to look at the relationship between the individual values vi[j], v′i[j], and u[j].
As shown in Table II, if u[j] is the smallest of all three values, then no change will
propagate in the network. If v′i[j] is the smallest value, then it will propagate fast,
in O(D log N) timesteps, via the basic minimum propagation mechanism. If vi[j] is the
smallest, then this value will remain in the network until its associated time-to-live
field expires, in O(T). As usually T ≫ D, this leads to a very slow process. We designed
the value removal mechanism to speed up the expiration of this value from the network.
The removal mechanism is triggered by the node owning the value that needs to be
removed (in our case node i) and works as follows: node i will mark the value vi[ j]
as “expired” by propagating a negative value −vi[ j]. This change will not affect the
minimum value propagation mechanism (Algorithm 2, lines 4, 21—the negative value
of a positive minimum remains the minimum in the network) or the estimation of the
sum (notice the use of the absolute value function in Algorithm 3, line 18). If node i
contacts a node also holding the value vi[j], it will propagate the negative
sign for the value, also setting its time-to-live field to the large value T. Intuitively,
as long as vi[j] is present in the network, −vi[j] will propagate, overwriting it.
Considering the large range of unique float or double numbers versus the number of
values in a network at a given time, we assume the values in the network to be unique.
The time-to-live field of any negative value will halve with each gossiping step (by
default C = 0.5) if it does not meet the vi[ j] value (Algorithm 2, lines 9, 17). Intuitively,
if a negative value is surrounded by values other than vi[ j], it will overwrite the
neighbors’ positive values while canceling itself at the same time with an exponential
rate. This mechanism somewhat resembles a predator-prey model [Arditi and Ginzburg
1989], where the prey is represented by the vi[j] variable and the predators by −vi[j]. We
designed it such that the populations cancel each other, targeting the fixed point at
the origin as the solution of the accompanying Lotka-Volterra equations; details are
offered in Section 3.3.
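The update logic of Algorithm 2 for a single vector position can be transcribed almost line by line. The Python sketch below is illustration only; T and C are arbitrary, the two returned time-to-live values correspond to the min-value and max-value sides of the exchange, and for equal values the side assignment is immaterial:

```python
T = 1000   # maximum time-to-live
C = 0.5    # decay constant for negative (removal) values

def propagate_min_val(v1, t1, v2, t2):
    """One-position merge rule in the spirit of Algorithm 2.

    Returns (merged value, ttl of the min-value side, ttl of the
    max-value side); both nodes adopt the merged (minimum) value.
    """
    (vm, tm), (vM, tM) = sorted([(v1, t1), (v2, t2)])
    if vm == vM:
        if vm < 0:
            tm = C * tm                  # equal negative values: decay one copy
        else:
            hi = max(tm, tM)
            tm, tM = hi - 1, hi          # equal positives: smaller ttl becomes max - 1
    else:
        if vm < 0:
            if vm == -vM:                # removal value meets its counterpart
                tm = tM = T
            else:                        # removal value meets another positive
                tm, tM = C * tm, C * tM
        else:
            tM = tm - 1                  # two different positive values
    return vm, tm, tM

# the counterpart meeting resets both timers to T
print(propagate_min_val(-3.0, 10, 3.0, 700))  # (-3.0, 1000, 1000)
```

Note how a negative value always wins the minimum comparison against its positive counterpart, so the removal wave rides on the unmodified minimum-propagation path.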
LEMMA 3.1 (VALUE REMOVAL DELAY). By using the value removal algorithm, the new
minimum propagates in the network in O(D log N + log T) timesteps.

PROOF. In the worst-case scenario, the whole network contains the minimum value
vi[k] on position k, with the time-to-live field set to the maximum T. The negative value,
being the smallest one in the network, propagates in O(D log N) in the whole network.
Again, in the worst-case scenario, we have a network with each node holding the value
−vi[k] on position k with the time-to-live set to the maximum T. From this moment on,
the time-to-live will halve at each gossip step on each node, reaching 0 in, at worst,
O(log T) timesteps. This is the worst case because nodes may be contacted
by several neighbors during a timestep, leading to a much faster cancellation. Overall,
the removal mechanism will be active for at most O(D log N + log T) timesteps. This
bound is an upper bound; in reality, the spread and cancellation mechanisms act
in parallel, leading to tighter bounds.
This result gives us the basis for choosing the constant T. Ideally, T should be
chosen as small as possible, in line with the diameter of the network. The fact that the
removal mechanism is affected only by log T lets us use an overestimate of T: even a value
a few orders of magnitude larger than the diameter of the network will have
little impact on the convergence speed, though it does affect the passive expiry
mechanism. For example, if the network diameter is between 10 and 30 and the values
change roughly every 10,000 timesteps, we may safely set T anywhere between 100
and 1,000. This will not affect the convergence of the sum computation mechanism but
will allow node removals to be accounted for in a timely manner (under the assumption
that nodes leave the network less frequently than they change their values).
Put together, the mechanisms presented in this section lead to the sum computation
mechanism ComputeSum() presented in Algorithm 3. It holds the properties of the
original algorithm described in Mosk-Aoyama and Shah [2008] and it additionally
showcases self-stabilization properties to account for network dynamics in the form of
node removal and nodes changing their values in batches.
**3.3. A Model for the Value Removal Mechanism**
The value removal mechanism maps onto a predator-prey model, in which the predators
(negative values −vi[ j]) need to consume all the prey (positive values vi[ j]) before going
extinct. Graphically, the results are presented in Figure 2. In this section, we provide
a mathematical model for this mechanism, with the goal of validating that it leads
to the extinction of both predators and prey, meaning, in our case, that a new value
propagates in the network. For the sake of clarity, we assume that the vector of values
has a single element (M = 1), the extension to M ≠ 1 being straightforward. We will
drop the index when referring to vectors, such that v[1], u[1], and τ[1] will be addressed
as v, u, and τ . The old value refers to the minimum in the network (u), the negative
_value refers to −u, and the new value to a new minimum u[′]._
Fig. 2. Predator-prey simulation with 1,000 nodes (C1 = C2 = 0.5; initial configuration consists of a single
predator node and 999 prey nodes).
We focus on a fully connected network case, as this constitutes the most difficult
scenario. Intuitively, the value removal mechanism acts as a wave starting at a node
and encompassing all the network. In the case of a fully connected network, nodes can
talk to any of the neighbors, meaning also that the wave has a difficult time “catching”
values diffused in this communication model.
Let n+ be the number of nodes that hold the old value, n− the number of nodes holding
the negative value, and n0 the number of nodes that hold neither, the uninitialized
nodes (n+ + n− + n0 = N). We propose a differential equation model to capture the
variation of these quantities in the network. Targeting the worst-case scenario, initially
all nodes are assumed to be initialized with the old value except one, which is initialized
with the negative value (n+ = N − 1, n− = 1).
Modeling the time-to-live mechanism is difficult due to its nonlinearity. We use the
following approximation: define a variable ψ that holds the average time-to-live value
on the n− nodes. The ψ variable changes each time a node with a negative value
interacts with another node.
We focus on a node i with a negative value −vi, having the time-to-live value τi. If it
interacts with a positive value node holding the counterpart value vi, then the tuple of
time-to-live values on the two nodes becomes (τi ≈ ψ, −) → (T, T) and
ψ → ψ + 2(T − ψ)/(n− + 1), thus increasing (the state transitions are shown in Figure 3). If the
negative value node meets an uninitialized node, then the tuple of time-to-live values
becomes (τi, −) → (Cτi, Cτi) and ψ → ψ + 2(C − 1)ψ/(n− + 1), with C ∈ (0, 1), thus decreasing.
When several negative value nodes meet uninitialized nodes (say a fraction h), then
ψ = (Σ_i τi)/n− changes to ψ′:

    ψ′ = (Σ_i τi − hn−ψ + 2hn−Cψ) / (n− + hn−) = ψ + 2(C − 1)ψ h / (1 + h).    (1)
Fig. 3. State transition effects on time-to-live (indices 1, 2 distinguish between two nodes of the same type
interacting; the arrows indicate transition between states—they can be interpreted also as interactions
between nodes in the two states they connect; diagram does not account for the decrease of time-to-live with
each timestep).
Fig. 4. Variation of the negative value nodes with respect to average time-to-live value (interpolated lines
represent possible f (ψ) fittings).
From a probabilistic perspective, each of the n− nodes meets on average n0/N uninitialized
nodes, thus h = n0/N. This leads to ψ → ψ + 2(C − 1)ψ · n0/(n0 + N). (By replacing h with n−/N, a
similar formula is obtained for negative value nodes interacting with other negative
value nodes.) Following the same reasoning, we obtain ψ → ψ + 2(T − ψ) · n+/(n+ + N) for
negative value nodes meeting positive value nodes.
Negative value nodes interacting with other negative value nodes decrease their
time-to-live values fast and transform themselves into uninitialized nodes. We can
approximate the whole process with a fraction f(ψ)n−. Regarding the profile of the f(ψ)
function, few nodes become uninitialized when ψ is close to T. As ψ decreases, approaching 0,
negative value nodes increasingly become uninitialized. Figure 4 shows
this dependency, while Figure 5 shows the general variation of the number of nodes
with the average value of the time-to-live field for negative value nodes.
Fig. 5. Number of positive and negative value nodes with respect to average time-to-live value (geometric
random graph with 1,000 nodes; diameter 10; points are taken from 10 different simulations run till
convergence).
Based on the behavior observed in practice, the function f(ψ) can be approximated
with an expression of the form

    f(ψ) = 1 / ( Σ_{k≥1} ak1 exp(ak2 (ψ/T)^k) + Σ_{k≥1} ak3 (ψ/T)^k ),    (2)

with k being an index and ak∗ suitably chosen real coefficients. This leads to the system

    dn+/dt = (n0/N) n+ − (n−/N) n+,
    dn−/dt = (n+/N) n− + (n−/N) n0 − f(ψ) n−,    (3)
    dψ/dt = 2(T − ψ) n+/(n+ + N) + 2ψ(C1 − 1) n−/(n− + N) + 2ψ(C2 − 1) n0/(n0 + N),
where C1, C2 ∈ (0, 1) are constants. For simplicity, we will consider these two constants
equal (C). It is easy to check that the trivial solution (n+ = 0, n− = 0, ψ = 0) verifies
the system. This model captures well the behavior of the value removal mechanism,
but it still remains an approximation. Its main drawback is that the time scale tends
to be larger than in simulations. The approximation comes from the use of the function
f(ψ), for which we do not have an exact expression and which was obtained via fitting.
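The claim that the trivial solution verifies the system can be checked mechanically. In the sketch below, f(ψ) is a placeholder (the paper's fitted coefficients are not given), chosen only to be finite at ψ = 0; N, T, and C are arbitrary, and C is used for both C1 and C2 as in the text:

```python
import math

N, T, C = 1000.0, 500.0, 0.5

def f(psi):
    # Placeholder for the fitted f(psi) of Equation (2); any function
    # finite at psi = 0 works for this equilibrium check.
    return 1.0 / (math.exp(2.0 * psi / T) + psi / T)

def derivatives(n_pos, n_neg, psi):
    """Right-hand side of system (3) with C1 = C2 = C."""
    n0 = N - n_pos - n_neg
    d_pos = (n0 / N) * n_pos - (n_neg / N) * n_pos
    d_neg = (n_pos / N) * n_neg + (n_neg / N) * n0 - f(psi) * n_neg
    d_psi = (2 * (T - psi) * n_pos / (n_pos + N)
             + 2 * psi * (C - 1) * n_neg / (n_neg + N)
             + 2 * psi * (C - 1) * n0 / (n0 + N))
    return d_pos, d_neg, d_psi

# the trivial solution (n+ = 0, n- = 0, psi = 0) is an equilibrium
print(derivatives(0.0, 0.0, 0.0))  # (0.0, 0.0, 0.0)
```

Every term of the right-hand side carries a factor of n+, n−, or ψ, which is why the origin annihilates all three derivatives regardless of the exact shape of f.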
Nevertheless, when exploring the solution space as a function of C, it turns out
that the system converges to the trivial solution as long as C is above a certain value,
dependent on the network size. This is shown graphically in Figure 6, in which we
plotted the variation of the number of negative value nodes as a function of time,
for various values of the constant C. As seen also in practice, for a large range of C,
the system converges fast to the trivial solution. As C gets smaller, the convergence
speeds up, and after a certain threshold the system converges to a different solution,
Fig. 6. Variation of the number of negative value nodes with time in a network with 1,000 nodes (C1 = C2 = C;
initial configuration consists of a single predator node and 999 prey nodes).
Fig. 7. Variation of the number of positive value nodes function of the number of negative value nodes
(C1 = C2 = C; 1,000 nodes; initial configuration consists of a single predator node and 999 prey nodes; arrows
indicate time evolution).
exhibiting oscillatory behavior—better observed in Figure 7, which plots the variation
of the number of positive value nodes versus the number of negative value nodes.
Intuitively, the oscillations and convergence value are the result of the negative value
nodes being too short-lived to cover the whole network. We have tested network sizes
ranging from a few hundred nodes to several tens of millions of nodes and found that,
for example, C = 0.5 is a good choice, guaranteeing system convergence at a fast rate.
Fig. 8. Sum computation during network dynamics with value removal mechanism enabled (geometric
random graph with 1,000 nodes initially; diameter 14; random values; half of the network is disconnected
at time 50; 30% nodes change their values at time 200). Notice the reduction in transition period compared
to Figure 9.
**4. DISCUSSIONS**
Our distributed approach solves most of the scaling issues and proves to be highly
robust against network dynamics (e.g., network nodes becoming unavailable due to
failures, reconfiguration, new nodes joining the system, etc.) as long as these changes
occur in batches, on a time scale comparable to a time round. As we show in the
following, our approach is very fast for a typical network, outperforming the speed of
a centralized approach. As the protocols rely on anonymous data exchanges, privacy
issues are alleviated, as the identities of the system participants are not needed in the
computations.
The downsides of our approach map onto the known properties of this class of
epidemic algorithms. Although anonymity is preserved, an authentication system [Jesi
et al. 2007] is needed to prevent malicious data from corrupting the computations.
In the following subsections, we numerically characterize some of these properties.
The simulations were performed using Matlab and C++. Nodes were randomly deployed
onto a square surface and the circular transmission range was varied until the desired
diameter of the network was obtained. All networks were verified to be formed of one
main cluster, with disconnected nodes not considered. The communication model was
push-pull gossip. We used synchronized time rounds to model time. Unless otherwise
stated, each result point represents an average over 100 simulations.
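A minimal sketch of this simulation setup, assuming a unit square and using BFS hop counts to probe the diameter (the node count, radius, and seed below are arbitrary choices):

```python
import random
from collections import deque

def geometric_random_graph(n, radius, rng):
    """Drop n nodes uniformly on the unit square; link pairs within radius."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = pts[i][0] - pts[j][0]
            dy = pts[i][1] - pts[j][1]
            if dx * dx + dy * dy <= radius * radius:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def eccentricity(adj, src):
    """BFS hop distances from src; returns (farthest distance, nodes reached)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values()), len(dist)

rng = random.Random(7)
adj = geometric_random_graph(300, 0.2, rng)
ecc, reached = eccentricity(adj, 0)
print(reached, ecc)  # nodes reachable from node 0, and a lower bound on the diameter
```

Shrinking or growing the radius until the eccentricities stabilize at the desired value reproduces the "vary the transmission range until the desired diameter" step; disconnected stragglers are simply left out of the main cluster, as in the paper.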
**4.1. Assumptions on Synchronized Changes**
Although the two mechanisms introduced in Section 3 do not make use of any synchronization, their functionality is guaranteed only if the changes in the network happen in
batches, with a period on the order of O(D log N). The question we address in this section
is what happens if this assumption is violated.
Figure 8 gives a graphical representation of the two mechanisms at work. The
counter-based self-stabilization mechanism activates when part of the network is
disabled (e.g., due to a permanent hardware failure, network partitioning, software
Fig. 9. Sum computation during network dynamics with value removal mechanism disabled (geometric
random graph with 1,000 nodes initially; diameter 14; random values; half of the network is disconnected
at time 50; 30% nodes change their values at time 200).
bug invalidating a number of nodes, etc.). While the timers decrease toward expiration
(timesteps 50–80), the aggregate computed in the network remains constant. Once
the timers on various nodes expire, a sudden “dip” occurs in the curve (see timesteps
80–90). This is due to the fact that once a minimum value expires on a node, the node
replaces it with its own value from the original random value vector. As this is usually
larger than the minimum, the computation in Algorithm 3 (Lines 7–19) leads to a very
small value. As a new minimum propagates through the network, the computed value
begins to rise toward the final value.
In the case in which we maintain the change in the network values at timestep
200, with the value removal mechanism disabled, the network still converges to the
proper value, only slower. This situation is presented in Figure 9. Let σ1 = Σ_{i=1}^{N} λi be
the sum of the values in the network before the induced change in values at time 200
(the initial sum). After 30% of the nodes change their value, we notice a sudden drop
in values at time 200, which we explained earlier. Then, the network converges to an
intermediate sum σ2 (see Figure 9) and the timers associated with the slowly propagating
values (see Table II and Section 3.2) decrease toward 0. Once they expire (around time
230), the network fluctuates once more and stabilizes to the value σ3.
It is interesting to notice that the difference σ2 − σ1 equals the sum of the values that
changed. In other words, assume that a subset of nodes i ⊂ I changed their values λi
to λ′i (30% of the nodes in the example in Figure 9). Then

    σ2 = M / Σ_{j=1}^{M} min( min_{i∈(1,N)}(vi[j]), min_{i⊂I}(v′i[j]) ) ≝ Σ_{i=1}^{N} λi + Σ_{i⊂I} λ′i,

leading to

    σ2 − σ1 = Σ_{i⊂I} λ′i.    (4)
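The identity in Equation (4) can be spot-checked with the order-statistics estimator itself: forming the minima over both the old and the regenerated samples yields an estimate close to σ1 plus the sum of the new values. This is a simulation-only sketch; the sizes, rates, and the changed subset are arbitrary choices:

```python
import random

rng = random.Random(123)
N, M = 40, 20000
lam = [rng.uniform(0.5, 1.5) for _ in range(N)]
lam_new = [rng.uniform(0.5, 1.5) for _ in range(12)]  # ~30% of nodes change value

def min_vector(rates, m, rng):
    """u[j] = position-wise minimum of per-rate exponential samples."""
    return [min(rng.expovariate(r) for r in rates) for _ in range(m)]

# sigma_1: estimate from the original values only
sigma1 = M / sum(min_vector(lam, M, rng))
# sigma_2: the old minima still linger next to the regenerated samples,
# so the position-wise minimum runs over both old and new rates
sigma2 = M / sum(min_vector(lam + lam_new, M, rng))

diff = sigma2 - sigma1
expected = sum(lam_new)
print(diff, expected)  # sigma2 - sigma1 tracks the sum of the changed values
```

This is also why the intermediate plateau σ2 in Figure 9 is informative: before the stale timers expire, the network briefly exposes how much change occurred.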
Fig. 10. Convergence of network starting from a clean state indicating that convergence time is basically
independent of N and linearly dependent on the diameter D (geometric random graph; nodes initialized with
random values; summation as local aggregate; error bars represent standard deviation; 100 simulations).
Employing the value removal mechanism speeds up the convergence process, by
“removing” the intermediate convergence level σ2 (in reality it exponentially expires the
timers during this level) and leads to the situation shown in Figure 8. The information
on how much change will occur in the network (σ2 − _σ1) is no longer available before_
the network finally stabilizes.
**4.2. Scalability Aspects**
One of the main characteristics of our approach is that the proposed algorithm
scales very well with the number of nodes in the network. As seen in Figure 10
and Figure 11, the number of nodes has little influence on the final results, entering only as
an O(log N) term. The simulation explored a space in which we varied the number of
nodes over four orders of magnitude, and the results hint that tighter bounds might
exist than the ones we proposed in this article. We noticed that for a fully connected
network, the recovery time varies by 34% between a network with 1,000 nodes and
one with 100,000 nodes, while the variation drops to a mere 2.4% for a 20-hop network
varying from 1,000 nodes to 100,000 nodes.
These results are very important for large-scale network applications such as the
smart energy grid. As the network will be linked to a physical space (a country or in
general, a region), fully covering it, the diameter of the network is expected to, at most,
decrease with the addition of new nodes. Intuitively, when thinking of nodes as devices
with a fixed transmission range, adding more devices in the same region may lead to
shorter paths between various points. The aggregate computation approach we propose
is thus almost invariant to an increase in the number of nodes in the network, while
varying only linearly with the diameter. These properties are essential
for any solution that needs to take into account that the number of participants in the
network will increase over time.
We are also interested in understanding the effects that the time-to-live of the negative
fields has on the convergence and scalability properties. We have considered a 10-hop
29:16 S. Dulman and E. Pauwels
Fig. 11. Convergence of network after a disruption exhibiting the same characteristics as Figure 10 (geo
metric random graph; half of the nodes change their values after initial network convergence; summation
as local aggregate; error bars represent standard deviation; 100 simulations).
Fig. 12. Influence of T parameter—behavior is basically constant in T as long as it is larger than the
diameter D of the network (random geometric graph; 10-hop network; half of the nodes change their values
randomly after initial network convergence; summation as local aggregate; error bars represent standard
deviation; 100 simulations).
network with 1,000 to 5,000 nodes and varied the time-to-live for negative values
between 500 and 10,000. Figure 12 confirms Lemma 3.1 with respect to the log T
term. As the data shows, the convergence time was affected very little by the chosen
parameters. As expected, the diameter of the network has the largest influence in this
mechanism.
**4.3. Influence of Communication Topology**
The underlying communication network for a large-scale network (such as a smart
energy grid) can be implemented in a number of ways, mapping to different communication topologies. For example, one might choose to use the internet backbone,
allowing any-to-any communication in the network, leading to a fully connected graph.
In the first experiment, we have initialized the network with a set of random variables
and recorded the time when the aggregated sum converges to the same value on all
nodes. As seen in Figure 10, fully connected networks lead to the fastest aggregate
computation. In a second experiment, once the network stabilized, we introduced a
change in the form of half of the nodes in the network changing their value to a different one. Again, we recorded the time until the network stabilized after this change.
As expected, Figure 11 shows that fully connected networks stabilize the fastest after
a disruption.
These results assume the internet backbone to work perfectly and to be able to route the
high level of traffic generated. A more realistic scenario is to consider that the various
data collection points obtain data from the individual consumers via some radio technology (e.g., GPRS modems) and are themselves connected to the internet backbone.
To keep the traffic in the network to a minimum, the data collection points only communicate with their network-wise first-order neighbors, leading to a mesh network
deployment type. As seen in Figure 10 and Figure 11, the diameter of the network
clearly has the major impact on the results, confirming the theoretical results.
The information needs at least O(D) timesteps to propagate through the network. The
constant in the O(·) notation is influenced on one hand by the average connectivity in
the network (a node can only contact a single neighbor per timestep, slowing information dissemination) and, on the other hand, by the push-pull communication model (a node
may be contacted by several neighbors during a timestep, speeding up information
dissemination).
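The two effects described above can be seen in a toy synchronous push-pull simulation (our own sketch, unrelated to the simulator used for the figures in this article): the minimum of all values covers a fully connected graph in roughly O(log N) timesteps, but needs at least D timesteps on a ring, since a value computed from the previous timestep's state can travel at most one hop per timestep.

```python
import random

def gossip_min_rounds(neighbors, values, rng):
    """Synchronous push-pull min-gossip: each timestep, every node contacts
    one uniformly random neighbor and both ends keep the smaller of the two
    values read from the PREVIOUS timestep, so information travels at most
    one hop per timestep. Returns timesteps until all hold the global min."""
    state = list(values)
    target = min(values)
    rounds = 0
    while any(v != target for v in state):
        new = list(state)
        for i in range(len(state)):
            j = rng.choice(neighbors[i])
            m = min(state[i], state[j])
            new[i] = min(new[i], m)  # pull: i learns j's previous value
            new[j] = min(new[j], m)  # push: j learns i's previous value
        state = new
        rounds += 1
    return rounds

N = 256
init = random.Random(0)
values = [init.random() for _ in range(N)]

complete = [[j for j in range(N) if j != i] for i in range(N)]  # diameter 1
ring = [[(i - 1) % N, (i + 1) % N] for i in range(N)]           # diameter N // 2

r_complete = gossip_min_rounds(complete, values, random.Random(1))
r_ring = gossip_min_rounds(ring, values, random.Random(2))
print(r_complete, r_ring)  # the ring needs at least N // 2 = 128 timesteps
```

The complete graph typically converges in on the order of log N timesteps, while the ring is provably lower-bounded by its diameter, mirroring the O(D) propagation argument above.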
**5. RELATED WORK**
Aggregate computation in large-scale networks is a topic that has received significant attention across a large number of fields. It is fueled by both need (ultra-large-scale
systems make decision making a highly complex task [Northrop et al. 2006]) and imagination (programmable matter requires a control paradigm [Goldstein et al. 2005]). A
myriad of domain-specific programming languages have been developed to ease the
task of controlling large-scale networks. The authors of Beal et al. [2013] survey efforts
in fields including amorphous computing, synthetic biology, wireless sensor networks,
pervasive computing, swarm robotics, and parallel and distributed computing, to name
a few.
The basic building blocks of all these languages are primitives that guarantee that
the system converges to a desired state. Complicated networking protocols are usually not employed, as complexity was shown to arise even from combinations of simple
rules [Wolfram 2002]. This holds true for both simple cellular automata [Chopard
and Droz 1998] and highly complex systems combining modern communication means
with large-scale fixed infrastructure (e.g., autonomic computing [Kephart and Chess
2003]).
In this context, our research focuses on basic primitives, based on local interactions,
which give designers the possibility of creating complex behaviors in a tractable manner. While solutions involving global information available at each node are easily
rendered useless by increasing system scales (e.g., tracking the status of all individual
nodes is infeasible), aggregate information about the system behavior is achievable.
Gossiping [Boyd et al. 2005] is such a mechanism, allowing computation of aggregates in a timely manner without the need for precise topology information. Complex
functions can be achieved with simple local rules, ranging from statistical information
about information distributed all over the network [Kempe et al. 2003] to network
overlays [Jelasity and Babaoglu 2006]. The authors of Mosk-Aoyama and Shah [2008]
show that a trade-off exists between the convergence time and the amount of information exchanged in the gossiping process, leading to fast-converging algorithms (in
O(D log N) timesteps) [Shah 2009].
It is interesting to notice that even simple aggregates received a lot of attention.
For example, counting the nodes in a network (thus summing up all the values 1 held
locally) led to a large number of solutions [Kostoulas et al. 2007; Massoulié et al. 2006;
Madden et al. 2002]. Of interest, and in line with the work presented in this article, is the
synopsis diffusion mechanism [Nath et al. 2004], which employs statistical techniques
to achieve a result quickly. Unfortunately, these works are usually one-shot solutions,
requiring some type of synchronization mechanisms (preferably distributed solutions
such as Werner-Allen et al. [2005]). Self-stabilization techniques [Dolev 2000] received
a lot of interest leading to approaches such as periodic synchronization [Jelasity et al.
2005], parallel running algorithms [Bicocchi et al. 2010], or asynchronous periodic
resets [Pruteanu and Dulman 2012].
Self-stabilization techniques like the ones proposed in this article are tightly related
to the classical problem of leader election. The basis for our work, the Mosk-Aoyama–Shah algorithm [Mosk-Aoyama and Shah 2008], has as an underlying feature the
propagation of a minimum value in a network—a common technique used in leader
election protocols [Dijkstra 1982]. Using timers for detecting changes or deadlocks
in the network state [Mayer et al. 1992] is ubiquitous in computer science protocols.
Also, from the perspective of topology, a large number of leader-election algorithms
present similarities with the underlying analysis of convergence [Shah 2009] (ring
graphs [Garcia-Molina 1982], tree networks [Antonoiu and Srimani 1996], or mesh
networks [Malpani et al. 2000]). More common techniques between these fields have
already been identified in Angluin et al. [2008]. Nevertheless, the second extension we
propose in this article breaks away from the classical topics of leader election and token
management as the assumptions it builds upon (e.g., values of nodes in the network
changing often) are rarely applicable in the works cited previously.
Returning to the problem of engineers having access to a design process for com
plex behavior, our work provides a measurement component for distributed control
approaches. The primitives we propose can be used in creating complex algebras
(such as the Presburger arithmetic employed in population protocols [Angluin et al.
2007, 2008]) or even in more exploratory solutions such as swarm chemistry [Sayama
2009].
The results of this theoretical work form the basis for building a wide range of
applications. The Internet of Things, smart cities, and Industry 4.0 are just a few highly relevant technologies that may make use of our work, building on
the lessons of already deployed systems (e.g., monitoring of cloud computing server
farms with Astrolabe [Van Renesse et al. 2003], or computing complex robustness metrics
in smart energy grid applications [Koç et al. 2013]).
**6. CONCLUSIONS AND FUTURE WORK**
In this article, we focused on adding self-stabilizing properties to fast gossiping algorithms. The motivation is that, when changes in the network occur in batches separated
in time, no additional synchronization mechanism is needed: a simple extension based
on counters is enough to guarantee that the network stabilizes in a timely manner
after the disturbance.
To this end, we propose two mechanisms (the counter-based self-stabilization and
the value removal mechanism) that together allow fast gossiping algorithms to
withstand network dynamics. The basic properties of the fast gossiping algorithms
still hold, leading to a solution that is highly scalable and network-topology agnostic, has
no single point of failure, and allows real-time dissemination of results to all the nodes in
the network.
Regarding future work, we noticed that self-stabilizing approaches to large-scale,
highly dynamic systems are quite limited in number, offering promising avenues for
investigation. For example, we would like to extend the proposed counter-based mechanism
to setups in which changes can occur without any restrictions. One direction of research
includes limiting the fluctuations in the aggregate computation to a smaller dynamic
range, preventing it from decreasing toward 0 once timers expire or values change.
**REFERENCES**
Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. 2007. The computational power of popu
lation protocols. Distributed Computing 20, 4 (2007), 279–304.
Dana Angluin, James Aspnes, Michael J. Fischer, and Hong Jiang. 2008. Self-stabilizing population protocols.
_ACM Transactions on Autonomous and Adaptive Systems 3, 4 (2008), 13._
Gheorghe Antonoiu and Pradip K. Srimani. 1996. A self-stabilizing leader election algorithm for tree graphs.
_Journal of Parallel and Distributed Computing 34, 2 (1996), 227–232._
Roger Arditi and Lev R. Ginzburg. 1989. Coupling in predator-prey dynamics: Ratio-dependence. Journal of
_Theoretical Biology 139, 3 (1989), 311–326._ DOI:http://dx.doi.org/10.1016/S0022-5193(89)80211-5
Jacob Beal, Stefan Dulman, Mirko Viroli, Nikolaus Correll, and Kyle Usbeck. 2013. Organizing the aggregate.
_Formal and Practical Aspects of Domain-Specific Languages: Recent Developments (2013), 436._
Nicola Bicocchi, Marco Mamei, and Franco Zambonelli. 2010. Handling dynamics in diffusive aggregation
schemes: An evaporative approach. Future Generation Computer Systems 26, 6 (2010), 877–889.
Stephen Boyd, Arpita Ghosh, Balaji Prabhakar, and Devavrat Shah. 2005. Gossip algorithms: Design, anal
ysis and applications. In Proceedings of the IEEE 24th Annual Joint Conference of the IEEE Computer
_and Communications Societies (INFOCOM 2005), Vol. 3. IEEE, 1653–1664._
Bastien Chopard and Michel Droz. 1998. Cellular Automata Modeling of Physical Systems. Vol. 24. Cambridge
University Press, Cambridge.
Edsger W. Dijkstra. 1982. Self-stabilization in spite of distributed control. In Selected Writings on Computing:
_A Personal Perspective. Springer, 41–46._
Shlomi Dolev. 2000. Self-Stabilization. MIT Press.
Hector Garcia-Molina. 1982. Elections in a distributed computing system. IEEE Transactions on Computers
100, 1 (1982), 48–59.
Seth Copen Goldstein, Jason D. Campbell, and Todd C. Mowry. 2005. Programmable matter. Computer 38, 6
(2005), 99–101.
Venkat Iyer, Andrei Pruteanu, and Stefan Dulman. 2011. Netdetect: Neighborhood discovery in wireless
networks using adaptive beacons. In 2011 5th IEEE International Conference on Self-Adaptive and
_Self-Organizing Systems (SASO’11). IEEE, 31–40._
Márk Jelasity and Ozalp Babaoglu. 2006. T-Man: Gossip-based overlay topology management. In Engineering
_Self-Organising Systems. Springer, 1–15._
Márk Jelasity, Alberto Montresor, and Ozalp Babaoglu. 2005. Gossip-based aggregation in large dynamic
networks. ACM Transactions on Computer Systems 23, 3 (2005), 219–252.
Gian Paolo Jesi, David Hales, and Maarten van Steen. 2007. Identifying malicious peers before it’s too late:
A decentralized secure peer sampling service. In 1st International Conference on Self-Adaptive and
_Self-Organizing Systems (SASO’07). 237–246._ DOI:http://dx.doi.org/10.1109/SASO.2007.32
David Kempe, Alin Dobra, and Johannes Gehrke. 2003. Gossip-based computation of aggregate information.
In Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science.. IEEE, 482–
491.
Jeffrey O. Kephart and David M. Chess. 2003. The vision of autonomic computing. Computer 36, 1 (2003),
41–50.
Yakup Koç, Martijn Warnier, Robert E. Kooij, and Frances M. T. Brazier. 2013. A robustness metric for
cascading failures by targeted attacks in power networks. In Proceedings of the 10th IEEE International
_Conference on Networking, Sensing and Control (ICNSC’13), IEEE, 48–53._
Dionysios Kostoulas, Dimitrios Psaltoulis, Indranil Gupta, Kenneth P. Birman, and Alan J. Demers.
2007. Active and passive techniques for group size estimation in large-scale and dynamic distributed
systems. Journal of Systems and Software 80, 10 (Oct. 2007), 1639–1658. DOI:http://dx.doi.org/10.1016/j.jss.2007.01.014
Samuel Madden, Michael J. Franklin, Joseph M. Hellerstein, and Wei Hong. 2002. TAG: A tiny aggregation
service for ad-hoc sensor networks. SIGOPS Operating Systems Review 36, SI (Dec. 2002), 131–146.
DOI:http://dx.doi.org/10.1145/844128.844142
Navneet Malpani, Jennifer L. Welch, and Nitin Vaidya. 2000. Leader election algorithms for mobile ad hoc
networks. In Proceedings of the 4th International Workshop on Discrete Algorithms and Methods for
_Mobile Computing and Communications. ACM, 96–103._
Laurent Massoulié, Erwan Le Merrer, Anne-Marie Kermarrec, and Ayalvadi Ganesh. 2006. Peer counting and sampling in overlay networks: Random walk methods. In Proceedings of the 25th Annual
_ACM Symposium on Principles of Distributed Computing (PODC’06). ACM, New York, NY, 123–132._
DOI:http://dx.doi.org/10.1145/1146381.1146402
Alain Mayer, Yoram Ofek, Rafail Ostrovsky, and Moti Yung. 1992. Self-stabilizing symmetry breaking in
constant-space. In Proceedings of the 24th Annual ACM Symposium on Theory of Computing. ACM,
667–678.
Damon Mosk-Aoyama and Devavrat Shah. 2008. Fast distributed algorithms for computing separable func
tions. IEEE Transactions on Information Theory 54, 7 (2008), 2997–3007.
Suman Nath, Phillip B. Gibbons, Srinivasan Seshan, and Zachary R. Anderson. 2004. Synopsis diffusion for
robust aggregation in sensor networks. In Proceedings of the 2nd International Conference on Embedded
_Networked Sensor Systems. ACM, 250–262._
Linda Northrop, Peter Feiler, Richard P. Gabriel, John Goodenough, Rick Linger, Tom Longstaff, Rick
Kazman, Mark Klein, Douglas Schmidt, Kevin Sullivan, and others. 2006. Ultra-large-scale systems—
The software challenge of the future. (2006).
Andrei Pruteanu and Stefan Dulman. 2012. LossEstimate: Distributed failure estimation in wireless net
works. Journal of Systems and Software 85, 12 (2012), 2785–2795.
Hiroki Sayama. 2009. Swarm chemistry. Artificial Life 15, 1 (2009), 105–114.
Devavrat Shah. 2009. Gossip Algorithms. Now Publishers Inc.
Robbert Van Renesse, Kenneth P. Birman, and Werner Vogels. 2003. Astrolabe: A robust and scalable
technology for distributed system monitoring, management, and data mining. ACM Transactions on
_Computer Systems 21, 2 (2003), 164–206._
Geoffrey Werner-Allen, Geetika Tewari, Ankit Patel, Matt Welsh, and Radhika Nagpal. 2005. Firefly-inspired
sensor network synchronicity with realistic radio effects. In Proceedings of the 3rd International Confer_ence on Embedded Networked Sensor Systems. ACM, 142–153._
Stephen Wolfram. 2002. A New Kind of Science. Vol. 5. Wolfram Media Champaign.
Received January 2015; revised June 2015; accepted August 2015
JMIR PUBLIC HEALTH AND SURVEILLANCE Oyibo & Morita
##### Viewpoint
# Designing Better Exposure Notification Apps: The Role of Persuasive Design
##### Kiemute Oyibo[1], BSc, MSc, PhD; Plinio Pelegrini Morita[1,2,3,4], PEng, MSc, PhD
1School of Public Health Sciences, Faculty of Health, University of Waterloo, Waterloo, ON, Canada
2Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
3eHealth Innovation, Techna Institute, University Health Network, Toronto, ON, Canada
4Institute of Health Policy, Management, and Evaluation, University of Toronto, Toronto, ON, Canada
**Corresponding Author:**
Plinio Pelegrini Morita, PEng, MSc, PhD
School of Public Health Sciences
Faculty of Health
University of Waterloo
200 University Avenue West
Waterloo, ON, N2L 3G1
Canada
Phone: 1 5198884567 ext 41372
[Email: plinio.morita@uwaterloo.ca](mailto:plinio.morita@uwaterloo.ca)
### Abstract
**Background:** Digital contact tracing apps have been deployed worldwide to limit the spread of COVID-19 during this pandemic
and to facilitate the lifting of public health restrictions. However, due to privacy-, trust-, and design-related issues, the apps are
yet to be widely adopted. This calls for an intervention to enable a critical mass of users to adopt them.
**Objective:** The aim of this paper is to provide guidelines to design contact tracing apps as persuasive technologies to make
them more appealing and effective.
**Methods:** We identified the limitations of the current contact tracing apps on the market using the Government of Canada’s
official exposure notification app (COVID Alert) as a case study. Particularly, we identified three interfaces in the COVID Alert
app where the design can be improved. The interfaces include the no exposure status interface, exposure interface, and diagnosis
report interface. We propose persuasive technology design guidelines to make them more motivational and effective in eliciting
the desired behavior change.
**Results:** Apart from trust and privacy concerns, we identified the minimalist and nonmotivational design of exposure notification
apps as the key design-related factors that contribute to the current low uptake. We proposed persuasive strategies such as
self-monitoring of daily contacts and exposure time to make the no exposure and exposure interfaces visually appealing and
motivational. Moreover, we proposed social learning, praise, and reward to increase the diagnosis report interface’s effectiveness.
**Conclusions:** We demonstrated that exposure notification apps can be designed as persuasive technologies by incorporating
key persuasive features, which have the potential to improve uptake, use, COVID-19 diagnosis reporting, and compliance with
social distancing guidelines.
**_(JMIR Public Health Surveill 2021;7(11):e28956)_** [doi: 10.2196/28956](http://dx.doi.org/10.2196/28956)
**KEYWORDS**
contact tracing app; exposure notification app; COVID Alert; COVID-19; persuasive technology; behavior change
### Introduction
The COVID-19 pandemic, beginning in the early part of 2020,
has led to the development and deployment of several digital
health technologies to slow the spread of COVID-19. COVID-19
is a human-to-human transmittable respiratory disease caused
by the coronavirus known as SARS-CoV-2, which emerged in
December 2019. Its symptoms include cough, sore throat, and
high fever, which have the potential to cause pneumonia and
respiratory failure [1]. Most prevalent among the technologies
aimed at curbing COVID-19 are digital contact tracing apps,
which help public health authorities to track or notify individuals
who may have come into close contact with a person who is
infected. Traditionally, contact tracing has been a manual
process whereby people, potentially exposed to a
human-to-human transmittable disease, are identified by
interviewing persons who are infected with whom the former
may have had close contact [2]. However, with the advancement
in mobile technology and privacy-preserving cryptography (eg,
the Google/Apple Exposure Notification system), the practice
of contact tracing has gone predominantly digital worldwide
[3]. Digital contact tracing does not replace manual tracing
techniques but augments it to fast-track the containment of
COVID-19 [4,5]. The main advantage of digital over manual
contact tracing is that it automates the labor-intensive process,
especially in situations where there are a limited number of
human contact tracers [2,6]. Digital contact tracing, if adopted
by a critical mass of people, is likely to be faster, more effective,
and more accurate than fallible human memory, especially given
that a COVID-19 infection may be asymptomatic for up to 14 days [7].
Figure 1 shows how the exposure notification app works in the
real world. If Bob and Alice come in close contact (ie, within
a 2-meter distance) for 15 minutes or more, their phones
exchange dynamically generated random identification numbers.
In the future, if Bob tests positive and uploads his one-time key
given to him by the public health authority to the cloud-based
database of anonymized contacts, Alice will be contacted via
the app and advised on what to do next.
**Figure 1.** COVID-19 contact tracing and exposure notification process (adapted from Fairbank et al [8]).
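The decentralized matching step in Figure 1 can be sketched in a few lines of code. This is a deliberately simplified illustration: the real Google/Apple Exposure Notification system derives rolling proximity identifiers from temporary exposure keys using HKDF and AES, whereas the derivation below (a bare SHA-256 over a key and a time slot) and all names are our own stand-ins.

```python
import hashlib
import os

def rolling_ids(daily_key, slots=96):
    """Derive the rolling identifiers broadcast over one day. Toy stand-in
    for the real key schedule, which uses HKDF and AES rather than a bare
    SHA-256 over (key, time slot)."""
    return {hashlib.sha256(daily_key + slot.to_bytes(2, "big")).digest()[:16]
            for slot in range(slots)}

# Bob's phone broadcasts IDs derived from a secret daily key; Alice's phone
# stores every ID it hears during a sufficiently long close contact.
bob_key = os.urandom(16)
alice_heard = set(list(rolling_ids(bob_key))[:5])  # a 15+ minute encounter

# Bob tests positive and uploads his daily key(s) using a one-time code.
uploaded_keys = [bob_key]

# Alice's phone downloads the keys, re-derives the IDs, and intersects them
# with what it heard -- entirely locally, with no location data involved.
exposed = any(alice_heard & rolling_ids(key) for key in uploaded_keys)
print(exposed)  # True: Alice is notified of a possible exposure
```

The point of this design is that matching happens entirely on Alice's phone: the server only ever sees the uploaded keys of users who tested positive, never location data or contact graphs.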
Several countries worldwide, such as Australia, Canada, France,
South Africa, and Singapore [9-11], have launched nationwide
exposure notification apps in their respective official languages.
The apps alert people who may have come in close contact with
persons infected with COVID-19 for 15 minutes or more in the
last 14 days. The Government of Canada’s exposure notification
app is called “COVID Alert” [12]. It is available in two
languages (English and French) and can be downloaded from
the Apple and Android stores by Canadian residents in the
Northwest Territories, Prince Edward Island, Nova Scotia,
Quebec, Manitoba, Saskatchewan, New Brunswick, Ontario,
and Newfoundland and Labrador [13]. Given the current poor
uptake of contact tracing apps in general [14], in this paper, we
used the COVID Alert app as a case study to uncover some of
the weaknesses in the current design of most exposure
notification apps on the market and demonstrate how persuasive
features can be incorporated in their design to improve their
persuasiveness, uptake, and effectiveness.
The rest of the paper is organized as follows. We begin by
covering the poor uptake and design of contact tracing apps on
the market and the need to make them more motivationally
appealing. We then focus on persuasive design, key persuasive
strategies relevant to contact tracing apps, and incorporating
persuasive design in exposure notification apps using the
COVID Alert app as a case study. Finally, we discuss the
potential benefits of the proposed persuasive design of exposure
notification apps and the ethics of persuasive technology.
### Poor Uptake of Current Exposure Notification Apps
The Canadian Government has widely publicized the COVID
Alert app, but acquiring a critical mass of users has been
hampered by privacy concerns, trust, and human-factor
design issues. Part of the adoption campaign involved Prime
Minister Justin Trudeau urging Canadian residents, especially
young people, to download and use the COVID Alert app to
improve contact tracing and diminish disease trajectories [13].
In 2020, it was estimated that there were 31.38 million
smartphone users in Canada [15]. Yet, as of November 26, 2020,
the COVID Alert app had only been downloaded about 5.5
million times from both the Apple and Google stores [16]. This
means that (assuming each download can be associated with a unique
smartphone user) approximately 17.5% of the smartphone
users in Canada in 2020 had downloaded the app as of November
26. The low adoption rate of the COVID Alert app among the
Canadian population limits its effectiveness, as research shows
that 56% of the population would have to use the app to
considerably slow down the spread of the virus [17].
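The figures quoted above can be verified with a back-of-envelope computation. The second half of the snippet is our own illustration of why adoption matters so much: an exposure is only recorded when both parties in a contact run the app, so the share of contacts covered scales roughly with the square of the adoption rate (a point consistent with, but not stated in, the cited study [17]).

```python
# Back-of-envelope check of the adoption figures quoted above.
downloads = 5.5e6           # COVID Alert downloads as of Nov 26, 2020 [16]
smartphone_users = 31.38e6  # estimated Canadian smartphone users in 2020 [15]

adoption = downloads / smartphone_users
print(f"adoption rate: {adoption:.1%}")              # 17.5%

# An exposure is only logged when BOTH parties run the app, so the share of
# contacts covered scales roughly with the square of the adoption rate.
print(f"contacts covered now: {adoption ** 2:.1%}")  # about 3.1%
print(f"at the 56% target:    {0.56 ** 2:.1%}")      # about 31.4%
```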
### Problems With Current Contact Tracing and Exposure Notification Apps
There are several problems associated with the low uptake of
contact tracing and exposure notification apps worldwide.
Concerns hindering their adoption include privacy, data use, public
surveillance, poor persuasive design, and lack of customization,
to mention but a few [7,18].
Broadly, these problems can be grouped into two categories, as
shown in Figure 2. The first category is lack of trust in
stakeholders (eg, government, tech companies, or public health
authority) pertaining to data privacy and protection [19-21].
The second category is the lack of motivational affordances in
the user interface (UI) design of exposure notification apps. In
other words, these apps are minimalist, nonpersuasive, and use
a one-size-fits-all approach, which can negatively impact
adoption [20,22].
**Figure 2.** Stakeholder and design-related issues surrounding the low uptake of contact tracing and exposure notification apps.
##### Lack of Trust in Contact Tracing Stakeholders
Privacy and trust-related concerns have been raised by the public
concerning how COVID-19 health and tech stakeholders will
handle users’privacy and data [7]. For example, most Americans
may trust COVID-19 stakeholders such as public health agencies
and universities, but they do not trust tech companies such as
Apple and Google, which developed the privacy-preserving
Google/Apple Exposure Notification system on which most of the
contact tracing apps on the market rely to
function properly [12]. A cross-section of US smartphone users
was asked the question, “How much, if at all, do you trust _____
to ensure that people who report being diagnosed with
coronavirus using their smartphone app remain anonymous —
a great deal, a good amount, not too much or not at all?” A total
of 56% of those polled (ie, nearly 3 in 5) did not trust tech
companies such as Apple and Google, but 57% and 56% trusted
public health agencies and universities a great deal or a good
amount, respectively [23]. The limited trust in tech companies
such as Apple and Google (<45%) may not come as a surprise
given the widely reported Facebook-Cambridge Analytica
Scandal about the 2016 United States elections [24].
##### Lack of Motivational Affordances in Exposure Notification Apps
High uptake is crucial for exposure notification apps to be
effective in mitigating the spread of COVID-19. However,
according to Walrave et al [25], “it remains unclear how we can
motivate citizens to use these apps.” Although the government
and tech companies have taken some measures to increase public
trust by way of decentralization of collected data [12], Bluetooth
contact tracing, and nontracking/storage of users’ location data
via global positioning technology, much is yet to be done in the
area of persuasive design to increase the adoption rate. For
JMIR PUBLIC HEALTH AND SURVEILLANCE Oyibo & Morita
example, the current version of the COVID Alert app is
minimalist [26] and lacks motivational affordances and
incentives [27]. Motivational affordances are the persuasive
elements that satisfy users’ needs. According to Zhang [28],
when an information and communication technology (ICT)
satisfies users’ motivational needs, they feel enjoyment and
want it more. Hence, “the ultimate goal of designing an ICT for
human use is to achieve high motivational affordance so that
users would be attracted to it, really want to use it, and cannot
live without it” [28]. However, “[a]part from receiving
notifications about possible infections, current contact tracing
apps appear to not provide a clear benefit to the user” [29].
Specifically, most of them lack vital persuasive features that
motivate people to use digital health technologies to monitor
and manage their health behaviors. Hence, the lack of persuasive
features may contribute to low adoption rates of many contact
tracing and exposure notification apps on the market [30].
Digital health researchers have stated that incorporating
persuasive features into contact tracing apps could increase their
adoption and use by the wider population [27]. In other words,
contact tracing apps are more likely to be effective as persuasive
technologies than as traditional information systems focused
on functionality.
Persuasive technology is an interactive system intentionally
designed to change attitudes or behaviors positively through
persuasion and social influence but not through coercion or
deception [31]. However, the current version of the COVID
Alert app lacks basic persuasive and social influence principles
that can motivate more users to download and use the app more
frequently. Figure 3 shows the three main functional UIs of the
COVID Alert app: “No Exposure,” “Exposure,” and “Diagnosis
Report.” Apart from being minimalist, none of the three UIs
supports essential persuasive features such as monitoring of the
user’s daily contacts and exposure time, which could help users
regulate their adherence to social (physical) distancing guidelines
in public settings.
**Figure 3.** Key user interfaces in the COVID Alert app (Government of Ontario [32]).
### Persuasive Design
Persuasive design involves applying social psychology theories
in the design of technologies to change behaviors and attitudes.
Hence, persuasive technology, also called “Captology” by Fogg
[31], is regarded as the intersection of computer systems (from
the field of human-computer interaction) and the art of
persuasion (from the field of psychology). A typical example
of a persuasive technology is a mobile fitness app aimed at
motivating people to exercise more to improve their mental
well-being and physical fitness. Persuasive design focuses on
influencing human behavior, attitude, motivation, and
compliance through the systematic design of a system’s features
and affordances to promote behavior change.
##### Persuasive Techniques
There are two main design frameworks commonly used in
designing and evaluating persuasive technologies. The first
framework is called Cialdini’s [33] principles of persuasion,
which comprise six persuasive techniques: authority,
commitment, reciprocity, liking, consensus, and scarcity [34,35].
The second framework is called the persuasive system design
model [36], which comprises 28 persuasive techniques and
extends Fogg’s [31] seven persuasive techniques. The persuasive
system design model includes four broad categories (primary
task support, dialogue support, system credibility support, and
social support) as shown in Figure 4 [36,37].
First, primary task support includes persuasive techniques that
help the user to carry out the target behavior easily and
effectively. Second, dialogue support includes persuasive
techniques that motivate the user to perform the target behavior
through feedback and interaction with the persuasive application.
Third, social support includes persuasive techniques that
motivate the user to carry out the target behavior through social
influence. Finally, system credibility support includes persuasive
techniques that make the persuasive application look credible
to the user [38].

**Figure 4.** Persuasive system design model [36,37].

Each of the four categories in the persuasive system design
model comprises seven persuasive techniques. Figure 5 shows
three persuasive techniques in each of the four categories
relevant to contact tracing apps. For example, primary task
support comprises self-monitoring, tailoring, and
personalization, and social support includes social learning,
social comparison, and normative influence. These techniques,
widely studied in persuasive technology research, have proven
effective in changing health behaviors such as physical activity
[39,40]. Moreover, dialogue support comprises praise, reward,
and feedback. In particular, reward, be it virtual, tangible, or
monetary, holds potential in motivating behavior change, as
people from both high-income and low-income countries are
receptive to it [41]. Finally, credibility support comprises
trustworthiness, surface credibility, and authority. Research
[36] shows that persuasive apps perceived as trustworthy and
credible are more likely to motivate behavior change. Prior
studies found a direct or indirect relationship between source
trustworthiness [42] or perceived credibility [43] and behavioral
intentions. Moreover, Oyibo et al [44] found that people from
both high-income and low-income countries are receptive to
the authority strategy. Interestingly, current exposure notification
apps on the market are already equipped with the authority and
credibility strategies by default, given that they were sponsored
by national governments that symbolize authority. However,
the issue of trust in the area of data protection and privacy
remains a roadblock to adoption [23].
**Figure 5.** Twelve contact tracing app persuasive techniques from the persuasive system design model.
##### Example Implementation of Key Persuasive Design Techniques
Persuasive techniques are implemented in most mobile health
apps on the market to motivate behavior change and help users
achieve their goals. Figure 6 shows a fitness app called
“BEN’FIT,” in which reward/self-monitoring and social
learning/social comparison are, respectively, implemented in
the personal and social versions (Oyibo et al [45]).
Self-monitoring enables the user to track their physical activity,
including calories burned and step count over time. Regarded
as the cornerstone of persuasive apps, self-monitoring fosters
self-awareness and commitment, among other advantages shown
in Figure 7 [46]. In the context of contact tracing apps, Cruz et
al [47] found that over 50% of their surveyed participants
wanted to know how many infected people they have come in
contact with and how many infected people have passed through
a given location. Reward provides users with something to strive
for and reinforces behaviors [48]. Feedback allows the user to
get important information about their behavior at specific points
in time, for example, after achieving a 10,000 steps milestone.
Feedback is not listed as a dialogue support feature in the
persuasive system design model, yet it is used as a persuasive
feature in motivating behavioral change. Social learning and
social comparison, which are correlated [49], use social pressure
to motivate the target behavior [48].
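To make the self-monitoring and feedback techniques concrete, here is a minimal, hypothetical sketch (not taken from BEN’FIT or any cited app) of a step tracker that accumulates daily steps and emits praise the moment a 10,000-step milestone is crossed:

```python
from dataclasses import dataclass, field


@dataclass
class StepTracker:
    """Minimal self-monitoring: accumulate steps, give feedback at a milestone."""
    milestone: int = 10_000
    steps: int = 0
    messages: list = field(default_factory=list)

    def add_steps(self, n: int) -> None:
        before = self.steps
        self.steps += n
        # Feedback/praise: fires exactly once, when the milestone is crossed.
        if before < self.milestone <= self.steps:
            self.messages.append(
                f"Great job! You reached {self.milestone} steps today."
            )


tracker = StepTracker()
tracker.add_steps(6_000)   # no feedback yet
tracker.add_steps(4_500)   # crosses 10,000 -> praise message emitted
print(tracker.messages[0])
```

The same pattern (track a quantity, trigger feedback at a threshold) generalizes to contacts or exposure minutes in a contact tracing app.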
**Figure 6.** Implementation of SM, RW, SC, and SL in a fitness app aimed at promoting physical activity [46]. RW: reward; SC: social comparison; SL:
social learning; SM: self-monitoring.
**Figure 7.** Advantages of self-monitoring, reward, social learning, and social comparison [47].
### Incorporating Persuasive Design in Exposure Notification Apps
The COVID Alert app can be redesigned to be more appealing
and motivating to the target users by incorporating essential
persuasive features to increase its effectiveness. Figure 8
provides guidelines for integrating persuasive features such as
self-monitoring, praise, reward, social comparison, and social
learning. However, prior research in the physical activity domain
shows that Canadians are more likely to be receptive to personal
than social strategies [50]. For this reason, there should be a
personal and a social version of the app to enable the target
users to make a choice based on their preferences.
**Figure 8.** Guidelines for incorporating persuasive features into key user interfaces of exposure notification apps using COVID Alert as a case study.
##### No Exposure Interface
In the no exposure UI, a self-monitoring feature, which tracks
daily contacts and exposure time, and showcases historical
behavior, can be incorporated in the second half of the screen,
which is currently blank. The implementation of the
self-monitoring feature is presented in Oyibo et al [51]. In the
social version, a social comparison feature, which compares the
user’s exposure levels (daily contacts and exposure time) with
those of others in the community, can be incorporated as well.
In addition, users can be allowed to customize the app (eg,
choose a happy face avatar instead of a green hand icon that
represents their no exposure state). Research shows that
well-designed avatars can improve the user experience by
drawing a closer connection between the user’s lived and digital
identities as, for example, avatars possess some human signifiers
like facial expressions that convey emotion [52]. This is in line
with the liking principle in the persuasive system design model
(Figure 4), which states that people are more likely to be
persuaded by people similar to them or that are attractive
[33,36].
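As a rough sketch of how the proposed self-monitoring and social comparison features might be backed by data, the following hypothetical Python fragment aggregates anonymized encounter records into daily contact counts and exposure minutes, then relates them to an assumed community average (all identifiers and figures are illustrative and not part of COVID Alert):

```python
from collections import defaultdict
from datetime import date

# Hypothetical anonymized records: (day, contact_id, duration_minutes)
encounters = [
    (date(2021, 3, 1), "a1", 12),
    (date(2021, 3, 1), "b2", 7),
    (date(2021, 3, 1), "a1", 5),   # same contact encountered twice
    (date(2021, 3, 2), "c3", 20),
]


def daily_summary(records):
    """Self-monitoring: per-day unique contacts and total exposure minutes."""
    contacts = defaultdict(set)
    minutes = defaultdict(int)
    for day, contact_id, duration in records:
        contacts[day].add(contact_id)
        minutes[day] += duration
    return {day: (len(contacts[day]), minutes[day]) for day in contacts}


summary = daily_summary(encounters)

# Social comparison: relate the user's contacts to an assumed community average.
community_avg_contacts = 3  # hypothetical figure supplied by a backend
for day, (n_contacts, mins) in sorted(summary.items()):
    status = "below" if n_contacts < community_avg_contacts else "at or above"
    print(f"{day}: {n_contacts} contacts, {mins} min exposure ({status} average)")
```

In a real exposure notification app, such aggregation would have to respect the privacy model of the underlying Bluetooth protocol; this sketch only illustrates the UI-facing statistics.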
##### Exposure Interface
In the exposure UI, a self-monitoring feature, which tracks the
total number of contacts and approximately when the user was
exposed, can be incorporated in the middle of the screen, as
shown in Figure 8. The implementation of the self-monitoring
feature is presented in Oyibo et al [51]. As in the no exposure
UI, users should be able to customize the app (eg, choose a sad
face avatar instead of a purple hand icon to represent their
exposed state). In addition, in the social version, they should
be given the choice to compare their exposure levels with those
of others in the community as an additional means of motivation
and insight.
##### Diagnosis Report Interface
In the diagnosis report UI, a social learning feature, which
informs the user about the number of persons that have reported
their COVID-19 diagnosis for a given period (eg, day or week),
can be incorporated in the middle of the screen as shown in
Figure 8. This additional statistical information can encourage
users, when infected, to report their diagnosis to ensure the
safety of the community. The implementation of the social
learning feature is presented in Oyibo et al [51]. Moreover, users
can be praised or rewarded for reporting their diagnosis. In a
recent study, Jonker et al [53] found that respondents preferred
apps that offer them incentives such as a token monetary reward
(€5 [US $6] or €10 [US $12] a month), permission to gather in
small groups (eg, after recovering), or free testing for COVID-19
after receiving an exposure alert.
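The proposed social learning statistic, the number of diagnosis reports over a given period, could be computed as a simple trailing-window count. The sketch below is purely illustrative and assumes a backend supplies anonymized report dates:

```python
from datetime import date, timedelta

# Hypothetical anonymized diagnosis report dates from a backend
report_dates = [date(2021, 4, d) for d in (1, 1, 2, 5, 6, 6, 6)]


def reports_in_window(dates, end, days=7):
    """Count diagnosis reports in the trailing window ending on `end` (inclusive)."""
    start = end - timedelta(days=days - 1)
    return sum(1 for d in dates if start <= d <= end)


weekly = reports_in_window(report_dates, end=date(2021, 4, 6))
print(f"{weekly} people reported their diagnosis this week")
```

Showing this count in the diagnosis report UI would let users learn from (and be nudged by) the reporting behavior of others in their community.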
##### Social Location Monitoring Interface
In addition to the 12 persuasive features drawn from the
persuasive system design model (Figure 4), hot spot monitoring,
which we call “Social Location Monitoring,” can be used as a
persuasive strategy to promote adoption and use. Social location
monitoring is the tracking and gathering of information about
a location, including the number of infected persons who
currently reside in, have visited, or passed through the location
in a given period, to help users make informed decisions. Figure
8 shows a hypothetical interface for incorporating social location
monitoring to motivate beneficial behaviors (eg, avoiding hot
spots, social distancing, and wearing a mask). In a recent study,
Li et al [54] found that respondents were more willing to install
contact tracing apps that collect users’ location data than those
that do not, because of the additional benefits such apps provide,
namely hot spot information and analysis. Social location monitoring
can help local authorities allocate resources in a better way and
enact better health care policies during the COVID-19 pandemic
[55].
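The social location monitoring idea reduces to counting, per location, how many visitors later reported positive and flagging locations above a threshold as hot spots. The following is a hypothetical sketch with made-up location names and data:

```python
from collections import Counter
from datetime import date

# Hypothetical records: (location_id, visit_day, visitor_later_reported_positive)
visits = [
    ("gym", date(2021, 5, 1), True),
    ("gym", date(2021, 5, 2), True),
    ("cafe", date(2021, 5, 1), False),
    ("gym", date(2021, 5, 3), True),
    ("cafe", date(2021, 5, 3), True),
]


def hot_spots(records, threshold=2):
    """Flag locations visited by at least `threshold` later-positive people."""
    positives = Counter(loc for loc, _, positive in records if positive)
    return {loc: n for loc, n in positives.items() if n >= threshold}


print(hot_spots(visits))  # gym has 3 positive visits; cafe, with 1, is not flagged
```

Any production version would raise the privacy questions discussed earlier, since it requires location data; the sketch only shows the statistic users would see.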
### Potential Impact of the Proposed Persuasive Design
The projected impact of the persuasive design of exposure
notification apps includes improved uptake, frequent use,
increased report of diagnosis, and compliance with social
distancing guidelines. In future research efforts, we hope to
implement these persuasive design guidelines and conduct a
study to investigate the effectiveness of the persuasive design
of exposure notification apps using the COVID Alert app as a
case study. Although research has shown that persuasive design
can promote behavior change (eg, in the physical activity or
healthy eating domains), it remains uncertain whether the proposed
persuasive design guidelines for exposure notification apps can
promote the target behaviors. Hence, empirical research is needed
to investigate the effectiveness of the proposed persuasive system
design guidelines.
### Ethics of Persuasive Design
Ethical concerns about the app and impact of persuasive design
have been raised in the gray and academic literature. Admittedly,
in the wrong hands, persuasive design can be exploited or used
to manipulate unsuspecting users for financial and other gains
[56]. We regard this as “persuasive design for unethical gains.”
One area that experts believe that persuasive technologies have
been unethically used is digital apps for children. Research
shows that the amount of kids’ screen time in 2018 was 10 times
the amount in 2011, with kids spending an average of 6 hours
and 40 minutes using persuasive technologies such as game
apps and social media. Hence, some health professionals
believed “children’s behaviors are being exploited in the name
of the tech world’s profit” [56]. This led 50 psychologists in
2018 to send a letter to the American Psychological Association
(APA) “accusing psychologists working at tech companies of
using ‘hidden manipulation techniques’ [and prevailing on] the
APA to take an ethical stand on behalf of kids” [56]. However,
leveraging persuasive design for financial gains or unethical
benefits is not what “persuasive design for behavior change” is
about. Rather, the sole purpose of persuasive design for
behavior change is to support the user in adopting and
performing behaviors beneficial to themselves or society. An
example of behavior change beneficial to the individual is eating
healthy or exercising regularly. A persuasive app can be used
to promote these behaviors. An example of such an app is “List
It” [57]. The app motivates users to select healthy options from
a shopping list. Moreover, a behavior change beneficial to the
society is commuting by public transportation (eg, bus or train)
instead of driving one’s personal car [58]. Broadly speaking,
eco-friendly behaviors aimed at reducing carbon footprints will
help, on a large scale, reduce global warming and climate change
[59]. An example of a persuasive app aimed at reducing carbon
footprints is “EcoIsland” [60]. The app, which supports the
feedback strategy, encourages users to perform eco-friendly
activities (turning down the room heater by 1 °C, commuting
by train instead of driving a car, etc) to reduce carbon dioxide
emission. Overall, the guiding moral principle (also known as
the golden rule) of persuasive technology is that “designers of
persuasive technology should not create any artifact that
persuades someone to do or think something that they (the
designers) would not want to be persuaded of themselves” [61].
### Conclusions
In this paper, we identified some of the issues surrounding the
low uptake of contact tracing and exposure notification apps
deployed by national governments worldwide to curb the spread
of COVID-19 and speed up the lifting of public health
restrictions. Specifically, we pinpointed lack of trust, concerns
about privacy and data use by COVID-19 stakeholders, and the
nonmotivational design of contact tracing and exposure
notification apps as potential reasons for the low adoption rates
worldwide. Using the Government of Canada’s COVID Alert
app as a case study, we provided persuasive technology design
guidelines that can help incorporate persuasive features in
contact tracing and exposure notification apps to increase their
uptake, frequent use, and compliance with social distancing
guidelines. For example, we identified three use cases (no
exposure status, exposure status, and diagnosis report interfaces)
that can support persuasive features such as self-monitoring of
the number of daily contacts and COVID-19 exposure time,
and social learning about other users that have reported their
diagnosis over a given period. In future work, we hope to
conduct a user study to investigate the effectiveness of the
implemented guidelines among Canadian residents using the
COVID Alert app as a case study [51].

##### Acknowledgments

This project was funded by the Cybersecurity and Privacy Institute at the University of Waterloo, and was part of the conference
organized by the Master of Public Service Policy and Data Lab.
##### Conflicts of Interest
None declared.
##### References
1. Ghosh S. Virion structure and mechanism of propagation of coronaviruses including SARS-CoV 2 (COVID-19) and some
meaningful points for drug or vaccine development. Preprints Preprint posted online on August 14, 2020. [doi:
[10.20944/preprints202008.0312.v1]](http://dx.doi.org/10.20944/preprints202008.0312.v1)
2. Barrat A, Cattuto C, Kivelä M, Lehmann S, Saramäki J. Effect of manual and digital contact tracing on COVID-19 outbreaks:
[a study on empirical contact data. J R Soc Interface 2021 May;18(178):20201000 [FREE Full text] [doi:](https://royalsocietypublishing.org/doi/abs/10.1098/rsif.2020.1000?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed)
[10.1098/rsif.2020.1000] [Medline: 33947224]](http://dx.doi.org/10.1098/rsif.2020.1000)
3. [Privacy-preserving contact tracing. Apple and Google. URL: https://www.apple.com/covid19/contacttracing/ [accessed](https://www.apple.com/covid19/contacttracing/)
2021-02-15]
4. [Download COVID Alert today. Government of Canada. URL: https://www.canada.ca/en/public-health/services/diseases/](https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert.html)
[coronavirus-disease-covid-19/covid-alert.html [accessed 2021-02-15]](https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert.html)
5. Crumpler W. Contact tracing apps are not a silver bullet. Center for Strategic and International Studies. 2020 May 15. URL:
[https://www.csis.org/blogs/technology-policy-blog/contact-tracing-apps-are-not-silver-bullet [accessed 2021-11-03]](https://www.csis.org/blogs/technology-policy-blog/contact-tracing-apps-are-not-silver-bullet)
6. Braithwaite I, Callender T, Bullock M, Aldridge RW. Automated and partly automated contact tracing: a systematic review
[to inform the control of COVID-19. Lancet Digit Health 2020 Nov;2(11):e607-e621 [FREE Full text] [doi:](https://linkinghub.elsevier.com/retrieve/pii/S2589-7500(20)30184-9)
[10.1016/S2589-7500(20)30184-9] [Medline: 32839755]](http://dx.doi.org/10.1016/S2589-7500(20)30184-9)
7. Sharon T. Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech's newfound role as global
[health policy makers. Ethics Inf Technol 2020 Jul 18:1-13 [FREE Full text] [doi: 10.1007/s10676-020-09547-x] [Medline:](http://europepmc.org/abstract/MED/32837287)
[32837287]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=32837287&dopt=Abstract)
8. Fairbank N, Murray C, Couture A, Kline J, Lazzaro M. There's an app for that: digital contact tracing and its role in mitigating
[a second wave. Berkman Klein Center. 2020. URL: https://cyber.harvard.edu/sites/default/files/2020-05/](https://cyber.harvard.edu/sites/default/files/2020-05/Contact_Tracing_Report_Final.pdf)
[Contact_Tracing_Report_Final.pdf [accessed 2021-02-09]](https://cyber.harvard.edu/sites/default/files/2020-05/Contact_Tracing_Report_Final.pdf)
9. Jalabneh R, Zehra Syed H, Pillai S, Hoque Apu E, Hussein MR, Kabir R, et al. Use of mobile phone apps for contact tracing
to control the COVID-19 pandemic: a literature review. SSRN J Preprint posted online on July 5, 2020. [doi:
[10.2139/ssrn.3641961]](http://dx.doi.org/10.2139/ssrn.3641961)
10. Teixeira R, Doetsch J. The multifaceted role of mobile technologies as a strategy to combat COVID-19 pandemic. Epidemiol
[Infect 2020 Oct 13;148:e244 [FREE Full text] [doi: 10.1017/S0950268820002435] [Medline: 33046160]](http://europepmc.org/abstract/MED/33046160)
11. Collado-Borrell R, Escudero-Vilaplana V, Villanueva-Bueno C, Herranz-Alonso A, Sanjurjo-Saez M. Features and
functionalities of smartphone apps related to COVID-19: systematic search in app stores and content analysis. J Med Internet
[Res 2020 Aug 25;22(8):e20334 [FREE Full text] [doi: 10.2196/20334] [Medline: 32614777]](https://www.jmir.org/2020/8/e20334/)
12. COVID Alert: COVID-19 exposure notification application privacy assessment Internet. Government of Canada. 2021.
[URL: https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert/privacy-policy/](https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert/privacy-policy/assessment.html)
[assessment.html [accessed 2021-07-15]](https://www.canada.ca/en/public-health/services/diseases/coronavirus-disease-covid-19/covid-alert/privacy-policy/assessment.html)
13. Trudeau J. Canada’s COVID-19 exposure notification app now available in the Northwest Territories. Prime Minister of
[Canada. 2020 Nov 26. URL: https://pm.gc.ca/en/news/news-releases/2020/11/26/](https://pm.gc.ca/en/news/news-releases/2020/11/26/canadas-covid-19-exposure-notification-app-now-available-northwest)
[canadas-covid-19-exposure-notification-app-now-available-northwest [accessed 2021-02-21]](https://pm.gc.ca/en/news/news-releases/2020/11/26/canadas-covid-19-exposure-notification-app-now-available-northwest)
14. Farronato C, Iansiti M, Bartosiak M, Denicolai S, Ferretti L, Fontana R. How to get people to actually use contact-tracing
apps. Harvard Business Review. 2020 Jul 15. URL: https://hbr.org/2020/07/how-to-get-people-to-actually-use-contact-tracing-apps [accessed 2021-02-22]
15. [O'Dea S. Number of smartphone users in Canada from 2018 to 2024 (in millions). Statista. 2020 Dec 07. URL: https://www.](https://www.statista.com/statistics/467190/forecast-of-smartphone-users-in-canada/)
[statista.com/statistics/467190/forecast-of-smartphone-users-in-canada/ [accessed 2021-10-30]](https://www.statista.com/statistics/467190/forecast-of-smartphone-users-in-canada/)
16. Coronavirus: Trudeau pleads with young people to download COVID Alert app. Global News. 2020 Nov 27. URL: https://globalnews.ca/video/7488498/coronavirus-trudeau-pleads-with-young-people-to-download-covid-alert-app [accessed 2021-01-08]
17. O'Neill PH. No, coronavirus apps don't need 60% adoption to be effective. MIT Technology Review. 2020 Jul 05. URL:
[https://www.technologyreview.com/2020/06/05/1002775/covid-apps-effective-at-less-than-60-percent-download/ [accessed](https://www.technologyreview.com/2020/06/05/1002775/covid-apps-effective-at-less-than-60-percent-download/)
2021-01-13]
18. Trang S, Trenz M, Weiger WH, Tarafdar M, Cheung CM. One app to trace them all? Examining app specifications for
[mass acceptance of contact-tracing apps. Eur J Inf Syst 2020 Jul 27;29(4):415-428. [doi: 10.1080/0960085x.2020.1784046]](http://dx.doi.org/10.1080/0960085x.2020.1784046)
19. Basu S. Effective contact tracing for COVID-19 using mobile phones: an ethical analysis of the mandatory use of the
[Aarogya Setu application in India. Camb Q Healthc Ethics 2021 Apr;30(2):262-271 [FREE Full text] [doi:](http://europepmc.org/abstract/MED/32993842)
[10.1017/S0963180120000821] [Medline: 32993842]](http://dx.doi.org/10.1017/S0963180120000821)
20. Megnin-Viggars O, Carter P, Melendez-Torres GJ, Weston D, Rubin GJ. Facilitators and barriers to engagement with
[contact tracing during infectious disease outbreaks: a rapid review of the evidence. PLoS One 2020;15(10):e0241473 [FREE](https://dx.plos.org/10.1371/journal.pone.0241473)
[Full text] [doi: 10.1371/journal.pone.0241473] [Medline: 33120402]](https://dx.plos.org/10.1371/journal.pone.0241473)
21. [Tracking COVID-19: contact tracing in the digital age. World Health Organization. URL: https://www.who.int/news-room/](https://www.who.int/news-room/feature-stories/detail/tracking-covid-19-contact-tracing-in-the-digital-age)
[feature-stories/detail/tracking-covid-19-contact-tracing-in-the-digital-age [accessed 2021-02-14]](https://www.who.int/news-room/feature-stories/detail/tracking-covid-19-contact-tracing-in-the-digital-age)
22. Venkatesh V, Aloysius JA, Burton S. Design and evaluation of auto-ID enabled shopping assistance artifacts in customers'
[mobile phones: two retail store laboratory experiments. MIS Q 2017 Jan 1;41(1):83-113. [doi: 10.25300/misq/2017/41.1.05]](http://dx.doi.org/10.25300/misq/2017/41.1.05)
23. Timberg C, Harwell D, Safarpour A. Most Americans are not willing or able to use an app tracking coronavirus infections.
That’s a problem for Big Tech’s plan to slow the pandemic. The Washington Post. 2020 May 29. URL: https://www.washingtonpost.com/technology/2020/04/29/most-americans-are-not-willing-or-able-use-an-app-tracking-coronavirus-infections-thats-problem-big-techs-plan-slow-pandemic/ [accessed 2021-10-30]
24. Johnson C. What the Cambridge Analytica scandal means for the future of Facebook marketing. Forbes. 2018 May 01.
[URL: https://www.forbes.com/sites/forbescommunicationscouncil/2018/05/01/](https://www.forbes.com/sites/forbescommunicationscouncil/2018/05/01/what-the-cambridge-analytica-scandal-means-for-the-future-of-facebook-marketing/#2ffe20df291c)
[what-the-cambridge-analytica-scandal-means-for-the-future-of-facebook-marketing/#2ffe20df291c [accessed 2020-10-30]](https://www.forbes.com/sites/forbescommunicationscouncil/2018/05/01/what-the-cambridge-analytica-scandal-means-for-the-future-of-facebook-marketing/#2ffe20df291c)
25. Walrave M, Waeterloos C, Ponnet K. Adoption of a contact tracing app for containing COVID-19: a health belief model
[approach. JMIR Public Health Surveill 2020 Sep 01;6(3):e20572 [FREE Full text] [doi: 10.2196/20572] [Medline: 32755882]](https://publichealth.jmir.org/2020/3/e20572/)
26. Sadasivan S. Illustrating with diversity and inclusion for the COVID Alert app. Canadian Digital Service. 2020 Nov 26.
[URL: https://digital.canada.ca/2020/11/26/illustrating-with-diversity-and-inclusion-for-the-covid-alert-app/ [accessed](https://digital.canada.ca/2020/11/26/illustrating-with-diversity-and-inclusion-for-the-covid-alert-app/)
2021-02-13]
27. [Turnbull S. COVID Alert app nears 3 million users, but only 514 positive test reports. CTV News. URL: https://www.](https://www.ctvnews.ca/health/coronavirus/covid-alert-app-nears-3-million-users-but-only-514-positive-test-reports-1.5125256)
[ctvnews.ca/health/coronavirus/covid-alert-app-nears-3-million-users-but-only-514-positive-test-reports-1.5125256 [accessed](https://www.ctvnews.ca/health/coronavirus/covid-alert-app-nears-3-million-users-but-only-514-positive-test-reports-1.5125256)
2021-01-14]
28. Zhang P. Motivational affordances: fundamental reasons for ICT design and use. Commun ACM 2008 Nov;51(11):145-147.
29. Kukuk L. Analyzing adoption of COVID-19 contact tracing apps using UTAUT. University of Twente Student Theses.
[2020. URL: http://essay.utwente.nl/81983/1/Kukuk_BA_EEMCS.pdf [accessed 2021-10-30]](http://essay.utwente.nl/81983/1/Kukuk_BA_EEMCS.pdf)
30. Kreps S, Zhang B, McMurry N. Contact-tracing apps face serious adoption obstacles. Brookings. 2020 May 20. URL:
[https://www.brookings.edu/techstream/contact-tracing-apps-face-serious-adoption-obstacles/ [accessed 2021-10-30]](https://www.brookings.edu/techstream/contact-tracing-apps-face-serious-adoption-obstacles/)
31. Fogg BJ. Persuasive Technology: Using Computers to Change What We Think and Do. United States: Morgan Kaufmann;
2002:1-312.
32. Download the COVID Alert mobile app to protect yourself and your community. Government of Ontario. 2020. URL:
[https://covid-19.ontario.ca/covidalert [accessed 2021-02-07]](https://covid-19.ontario.ca/covidalert)
33. Cialdini RB. Influence: The Psychology of Persuasion. New York, NY: HarperCollins; 2006:1-263.
34. Ciocarlan A, Masthoff J, Oren N. Actual persuasiveness: impact of personality, age and gender on message type susceptibility.
In: Oinas-Kukkonen H, Win KT, Karapanos E, Karppinen P, Kyza E, editors. Persuasive Technology: Development of
Persuasive and Behavior Change Support Systems 14th International Conference, PERSUASIVE 2019, Limassol, Cyprus,
April 9–11, 2019, Proceedings. Cham: Springer; 2019:283-294.
35. Kaptein M, Markopoulos P, De RB, Aarts E. Can you be persuaded? Individual differences in susceptibility to persuasion.
In: Gross T, Gulliksen J, Kotzé P, Oestreicher L, Palanque P, Prates RO, et al, editors. Human-Computer Interaction –
INTERACT 2009: 12th IFIP TC 13 International Conference, Uppsala, Sweden, August 24-28, 2009, Proceedings, Part I.
Berlin, Heidelberg: Springer; 2009:115-118.
36. Oinas-Kukkonen H, Harjumaa M. Persuasive systems design: key issues, process model, and system features. Commun
[Assoc Inf Syst 2009;24(1):485-500. [doi: 10.17705/1cais.02428]](http://dx.doi.org/10.17705/1cais.02428)
37. Oyibo K. Investigating the key persuasive features for fitness app design and extending the persuasive system design model:
a qualitative approach. Proc Int Symp Hum Factors Ergonomics Health Care 2021 Jul 22;10(1):47-53. [doi:
[10.1177/2327857921101022]](http://dx.doi.org/10.1177/2327857921101022)
38. Bartlett YK, Webb TL, Hawley MS. Using persuasive technology to increase physical activity in people with chronic
obstructive pulmonary disease by encouraging regular walking: a mixed-methods study exploring opinions and preferences.
[J Med Internet Res 2017 Apr 20;19(4):e124 [FREE Full text] [doi: 10.2196/jmir.6616] [Medline: 28428155]](https://www.jmir.org/2017/4/e124/)
39. Munson S, Consolvo S. Exploring goal-setting, rewards, self-monitoring, and sharing to motivate physical activity. 2012
Presented at: 6th International Conference on Pervasive Computing Technologies for Healthcare; May 21-24, 2012; San
[Diego, CA p. 25-32 URL: http://www.scopus.com/inward/record.url?eid=2-s2.0-84865047997&partnerID=tZOtx3y1 [doi:](http://www.scopus.com/inward/record.url?eid=2-s2.0-84865047997&partnerID=tZOtx3y1)
[10.4108/icst.pervasivehealth.2012.248691]](http://dx.doi.org/10.4108/icst.pervasivehealth.2012.248691)
40. Orji R, Lomotey R, Oyibo K, Orji F, Blustein J, Shahid S. Tracking feels oppressive and 'punishy': exploring the costs and
[benefits of self-monitoring for health and wellness. Digit Health 2018;4:2055207618797554 [FREE Full text] [doi:](https://journals.sagepub.com/doi/10.1177/2055207618797554?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed)
[10.1177/2055207618797554] [Medline: 30202544]](http://dx.doi.org/10.1177/2055207618797554)
41. Oyibo K, Orji R, Vassileva J. The influence of culture in the effect of age and gender on social influence in persuasive
technology. In: Adjunct Publication of the 25th Conference on User Modeling, Adaptation and Personalization. 2017
[Presented at: UMAP '17; July 9-12, 2017; Bratislava, Slovakia p. 47-52. [doi: 10.1145/3099023.3099071]](http://dx.doi.org/10.1145/3099023.3099071)
42. Johnston A, Warkentin M. The influence of perceived source credibility on end user attitudes and intentions to comply
with recommended IT actions. In: End-User Computing, Development, and Software Engineering: New Challenges. Hershy,
PA: IGI Global; 2012:312-334.
43. Drozd F, Lehto T, Oinas-Kukkonen H. Exploring perceived persuasiveness of a behavior change support system: a structural
model. In: Bang M, Ragnemalm EL, editors. Persuasive Technology. Design for Health and Safety: 7th International
Conference, PERSUASIVE 2012, Linköping, Sweden, June 6-8, 2012. Proceedings. Berlin, Heidelberg: Springer;
2012:157-168.
44. Oyibo K, Adaji I, Orji R, Olabenjo B, Vassileva J. Susceptibility to persuasive strategies: a comparative analysis of Nigerians
vs. Canadians. In: Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization. 2018 Jul Presented
at: UMAP '18; July 8-11, 2018; Singapore, Singapore p. 229-238.
45. Oyibo K, Olagunju AH, Olabenjo B, Adaji I, Deters R, Vassileva J. Ben'Fit: design, implementation and evaluation of a
culture-tailored fitness app. In: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization.
[2019 Presented at: UMAP'19 Adjunct; June 6, 2019; Larnaca, Cyprus p. 161-166. [doi: 10.1145/3314183.3323854]](http://dx.doi.org/10.1145/3314183.3323854)
46. Orji R, Oyibo K, Lomotey RK, Orji FA. Socially-driven persuasive health intervention design: competition, social comparison,
[and cooperation. Health Informatics J 2019 Dec;25(4):1451-1484 [FREE Full text] [doi: 10.1177/1460458218766570]](https://journals.sagepub.com/doi/10.1177/1460458218766570?url_ver=Z39.88-2003&rfr_id=ori:rid:crossref.org&rfr_dat=cr_pub%3dpubmed)
[[Medline: 29801426]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=29801426&dopt=Abstract)
47. Cruz M, Oliveira R, Beltrao A, Lopes P, Viterbo J, Trevisan DG, et al. Assessing the level of acceptance of a crowdsourcing
solution to monitor infectious diseases propagation. In: 2020 IEEE International Smart Cities Conference. 2020 Presented
[at: ISC2; September 28-October 1, 2020; Virtual Conference p. 1-8. [doi: 10.1109/isc251055.2020.9239069]](http://dx.doi.org/10.1109/isc251055.2020.9239069)
48. Oyibo K. Designing culture-tailored persuasive technology to promote physical activity. University of Saskatchewan:
[HARVEST. 2020. URL: https://harvest.usask.ca/handle/10388/12943 [accessed 2021-11-04]](https://harvest.usask.ca/handle/10388/12943)
49. Oyibo K, Vassileva J. Investigation of social predictors of competitive behavior in persuasive technology. In: de Vries PW,
Oinas-Kukkonen H, Siemons L, Beerlage-de Jong N, van Gemert-Pijnen L, editors. Persuasive Technology: Development
and Implementation of Personalized Technologies to Change Attitudes and Behaviors: 12th International Conference,
PERSUASIVE 2017, Amsterdam, The Netherlands, April 4–6, 2017, Proceedings. Cham: Springer; 2017:279-291.
50. Oyibo K, Vassileva J. Investigation of the moderating effect of culture on users’ susceptibility to persuasive features in
[fitness applications. Information 2019 Nov 06;10(11):344. [doi: 10.3390/info10110344]](http://dx.doi.org/10.3390/info10110344)
51. Oyibo K, Yasunaga T, Morita P. Designing contact tracing applications as persuasive technologies to improve uptake and
effectiveness. 2020 Presented at: International Symposium on Human Factors and Ergonomics in Health Care; April 12-16,
[2021; Toronto, ON URL: https://hfeshcs2021.conference-program.com/presentation/?id=INDLEC155&sess=sess102](https://hfeshcs2021.conference-program.com/presentation/?id=INDLEC155&sess=sess102)
52. Pan Y, Steed A. The impact of self-avatars on trust and collaboration in shared virtual environments. PLoS One
[2017;12(12):e0189078 [FREE Full text] [doi: 10.1371/journal.pone.0189078] [Medline: 29240837]](https://dx.plos.org/10.1371/journal.pone.0189078)
53. Jonker M, de Bekker-Grob E, Veldwijk J, Goossens L, Bour S, Rutten-Van Mölken M. COVID-19 contact tracing apps:
predicted uptake in the Netherlands based on a discrete choice experiment. JMIR Mhealth Uhealth 2020 Oct 09;8(10):e20741
[[FREE Full text] [doi: 10.2196/20741] [Medline: 32795998]](https://mhealth.jmir.org/2020/10/e20741/)
54. Li T, Cobb C, Yang J, Baviskar S, Agarwal Y, Li B, et al. What makes people install a COVID-19 contact-tracing app?
Understanding the influence of app design and individual difference on contact-tracing app adoption intention. Pervasive
[Mobile Computing 2021 Aug;75:101439. [doi: 10.1016/j.pmcj.2021.101439]](http://dx.doi.org/10.1016/j.pmcj.2021.101439)
-----
JMIR PUBLIC HEALTH AND SURVEILLANCE Oyibo & Morita
55. [Zlotnick D. Predicting emerging COVID-19 hotspots...without asking. McGill University. 2020 May 27. URL: https://www.](https://www.mcgill.ca/oss/article/covid-19-health/predicting-emerging-covid-19-hotspotswithout-asking)
[mcgill.ca/oss/article/covid-19-health/predicting-emerging-covid-19-hotspotswithout-asking [accessed 2021-10-30]](https://www.mcgill.ca/oss/article/covid-19-health/predicting-emerging-covid-19-hotspotswithout-asking)
56. Lieber C. Tech companies use "persuasive design" to get us hooked. Psychologists say it's unethical. Vox. 2010 Aug 18.
[URL: https://www.vox.com/2018/8/8/17664580/persuasive-technology-psychology [accessed 2021-10-30]](https://www.vox.com/2018/8/8/17664580/persuasive-technology-psychology)
57. Adaji I, Oyibo K, Vassileva J. List it: a shopping list app that influences healthy shopping habits. 2018 Presented at: 32nd
International BCS Human Computer Interaction Conference; July 4, 2018; Belfast, UK p. 1-4. [doi:
[10.14236/ewic/hci2018.81]](http://dx.doi.org/10.14236/ewic/hci2018.81)
58. [Reducing your transportation footprint. Center for Climate and Energy Solutions. URL: https://www.c2es.org/content/](https://www.c2es.org/content/reducing-your-transportation-footprint/)
[reducing-your-transportation-footprint/ [accessed 2021-02-13]](https://www.c2es.org/content/reducing-your-transportation-footprint/)
59. [Overview of greenhouse gases. United States Environmental Protection Agency. URL: https://www.epa.gov/ghgemissions/](https://www.epa.gov/ghgemissions/overview-greenhouse-gases)
[overview-greenhouse-gases [accessed 2021-02-12]](https://www.epa.gov/ghgemissions/overview-greenhouse-gases)
60. Kimura H, Nakajima T. Designing persuasive applications to motivate sustainable behavior in collectivist cultures.
PsychNology J 2011;9(1):7-28.
61. Page RE, Kray C. Ethics and persuasive technology: an exploratory study in the context of healthy living. 2010 Presented
at: First International Workshop on Nudge and Influence through Mobile Devices; September 7, 2010; Lisbon, Portugal p.
19-23.
##### Abbreviations
**APA:** American Psychological Association
**ICT:** information and communication technology
**UI:** user interface
_Edited by T Sanchez; submitted 22.03.21; peer-reviewed by E Arden-Close, K Blondon; comments to author 16.07.21; revised version_
_received 16.08.21; accepted 24.08.21; published 16.11.21_
_Please cite as:_
_Oyibo K, Morita PP_
_Designing Better Exposure Notification Apps: The Role of Persuasive Design_
_JMIR Public Health Surveill 2021;7(11):e28956_
_[URL: https://publichealth.jmir.org/2021/11/e28956](https://publichealth.jmir.org/2021/11/e28956)_
_[doi: 10.2196/28956](http://dx.doi.org/10.2196/28956)_
_PMID:_
©Kiemute Oyibo, Plinio Pelegrini Morita. Originally published in JMIR Public Health and Surveillance
(https://publichealth.jmir.org), 16.11.2021. This is an open-access article distributed under the terms of the Creative Commons
Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work, first published in JMIR Public Health and Surveillance, is properly cited. The complete
bibliographic information, a link to the original publication on https://publichealth.jmir.org, as well as this copyright and license
information must be included.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC8598155, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GREEN",
"url": "https://publichealth.jmir.org/2021/11/e28956/PDF"
}
| 2,021
|
[
"JournalArticle"
] | true
| 2021-03-22T00:00:00
|
[
{
"paperId": "6b4475ebffcef89e4ba17f0b2c4c3dd43c210a39",
"title": "Investigating the Key Persuasive Features for Fitness App Design and Extending the Persuasive System Design Model: A Qualitative Approach"
},
{
"paperId": "64c73600d1c2bedbb2285cfc2349cf94a3d10531",
"title": "What makes people install a COVID-19 contact-tracing app? Understanding the influence of app design and individual difference on contact-tracing app adoption intention"
},
{
"paperId": "1eeeb6cad2c58d362987732bba7e9df7a8afeee4",
"title": "Facilitators and barriers to engagement with contact tracing during infectious disease outbreaks: A rapid review of the evidence"
},
{
"paperId": "93fb691e3a9061fe24de9e62a3e000b27b373ce5",
"title": "The multifaceted role of mobile technologies as a strategy to combat COVID-19 pandemic"
},
{
"paperId": "c275f4e83eb63da1a3c922a3e38f0cd26fc6f4ae",
"title": "Effective Contact Tracing for COVID-19 Using Mobile Phones: An Ethical Analysis of the Mandatory Use of the Aarogya Setu Application in India"
},
{
"paperId": "308e37a0e967a88f9eb5dd85579d8aa1fa115659",
"title": "Assessing the level of acceptance of a crowdsourcing solution to monitor infectious diseases propagation"
},
{
"paperId": "41c66975524427a12f61dc760b9131103a1c3dc7",
"title": "Automated and partly automated contact tracing: a systematic review to inform the control of COVID-19"
},
{
"paperId": "3698d297698796e763200525506966c38f7527bb",
"title": "Virion Structure and Mechanism of Propagation of Coronaviruses including SARS-CoV 2 (COVID -19 ) and some Meaningful Points for Drug or Vaccine Development"
},
{
"paperId": "96092feee1b4df3111116bf25e89042d3afb4799",
"title": "Effect of manual and digital contact tracing on COVID-19 outbreaks: a study on empirical contact data"
},
{
"paperId": "6b66df5f1c4406b4d42c3dc967454c535ab6aefa",
"title": "Blind-sided by privacy? Digital contact tracing, the Apple/Google API and big tech’s newfound role as global health policy makers"
},
{
"paperId": "ce67f4a858bde4b89bd70c1c688c1d3ea8d2bbac",
"title": "How to Get People to Actually Use Contact-Tracing Apps"
},
{
"paperId": "5df6a0c46f1967d4a31185c578a27d4bf9606040",
"title": "One app to trace them all? Examining app specifications for mass acceptance of contact-tracing apps"
},
{
"paperId": "a1b71017f0efc93b4ec96057d08982490ac3f39c",
"title": "COVID-19 Contact Tracing Apps: Predicted Uptake in the Netherlands Based on a Discrete Choice Experiment"
},
{
"paperId": "72c363f32ad343f87ae9331f5cf3a89c0079072b",
"title": "Adoption of a Contact Tracing App for Containing COVID-19: A Health Belief Model Approach"
},
{
"paperId": "4fd5132c446eccf36ab3c5d7fc7068d2359ff29b",
"title": "Features and Functionalities of Smartphone Apps Related to COVID-19: Systematic Search in App Stores and Content Analysis"
},
{
"paperId": "cfb28d810369c0bfb1b23d00660935e74b502877",
"title": "COVID-19 Contact Tracing and Data Protection Can Go Together"
},
{
"paperId": "7e7b99bdbd4ca2bd682739254f09189acfffb436",
"title": "Socially-driven persuasive health intervention design: Competition, social comparison, and cooperation"
},
{
"paperId": "ab3dc90f0c9610038605a6cc423e01dbbeadc07c",
"title": "Investigation of the Moderating Effect of Culture on Users' Susceptibility to Persuasive Features in Fitness Applications"
},
{
"paperId": "077d0fb85bd665950ca0f27d2e5ffa75b15f09f6",
"title": "BEN'FIT: Design, Implementation and Evaluation of a Culture-Tailored Fitness App"
},
{
"paperId": "bea487bfff4e792bcec32417e24f104d6f29dc6d",
"title": "Tracking feels oppressive and ‘punishy’: Exploring the costs and benefits of self-monitoring for health and wellness"
},
{
"paperId": "89925beb57aee57d85800c1f58a06f4daf4a1dec",
"title": "Susceptibility to Persuasive Strategies: A Comparative Analysis of Nigerians vs. Canadians"
},
{
"paperId": "9d91caefcc937c769d88777b6a27e4beb5434dd1",
"title": "List It : A Shopping List App That Influences Healthy Shopping Habits"
},
{
"paperId": "58a20a3c8641b72db22df29fd2711af0c91f38bb",
"title": "What can we learn from the Facebook—Cambridge Analytica scandal?"
},
{
"paperId": "ed53113518be6d89cd9e6996f3cf07e9b35b8bd5",
"title": "The impact of self-avatars on trust and collaboration in shared virtual environments"
},
{
"paperId": "82e0c33e3da90b2242372a08c99d81d7ae5017df",
"title": "The Influence of Culture in the Effect of Age and Gender on Social Influence in Persuasive Technology"
},
{
"paperId": "2ad29ca591be0a5bfc9b4ecf542976e2217a0fea",
"title": "Investigation of Social Predictors of Competitive Behavior in Persuasive Technology"
},
{
"paperId": "0c50c2dfc98543989d594bffcf0e53ac292e7167",
"title": "Using Persuasive Technology to Increase Physical Activity in People With Chronic Obstructive Pulmonary Disease by Encouraging Regular Walking: A Mixed-Methods Study Exploring Opinions and Preferences"
},
{
"paperId": "ca0ac17a3436ff3eeb873c29500a7647025cd6ad",
"title": "Overview of Greenhouse Gases"
},
{
"paperId": "811e97789d9695c185dbdee455c49066b9f8dfb7",
"title": "Exploring Perceived Persuasiveness of a Behavior Change Support System: A Structural Model"
},
{
"paperId": "2db9b3ac3e2573e8da76307ae96598a2b9462570",
"title": "Exploring goal-setting, rewards, self-monitoring, and sharing to motivate physical activity"
},
{
"paperId": "89fcc13460a432e3d959043ee0ca091b9d2e4b49",
"title": "Human Factors and Ergonomics in Health Care"
},
{
"paperId": "e72db7a42f9f9d14d1cef0196098886581e8908d",
"title": "The Influence of Perceived Source Credibility on End User Attitudes and Intentions to Comply with Recommended IT Actions"
},
{
"paperId": "6356f0f03038d1b2cc9574ce78d301414374d4ec",
"title": "Can You Be Persuaded? Individual Differences in Susceptibility to Persuasion"
},
{
"paperId": "128331d03d594a1fd2b8bbad18af9ef56d632989",
"title": "Technical opinionMotivational affordances"
},
{
"paperId": "09f1a254c78ba0fefb01a75be971340d29d2c036",
"title": "Persuasive technology: using computers to change what we think and do"
},
{
"paperId": null,
"title": "COVID-19 exposure notification application privacy assessment Internet. Government of Canada"
},
{
"paperId": "65344f75d3684c447c269ba8621913afb3066c52",
"title": "Use of Mobile Phone Apps for Contact Tracing to Control the COVID-19 Pandemic: A Literature Review"
},
{
"paperId": "84eb9a027068672a236bff2ffba5227a44c28aae",
"title": "Analyzing Adoption of COVID-19 Contact Tracing Apps Using UTAUT"
},
{
"paperId": null,
"title": "Contact-tracing apps face serious adoption obstacles"
},
{
"paperId": null,
"title": "Download the COVID Alert mobile app to protect yourself and your community"
},
{
"paperId": null,
"title": "Canada’s COVID-19 exposure notification app now available in the Northwest Territories"
},
{
"paperId": null,
"title": "Contact tracing apps are not a silver bullet. Center for Strategic and International Studies"
},
{
"paperId": null,
"title": "Designing culture-tailored persuasive technology to promote physical activity"
},
{
"paperId": null,
"title": "There's an app for that: digital contact tracing and its role in mitigating a second wave"
},
{
"paperId": "e8ca014ac2ae1d3aabc76dcb59049917ee79be33",
"title": "Actual Persuasiveness: Impact of Personality, Age and Gender on Message Type Susceptibility"
},
{
"paperId": "2e5fee3ab4d7bca9b0b38346010e174cea4a3d84",
"title": "Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization"
},
{
"paperId": null,
"title": "Tech companies use \"persuasive design\" to get us hooked. Psychologists say it's unethical"
},
{
"paperId": null,
"title": "What the Cambridge Analytica scandal means for the future of Facebook marketing"
},
{
"paperId": "45badf2094c9ab5cf732f0e7d113787568500d9b",
"title": "Design and Evaluation of Auto-ID Enabled Shopping Assistance Artifacts in Customers' Mobile Phones: Two Retail Store Laboratory Experiments"
},
{
"paperId": "58bc868e8905fd4e6c6156514e8d6336ab3330db",
"title": "Designing Persuasive Applications to Motivate Sustainable Behavior in Collectivist Cultures"
},
{
"paperId": "02b6dd80eacb047c6770fdcddb66cfa192f7627e",
"title": "Ethics and Persuasive Technology: An Exploratory Study in the Context of Healthy Living"
},
{
"paperId": "5943456ca54beb31a052bc2b325d0944769bbbfe",
"title": "Communications of the Association for Information Systems"
},
{
"paperId": "35973a1aee551f7a196094ae8885c3f632b8419a",
"title": "Influence: The Psychology of Persuasion"
},
{
"paperId": "d7fa91506297a71f06fe5b34c09a81d15a9f97ca",
"title": "UNITED STATES ENVIRONMENTAL PROTECTION AGENCY - _"
},
{
"paperId": null,
"title": "Most Americans are not willing or able to use an app tracking coronavirus infections"
},
{
"paperId": null,
"title": "Alert app nears 3 million users, but only 514 positive test reports. CTV News"
},
{
"paperId": null,
"title": "Predicting emerging COVID-19 hotspots...without asking. McGill University"
},
{
"paperId": null,
"title": "Illustrating with diversity and inclusion for the COVID Alert app. Canadian Digital Service"
},
{
"paperId": null,
"title": "Number of smartphone users in Canada from 2018 to 2024 ( in millions ) Coronavirus : Trudeau pleads with young people to download COVID Alert app"
},
{
"paperId": null,
"title": "Reducing your transportation footprint"
},
{
"paperId": null,
"title": "Privacy-preserving contact tracing"
},
{
"paperId": null,
"title": "coronavirus apps don't need 60% adoption to be effective. MIT Technology Review"
},
{
"paperId": null,
"title": "Tracking COVID-19: contact tracing in the digital age"
}
] | 14,209
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/031316860c4e2b0077c776050f8580c94ed2b7e2
|
[
"Computer Science"
] | 0.876311
|
Smart-Contract-Based Automation for OF-RAN Processes: A Federated Learning Use-Case
|
031316860c4e2b0077c776050f8580c94ed2b7e2
|
J. Sens. Actuator Networks
|
[
{
"authorId": "35347024",
"name": "Jofina Jijin"
},
{
"authorId": "1796201",
"name": "Boon-Chong Seet"
},
{
"authorId": "1722155",
"name": "P. Chong"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
The opportunistic fog radio access network (OF-RAN) expands its offloading computation capacity on-demand by establishing virtual fog access points (v-FAPs), comprising user devices with idle resources recruited opportunistically to execute the offloaded tasks in a distributed manner. OF-RAN is attractive for providing computation offloading services to resource-limited Internet-of-Things (IoT) devices from vertical industrial applications such as smart transportation, tourism, mobile healthcare, and public safety. However, the current OF-RAN design is lacking a trusted and distributed mechanism for automating its processes such as v-FAP formation and service execution. Motivated by the recent emergence of blockchain, with smart contracts as an enabler of trusted and distributed systems, we propose an automated mechanism for OF-RAN processes using smart contracts. To demonstrate how our smart-contract-based automation for OF-RAN could apply in real life, a federated deep learning (DL) use-case where a resource-limited client offloads the resource-intensive training of its DL model to a v-FAP is implemented and evaluated. The results validate the DL and blockchain performances of the proposed smart-contract-enabled OF-RAN. The appropriate setting of process parameters to meet the often competing requirements is also demonstrated.
|
Journal of
### Sensor and Actuator Networks
_Article_
# Smart-Contract-Based Automation for OF-RAN Processes: A Federated Learning Use-Case
**Jofina Jijin** **, Boon-Chong Seet *** **and Peter Han Joo Chong**
Department of Electrical and Electronic Engineering, Auckland University of Technology,
Auckland 1010, New Zealand
*** Correspondence: boon-chong.seet@aut.ac.nz; Tel.: +64-9-921-9999 (ext. 5345)**
**Abstract: The opportunistic fog radio access network (OF-RAN) expands its offloading computation**
capacity on-demand by establishing virtual fog access points (v-FAPs), comprising user devices with
idle resources recruited opportunistically to execute the offloaded tasks in a distributed manner.
OF-RAN is attractive for providing computation offloading services to resource-limited Internet-of-Things (IoT) devices from vertical industrial applications such as smart transportation, tourism,
mobile healthcare, and public safety. However, the current OF-RAN design is lacking a trusted and
distributed mechanism for automating its processes such as v-FAP formation and service execution.
Motivated by the recent emergence of blockchain, with smart contracts as an enabler of trusted
and distributed systems, we propose an automated mechanism for OF-RAN processes using smart
contracts. To demonstrate how our smart-contract-based automation for OF-RAN could apply in real
life, a federated deep learning (DL) use-case where a resource-limited client offloads the resource-intensive training of its DL model to a v-FAP is implemented and evaluated. The results validate the
DL and blockchain performances of the proposed smart-contract-enabled OF-RAN. The appropriate
setting of process parameters to meet the often competing requirements is also demonstrated.
**Citation:** Jijin, J.; Seet, B.-C.; Chong, P.H.J. Smart-Contract-Based Automation for OF-RAN Processes: A Federated Learning Use-Case. _J. Sens. Actuator Netw. 2022, 11, 53._ [https://doi.org/10.3390/jsan11030053](https://doi.org/10.3390/jsan11030053)

Academic Editor: Thomas Newe

Received: 7 August 2022
Accepted: 9 September 2022
Published: 13 September 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license](https://creativecommons.org/licenses/by/4.0/).
**Keywords:** smart contract; automation; opportunistic fog radio access network; industrial
Internet-of-Things; federated deep learning; blockchain; computation offloading
**1. Introduction**
Recent growth in big data and the use of artificial intelligence (AI) in the Internet-of-Things (IoT) has led to an increasing need for resources for computation offloading and AI
model training. However, advanced AI techniques such as deep learning (DL) are computationally resource-intensive if executed on IoT devices with low computation capacity.
Hence, the need for these devices to offload their DL tasks to more resourceful devices has increased
significantly. Traditional offloading to the cloud via a cloud radio access network (C-RAN)
has several issues, such as heavy workload at centralized baseband units (BBUs), limited
backhaul capacity, and difficulty in serving delay-sensitive applications [1]. Consequently,
researchers proposed fog radio access network (F-RAN), in which fog access points (FAPs)
are deployed at the network edge to serve the IoT devices. These FAPs can be existing
infrastructure entities further equipped with fog functionalities, or new entities deployed
in an existing infrastructure [2]. However, existing F-RANs do not leverage the presence of
available resourceful user devices to utilize their high, but idle, computation resources.
We argue that the DL tasks are more apt to be offloaded to an opportunistic F-RAN
(OF-RAN), which we proposed in [3]. OF-RAN enhances the F-RAN by harnessing the
concept of opportunistic networks (oppnets), a type of ad hoc network for utilizing available
local resources in an opportunistic manner [4]. Each oppnet is established by a seed node
that assigns one or more helper nodes to assist with a specific task. In OF-RAN, the role of
the seed node and service node is equivalent to that of FAP in F-RAN, and helper node
in oppnet. A seed node in the OF-RAN recruits locally available resourceful user devices,
such as high-end smartphones and tablets, as service nodes, which collectively form a
virtual FAP (v-FAP) to serve a resource-limited client, e.g., an IoT device.

-----

_J. Sens. Actuator Netw. 2022, 11, 53_ 2 of 14

In this paper, we consider an important problem that has yet to be addressed for
OF-RAN to meet real-world deployment requirements, which is a trusted and distributed
mechanism for automating its processes such as v-FAP formation and service execution.
Automating these repetitive processes can improve operational efficiency, reduce the cost
of service delivery, and help move towards a zero-touch network management model. The
automation mechanism must be custom designed for the specific processes of OF-RAN,
but such a mechanism has not been proposed for OF-RAN in the literature, to the best of
our knowledge.

Motivated by the recent emergence of blockchain technology with smart contracts as an
enabler of trusted and distributed systems, this paper proposes an automated mechanism
using smart contracts for OF-RAN processes, built on our follow-up preliminary work on a
blockchain-enabled OF-RAN in [5]. The system architecture of the proposed smart-contract-enabled
OF-RAN is shown in Figure 1. At the access layer, seed nodes are infrastructure
devices, such as Wi-Fi access points (APs) and pico- and femto-cell base stations (BSs)
equipped with fog functionalities. Each seed node is a blockchain node, which hosts a
smart contract, maintains a copy of the blockchain, and establishes a blockchain network
with other seed nodes. At the terminal layer, each client is served by a v-FAP formed by
multiple service nodes. The selection of service nodes in a v-FAP, placement of service tasks
into service nodes, and processing of service tasks are all executed automatically, according
to the smart contract.

**Figure 1. Smart-contract-enabled OF-RAN.**

To demonstrate the role that our smart-contract-based automation for OF-RAN processes
can play in real life applications, a federated DL use-case where a resource-limited
client offloads the resource-intensive training of its DL model to a v-FAP is implemented
on a physical testbed. The key contributions of this paper are:
_•_ We propose a smart-contract-based mechanism for automating OF-RAN processes to provide trusted and distributed offloading services to resource-limited devices;
_•_ We design four smart contracts for automating three OF-RAN processes and one application-specific (i.e., federated DL) process;
_•_ We implement a v-FAP testbed to experimentally investigate our proposed system;
_•_ We analyze the impact of various process parameters on the OF-RAN, blockchain, and federated DL performances.
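The automated formation and task-placement steps that these contributions target can be made concrete with a small sketch. The following Python fragment is purely illustrative (the names `Device`, `form_vfap`, and `place_tasks`, as well as the idle-capacity threshold, are our assumptions for exposition, not the paper's actual contract logic): a seed node filters candidate user devices by advertised idle capacity, recruits the top k as service nodes of a v-FAP, and places offloaded tasks on them round-robin.

```python
from dataclasses import dataclass

@dataclass
class Device:
    """A candidate user device advertising its idle compute capacity."""
    name: str
    idle_cpu: float  # fraction of CPU currently idle, in [0.0, 1.0]

def form_vfap(candidates, k, min_idle=0.5):
    """Illustrative v-FAP formation: recruit the k most idle devices
    whose idle capacity meets a minimum threshold."""
    eligible = [d for d in candidates if d.idle_cpu >= min_idle]
    eligible.sort(key=lambda d: d.idle_cpu, reverse=True)
    return eligible[:k]

def place_tasks(service_nodes, tasks):
    """Round-robin placement of offloaded service tasks onto service nodes."""
    placement = {d.name: [] for d in service_nodes}
    for i, task in enumerate(tasks):
        placement[service_nodes[i % len(service_nodes)].name].append(task)
    return placement
```

In the actual system such rules would be encoded in a smart contract and executed identically by every blockchain node, making recruitment and placement decisions reproducible and auditable.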
The rest of the paper is organized as follows. Section 2 reviews related works. Section 3
presents the system model. Section 4 details the process design for the proposed smart
contract. The evaluation methodology and results are discussed in Sections 5 and 6,
respectively. Finally, Section 7 concludes the paper.
**2. Related Works**
Smart contracts and blockchain technologies are among the key enablers of Industry
4.0. This section reviews works on using blockchain with smart contracts for distributed
systems to secure and automate their processes.
In [6], the authors proposed a blockchain-based secure DL for IoT, which supports
collaborative DL with device integrity and confidentiality. The rules and policies to regulate
the learning and mining tasks are defined in the form of smart contracts residing in the
blockchain. The learning task is performed locally in IoT devices, and the learned local
models are aggregated at an edge server acting as a blockchain node that mines and
coordinates blockchain transactions. The proposed system is shown to be efficient in terms
of accuracy, time delay, and security. However, due to limited resources of IoT devices, it is
not suitable when large or complex learning tasks are involved.
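The recording and verification role that the blockchain plays in such designs can be illustrated with a generic hash-chained ledger sketch in Python. This is a minimal tamper-evidence illustration under our own simplifying assumptions (SHA-256 over a JSON body, no consensus or mining difficulty), not the actual scheme of [6]:

```python
import hashlib
import json

def make_block(prev_hash, transactions):
    """Create a block whose hash commits to its payload and predecessor."""
    body = {"prev": prev_hash, "txs": transactions}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {"hash": digest, **body}

def verify_chain(chain):
    """Recompute each block hash and check the prev-hash links."""
    for i, block in enumerate(chain):
        body = {"prev": block["prev"], "txs": block["txs"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if expected != block["hash"]:
            return False  # block contents were altered after mining
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False  # chain link is broken
    return True
```

Because each block's hash commits to its transactions and to the previous block's hash, altering any recorded learning transaction invalidates every later block.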
The authors in [7] proposed blockchain-assisted federated learning for edge nodes to
cooperatively train and predict popular files to be cached for IoT devices. Each edge node
trains its local model, and then compresses and sends the local gradients to a cloud server
for aggregation and update of the global model. The updated global model parameters
are then returned to the edge nodes for further training or selecting files to be cached. In
order to record, secure, and verify transactions, a smart contract constituting the following is proposed: (i) identity contract: verifies the identity of IoT and edge nodes; (ii) submission contract: provides an interface for edge nodes to submit their gradients to the blockchain; (iii) verification contract: elects a supervisory consortium to verify transactions; (iv) credit contract: rewards/penalizes participants. The proposed system is shown to improve cache
hit rate and reduce file upload time. Although a blockchain is used, not much has been
explored about the impact of blockchain parameters, such as block size and block interval,
on the caching efficiency or security.
In [8], a security architecture for IoT networks based on software-defined networking
(SDN), blockchain, and fog/edge computing is proposed. Decentralization in blockchain is
used to secure sharing of IoT data and resources. A SDN-enabled edge switch continuously
monitors the data flow to the fog nodes where traffic traces are learned and analyzed to
identify malicious traffic flows. DL algorithms are used to detect attacks at network edge.
A central cloud server manages the attack detection model in the fog nodes, which can be
a processing or proofing agent. The processing agent trains the local model using local
data obtained from the edge switch, while the proofing agent aggregates local models
obtained from proofing agents and verifies the resulting attack detection model. All three
entities (manager, processing agent, and proofing agent) interact with each other through
transactions effectuated using a smart contract residing in the blockchain. The authors
show that their architecture performs well in mitigating attacks, but do not evaluate its
performance impact on delay-sensitive applications.
The authors in [9] presented EdgeChain, a blockchain-based architecture to make
mobile edge application placement decisions for mobile hosts of multiple service providers
(SPs). It uses the logic of the placement algorithm as a smart contract with the consideration of resources from all mobile edge hosts participating in the system. However, the
proposed algorithm only considers fairness in resource sharing among multiple SPs. Other factors such as energy consumption and end-to-end latency, which are important to energy-constrained mobile devices and delay-sensitive applications, have not been considered. To reduce the blockchain’s energy and computation requirements without compromising its traceability and non-repudiation, a lightweight blockchain system known as LightChain is proposed in [10]. It features a consensus mechanism with low computing power consumption, a lightweight data structure for information broadcast, and a method to limit the growing storage cost of the ledger.

In [11], a secure federated learning technique called Deepchain is proposed, where blockchain cryptographic features are used to preserve the privacy of local gradients and guarantee the auditability of the training process. Its smart contract comprises a trading contract and a processing contract, which guide the secure training process. It is evaluated in terms of cipher size, throughput, accuracy, and training time. Similarly, in [12], blockchain is employed to secure federated learning, but with the additional use of digital twins of end devices at edge servers to mitigate the issue of unreliable transmission links. However, it is unclear how deviations in data between end devices and their digital twins can impact the resulting edge intelligence. Furthermore, using blockchain to secure the federated learning process can incur high mining costs and long information exchange delays due to the consensus protocol of the blockchain network. In contrast, our proposal herein does not use blockchain to secure federated learning, but to secure information about the resourceful user devices in the OF-RAN, in order to facilitate their selection as service nodes for a v-FAP to perform federated learning or other offloading services.
**3. System Model**
Figure 2a shows the system model of the proposed smart-contract-enabled OF-RAN, in which the seed node and service nodes that constitute a v-FAP manage and execute the computation tasks offloaded by a client, respectively. The following explains the function of each key entity in the model:
**Figure 2. (a) System model; (b) Sequence of operations.**
_•_ _Smart Contract_: Defines the rules and logic for automating the OF-RAN processes through four sub-contracts: (i) registration; (ii) selection and placement; (iii) service; and (iv) mining. The registration contract registers interested resourceful user devices as potential service nodes. The selection and placement contract firstly selects a set of user devices based on the cost of using their resources as service nodes in a v-FAP, and then executes OF-RAN’s task-to-node assignment (TNA) as defined in our follow-up work in [13]. The TNA is a process for the placement of the service tasks into the service nodes based on performance criteria such as node energy, process latency, and fairness in workload distribution. The service contract implements the service logic, which is application-specific. As a use case of our proposed smart contract for OF-RAN, the federated learning application is chosen. The mining contract is responsible for generating a new block from the transaction data produced upon executing the service contract to update the ledger;
_•_ _I/O Interface: For both seed node and service nodes to exchange information when_
serving a client;
_•_ _Local Application Model: A service node’s application model that processes information_
from the seed node and generates a local outcome for the client;
_•_ _Global Application Model: A seed node’s application model that collates the local_
outcome from each service node and generates a global outcome for the client;
_•_ _Lookup Table: Records the identity and performance of each service node, which can be_
looked up for future selection of service nodes when a new v-FAP is to be formed;
_•_ _Application Output: The global outcome generated by the global application model._ In the federated learning use case, the application output is the aggregated weight, also referred to as the global update, for the client’s DL model.
Figure 2b shows the sequence of operations of our system. For a resource-limited
client to offload its task to OF-RAN, it first sends a service request {1} including the task
requirements to its associated remote radio head (RRH), which in turn notifies the client
{2} of an available nearby seed node to offload its task. The client then offloads its task
{3} to this seed node in the form of data and initial model parameters. The seed node
splits the task into sub-tasks, and, based on the TNA scheme, places the sub-tasks into
each service node {4}. Upon processing, the service nodes send their local outcomes
{5} to the seed node for collation. Finally, the seed node generates and returns a global
outcome {6} to the client.
In our proposed smart-contract-enabled OF-RAN, every seed node is a blockchain
node that monitors the transactions between nodes in a v-FAP. On completing the client’s
task, the seed node updates its lookup table, and then mines a new block from it as proof-of-work for propagation to the blockchain under a permissioned consensus protocol [14]. Thus,
computations performed in the v-FAP for an offloading application are unaffected by the
blockchain computation performed by the seed node. This ensures that the delay-sensitive
applications can be supported.
**4. Process Design**
This section details the design of our smart contract, which consists of four components for automating the relevant OF-RAN processes: (i) registration contract; (ii) selection
and placement contract; (iii) service contract; and (iv) mining contract, whose pseudo-code
are shown in Algorithms 1–4, respectively.
_4.1. Registration Contract_
The purpose of the registration contract is for each seed node to maintain a registry of
resourceful user devices in its proximity that could potentially become service nodes in
a v-FAP to serve an offloading client. In this contract, the seed node broadcasts a request
for expression-of-interest (EoI) in participating in a v-FAP. Each interested user device qj
replies with its identification IDj and unit cost uj,k of using resource k for all K resources in
qj. The types of resources may include energy, computation, communication, and storage resources, as well as dwell time, i.e., the amount of time a user device remains available for the v-FAP or remains in coverage of the seed node, whichever is less. The seed node then registers the details of each replying device in its lookup table.
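To make the registration flow concrete, the following is a minimal Python sketch of how a seed node could maintain such a lookup table; the class and field names are our own illustrative choices, not the paper's implementation.

```python
class RegistrationContract:
    """Maintains the seed node's lookup table L of potential service nodes."""

    def __init__(self):
        # L: device id -> {resource type k: unit cost u_{j,k}}
        self.lookup_table = {}

    def register(self, eoi_replies):
        """Register each device that replied to the EoI broadcast.

        eoi_replies: list of (device_id, {resource_type: unit_cost}) tuples,
        i.e., ID_j and u_{j,k} for all k in K.  Returns the table size.
        """
        for device_id, unit_costs in eoi_replies:
            self.lookup_table[device_id] = dict(unit_costs)
        return len(self.lookup_table)
```

For example, two devices replying with energy and computation unit costs would yield a two-entry table that the selection contract can later rank.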
_4.2. Selection and Placement Contract_
The purpose of this contract is two-fold: (i) select N of registered user devices as
service nodes in a v-FAP; (ii) execute the OF-RAN’s TNA process in [13] for the placement
of service tasks into service nodes. The selection is based on evaluating the total cost vj of
using each user device lj in lookup table L to perform all M tasks from the client. Each task
m ∈ M has a demand R^m_k for resource k ∈ K. Knowing the unit cost uj,k of using resource k in lj,
the cost cj,k of using resource k in lj, for all M tasks can be calculated. Next, the total cost vj
of using all K resources in lj, for all user devices in L, is calculated and ranked in ascending
order of cost. Then, the top N service nodes, i.e., the N lowest-cost nodes, are selected.
Finally, the TNA process is executed to find the optimal placement of service tasks
into service nodes based on the pareto-optimization of a multi-objective TNA problem [13].
The obtained solutions (placements) are non-dominant, i.e., not dominated by any member
of the solution set for all the objectives. The seed node can select from the obtained
placements, one that best meets the client’s requirements, such as minimum completion
time for delay-sensitive applications, maximum fairness for high-reliability applications by
minimizing service node failures due to overloading, and minimum energy consumption
for energy-conserving applications.
**Algorithm 1: Registration Contract**
1. **Input: Q = {q1, q2, ..qV** _}: set of V user devices in proximity of seed node_
2. _η: set of user devices that express interest as service nodes (η ∈_ _Q)_
3. _L: lookup table of seed node_
4. _IDj: identification of user device qj_
5. _uj,k: unit cost of using resource k ∈_ _K in user device qj_
6. _K: set of resource types in a potential service node_
7. **Output: registered user devices in L as potential service nodes**
8. **Process: broadcast request for EoI to all user devices in Q**
9. add to η for each received EoI
10. if |η| > 0 // if not empty set
11. for each qj in η do
12. register IDj, uj,k in L ∀ _k ∈_ _K_
13. end for
14. end if
**Algorithm 2: Selection & Placement Contract**
1. **Input: L = {l1, l2, ..lY}: set of Y registered user devices in lookup table**
2. _N: number of service nodes in a v-FAP_
3. _M: set of service tasks offloaded from the client to the v-FAP_
4. _K: set of resource types in a potential service node_
5. _R^m_k: demand for resource k ∈ K by a service task m ∈ M_
6. _uj,k: unit cost of using resource k ∈_ _K in user device lj_
7. **Output: TNA for v-FAP**
8. **Process: for each user device lj in L do**
9. for each resource k in K do
10. _cj,k = uj,k ∑ R^m_k ∀ m ∈ M // cost of using resource k in lj for all M tasks_
11. end for
12. _vj = ∑_ _cj,k ∀_ _k ∈_ _K // total cost of using all K resources in lj_
13. end for
14. rank user devices in L in ascending order of cost
15. _S ←_ top N user devices in L // select N least-cost nodes as service nodes
16. execute TNA(S, M)
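As a minimal illustration of Algorithm 2's cost evaluation and ranking (lines 8–15), the Python sketch below computes cj,k and vj and selects the N least-cost devices; the TNA placement itself (line 16) is defined in [13] and omitted here, and all function names are our own.

```python
def select_service_nodes(lookup_table, demands, n):
    """Select the n lowest-cost registered devices as service nodes.

    lookup_table: {device_id: {resource k: unit cost u_{j,k}}}  (table L)
    demands: list of {resource k: demand R^m_k}, one dict per task m in M
    """
    totals = {}
    for dev, unit_costs in lookup_table.items():
        # c_{j,k} = u_{j,k} * sum of R^m_k over all tasks; v_j = sum over k
        totals[dev] = sum(
            u * sum(task.get(k, 0) for task in demands)
            for k, u in unit_costs.items()
        )
    ranked = sorted(totals, key=totals.get)  # ascending order of v_j
    return ranked[:n]
```

With three candidate devices and two tasks, the device with the lowest unit cost per resource is ranked first, matching the ascending-cost selection described above.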
**Algorithm 3: Service Contract for Federated DL Process**
1. **Input: S = {s1, s2, ..sN}: set of N service nodes in a v-FAP**
2. _T: total number of iterations in an epoch_
3. _P: total number of epochs_
4. _ε: termination threshold_
5. _wc: initial weights from client_
6. _L: lookup table of seed node_
7. **Output: final global weight w^f; updated L**
8. **Process: initialize w^f, w^(p=0), w_j^(t=0, p=1) ← wc for each sj in S**
9. set p ← 1
10. while (‖w^f − w^(p−1)‖ ≤ ε ∧ p ≤ P) do
11. set t ← 0
12. while (t < T) do
13. set t ← t + 1
14. compute local update w_j^(t,p) using (5) for each sj in S
15. end while
16. for each sj in S do
17. set w_j^(p) ← w_j^(T,p)
18. envelop Tx_j^(p) ← (w_j^(p), t_j^(p)) // create transaction
19. upload Tx_j^(p) to seed node
20. end for
21. Tx^(p) = (Tx_1^(p), Tx_2^(p), ..Tx_N^(p)) // seed node collates transactions
22. verify Tx^(p)
23. extract w_j^(p) and t_j^(p) from Tx_j^(p) for each Tx_j^(p) in Tx^(p)
24. compute global update w^(p) using (6)
25. update w^f ← arg min_{w ∈ {w^f, w^(p)}} F(w)
26. update L with t_j^(p) for each sj in S
27. set p ← p + 1
28. end while
**Algorithm 4: Mining Contract**
1. **Input: S = {s1, s2, ..sN}: set of N service nodes in a v-FAP**
2. _L: lookup table of seed node_
3. _β: block generation time_
4. _ϕ: hash pointer to previous block_
5. **Output: new block appended to blockchain**
6. **Process: retrieve t_j^(p) from L for each sj in S**
7. t^(p) = (t_1^(p), t_2^(p), ..t_N^(p))
8. generate block B^(p) ← (t^(p), β, ϕ)
9. append B^(p) to blockchain
_4.3. Service Contract_
The design of the service contract is application-specific. In this work, federated
learning is used as an application example of the OF-RAN for computation offloading.
Federated learning is a new paradigm for machine learning, where DL models are
executed locally in a distributed manner and results are sent to a server for aggregation [15].
The federated DL approach can be enabled by our smart-contract-enabled OF-RAN, where
a resource-limited client can offload the training of its DL model by sending its training data
and model parameters to a seed node in proximity. In turn, the seed node splits and sends
the training data and model parameters to each service node in the v-FAP. The service nodes
then train their respective local models, and send parameters of their trained models to the
seed node. Finally, the local parameters are aggregated into a global model for returning to
the client. Federated DL using service nodes in a v-FAP relies on collating their model’s
weight parameters obtained by learning from training data sets. A training data sample i is
described as a two-dimensional array (xi, yi), wherein the DL model takes vector xi as an
input (e.g., image pixels) and gives a scalar output yi (e.g., image label). For each sample i,
the DL model with weight w computes a loss function fi(w), the result of which indicates
the extent of model errors, and, thus, should be minimized in the learning process.
We consider a seed node with N resourceful user devices in its proximity that can be recruited as service nodes, and M is the set of training data from the client. We denote the set of service nodes in a v-FAP as S = {s1, s2, . . . , sN}. Each sj, j = {1..N}, receives a subset of training data mj from the seed node, and M = ∑_{j=1..N} mj. A loss function for sj over its training data mj can be defined as:

Fj(w) ≜ (1/|mj|) ∑_{i ∈ mj} fi(w)    (1)

where |mj| denotes the size of mj. The global loss function for a v-FAP can be defined as:

F(w) ≜ (∑_{j=1..N} Fj(w)) / |M|    (2)
The goal of the DL task is to find the optimal weight parameters w′ that minimize F(w):

w′ ≜ arg min F(w)    (3)
We denote w_j^(t) as the local model parameters of each service node sj at iteration t of learning. Here, t = 0, 1, 2, .., T, where T is the maximum number of iterations. Each sj trains its local DL model using the subset of training data mj. At the beginning, all service nodes in S initialize their local model parameters. At each subsequent iteration t > 0, each sj updates its w_j^(t) by minimizing the loss function using the gradient descent update rule in (4), where λ > 0 is the learning rate and ∇Fj(w_j^(t−1)) is the average gradient on its training data at the previous local model parameters w_j^(t−1):

w_j^(t) = w_j^(t−1) − λ ∇Fj(w_j^(t−1))    (4)
After T iterations, the updated local model parameters from each sj are sent to the seed node, where they are aggregated at the end of each epoch into a global model update, for P epochs. For each epoch, a total of T iterations of local update are performed at each sj. The local update of sj at epoch p and iteration t is given by:

w_j^(t,p) = w_j^(t−1,p) − (λ/mj) [∇Fj(w_j^(t−1,p)) − ∇Fj(w^(p−1)) + ∇F(w^(p−1))]    (5)

where p = 1, 2, .., P, w^(p) is the global update at epoch p, and ∇F(w^(p)) = (1/|M|) ∑_{j=1..N} mj ∇Fj(w^(p)) is the global gradient value at epoch p after T iterations. Let w_j^(p) be the local update of sj at epoch p after T iterations. Then, w^(p) is updated as:

w^(p) = w^(p−1) + (1/|M|) ∑_{j=1..N} mj (w_j^(p) − w^(p−1))    (6)
The final output of this process is w^f, which gives the final model update that produces the minimum global loss over an entire execution of local and global updates.
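As a sanity check on the update rules, the plain-Python sketch below applies (5) and (6) elementwise over the weight vectors; the gradient vectors are passed in precomputed, and all names are illustrative rather than from the paper's implementation.

```python
def local_update(w_prev, g_local, g_anchor, g_global, lam, m_j):
    """One iteration of rule (5) at service node s_j, elementwise over weights.

    g_local  ~ grad F_j(w_j^(t-1,p)),  g_anchor ~ grad F_j(w^(p-1)),
    g_global ~ grad F(w^(p-1)); all are precomputed gradient vectors.
    """
    return [w - (lam / m_j) * (gl - ga + gg)
            for w, gl, ga, gg in zip(w_prev, g_local, g_anchor, g_global)]


def global_update(w_prev, local_weights, m_sizes):
    """Aggregation rule (6) at the seed node, with |M| = sum of m_j."""
    total = sum(m_sizes)
    return [wp + sum(m * (w[i] - wp) for w, m in zip(local_weights, m_sizes)) / total
            for i, wp in enumerate(w_prev)]
```

For instance, two service nodes whose local updates deviate symmetrically from the previous global weight leave the global weight unchanged under (6), as the weighted deviations cancel.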
At the end of every epoch p, each sj in S envelops its local update w_j^(p) and task completion time t_j^(p) to create a transaction Tx_j^(p) to upload to the seed node. Upon receiving, the seed node verifies the transactions, extracts the local updates {w_1^(p), w_2^(p), ..w_N^(p)} to compute the global update w^(p), and updates its lookup table by recording the completion times {t_1^(p), t_2^(p), ..t_N^(p)} of each service node. The completion times can be utilized by the seed node for a rating system that could affect their future selection for the v-FAP.
_4.4. Mining Contract_
The purpose of this contract is to generate and append a new block to the blockchain.
Each block comprises a body and a header. The body contains the collated service nodes’ completion times t^(p) = {t_1^(p), t_2^(p), ..t_N^(p)}, while the header contains information about the seed node’s block generation rate β (a.k.a. block interval time) and the hash pointer ϕ to the previous block. On retrieving and collating the completion times from its lookup table, the seed node generates a new block B^(p) = (t^(p), β, ϕ) for appending to the blockchain.
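To make the block structure concrete, here is a toy Python sketch of block generation; the fields mirror Section 4.4, but the JSON serialization and SHA-256 hash pointer are our own illustrative choices, not the paper's implementation.

```python
import hashlib
import json


def generate_block(completion_times, beta, prev_hash):
    """Build block B^(p) = (t^(p), beta, phi) and the hash for the next block."""
    block = {
        "body": {"completion_times": completion_times},          # t^(p)
        "header": {"block_interval": beta, "prev_hash": prev_hash},  # beta, phi
    }
    # Deterministic serialization so the same block always hashes identically.
    digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block, digest


# Appending: each new block's header points at the previous block's digest.
genesis, h0 = generate_block([], 180, "0" * 64)
block1, h1 = generate_block([12.3, 11.8, 13.0], 180, h0)
```

The hash pointer in each header is what links the chain: altering any recorded completion time changes the block's digest and breaks every subsequent pointer.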
**5. Evaluation Methodology**
_5.1. Emulation_
This section describes our emulation of a smart-contract-operated v-FAP to assist
with client training of a DL model based on federated learning for image-based object
detection. The v-FAP is emulated using four Raspberry Pi 4 Model B single-board
computers (4 GB RAM, 1.5 GHz CPU) as service nodes, and an Acer Aspire F15 laptop
(8 GB RAM, 2.5 GHz CPU) as seed node (see Figure 3). The laptop is configured as a
WiFi hotspot to communicate with the service nodes. The DL models in both seed and
service nodes are implemented using Python 3.7 and TensorFlow 2.3.0. Python is also used for implementing the smart contract in the seed node.
**Figure 3. Emulated v-FAP for offloaded DL tasks.**
For federated learning, we use the MNIST dataset [16], which contains 33,600 training instances and 8400 validation instances of 10 object classes. The training dataset (33,600 rows × 785 columns) is split into 1050 mini-batches (each of 32 rows × 785 columns) for equal distribution among the service nodes. The DL model of each service node is a deep
neural network (DNN) with seven layers: one input, one output, and five hidden layers.
Each service node trains its own DNN model and sends the trained model parameters to
the seed node. Two metrics used in this emulation evaluation are defined as follows:
_•_ _Mean precision accuracy (MPA): The percentage of correctly predicted test instances_
using the global model from the total number of test data instances;
_•_ _Latency: The total time incurred for one epoch operation of the federated DL process._
This includes both computation and communication time.
For the two metrics above, the number of service nodes in the v-FAP is considered the
most important factor underlying their performance. In Section 6.1, we discuss the results
of these metrics under the effect of varying number of service nodes.
_5.2. Simulation_
We use a realistic simulator to evaluate the performance of our blockchain network
for OF-RAN. The Bitcoin simulator [14] built on the ns3 network simulator is used for
simulating a realistic blockchain network with a set of consensus and network parameters
such as network delay, block generation time, and block size. The following defines some
parameters and metrics used in our simulation evaluation:
_•_ _Block interval: The time interval between blocks being added to the blockchain. Herein,_
the interval depends on the average time for service nodes to compute and send their
transactions to a seed node for a new block to be generated;
_•_ _Stale block: Refers to a block not added to the blockchain due to concurrency or conflicts_
between miners. It triggers chain forks that slow the growth of main chain and, thus,
is detrimental to the security of the blockchain;
_•_ _Stale block rate: The percentage of stale blocks among the total number of blocks mined;_
_•_ _Throughput: The number of transactions in a block per unit of block interval time, in_
units of transactions per second (tps).
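As a rough consistency check (under our simplifying assumption that every block is filled to its size limit with fixed-size transactions), the throughput metric can be estimated directly from the block size, transaction size, and block interval:

```python
def estimated_throughput_tps(block_size_mb, tx_size_kb, block_interval_min):
    """Upper bound on throughput for a full block, in transactions per second."""
    txs_per_block = block_size_mb * 1024 / tx_size_kb  # fixed-size transactions
    return txs_per_block / (block_interval_min * 60)


# e.g., 2 MB blocks of 1 KB transactions every 3 minutes: 2048 / 180 ≈ 11.4 tps
```

This simple bound illustrates why throughput grows with block size and shrinks with block interval, the trade-off explored in Section 6.2.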
Table 1 shows the simulation settings used. The block size is set as recommended
in [14]. The block interval is selected based on our findings on the time incurred by our
emulated v-FAP with 1–4 service nodes. The transaction size used reflects the approximate
size of information sent by service nodes to the seed node. The number of miners, i.e., seed
nodes, is set to 16, to adequately represent the scale of the blockchain network that we
envisioned for OF-RAN, while keeping the simulation time tractable.
**Table 1. Simulation settings.**
| Parameter | Value | Unit |
| --- | --- | --- |
| Block size δ | 0.25–4 | MB |
| Block interval τ | 2–7 | minutes |
| Transaction size | 1 | KB |
| Number of miners | 16 | – |
As mentioned above, the stale blocks can weaken the security of the blockchain, and
parameters such as block size and block interval are known to affect the stale block rate.
Additionally, the block size controls the system throughput. Thus, an appropriate block size
and block interval are required for our blockchain to balance a trade-off between security
and throughput. In Section 6.2, we discuss the results of stale block rate and throughput
under the effects of varying block size and block interval.
**6. Results and Discussion**
_6.1. Effect of Varying Service Nodes_
The number of service nodes N in a v-FAP can impact the training of the federated DL model. Figure 4 shows the results obtained from our emulated v-FAP in terms of latency and mean precision accuracy (MPA) under varying N. It can be observed that latency decreases as N increases. This is firstly because higher N splits the training data into smaller mini-batches, resulting in each service node incurring a smaller computation delay. Furthermore, since the learning tasks in all service nodes are executed in parallel, increasing N decreases the maximum delay among the service nodes. It is also observed that the overall latency is significantly dominated by computation rather than communication delay.
**Figure 4. Effect of varying N on latency and mean precision accuracy.**
The MPA is determined in the seed node using the global model obtained after the aggregation of local updates from each service node in the v-FAP. The results show that as N increases, the MPA expectedly decreases, but only marginally, which can be attributed to the reduced number of mini-batches per service node. Hence, it can be inferred that the MPA depends on the number of mini-batches used for training the local model of the service nodes.

There is an inherent trade-off between latency and the accuracy of training the global model. The seed node can, thus, select the number of service nodes in a v-FAP based on the client’s requirements. For instance, when a learning task is delay-sensitive, the seed node can use more service nodes to reduce latency. However, if high accuracy is more critical than low latency, the seed node can use either fewer service nodes or more training epochs to improve accuracy.
_6.2. Effect of Varying Block Size and Block Interval_

The setting of block size δ and block interval τ can impact the security and throughput of our blockchain network for OF-RAN. Their settings are, in turn, factors to consider when determining the appropriate number of service nodes N in a v-FAP.

Figure 5 shows the stale block rate and throughput under varying δ. The results are obtained for a default τ = 4.5 min. It can be seen that as δ increases, the throughput increases. However, the stale block rate also increases, which can cause the blockchain to be more susceptible to attacks.
Figure 6 shows the results under varying τ, obtained for a default δ = 2 MB. It shows that as τ increases, the stale block rate decreases, but so does the throughput. Hence, we can make appropriate choices of δ and τ that meet our stale block rate and throughput requirements. Moreover, the latency incurred by the v-FAP controls the lower limit of τ, since the seed node cannot generate a block before all transactions are received from the service nodes. Alternatively, the δ and τ that meet the required stale block rate and throughput can inform an appropriate choice of N that simultaneously meets the client’s latency and MPA requirements.

For instance, if the requirements are stale block rate ≤ 1%, throughput ≥ 10 tps, latency ≤ 200 s, and MPA ≥ 92%, then, based on the results in Figure 6, the appropriate settings for our system could be δ = 2 MB, τ = 3 min, and N = 3, for a resulting stale block rate of 0.8%, throughput of 11.5 tps, latency of 161.86 s, and MPA of 92.19%.
**Figure 5.Figure 5. Effect of block sizeEffect of block size δ𝛿 on stale block rate and throughput. on stale block rate and throughput.**
since the seed node cannot generate a block before all transactions are received from the
service nodes. Alternatively, the _δ and_ _τ_
throughput can inform an appropriate choice of N
latency and MPA requirements.
For instance, if the requirements are stale block rate ≤
tency ≤ 200 secs, and MPA ≥ 92%, then, based on the results in Figure 6, the appropriate
settings for our system could be δ = 2 MB; τ = 3 min; and N =
= 0.8%, throughput = 11.5 tps, latency = 161.86 secs, and MPA = 92.19%.
Figure 6 shows the results under varying τ obtained for a default δ = 2 MB. It shows
that as τ increases, the stale block rate decreases, but so does the throughput. Hence, we
can make appropriate choices of δ and τ that meet our stale block rate and throughput
requirements. Moreover, the latency incurred by the v-FAP controls the lower limit of
_τ, since the seed node cannot generate a block before all transactions are received from_
the service nodes. Alternatively, the δ and τ that meet the required stale block rate and
throughput can inform an appropriate choice of N that simultaneously meet the client’s
latency and MPA requirements.Figure 5. Effect of block size 𝛿 on stale block rate and throughput.
**Figure 6. Effect of block interval 𝜏 on stale block rate and throughput.**
**Figure 6.Figure 6. Effect of block intervalEffect of block interval τ𝜏 on stale block rate and throughput. on stale block rate and throughput.**
For instance, if the requirements are stale block rate 1%, throughput 10 tps,
_≤_ _≥_
latency ≤ 200 secs, and MPA ≥ 92%, then, based on the results in Figure 6, the appropriate
settings for our system could be δ = 2 MB; τ = 3 min; and N = 3, for a resulting stale block
rate = 0.8%, throughput = 11.5 tps, latency = 161.86 secs, and MPA = 92.19%.
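The parameter-selection rule above amounts to filtering measured operating points against the client’s constraints. A minimal sketch in Python, where the candidate tuples are illustrative stand-ins for points read off curves such as those in Figures 5 and 6 (only the first row matches the paper’s reported example):

```python
# Sketch: choose (block size δ, block interval τ, service nodes N) that meet
# all client requirements at once. The candidate tuples are illustrative
# stand-ins for operating points read off curves like Figures 5 and 6; only
# the first row matches the paper's reported example.
candidates = [
    # (delta_MB, tau_min, N, stale_rate_pct, throughput_tps, latency_s, mpa_pct)
    (2, 3, 3, 0.8, 11.5, 161.86, 92.19),
    (2, 2, 3, 1.4, 14.0, 150.00, 92.10),  # shorter τ: higher tps but too many stale blocks
    (4, 3, 3, 1.6, 18.0, 160.00, 92.20),  # bigger δ: higher tps but too many stale blocks
    (2, 4, 5, 0.6,  9.0, 120.00, 91.80),  # longer τ: too few tps, MPA below the floor
]

def feasible(p, max_stale=1.0, min_tps=10.0, max_latency=200.0, min_mpa=92.0):
    """True if an operating point satisfies every requirement simultaneously."""
    _delta, _tau, _n, stale, tps, latency, mpa = p
    return stale <= max_stale and tps >= min_tps and latency <= max_latency and mpa >= min_mpa

ok = [p for p in candidates if feasible(p)]
print(ok)  # only the (2 MB, 3 min, N = 3) point survives all four constraints
```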
**7. Conclusions**
For a distributed system such as OF-RAN to be deployed in the real world, it needs
a trusted operating environment and efficient means of managing its processes related
to v-FAP formation and service execution. This paper seeks to leverage blockchain with
smart contracts as an enabler of trusted and distributed systems to propose an automated
mechanism using smart contracts for OF-RAN processes. The algorithms defining the
logic of the smart contracts are designed, including the service logic for federated DL as a
use-case of the proposed mechanism. The resulting smart-contract-enabled OF-RAN is
implemented and evaluated through emulation and simulation. The evaluation validates
the federated DL performance in terms of latency and precision accuracy under the
effect of varying the number of service nodes in a v-FAP. The results show that increasing the number of service nodes reduces the latency but also marginally decreases the accuracy, suggesting the need to
balance a trade-off between the latency and accuracy of training the DL model, according
to the client’s requirements.
The evaluation also validates the blockchain performance in terms of stale block rate
and throughput under the effects of varying block size and block interval. It shows that
throughput can be increased by increasing block size, while stale block rate can be decreased
by increasing block interval. The appropriate setting of process parameters such as block
size, block interval, and number of service nodes to meet the often competing requirements
of latency, precision accuracy, stale block rate, and throughput is also demonstrated. As
future work, the expected gains in terms of operational efficiency and cost of service delivery
for such an automated OF-RAN could be analyzed for different operational scenarios and
cost structures.
**Author Contributions: Conceptualization, J.J. and B.-C.S.; methodology, J.J. and B.-C.S.; software,**
J.J.; validation, J.J. and B.-C.S.; formal analysis, J.J. and B.-C.S.; investigation, J.J.; resources, B.-C.S.;
data curation, J.J.; writing—original draft preparation, J.J.; writing—review and editing, B.-C.S. and
P.H.J.C.; visualization, J.J.; supervision, B.-C.S. and P.H.J.C.; project administration, B.-C.S. All authors
have read and agreed to the published version of the manuscript.
**Funding: This research received no external funding.**
**Data Availability Statement: The data that support the findings of this study are available from the**
authors upon reasonable request.
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Habibi, M.A.; Nasimi, M.; Han, B.; Schotten, H.D. A comprehensive survey of RAN architectures toward 5G mobile communication system. IEEE Access 2019, 7, [70371–70421. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2919657)
2. Peng, M.; Yan, S.; Zhang, K.; Wang, C. Fog computing-based radio access networks: Issues and challenges. IEEE Netw. Mag. 2016,
_[30, 46–53. [CrossRef]](http://doi.org/10.1109/MNET.2016.7513863)_
3. Jijin, J.; Seet, B.-C. Opportunistic fog computing for 5G radio access networks: A position paper. In Proceedings of the Third
International Conference on Smart Grid and Innovative Frontiers in Telecommunications (SmartGIFT), Auckland, New Zealand,
23–24 April 2018.
4. Lilien, L.; Gupta, A.; Kamal, Z.; Yang, Z. Opportunistic resource utilization networks—a new paradigm for specialized ad hoc
[networks. Comput. Electr. Eng. 2010, 36, 328–340. [CrossRef]](http://doi.org/10.1016/j.compeleceng.2009.03.010)
5. Jijin, J.; Seet, B.-C.; Chong, P.H.J. Blockchain enabled opportunistic fog-based radio access network: A position paper. In
Proceedings of the 29th International Telecommunication Networks and Applications Conference (ITNAC), Auckland, New
Zealand, 27–29 November 2019.
6. Rathore, S.; Pan, Y.; Park, J.H. BlockDeepNet: A Blockchain-based secure deep learning for IoT network. Sustainability 2019, 11,
[3974. [CrossRef]](http://doi.org/10.3390/su11143974)
7. Cui, L.; Su, X.; Ming, Z.; Chen, Z.; Yang, S.; Zhou, Y.; Xiao, W. CREAT: Blockchain-assisted compression algorithm of federated
[learning for content caching in edge computing. IEEE Internet Things J. 2022, 9, 14151–14161. [CrossRef]](http://doi.org/10.1109/JIOT.2020.3014370)
8. Rathore, S.; Kwon, B.W.; Park, J.H. BlockSecIoTNet: Blockchain-based decentralized security architecture for IoT network. J. Netw.
_[Comput. Appl. 2019, 143, 167–177. [CrossRef]](http://doi.org/10.1016/j.jnca.2019.06.019)_
9. Zhu, H.; Huang, C.; Zhou, J. Edgechain: Blockchain-based multi-vendor mobile edge application placement. In Proceedings of
the 4th IEEE Conference on Network Softwarization and Workshops, Montreal, QC, Canada, 25–29 June 2018.
10. Liu, Y.; Wang, K.; Lin, Y.; Xu, W. LightChain: A lightweight Blockchain system for industrial Internet of Things. IEEE Trans. Ind.
_[Inform. 2019, 15, 3571–3581. [CrossRef]](http://doi.org/10.1109/TII.2019.2904049)_
11. Weng, J.; Zhang, J.; Li, M.; Zhang, Y.; Luo, W. Deepchain: Auditable and privacy-preserving deep learning with blockchain-based
[incentive. IEEE Trans. Dependable Secur. Comput. 2021, 18, 2438–2455. [CrossRef]](http://doi.org/10.1109/TDSC.2019.2952332)
12. Lu, Y.; Huang, X.; Zhang, K.; Maharjan, S.; Zhang, Y. Low-latency federated learning and Blockchain for edge association in
[digital twin empowered 6G networks. IEEE Trans. Ind. Inform. 2021, 17, 5098–5107. [CrossRef]](http://doi.org/10.1109/TII.2020.3017668)
13. Jijin, J.; Seet, B.-C.; Chong, P.H.J. Multi-objective optimization of task-to-node assignment in opportunistic fog RAN. Electronics
**[2020, 9, 474. [CrossRef]](http://doi.org/10.3390/electronics9030474)**
14. Gervais, A.; Karame, G.O.; Wüst, K.; Glykantzis, V.; Ritzdorf, H.; Capkun, S. On the security and performance of proof of work
blockchains. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security, Vienna, Austria, 24–28
October 2016.
15. Zhao, Z.; Feng, C.; Yang, H.H.; Luo, X. Federated-learning-enabled intelligent fog radio access networks: Fundamental theory,
[key techniques, and future trends. IEEE Wirel. Commun. Mag. 2020, 27, 22–28. [CrossRef]](http://doi.org/10.1109/MWC.001.1900370)
16. [The MINST Database. Available online: http://yann.lecun.com/exdb/mnist/ (accessed on 5 August 2022).](http://yann.lecun.com/exdb/mnist/)
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/jsan11030053?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/jsan11030053, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2224-2708/11/3/53/pdf?version=1663066504"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-09-13T00:00:00
|
[
{
"paperId": "c0e71da8db8eae84fa1396ce6e293bd40e36b3f3",
"title": "Multi-Objective Optimization of Task-to-Node Assignment in Opportunistic Fog RAN"
},
{
"paperId": "568c714016ada4c88ae3152f48b6e39f63a31963",
"title": "Blockchain Enabled Opportunistic Fog-based Radio Access Network: A Position Paper"
},
{
"paperId": "3591055b6b5b0426d8962a116a330eb6e3e00332",
"title": "BlockSecIoTNet: Blockchain-based decentralized security architecture for IoT network"
},
{
"paperId": "c1021b790b1b0f6c6cdf80001a08c0e7eba9cc43",
"title": "BlockDeepNet: A Blockchain-Based Secure Deep Learning for IoT Network"
},
{
"paperId": "88b0f1160e4bf0e8f43b5ecd7c6f034c7bdaf1c1",
"title": "Opportunistic Fog Computing for 5G Radio Access Networks: A Position Paper"
},
{
"paperId": "06a9f7c0977dd1422dda1c7ac207f04dc40d776a",
"title": "EdgeChain: Blockchain-based Multi-vendor Mobile Edge Application Placement"
},
{
"paperId": "8b32309a7730de87a02e38c7262307245dca5274",
"title": "On the Security and Performance of Proof of Work Blockchains"
},
{
"paperId": "d55fe2e1a72ccf314243b6c9b8b6ec215699cb42",
"title": "Opportunistic resource utilization networks - A new paradigm for specialized ad hoc networks"
}
] | 14,147
|
en
|
[
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/031339fcbb7abe1c1355605012d2ba9f25f34331
|
[] | 0.894308
|
A Quantitative Blockchain-Based Model for Construction Supply Chain Risk Management
|
031339fcbb7abe1c1355605012d2ba9f25f34331
|
The eurasia proceedings of science, technology, engineering & mathematics
|
[
{
"authorId": "2089348797",
"name": "Clarissa Amico"
},
{
"authorId": "2258997734",
"name": "Roberto Cigolini"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"eurasia proc sci technol eng math"
],
"alternate_urls": null,
"id": "edae54fb-1f50-4383-a056-f3bf4219cd15",
"issn": "2602-3199",
"name": "The eurasia proceedings of science, technology, engineering & mathematics",
"type": null,
"url": "http://dergipark.gov.tr/epstem"
}
|
Although the use of Blockchain Technology in the construction industry has been limited, nowadays several cases of adoption of this technology in the construction sector can be identified. Such examples consist of maintaining digital asset records, timestamps for contracts or transactions, multiple signature transactions, smart contracts, and the repository of real information. This paper proposes a methodology consisting of an Electre Tri multi-criteria analysis method where a list of indicators and a questionnaire are used to fill a model that can be applied to evaluate the suitability of blockchain technology as a tool to mitigate supply chain risks that small and medium enterprises face in the construction industry. The model has been applied to two companies operating in the construction industry. This study contributes to the existing literature by quantitatively assessing the adoption of blockchain technology on two real case studies – company Alpha and company Beta – to limit supply chain risk in the construction sector. The dimensions considered in the analysis are company data, payments, materials, supply chain structure, and information and document flow. According to the findings, the model suggests that for company Alpha blockchain technology is recommended but not useful to mitigate risks and so improve supply chain performance. On the contrary, results show that for company Beta the implementation of blockchain technology is useful.
|
**ISSN: 2602-3199**
**The Eurasia Proceedings of Science, Technology, Engineering & Mathematics (EPSTEM), 2023**
**Volume 23, Pages 59-68**
**ICRETS 2023: International Conference on Research in Engineering, Technology and Science**
# A Quantitative Blockchain-Based Model for Construction Supply Chain
Risk Management
**Clarissa Amico**
Politecnico di Milano
**Roberto Cigolini**
Politecnico di Milano
## Abstract: Although the use of Blockchain Technology in the construction industry has been limited, nowadays
several cases of adoption of this technology in the construction sector can be identified. Such examples consist of
maintaining digital asset records, timestamps for contracts or transactions, multiple signature transactions, smart
contracts, and the repository of real information. This paper proposes a methodology consisting of an Electre Tri
multi-criteria analysis method where a list of indicators and a questionnaire are used to fill a model that can be
applied to evaluate the suitability of blockchain technology as a tool to mitigate supply chain risks that small
and medium enterprises face in the construction industry. The model has been applied to two companies
operating in the construction industry. This study contributes to the existing literature by quantitatively
assessing the adoption of blockchain technology on two real case studies – company Alpha and company Beta –
to limit supply chain risk in the construction sector. The dimensions considered in the analysis are company
data, payments, materials, supply chain structure and information and document flow. According to the
findings, the model suggests that for company Alpha blockchain technology is recommended but not useful to
mitigate risks and so improve supply chain performance. On the contrary, results show that for company Beta
the implementation of blockchain technology is useful.
**Keywords: Supply chain management, Blockchain technology, Construction industry**
## Introduction
Nowadays, in the construction industry, risks cause a net decrease in productivity and a slowdown in the project
process (Al-Werikat, 2017). The analysis of the Italian construction sector has received particular attention because the sector is considered of strategic importance (Kim et al., 2020; Cigolini et al., 2022), since it deals with the
structures and infrastructures, which can be used by all other sectors (Cannas et al., 2020; Rossi et al., 2020)
involved in the European economy and society (Harouache et al., 2021).
Small and medium enterprises in Europe and Italy are characterized by low Supply Chain (SC) performance
level (Mafundu & Mafini 2019; Cigolini et al., 2022). The recent Covid-19 pandemic has caused a chain
reaction in all economic sectors around the world, exacerbating this situation. Although signs of recovery are
weak, Italian small and medium-sized companies seem not to have recovered from the 2008 financial crisis and
have long been plagued by low productivity (Ferreira de Araújo Lima et al., 2021).
In this context, Supply Chain Management plays a pivotal role, especially because today, due to increasing
globalization, SCs are more fragile than they used to be (Layaq et al., 2019; Pero et al., 2016; Amico et al.,
2022; Franceschetto et al., 2022). Because of cheap labour abroad, many companies manufacture or source
products internationally. This creates many types of risks in the SC, which must be appropriately managed with the risk management process (Shemov et al., 2020), where risks are dealt with through suitable risk reduction techniques.
- This is an Open Access article distributed under the terms of the Creative Commons Attribution-Noncommercial 4.0 Unported License,
permitting all non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
- Selection and peer-review under responsibility of the Organizing Committee of the Conference
_[© 2023 Published by ISRES Publishing: www.isres.org](http://www.isres.org/)_
-----
Blockchain technology can indeed be a tool for reducing risks due to its tamper-proof record (Xu et al., 2020;
Difrancesco et al., 2022; Amico & Cigolini 2023). Blockchain technology is used to trace the origin of the
materials or components used in the manufacture of products (Xu et al., 2020). Small and medium enterprises,
to compete with the other global players, should develop new innovation-based business strategies that ensure
efficiency, flexibility, and high-quality processes (Pozzi et al., 2019; Franceschetto et al., 2023). Digitizing
processes means moving away from paper and toward online and real-time information sharing to ensure
transparency and collaboration between the actors involved in the process. One reason for the industry's low
productivity is that it still relies primarily on paper to manage its processes (Difrancesco et al., 2022; Amico &
Cigolini 2023), and deliverables, such as blueprints, project drawings, purchase orders and supply chains,
equipment records, and daily progress reports (Kim et al., 2020).
Literature related on the classification of risks in the SCs of small and medium enterprises in the construction
industry, as well as the definition of specific indicators to evaluate blockchain suitability as risk mitigator, is
scant. Thus, this paper aims to fill this gap by understanding, through a model based on Electre Tri methodology
(élimination et choix traduisant la réalité, French for elimination and choice expressing reality, see Del Rosso
Calache et al., 2018) whether small and medium enterprises can adopt blockchain technology as a risk mitigator
tool to improve companies SC performance. Moreover, this study allows small and medium enterprises to
understand if blockchain could be the right solution for the specific context of their organization. In fact, this
paper can help to catalogue and study various aspects and the related risks of the SC by providing a clear
outcome in terms of adaptability of blockchain technology to a fragmented and heterogeneous context such as
that of small and medium enterprises in the construction industry.
The paper is structured as follows: section 2 is devoted to the description of the research background to define
the SC of the construction industry and the related risks as well as the main characteristics of blockchain
technology. Section 3 describes the methodology adopted while section 4 illustrates the model. Section 5 shows
the main findings. Finally, section 6 draws some conclusions and suggests future research paths.
## Background
Construction SCs are complex systems especially when a variety of site materials and parties (like suppliers and
sub-contractors) are involved in a construction project (Papadopoulos et al., 2016). The more people are
engaged (e.g., first tier, second tier suppliers and other tiers of sub-contractors, see Rossi et al., 2017, Pero et al.,
2020, Afraz et al., 2021), the more complex is the project. Furthermore, construction industry deals with
complex SCs because more worker, parties and materials are required to a specific project. A construction
project necessitates collaboration and cooperation among SC actors to define the best planning and organization
for the project (Gosling et al., 2016).
According to Koskela et al. (2020), the construction SC can be differentiated as a converging SC since all
materials are directed to the construction site where the object is assembled from incoming materials. Moreover,
construction SC is fragmented since construction contractors, suppliers and other participants are active in
different stages of the project, and the distribution of responsibility and authority could change over time.
Finally, construction SC is temporary because when a construction project is completed, all participants and
companies involved are usually dismissed as soon as all the actors participating in the project complete their
duties.
Furthermore, the construction SC is composed by the following three elements. (i) The primary SC that is the
stream that delivers the materials used in the final stage of the construction process. (ii) The support chain is in charge of providing expertise and equipment that facilitate the realization of the construction project (e.g.,
scaffolding and excavation supports). (iii) The human resource SC that includes the supervisory staff and labour
useful for the construction process. Hence, the construction SC consists of the human resource SC, the support
chain, the primary chain, and it is also characterised to be temporary, make to order, complex and converging
(Al-Werikat, 2017). According to Papadopoulos et al. (2016), most of the issues in the construction industry arise at the interfaces between the various activities or roles, and are due to the complex nature of the
construction environment. The main issues concern the so-called design interface phase between the client and
field contractor that embraces several difficulties in defining and then realizing the client’s wishes.
Moreover, within the engineering phase between the field design contractor and the engineering contractor – the
so-called engineering interface – some documents may prove to be incorrect, the design can change, and –
consequently – the approval of the design changes can be very long. Within the procurement phase between the
engineering contractor and the procurement actors there is the so-called vendors interface, and some drawings
may show inaccurate data, or they are not usable by vendors. Within the construction phase, some issues can
occur between vendors and suppliers: for example, a lack of coordination, collaboration and commitment between suppliers, or poor quality of materials and components. In the completion of the project between the site and the
commission contractor – the so-called commissioning interfaces – some issues could be related to safety issues
and difficulties with local communities. Finally, after commissioning there is the so-called operation interface:
there can be problems due to unresolved quality and technical issues, delayed operational time due to late
completion (Nanayakkara et al., 2021).
All the previous mentioned issues are related to the concept of risk. SC risk is an adverse event since it
negatively influences the desired performance of an industry (Layaq et al., 2019). In the construction industry,
examples of risks are related to demand (e.g., order fulfilment errors, inaccurate forecasts due to longer lead
times, product variety) and inventory (e.g., costs of holding inventories, rate of product obsolescence and
supplier fulfilment, see Pero et al., 2020; Ferreira de Araújo Lima et al., 2021).
The risk management process is a useful method to limit these SC risks and it is defined by five phases: risk
identification, risk measurement, risk assessment, risk evaluation and risk control and monitoring. Such process
helps to mitigate all the challenges that small and medium enterprises must face. To compete with the other
global players, small and medium enterprises should develop new innovation-based business strategies that
ensure efficiency, flexibility, and high-quality processes (Pournader et al., 2020).
Digitizing processes means moving away from paper and toward online and real-time information sharing to
ensure transparency and collaboration, timely progress and risk assessment, quality control, and ultimately,
better and more reliable results (Difrancesco et al., 2022; Amico et al., 2022). Blockchain technology offers to
small and medium enterprises the opportunity to increase productivity. Blockchain technology can record data,
transferred through all the actors involved in the SC, in a decentralized manner. This provides transparency
between members and the ability to follow the record of the entire flow of information. This information is
verifiable and allows the origin to be traced and completed (Pournader et al., 2020).
One of the main benefits of adopting a blockchain technology is that it is highly effective and transparent to all
parties involved. Blockchain is typically adopted for capital construction projects and complex contracts.
Throughout the project lifecycle, blockchain technology ensures that all parties under contract are collaborating
at all levels. Blockchain technology can ensure that all operations are always performed in accordance with the
agreed-upon terms and conditions (Pournader et al., 2020; Amico et al., 2023). Finally, blockchain technology
eliminates mutual dependence on the central authority. Its decentralization increases the importance of network
effects (Kim et al., 2020).
## Methodology
This section introduces the methodology used to outline the indicators to evaluate blockchain suitability for the
small and medium enterprises construction SCs. Fifteen indicators (three for each category) are the input of the
model designed to assess the level of suitability of blockchain as risk mitigator. Considering that the subset of
indicators refers to different issues, the decision aiding methodology to define a model that assesses the level of
blockchain suitability is a multicriteria procedure known as Electre (Norese & Carbone, 2014).
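As a rough illustration of the Electre-style sorting idea, the sketch below assigns an alternative to a category by comparing it against boundary profiles with a weighted concordance test. The weights, profiles, cutting level, scores, and scale are invented for illustration (the dimension names echo those used later in the paper), and a full Electre Tri additionally models indifference, preference, and veto thresholds:

```python
# Minimal sketch of Electre Tri-style sorting: assign an alternative to a
# category by comparing it against category boundary profiles using a
# weighted concordance index. All numbers here are illustrative placeholders,
# not the weights or profiles elicited in the study.
weights = {"payments": 0.18, "materials": 0.17, "sc_structure": 0.33,
           "info_flow": 0.25, "company": 0.07}
# Boundary profiles separating categories on an assumed 0-10 scale.
profiles = {"recommended": 5.0, "useful": 7.5}
LAMBDA = 0.6  # concordance cutting level (illustrative)

def concordance(scores, profile):
    """Share of total weight on criteria where the alternative reaches the profile."""
    return sum(w for c, w in weights.items() if scores[c] >= profile)

def assign(scores):
    """Pessimistic-style assignment: highest boundary the alternative outranks."""
    category = "not useful"
    for name, boundary in sorted(profiles.items(), key=lambda kv: kv[1]):
        if concordance(scores, boundary) >= LAMBDA:
            category = name
    return category

# A hypothetical company with strong SC-structure and payment scores.
beta = {"payments": 8, "materials": 7, "sc_structure": 9, "info_flow": 8, "company": 4}
print(assign(beta))  # → "useful"
```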
According to the research background discussed in the previous section, the main risks identified in the
construction industry are the following ones. (i) Inefficient communication between the actors involved. (ii)
Delay in the project due to SC structure inefficiency. (iii) Delays and lack of payments. (iv) Loss of material
traceability. Starting from these risks the dimensions in which the indicators can be grouped are defined.
Company Data refer to the number of employees, company’s turnover, and level of digitization. Payments are
described by their reliability, the delay in receiving payments and the methods of payments. Materials are
assessed in terms of quality, delivery time and traceability. SC Structure is defined by the number and
localization of suppliers and the types of contracts. Finally, the information and document flow is referred to the
channels employed to gather documents, archiving system and sources of documents.
These dimensions have the purpose to take into consideration all the worthy elements to evaluate blockchain
suitability to mitigate risks. The importance of each dimension and then of each indicator with respect to the
others is expressed using a procedure to define weights. The procedure is the Analytic Hierarchy Process (AHP,
see Saaty, 2008) and it is based on pairwise comparisons. The first step of the AHP procedure is to define a scale of preference from 1 to 5, where 1 means equality and 5 means maximum preference: 1 = Equality, 2 = Minimum preference, 3 = Medium preference, 4 = Great preference, 5 = Maximum preference.
The second step is to build the comparison matrix (m × n), with rows i = (1, …, m) and columns j = (1, …, n). Such a comparison matrix is defined from the pairwise comparisons. The comparison matrix always has 1 on the diagonal and it is positive, reciprocal, and consistent: positive means that a_ij > 0; reciprocal means that a_ij = 1/a_ji; consistent means that a_ij = a_ik/a_jk.
Once the comparison matrix is calculated, the third step consists of defining the priority vector, which can be described as the normalized eigenvector of the matrix. The procedure chosen to define the priority vector is the so-called eigenvector method, where power iterations (Saaty, 2008) are applied until the algorithm produces a nonzero vector that is a good approximation of the eigenvector corresponding to the greatest eigenvalue of the matrix, λmax, called the principal eigenvalue. In this way, the inconsistency in the comparison matrix will be distributed among all the elements of the matrix and the columns will gradually approach proportionality.
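A minimal power-iteration sketch of the eigenvector method (the 3 × 3 pairwise judgments below are illustrative, not the matrices elicited in this study):

```python
# Sketch of the eigenvector method: power iteration on a reciprocal pairwise
# comparison matrix, normalizing at each step so the priority (weight) vector
# sums to 1. The judgments below are illustrative placeholders.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
]

def priority_vector(A, iters=100):
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]  # A·w
        s = sum(w)
        w = [x / s for x in w]  # normalize so the weights sum to 1
    return w

w = priority_vector(A)
# Principal eigenvalue λmax: sum over columns of (column sum) × weight (Saaty, 2008).
lam_max = sum(sum(A[i][j] for i in range(len(A))) * w[j] for j in range(len(A)))
print([round(x, 3) for x in w], round(lam_max, 3))
```

For a perfectly consistent reciprocal matrix, λmax equals the matrix order n; here it comes out just above 3, indicating near-consistency.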
When the consistency ratio is close to zero, the priority vector can be declared as the best expression of the
weight system that will be used in the Electre Tri method. To evaluate the consistency ratio, the consistency
level of the comparison matrix through the computation of the principal eigenvalue must be evaluated. The
principal eigenvalue is obtained from the sum of the products between each element of the priority vector and
the sum of the columns of the comparison matrix.
According to Saaty (2008), in a consistent reciprocal matrix, the largest eigenvalue is equal to the size of the
comparison matrix. Meanwhile, if some inconsistencies are taken into consideration, a measure of consistency is required using the consistency index (CI), where CI = (λmax − n)/(n − 1), which measures the level of consistency as a deviation from the size of the comparison matrix. The consistency index needs to be compared with the random index (RI), defined as the average value obtained from 50,000 computations of the consistency index of a matrix with the entries above the main diagonal drawn at random from the 17 values {1/9, 1/8, …, 1, 2, …, 8, 9} and the entries below the diagonal obtained by taking reciprocals (Saaty, 2008). Table 1 shows the values obtained from one set of simulations for matrices of size 1, 2, …, 15. The result of this comparison is the consistency ratio (CR), where CR = CI/RI.
The analysis deals with a 5x5 matrix regarding dimensions while 3x3 concerning indicators, so the Random
Index (RI) is 1.11 and 0.52, respectively (see Table 1).
Table 1. Random Index.

| Matrix Order | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| RI | 0.00 | 0.00 | 0.52 | 0.89 | 1.11 | 1.25 | 1.35 | 1.40 | 1.45 | 1.49 | 1.52 | 1.54 | 1.56 | 1.58 | 1.59 |
When the consistency ratio is lower than or equal to 10 percent, the inconsistency can be considered acceptable
and, consequently, the priority vector a good approximation of the weight system (Saaty, 2008). This process is
employed to define the weight system of both dimensions and indicators. In a preliminary analysis, the
indicators within the same dimension were assumed to have equal weight. It then became apparent that some
indicators are more important than others belonging to the same dimension, so the AHP process was also
performed to define the indicators' weights.
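For the indicator weights, the same AHP step can be sketched with a geometric-mean approximation of the principal eigenvector. The judgment matrix below is hypothetical, chosen only to mirror the ranking later reported in Table 4 (number of suppliers ≻ typologies of contracts ≻ suppliers' localization); it is not the authors' actual matrix.

```python
import numpy as np

# Hypothetical pairwise judgments among the three "SC structure" indicators
# (rows/cols: number of suppliers, suppliers' localization, contract types).
A = np.array([
    [1.0,  5.0, 2.0],
    [1/5., 1.0, 1/2.],
    [1/2., 2.0, 1.0],
])

# Geometric-mean approximation of the principal eigenvector (row products
# raised to the 1/n power), then normalization to get the priority vector.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
w = gm / gm.sum()
print(np.round(w, 4))
```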
## Model
This section outlines the model and provides the main results obtained from the priority vector of the
considered dimensions (company data, payments, materials, SC structure, and information and document flow;
see Table 2), as well as the indexes used to evaluate the consistency of the matrix (see Table 3).
Table 2. Priority vector.

| Dimension | Priority vector | Category weight (%) |
|---|---|---|
| Company data | 0.0665 | 6.65 |
| Payments | 0.1820 | 18.20 |
| Materials | 0.1718 | 17.18 |
| SC structure | 0.3296 | 32.96 |
| Information and document flow | 0.2501 | 25.01 |
Table 3. Consistency indexes.

| Index | Value |
|---|---|
| λmax | 5.4 |
| n | 5 |
| CI | 0.1016 |
| RI | 1.11 |
| CR | 9.15% |
The value of the consistency ratio (CR) is lower than 10%, so the priority vector can be considered a good
approximation of the weight system (Saaty, 2008). Regarding the dimensions' weight system, "SC structure"
emerges as the main dimension according to the pairwise comparisons performed, so the scores obtained within
this dimension are the most relevant in determining the final category. "Company data", by contrast, is
considered the least important. The other three dimensions (Payments, Materials, and Information and Document
flow) are at almost the same level of importance; none of them dominates the final category definition when
compared to the others.
Considering the "SC structure" dimension, Table 4 shows the results obtained for the indicators' priority
vector. The indexes used to evaluate the consistency of the matrix are: λmax, equal to 3.0735; the number of
indicators, equal to 3; the consistency index, equal to 0.03668; the random index, equal to 0.52; and the
consistency ratio, equal to 7.07%. Also in this case, the value of the consistency ratio is lower than 10% for
each group of indicators, so the priority vector obtained can be considered a good approximation of the weight
system (Saaty, 2008).
Table 4. Priority vector of the indicators of the SC structure dimension.

| Indicator | Priority vector | Category weight (%) |
|---|---|---|
| Number of suppliers | 0.6144 | 61.44 |
| Suppliers' localization | 0.1172 | 11.72 |
| Typologies of contracts stipulated | 0.2684 | 26.84 |
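The consistency figures reported for this indicator matrix can be re-derived directly from λmax = 3.0735, n = 3, and RI = 0.52 (small rounding differences on λmax aside):

```python
# Re-check of the consistency indexes reported for the SC structure
# indicator matrix (values taken from the text and Table 1).
lam_max, n, RI = 3.0735, 3, 0.52
CI = (lam_max - n) / (n - 1)
CR = CI / RI
print(round(CR * 100, 2))  # close to the 7.07% stated in the text
```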
Regarding the indicators' weight system, within the "SC structure" dimension the prevailing indicator is
"Number of suppliers", because blockchain technology is most useful for complex SCs. To reflect the importance
of each category and of each indicator relative to the others, the weight system defined according to the
procedure described above has been directly implemented in the Electre Tri model.
Then, to populate the model, a questionnaire was formulated considering the three indicators related to the
"SC structure" dimension (number of suppliers, suppliers' localization, and typologies of contracts stipulated).
For each indicator a specific question is formulated, with four possible answers. Each answer is quantified on a
scale from one to four, where one corresponds to the situation in which blockchain technology cannot improve
the company's performance, while four represents the case in which blockchain is useful to mitigate risks and
thus increase SC performance. Each answer then spans a three-point band, yielding an overall scale from one to
twelve that represents the scoring of the model: for each question, answer (i) gives a score from 1 to 3, answer
(ii) from 4 to 6, answer (iii) from 7 to 9, and answer (iv) from 10 to 12.
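The answer-to-score mapping just described can be written compactly. Where an answer lands inside its three-point band is not specified in the text, so the `position` argument below is an assumption (defaulting to the band midpoint):

```python
def answer_score(answer_index: int, position: int = 2) -> int:
    """Map answer (i)-(iv), given as 1-4, to the model's 1-12 scale.

    Answer (i) covers scores 1-3, (ii) 4-6, (iii) 7-9, (iv) 10-12;
    `position` (1-3) picks the value inside the band (assumed midpoint).
    """
    if answer_index not in (1, 2, 3, 4) or position not in (1, 2, 3):
        raise ValueError("answer_index must be 1-4 and position 1-3")
    return (answer_index - 1) * 3 + position

print(answer_score(1), answer_score(2, 1), answer_score(4, 3))  # 2 4 12
```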
According to the first indicator – the number of suppliers – the following question is formulated.
_How many suppliers are involved in your company's SC?_
The possible answers are: (i) fewer than 10 suppliers; (ii) between 10 and 30 suppliers; (iii) between 30 and
50 suppliers; (iv) more than 50 suppliers.
For the second indicator – suppliers’ localization – the question formulated is as follows.
_Where are your company's suppliers located in relation to your company?_
The possible answers are: (i) less than 20 km; (ii) between 20 and 100 km; (iii) between 100 and 200 km;
(iv) more than 200 km.
Finally, for the third indicator – typologies of contracts stipulated – the related question is the following one.
_How often does your company use long-term contracts with its suppliers?_
The possible answers are: (i) never; (ii) for a small percentage; (iii) for the vast majority; (iv) always.
After formulating the questionnaire, the Electre Tri method requires specific categories to be defined in order
to perform the rating and, consequently, the definition of their profiles (Saaty, 2008). Four different
categories have been defined, with the three profiles separating them (see Table 5).
Table 5. Profile values.

| Profile | Value |
|---|---|
| D – C | 3.5 |
| C – B | 6.5 |
| B – A | 9.5 |
Category A means that blockchain technology is completely useful for small and medium enterprises; it is
reached when most indicators' scores suggest a situation that can take great advantage of the implementation of
blockchain technology as a risk mitigator. Category B means that blockchain is useful: there are several
features that can be improved thanks to blockchain technology, but it is not guaranteed that the overall
process will benefit from its implementation. In Category C the implementation of blockchain is suggested for
small and medium enterprises but is not useful: this category includes firms for which blockchain can provide
some occasional improvements, so it is suggested but not considered suitable to mitigate risks and thus improve
SC performance. Finally, Category D means that blockchain is completely useless, so firms would not benefit
from the implementation of this technology.
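As a simplified sketch of the rating step, an aggregated score on the 1–12 scale can be mapped to a category through the profile boundaries of Table 5. Note that this omits the concordance/discordance tests and the λ-cutting level of the full Electre Tri procedure, which the model applies but the text does not fully parameterize:

```python
def assign_category(score: float) -> str:
    """Assign an Electre Tri-style category using the Table 5 profiles."""
    if score <= 3.5:
        return "D"  # blockchain completely useless
    elif score <= 6.5:
        return "C"  # suggested but not useful
    elif score <= 9.5:
        return "B"  # useful
    return "A"      # completely useful

print([assign_category(s) for s in (2, 5, 8, 11)])  # ['D', 'C', 'B', 'A']
```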
## Results
In this section, two applications of the model are provided, evaluating the proposed process in two real
companies operating in the construction sector and differently categorized: the former (Alpha) can be
considered a small enterprise, while the latter (Beta) is a medium-sized company. The two companies are located
in the same area, so their SCs face similar issues.
Company Alpha can be classified as a small enterprise, since the number of employees is higher than 10 and the
turnover is slightly higher than 2 million. Moreover, the company has not put in place significant
digitalization initiatives, so Alpha cannot be considered a digitized firm. Payments are often received on
time, while the materials flow is in some cases not completely transparent. The SC structure is characterized
by a number of suppliers higher than fifty, and almost all contracts are based on long-term relations. The
suppliers are all located within 100 km of the firm. Finally, documents and information are received both in
paper and in digital form, and the archiving system is quite well organized.
Table 6 shows the model results for company Alpha. According to the model proposed in this study, the final
category obtained is C: "blockchain suggested but not useful". The overall result is highly influenced by the
"Company data", "Payments", and "Information and document flow" dimensions.
Company Beta is a medium enterprise, since the number of employees is greater than 50 and the turnover is more
than 10 million. Until now, its level of digitization has been quite low. Payments are usually reliable, but
when the company operates as a subcontractor there are some cases in which payment is not guaranteed. Moreover,
payments are almost never received on time.
Table 6. Company Alpha model results.

| Dimension | Dimension weight | Indicator | Indicator weight | λ-cutting level | Category profile |
|---|---|---|---|---|---|
| Company data | 6.65 | Number of employees | 0.95 | 3.99 | C |
| | | Turnover | 1.9 | | |
| | | Level of digitalization | 3.8 | | |
| Payment | 18.2 | Reliability | 2.97 | 10.92 | C |
| | | Delay in receiving payments | 9.82 | | |
| | | Methods of payments | 5.4 | | |
| Materials | 17.17 | Quality | 2.01 | 10.3 | A |
| | | Delivery time | 4.60 | | |
| | | Traceability | 10.55 | | |
| SC structure | 32.95 | Number of suppliers | 20.24 | 19.77 | A |
| | | Suppliers' localization | 3.86 | | |
| | | Typologies of contracts stipulated | 8.84 | | |
| Information and document flow | 25.01 | Channels used to gather documents | 3.49 | 15.01 | C |
| | | Archiving system | 13.20 | | |
| | | Sources of documents | 8.31 | | |
Regarding materials, they are often received according to project requirements and ISO standards, although
sometimes materials arrive late with respect to the project timing. Regarding the SC structure, the number of
suppliers is around 50 and most contracts are established on long-term dealings. Suppliers are usually located
about 250 km from company Beta. Finally, regarding the information and documents flow, the archiving system
needs improvement, since the number of documents is huge. Table 7 shows the model results for company Beta; the
final category registered is Category B, "blockchain useful". The overall result is influenced by the fact that
the dimensions with the highest weights score B.
The results obtained leave room for several insights. If a company obtains a result for which blockchain
technology is not recommended or useless, it is possible to perform a deep-dive analysis to understand the
areas where the implementation of this technology is not suggested. In fact, the results show whether there is
a particular area where the implementation can increase SC performance.
In the case of company Alpha, although its final category is C, the "Materials" and "SC structure" dimensions
reached category A, showing that for these two dimensions blockchain technology is completely useful. This
means that blockchain technology could improve materials traceability, quality, and delivery time with respect
to the company's requirements. Moreover, blockchain technology could help optimize the number and localization
of suppliers as well as the typology of contracts stipulated with them. The other three dimensions (company
data, payments, and information and document flow) reached final category C, showing that blockchain technology
is suggested but not useful for the company.
In the case of company Beta, the "Company data" and "Information and document flow" dimensions reached category
C. Also in this case, blockchain technology is suggested but not useful for the company, especially in relation
to the sources and channels used to gather documents and information, as well as the quantities of documents
and information shared. On the contrary, the "Payments", "Materials", and "SC structure" dimensions show that
blockchain is useful, specifically in evaluating the reliability of the different payment methods and whether
payments are subject to delays. Finally, blockchain technology can be useful in improving SC indicators as well
as the quality, traceability, and delivery time of materials.
Table 7. Company Beta model results.

| Dimension | Dimension weight | Indicator | Indicator weight | λ-cutting level | Category profile |
|---|---|---|---|---|---|
| Company data | 6.65 | Number of employees | 0.95 | 3.99 | C |
| | | Turnover | 1.9 | | |
| | | Level of digitalization | 3.8 | | |
| Payment | 18.2 | Reliability | 2.97 | 10.92 | B |
| | | Delay in receiving payments | 9.82 | | |
| | | Methods of payments | 5.41 | | |
| Materials | 17.17 | Quality | 2.01 | 10.3 | B |
| | | Delivery time | 4.61 | | |
| | | Traceability | 10.55 | | |
| SC structure | 32.95 | Number of suppliers | 20.24 | 19.77 | B |
| | | Suppliers' localization | 3.86 | | |
| | | Typologies of contracts stipulated | 8.84 | | |
| Information and document flow | 25.01 | Channels used to gather documents | 3.49 | 15.01 | C |
| | | Archiving system | 13.2 | | |
| | | Sources of documents | 8.31 | | |
## Conclusions and Future Research Paths
This paper focused on the main issues related to the construction supply chain by investigating the main risks
that small and medium enterprises have to face at the supply chain level. Moreover, it explored the
implementation of blockchain technology as a risk mitigator for small and medium enterprises' supply chains in
the construction industry.
The methodology adopted consists of a literature review and a quantitative model based on the Electre Tri
multicriteria analysis method. The main outcomes of the literature review showed that the construction supply
chain faces several issues generated at the interfaces between its various activities, for example the design,
engineering, and vendor interfaces, as well as the commissioning and operation interfaces.
The model developed in this paper aimed to assess the suitability of blockchain as a risk mitigator. It was
applied to two real companies, namely Alpha and Beta: company Alpha is a small firm, while company Beta is a
medium-sized one. The model is based on a questionnaire – and the related answers – built on a list of
indicators (number of employees, turnover, level of digitalization, payments' reliability, delay in receiving
payments, payment methods, quality, delivery time and traceability of materials, number and localization of
suppliers, typologies of contracts stipulated with suppliers, channels used to gather documents and
information, archiving system, and sources of documents). Moreover, the model is built on a system of weights
that represents the importance of each dimension (company data, payments, materials, supply chain structure,
and information and document flow). A rating procedure was then performed, in which four categories were
defined: Category A means that blockchain technology is completely useful; Category B that blockchain is
useful; Category C that the technology is suggested but not useful; and Category D that blockchain technology
is considered completely useless.
Findings show that for company Alpha blockchain technology is suggested but not useful, because the company
data, payments, and information and document flow dimensions obtained weights associated with profile category
"C". Regarding company Beta, the final category obtained is "B"; thus blockchain technology is considered
useful for the company.
The model adopted in this study is an effective tool that allows small and medium enterprises to evaluate
whether blockchain technology could act as a risk mitigator and thus improve the firm's supply chain
performance. As future research paths, further studies could enlarge the set of indicators considered in the
model. Moreover, other research could consider different industries in which blockchain technology can be
implemented, for example the apparel or transport sectors.
## References
Afraz, M. F., Bhatti, S. H., Ferraris, A., & Couturier, J. (2021). The impact of supply chain innovation on
competitive advantage in the construction industry: Evidence from a moderated multi-mediation
model. Technological Forecasting and Social Change, _162, 120370._
Al-Werikat, G. (2017). Supply chain management in construction. _International Journal of Scientific and_
_Technology Research, 6(3), 106-110._
Amico C., Cigolini R., & Franceschetto S. (2022a). Supply chain resilience in the European football industry:
the impact of Covid-19. Proceedings of the Summer School Francesco Turco.
Amico, C., Cigolini, R., & Franceschetto S. (2022b). Using blockchain to mitigate supply chain risks in the
construction industry. Proceedings of the Summer School Francesco Turco.
Amico, C., & Cigolini, R. (2023). Improving port supply chain through blockchain-based bills of lading: a
quantitative approach and a case study. _Maritime_ _Economics_ _and_ _Logistics,_
https://doi.org/10.1057/s41278-023-00256-y
Cannas, V., Ciccullo, F., Cigolini, R., & Pero, M. (2020). Sustainable innovation in the dairy supply chain:
enabling factors for intermodal transportation. _International Journal of Production Research,_ _58(24),_
7314-7333.
Cigolini, R., Gosling, J., Iyer, A., & Senicheva, O. (2022a). Supply chain management in construction and
engineer-to-order industries. Production Planning and Control, 33(9-10), 803-810.
Cigolini, R., Franceschetto, S., & Sianesi A. (2022b). Shop floor control in the VLSI circuit manufacturing: a
simulation approach and a case study. _International Journal of Production Research,_ _60(18), 5450–_
5467.
Del Rosso Calache, L. D., Galo, N. R., & Ribeiro Carpinetti, L. C. (2018). A group decision approach for
supplier categorization based on hesitant fuzzy and Electre Tri. _International Journal of Production_
_Economics, 202,182-196._
Difrancesco, R. M., Meena, P., & Kumar, G. (2022). How blockchain technology improves sustainable supply
chain processes: a practical guide. Operations Management Research, 1-22.
Ferreira de Araújo Lima, P., Marcelino-Sadaba, S., & Verbano, C. (2021). Successful implementation of project
risk management in small and medium enterprises: a cross-case analysis. International Journal of
_Managing Projects in Business,_ _14(4), 1023-1045._
Franceschetto S., Amico C., Cigolini R. (2022). The ‘new normal’ in the automotive supply chain after Covid”.
_Proceedings of the Summer School Francesco Turco._
Franceschetto, S., Amico, C., Brambilla, M., & Cigolini R. (2023). Improving supply chain in the automotive
industry with the right bill of material configuration. IEEE Engineering Management Review.
Gosling, J., Pero, M., Schoenwitz, M., Towill, D., & Cigolini, R. (2016). Defining and categorizing modules in
building projects: an international perspective. Journal of Construction Engineering and Management,
_142(11)._
Harouache, A., Chen, G. K., Sarpin, N. B., Hamawandy, N. M., Jaf, R. A., Qader, K. S., & Azzat, R. S. (2021).
Importance of green supply chain management in Algerian construction industry towards sustainable development.
_Journal of Contemporary Issues in Business and Government,_ _27(1), 1055-1070._
Kim, K., Lee, G., & Kim, S. (2020). A study on the application of blockchain technology in the construction
industry. KSCE Journal of Civil Engineering, _24 (9), 2561-2571._
Koskela, L., Vrijhoef, R., & Dana Broft, R. (2020). Construction supply chain management through a lean
lens. Successful Construction Supply Chain Management: Concepts and Case Studies, 109-125.
Layaq, M. W., Goudz, A., Noche, B., & Atif, M. (2019). Blockchain technology as a risk mitigation tool in
supply chain. International Journal of Transportation Engineering and Technology, 5(3), 50-59.
Mafundu, R. H., & Mafini, C. (2019). Internal constraints to business performance in black-owned small to
medium enterprises in the construction industry. The Southern African Journal of Entrepreneurship
_and Small Business Management,_ _11(1), 1-10._
Nanayakkara, S., Perera, S., Senaratne, S., Weerasuriya, G. T., & Bandara, H. M. N. D. (2021). Blockchain and
smart contracts: A solution for payment issues in construction supply chains. In Informatics, _8(2), 36._
Norese, M. F., & Carbone, V. (2014). An application of Electre Tri to support innovation. Journal of
_Multi‐Criteria Decision Analysis, 21(1-2), 77-93._
Papadopoulos, G. A., Zamer, N., Gayalis, S. P., & Tatsiopoulos, I. P. (2016). Supply chain improvement in
construction industry. Universal Journal of Management, 4(10), 528-534.
Pero, M., Rossi, M., Xu, J., & Cigolini, R. (2020). Designing supplier networks in global product development.
_International Journal of Product Lifecycle Management, 13(2), 115-139._
Pero, M., Sianesi, A., & Cigolini, R. (2016). Reinforcing supply chain security through organizational and
cultural tools within the intermodal rail and road industry. _International Journal of Logistics_
_Management,_ _27(3), 816-836._
Pournader, M., Kach, A., & Talluri, S. (2020). A review of the existing and emerging topics in the supply chain
risk management literature. Decision Sciences, _51(4), 867-919._
Pozzi, R., Pero, M., Cigolini, R., Zaglio, F., & Rossi, T. (2019). Using simulation to reshape the maintenance
systems of caster segments. International Journal of Industrial and Systems Engineering, 33(1), 75-96.
Rossi, T., Pozzi, R., Pero, M., & Cigolini, R. (2017). Improving production planning through finite-capacity
MRP. International Journal of Production Research, 55(2), 377-391.
Rossi, T., Pozzi, R., Pirovano, G., Cigolini, R., & Pero, M. (2020). A new logistics model for increasing
economic sustainability of perishable food supply chains through intermodal transportation.
_International Journal of Logistics Research and Applications, 24(4), 346-363._
Saaty, T.L. (2008). Decision making with the analytic hierarchy process. _International Journal Services_
_Sciences, 1(1), 83-98._
Shemov, G., Garcia de Soto, B., & Alkhzaimi, H. (2020). Blockchain applied to the construction supply chain:
A case study with threat model. Frontiers of Engineering Management, 7(4), 564-577.
Xu, J., Abdelkafi, N., & Pero, M. (2020). On the impact of blockchain technology on business models and
supply chain management. Proceedings of the Summer School Francesco Turco.
## Author Information
**Clarissa Amico**
Politecnico di Milano
Department of Management, Economics, and Industrial
Engineering, Politecnico di Milano
Via Lambruschini, 4/B, 20156 Milano – Italy
Contact e-mail: clarissavaleria.amico@polimi.it
**Roberto Cigolini**
Politecnico di Milano
Department of Management, Economics, and Industrial
Engineering, Politecnico di Milano
Via Lambruschini, 4/B, 20156 Milano – Italy
**To cite this article:**
Amico, C. & Cigolini, R. (2023). A quantitative blockchain-based model
for construction supply chain risk management. The Eurasia Proceedings of Science, Technology, Engineering
_& Mathematics (EPSTEM), 23, 59-68._
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.55549/epstem.1361713?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.55549/epstem.1361713, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYNCSA",
"status": "HYBRID",
"url": "http://www.epstem.net/en/download/article-file/3413836"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-10-16T00:00:00
|
[
{
"paperId": "f356488ad976f27697f41c5661e8a1920301db5d",
"title": "Improving port supply chain through blockchain-based bills of lading: a quantitative approach and a case study"
},
{
"paperId": "0d5c3fdb37557fc57ec0b17ca4f05bc122b1867f",
"title": "How blockchain technology improves sustainable supply chain processes: a practical guide"
},
{
"paperId": "354c93e79a8be17de68b1345a7ef46e4db716429",
"title": "Shop floor control in the VLSI circuit manufacturing: a simulation approach and a case study"
},
{
"paperId": "f14b80a2e2600b9b0eff965ec7ad894c7351fa9d",
"title": "Blockchain and Smart Contracts: A Solution for Payment Issues in Construction Supply Chains"
},
{
"paperId": "99260a0d6c7625dfd98ea517fb4175a07773b7f3",
"title": "Successful implementation of project risk management in small and medium enterprises: a cross-case analysis"
},
{
"paperId": "94de4804f475b8988f7362b355f229d33cd0b3bd",
"title": "Supply chain management in construction and engineer-to-order industries"
},
{
"paperId": "185bed99adf810f4e27dd2a0a89aae914bc4f525",
"title": "Sustainable innovation in the dairy supply chain: enabling factors for intermodal transportation"
},
{
"paperId": "8acbb5264644351f320701de64e2f458caecb418",
"title": "Blockchain applied to the construction supply chain: A case study with threat model"
},
{
"paperId": "ca075c74994f5d7cdaafde22a5030a04e615fc2b",
"title": "A Study on the Application of Blockchain Technology in the Construction Industry"
},
{
"paperId": "6eeb909d94a7d331a838bc6e5e7bfbfc2feaca40",
"title": "A Review of the Existing and Emerging Topics in the Supply Chain Risk Management Literature"
},
{
"paperId": "9b30fedf917fb53444c5cc0bf86fb6623467378a",
"title": "A new logistics model for increasing economic sustainability of perishable food supply chains through intermodal transportation"
},
{
"paperId": "0b610dd8e64cee236445242577ad6858b5f7bafa",
"title": "Construction Supply Chain Management through a Lean Lens"
},
{
"paperId": "b2b1e19b142dcb4e7c624de81fb760164e5a2c16",
"title": "Blockchain Technology as a Risk Mitigation Tool in Supply Chain"
},
{
"paperId": "505a4663c0df2c3ef407180e2f5794e8295418bf",
"title": "Using simulation to reshape the maintenance systems of caster segments"
},
{
"paperId": "0009218e4d70b60cc78494d945e2a15b249514b6",
"title": "Internal constraints to business performance in black-owned small to medium enterprises in the construction industry"
},
{
"paperId": "6f30c7ceaf53cdb145f2fc3df2b818d69fd6852e",
"title": "A group decision approach for supplier categorization based on hesitant fuzzy and ELECTRE TRI"
},
{
"paperId": "601c256db271b6bffabdf622b9691b282189d6f0",
"title": "Improving production planning through finite-capacity MRP"
},
{
"paperId": "6227023a9f6743e15b9fceeeb8f5bf70bd34b5f0",
"title": "Reinforcing supply chain security through organizational and cultural tools within the intermodal rail and road industry"
},
{
"paperId": "2c265366be490cdee685c886a43425c656f3ba68",
"title": "Supply Chain Improvement in Construction Industry"
},
{
"paperId": "905df2340b77156e0e81162396f483758ef610fa",
"title": "Defining and Categorizing Modules in Building Projects: An International Perspective"
},
{
"paperId": "e3c561049eb532e328fc2b8288c490986cd9403f",
"title": "DECISION MAKING WITH THE ANALYTIC HIERARCHY PROCESS"
},
{
"paperId": "0a0611288d811d7410f1b32c528925a2fe59fccb",
"title": "Improving Supply Chain in the Automotive Industry With the Right Bill of Material Configuration"
},
{
"paperId": null,
"title": "The ‘new normal’ in the automotive supply chain after Covid”"
},
{
"paperId": null,
"title": "Supply chain resilience in the European football industry: the impact of Covid-19"
},
{
"paperId": "931efd48f5408782015438c5bee585ac4d1c57b6",
"title": "The impact of supply chain innovation on competitive advantage in the construction industry: Evidence from a moderated multi-mediation model"
},
{
"paperId": "8df254b8e1cc29ca40da06ff83adfa575c947028",
"title": "Designing supplier networks in global product development"
},
{
"paperId": "7a8cbb8812d9dbe4076c9d55a349107096234074",
"title": "Importance of Green Supply Chain Management in Algerian Construction Industry towards sustainable development"
},
{
"paperId": "2f827560eac3ce2c5632ada639154fe8a795fd4c",
"title": "An Application of ELECTRE Tri to Support Innovation"
},
{
"paperId": "30eb18af3e7b7fc3ed8c673862aeb3ad940fa393",
"title": "SUPPLY CHAIN MANAGEMENT IN CONSTRUCTION"
},
{
"paperId": null,
"title": "Using blockchain to mitigate supply chain risks in the construction industry"
},
{
"paperId": null,
"title": "International Conference on Research in Engineering, Technology and Science (ICRETS), July 06-09, 2023, Budapest/Hungary"
}
] | 9,164
|
en
|
[
{
"category": "Medicine",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Engineering",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Environmental Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0313ea3d846088658467b508c7f99758f3cf3073
|
[
"Medicine",
"Computer Science"
] | 0.906413
|
User Centered Neuro-Fuzzy Energy Management Through Semantic-Based Optimization
|
0313ea3d846088658467b508c7f99758f3cf3073
|
IEEE Transactions on Cybernetics
|
[
{
"authorId": "26859859",
"name": "Shaun K. Howell"
},
{
"authorId": "2383958",
"name": "H. Wicaksono"
},
{
"authorId": "2000176",
"name": "B. Yuce"
},
{
"authorId": "3171598",
"name": "K. McGlinn"
},
{
"authorId": "1757067",
"name": "Y. Rezgui"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Cybern"
],
"alternate_urls": [
"https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6221036"
],
"id": "404813e7-95da-4137-be14-2ba73d2df4fd",
"issn": "2168-2267",
"name": "IEEE Transactions on Cybernetics",
"type": null,
"url": "http://ieeexplore.ieee.org/servlet/opac?punumber=6221036"
}
|
This paper presents a cloud-based building energy management system, underpinned by semantic middleware, that integrates an enhanced sensor network with advanced analytics, accessible through an intuitive Web-based user interface. The proposed solution is described in terms of its three key layers: 1) user interface; 2) intelligence; and 3) interoperability. The system’s intelligence is derived from simulation-based optimized rules, historical sensor data mining, and a fuzzy reasoner. The solution enables interoperability through a semantic knowledge base, which also contributes intelligence through reasoning and inference abilities, and which are enhanced through intelligent rules. Finally, building energy performance monitoring is delivered alongside optimized rule suggestions and a negotiation process in a 3-D Web-based interface using WebGL. The solution has been validated in a real pilot building to illustrate the strength of the approach, where it has shown over 25% energy savings. The relevance of this paper in the field is discussed, and it is argued that the proposed solution is mature enough for testing across further buildings.
|
## User Centered Neuro-Fuzzy Energy Management Through Semantic-Based Optimization
### Shaun K. Howell, Hendro Wicaksono, Baris Yuce, Member, IEEE, Kris McGlinn, and Yacine Rezgui
**_Abstract—This paper presents a cloud-based building energy management system, underpinned by semantic middleware, that integrates an enhanced sensor network with advanced analytics, accessible through an intuitive Web-based user interface. The proposed solution is described in terms of its three key layers: 1) user interface; 2) intelligence; and 3) interoperability. The system’s intelligence is derived from simulation-based optimized rules, historical sensor data mining, and a fuzzy reasoner. The solution enables interoperability through a semantic knowledge base, which also contributes intelligence through reasoning and inference abilities, and which are enhanced through intelligent rules. Finally, building energy performance monitoring is delivered alongside optimized rule suggestions and a negotiation process in a 3-D Web-based interface using WebGL. The solution has been validated in a real pilot building to illustrate the strength of the approach, where it has shown over 25% energy savings. The relevance of this paper in the field is discussed, and it is argued that the proposed solution is mature enough for testing across further buildings._**

**_Index Terms—ANN, data mining, energy management, fuzzy logic, genetic algorithm, ontology, optimal control, semantic Web, WebGL._**
I. INTRODUCTION
Public buildings have substantial proliferations of control/automation technologies and tend to experience
large discrepancies between “designed” and “operational” energy use, as well as increased user comfort
dissatisfaction [1], [2]. Actual energy performance can be considered
as the result of a complex combination of, and interaction
between, three factors: 1) intrinsic quality of the building;
2) “in use” conditions and user behavior; and 3) energy
control and actuation strategy [3], [4]. Whilst altering factor 1) requires complete and costly energy
retrofitting interventions, academic evidence suggests that factors 2) and 3) play a determinant role in the
energy equation of a building [5].
Manuscript received January 6, 2018; revised April 18, 2018; accepted May 18, 2018. Date of publication
July 17, 2018; date of current version June 6, 2019. This work was supported by the European Commission in the
context of the KnoholEM Project through the EEB-ICT-2011.6.4 (ICT for energy-efficient buildings and spaces of
public use) Program under Grant 285229. This paper was recommended by Associate Editor P. P. Angelov.
(Corresponding author: Shaun K. Howell.)
S. K. Howell and Y. Rezgui are with the Cardiff School of Engineering, Cardiff University, Cardiff CF24 3AA,
U.K. (e-mail: howellsk5@cardiff.ac.uk).
H. Wicaksono is with Jacobs University Bremen gGmbH, 28759 Bremen, Germany (e-mail:
h.wicaksono@jacobs-university.de).
B. Yuce is with the College of Engineering, Mathematics and Physical Sciences, University of Exeter, Exeter
EX4 4QF, U.K.
K. McGlinn is with the School of Computer Science and Statistics, University of Dublin Trinity College,
Dublin, Ireland.
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TCYB.2018.2839700
Managing energy performance implies the ability to monitor and characterize usage patterns whilst understanding user
behavior and comfort aspirations in order to devise user centered real-time energy optimization plans. However, energy
control is usually handed to smart systems that 1) do not
offer flexibility in responding to unforeseen situations or needs
or 2) exhibit a level of complexity that hinders their effective
use by facility managers [6].
Moreover, building energy interventions have been designed
without taking into account the need to negotiate energy
use and desired environmental conditions [1], [6]. Building
management systems (BMSs) can be seen as the interface
between energy systems and users, including facility managers (FMs). On the one hand, occupants need to feel an
engagement with the process of regulating their energy usage
in a way that enhances their living and working experience
in buildings; conversely, energy control systems should have
a level of intelligence and interactivity that promote usercentered and negotiable (multiobjective) energy optimization
strategies [2], [5].
Existing BMSs in research and industry have shown: 1) various adoption and use problems, which suggest a lack of
understanding of users’ expectations in terms of levels of
automation and functionality; 2) limitations in their capacity to factor in (near) real-time, dynamically changing conditions; and 3) difficulty in addressing often conflicting multiobjective
goals, e.g., reducing energy while enhancing occupants’ comfort and working experience [2]. State-of-the-art research in
BMSs involves the use of semantic-based real-time sensing
tools [7]–[9] that factor in space occupancy patterns as well as
user comfort feedback. However, these tools need to promote
more effective energy control strategies through enhanced
interoperability with existing energy modeling environments,
building control systems, and operational log feeds, and deliver
higher-order intelligence (through correlation and analysis
of energy modeling predictions and actual use), accessible
through more intuitive user-interfaces.
This paper proposes a methodology that exploits finer
integration of sensing, interoperability, intelligence, and user
interfaces to confer FMs the desired levels of interaction
(including automation and functionality) with the BMS to
address a wide range of energy scenarios. This builds on
prior work [7], [10], following further experimentation with
the approach, development of the underpinning software
This work is licensed under a Creative Commons Attribution 3.0 License. For more information, see http://creativecommons.org/licenses/by/3.0/
platform and algorithmic design, and pilot site validation,
which are the focus of this paper. Following the introduction, this paper critically discusses related research, identifying
gaps to be addressed by this paper. Section III illustrates
the overarching methodology and the various underpinning
components delivering a semantic negotiable strategy for
energy management. The subsequent sections then detail
each of these components, namely: semantic Web middleware (Section IV), rule-based analytics (Section V), and
smart GUI (Section VI). This paper then presents the validation of the approach in a public care home building in
The Netherlands. This paper discusses the proposed approach
and provides concluding remarks and directions for future
research.
II. BACKGROUND
_A. Toward Economic, Extensible, and Integrated Retrofit_
_BEMS Solutions_
Building energy management systems (BEMSs) aim to
improve the energy performance of operational buildings.
They work on the principle of collecting data about the current state of a building, analyzing this data, then providing
feedback to the appropriate decision maker, or reconfiguring
the building automatically.
Such a system can be conceptualized in different architectural layers: a sensor layer, a computational layer, and an application layer [11]. The sensor layer includes all the energy and
environment monitoring devices, the computational layer analyses this data to generate knowledge and desired actuations.
The application layer then either acts on this automatically,
or provides decision support through a user interface which
may also send notifications to stimulate behavioral change and
feedback. An alternative architecture is presented in [2], which
includes a middleware layer between the sensor and analytics
layers. This middleware connects the distributed infrastructure of sensors and actuators with the processing engine, and
is responsible for handling heterogeneity and interoperability. Other architectural configurations for buildings and smart
homes were observed [12], [13], each sharing a similar layered
architecture.
The reduction of energy consumption through a BEMS
requires it to be economic and engaging for the decision
maker, and to deliver integrated, accurate, and attractive measures for energy saving. This requires the extension of the state
of the art at both the analytics and interface levels. However,
it must also be suitable for retrofit into buildings with existing
sensor networks of heterogeneous components, and be extensible so that it can keep pace as the state of the art continues to improve; this demands
innovation in the middleware BEMS layer as well. To this end, recent
advances in each of these three layers are now reviewed in
turn: 1) middleware; 2) analytics; and 3) interfaces.
_B. Interoperating Legacy Systems With Advanced_
_Analytics—The Role of Semantic Middleware_
A flexible and thorough middleware solution is essential
to interoperate between the existing sensor and management systems in a building and the novel analytics and
visual components of a retrofitted BEMS. Whilst interoperable
data exchange protocols are critical [14], and other barriers,
such as data quality, integrity, and security exist [15], interoperability of data formats and meaning is a critical challenge
in ICT interventions in the built environment [14]–[16]. This
highlights the key challenge in energy management: interoperability of both shared syntax and semantics between ICT
components. These incompatibilities currently require ad-hoc
mappings for effective communication and interoperation with
a retrofitted BEMS.
Instead, a common vocabulary and conceptual model mitigates the effort required for software artefacts to communicate
effectively in an energy management system [9], [14]–[17].
Such artefacts are referred to as semantic models, and are
being developed using the Web ontology language (OWL),
to facilitate the semantic Web [9], [18], [19], Internet of
Things [20], and linked data [21]. These ontological semantic
models standardize the description of concepts, relationships,
and properties in the domain.
In the built environment domain, the openBIM IFC
data model is already experiencing strong uptake [22]. This
model uses a less expressive format than OWL, and its federation into OWL is an active area of research [23]. Whilst this
does not sufficiently model energy management concepts, its
extension toward BEMS would improve the adoption of the
resulting model.
The ISES project used an OWL-DL ontology to address
interoperability in an integrated lifecycle BEMS [24]. Also,
the HESMOS project developed an ontology-equipped framework to integrate distributed and heterogeneous data from
ICT building energy systems [25]. However, these projects
do not strongly consider their alignment with existing
standards, such as the IFC, and do not model occupant
behavior. This gap is therefore addressed by the ontological middleware developed for the presented BEMS
solution.
_C. State of the Art of BEMS Rule Generation and_
_Application_
One of the biggest built environment challenges is the
need for adaptive, autonomous, and replicable management
solutions. Thus, several retrofit building energy management
and control systems exist [26], [27], which use intelligent
approaches to deal with complexity and uncertainty [28]–[30].
Whilst this can also be achieved through a semantic-based
approach [31], this typically requires domain expert knowledge, although automated knowledge discovery processes are
emerging [32]–[39].
Several rule generation and knowledge discovery processes
exist, such as rule mining [32], combined mining [33], cooperative rules [34], neural network [35], fuzzy logic [36],
fuzzy rough set [37], genetic algorithm [38], ant colony
optimization [39], hybrid algorithms [7], [40], evolving
fuzzy systems [41], [42], decision trees [43], [44], fuzzy
classifiers [45], fuzzy pattern trees [46], and rule ripping
approaches [47]. These provide a flexible method of approximating nonlinear, data-driven systems, and highlight the suitability
of machine learning [40] and well-trained ANNs [27] for
approximating highly nonlinear problems.
Mitra and Hayashi [40] proposed a neuro-fuzzy rule generation framework, capturing the strengths of both neural
networks and fuzzy systems for use in the area of medical diagnosis. Neural networks perform well in data driven
processes, providing a continuous learning ability, and fuzzy
systems perform well in logic-based systems. Combining the
two approaches therefore presents merits in data driven, logical systems. However, they have not compared their algorithm
with other prominent rule generation algorithms.
Finally, Pal and Pal [38] proposed a self-organized rule
generation process for a fuzzy controller, through a genetic
algorithm. This selected the optimal number of rules without supervision, eliminating the need for expert involvement.
They tested their solution on an inverted pendulum; reducing
the number of rules by circa 95%, and resulting in an integral
absolute time error of 0.1019. However, their approach did
not optimize the membership functions of the fuzzy inference
engine, which could increase the robustness of their approach.
The strengths and weaknesses of the rule generation
techniques presented above vary across different types of
problem, and they could be improved through multi combined
approaches, such as using neural network-based optimization
processes. Moreover, they could be extended with advanced
partitioning techniques, such as PCA, or other fast classifiers, although PCA may not perform well with large numbers
of inputs. This weakness can be avoided by also using
multi regression analysis (MRA), where PCA determines the
required number of classes and MRA can then determine
these classes, using a regression coefficient. Therefore, hybrid
processes deliver the strengths of several approaches, especially regarding data driven processes [48], although this has
to be logically linked well with other methods.
_D. Toward Engaging Interface for Building Energy_
_Monitoring and Decision Support_
The final layer of the BEMS system is the application layer,
which allows the FM to interact with the system’s data monitoring, analytics, and actuation capabilities. Several commercial energy monitoring tools exist, which allow the monitoring
of energy consumption in a building. MonaVisa is a product
which is retrofitted alongside an existing BMS [49]. This collects temperature and CO2 sensor readings and assesses these
against a comfort range, generating a notification when a KPI
leaves this range. These assessments are conducted at different
time scales for each monitored room and are delivered through
a GUI. PlugWise is an energy monitoring tool which transmits
energy readings over the ZigBee protocol. This allows additional sensors to be added to monitor temperature, motion,
gas, and electricity. Again, collected readings can be viewed
as charts and graphs for each metered appliance over varying
time scales, and overlaid onto a 2-D floorplan.
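The out-of-range notification logic described for such monitoring tools can be sketched as follows; the comfort bounds and sensor names below are illustrative examples, not taken from MonaVisa.

```python
# Sketch of a comfort-range KPI check, as in retrofit monitoring tools:
# readings outside a configured comfort range trigger a notification.
# The bounds and sensor names here are illustrative, not product values.

COMFORT_RANGES = {
    "temperature_C": (19.0, 24.0),
    "co2_ppm": (0.0, 1000.0),
}

def check_kpis(readings):
    """Return a notification message for every reading outside its range."""
    notifications = []
    for sensor, value in readings.items():
        low, high = COMFORT_RANGES[sensor]
        if not (low <= value <= high):
            notifications.append(
                f"{sensor}={value} outside comfort range [{low}, {high}]"
            )
    return notifications

alerts = check_kpis({"temperature_C": 26.5, "co2_ppm": 850})
```

In a deployment, such checks would run per monitored room at each time scale, with the notifications surfaced through the GUI.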
Increasingly, these sensing, analytics, and actuation services are delivered through WebApps. These aim to provide
engaging interfaces with seamless cross-platform deployment.
HTML5 provides flexible and extensible tools to meet the
requirements of many tools, and Asynchronous JavaScript and
XML (AJAX) and SPARQL queries can be used to access
the underpinning knowledge. Further, WebGL facilitates 3-D
visuals in HTML5 Web pages without the need for browser
plugins, as HTML5 is supported natively by modern browsers.
This is highly beneficial because it allows deployment across
operating systems, other Web page elements can form part of
the GUI, and the visuals can make use of a number of high-level communications tools, such as AJAX. The 3-D graphics
software interface to WebGL is written in JavaScript, which
allows the use of the document object model to manipulate the
Web page, and allows the visualization to be manipulated by
standard Web form controls. Finally, as this allows the seamless integration of 3-D visualizations with Web technologies,
it allows the computationally expensive simulation and analytics tasks to be performed on the server side, with only the
rendering of 3-D data performed by the user’s Web client.
III. OVERVIEW OF PROPOSED SOLUTION
The research and development of a novel BEMS was undertaken through an EC FP7 project [50] and tested within
a mixed mode residential care home in The Netherlands. The
project aimed to produce a BEMS, which could be retrofitted
into public buildings with minimal investment, to exploit an
enhanced sensing infrastructure and the existing BEMS, augmented with analytics and visualization components through
a semantic Web approach. This involves a semantic knowledge
base, which describes the physical properties of the building
as an extension of the openBIM IFC data model [18], [23],
through an RDF store and SPARQL endpoint. The semantic
model in the knowledge base also contextualizes the historical data stored in a MySQL database by formalizing a shared
meaning. The novel analytics include the automated production of rules through simulation-based rule generation [7] and
their subsequent fuzzification alongside rules from mining on
historical metering data. The visualization component utilized
an HTML5-based smart GUI to deliver engaging 3-D WebGL
visuals alongside real-time and historical energy performance
monitoring and decision support, by presenting the optimized
rules as user-friendly actuation suggestions. The BEMS aimed
to promote trust with FMs through a negotiation-based user-in-the-loop approach. This meant the FM was responsible
for actuating the suggested changes, as this was attractive
to industrial partners due to liability and legislation concerns
around automated actuation. Finally, the semantic Web-based
approach aimed to promote reusability and extensibility, by
allowing the deployment of the BEMS in further buildings
without redesign of its underlying technologies, as was tested
through four other European pilot sites within the project.
This paper focusses on presenting the enhanced BEMS and
delivering proof of concept at the selected pilot site. The
following sections therefore discuss the key components of
the proposed system’s service-oriented architecture; the RDF
store, SPARQL mapper, and knowledge base which constitute
the semantic middleware, the data mining engine, rule engine,
and fuzzy real time reasoner, which constitute the system’s
analytics components, and the system’s smart GUI, as shown
in Fig. 1, before pilot site validation is presented.
Fig. 1. Architecture of the proposed solution.
The numbers H1-H5 and R1-R6 in Fig. 1 describe the two
data flows involved in the proposed solution, i.e., the historical
and real-time data flows. Data collected from the sensors are
transmitted through the Web service interface into the MySQL
database periodically, at a fixed interval, resulting in a collection of historical data (H1). Through the BuildVis interface,
the user can query the historical data to perform performance
monitoring, for example to monitor the energy performance of
a certain building zone in a specified time range (H2–H3). The
historical data stored in the MySQL database are then used as
training data by machine learning algorithms to generate rules
(H4). The resulting rules are transformed into SWRL rules and
integrated into the knowledge base (H5). The rule generation
is performed at a larger interval to update the knowledge, for
example once a month.
Through the Web service interface, the fuzzy reasoner collects data from the sensors in real time (R1). Then, it invokes
the appropriate rules, i.e., rules with certain weights in the
knowledge base that have been selected by the user (R2). The
fuzzy reasoner fires the rules by setting the variables in the
condition part with the values collected from the sensors (R3).
Through the BuildVis GUI, the user can define an energy saving goal for a certain category in the building, for example
10% energy saving for heating (R4). The knowledge base
returns suggestions containing set-point values for different
actuators to achieve the desired goal (R5). Subsequently, the
user can set the set points to the suggested
values (R6).
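The real-time flow R1–R3 can be sketched as a weighted-rule matcher; the rule contents, thresholds, and weight cutoff below are illustrative stand-ins, not the pilot building's actual rules.

```python
# Minimal sketch of the real-time flow R1-R3: weighted rules selected
# from the knowledge base are fired against current sensor values, and
# matching rules yield set-point suggestions, ordered by rule weight.
# Conditions, weights, and suggestions here are illustrative only.

rules = [
    {"weight": 0.9,
     "condition": lambda s: s["zone_temp"] > 23 and s["occupancy"] == 0,
     "suggestion": {"heating_setpoint": 18}},
    {"weight": 0.6,
     "condition": lambda s: s["co2"] > 1000,
     "suggestion": {"ventilation_rate": "high"}},
]

def suggest(sensor_values, min_weight=0.5):
    """Fire all rules above a weight threshold; return suggestions by weight."""
    fired = [r for r in rules
             if r["weight"] >= min_weight and r["condition"](sensor_values)]
    return [r["suggestion"] for r in sorted(fired, key=lambda r: -r["weight"])]

out = suggest({"zone_temp": 24.0, "occupancy": 0, "co2": 600})
```

In the actual system the conditions are fuzzified SWRL rules and the weights express rule confidence, but the selection-by-weight structure is the same.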
To summarize the relationships between the core analytics components: a genetic algorithm generates energy-saving
rules, using an ANN as the cost function (as a surrogate for the
thermal simulation), these rules map the current building state
and actuator states to optimal actuator states for the imminent
future. The rules are fuzzified and then stored in the knowledge base, and updated on a periodic basis (e.g., weekly).
The rules are used by the fuzzy reasoner at runtime alongside
actual sensor data, where the fuzzy reasoner recommends the
best actuator state given the current observed building state
and actuator states.
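The search loop in this summary, a genetic algorithm scored by a trained surrogate in place of full thermal simulation, can be sketched as below; the cost function and population parameters are simplified stand-ins (the real cost is the trained ANN's energy/PMV prediction).

```python
# Sketch of the rule-generation loop: a genetic algorithm searches over
# binary actuator states, using a cheap surrogate model as its cost
# function instead of running a full thermal simulation per candidate.
# Here the "ANN" is a trivial stand-in that favours all actuators off.
import random

random.seed(0)

def surrogate_cost(actuators):
    """Stand-in for the trained ANN's energy prediction."""
    return sum(actuators)

def genetic_search(n_actuators=4, pop_size=8, generations=40):
    pop = [[random.randint(0, 1) for _ in range(n_actuators)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=surrogate_cost)
        parents = pop[: pop_size // 2]            # elitist selection
        children = []
        for p in parents:
            child = p[:]
            child[random.randrange(n_actuators)] ^= 1   # one-bit mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=surrogate_cost)

best = genetic_search()
```

The best actuator vector found for a given building state would then be expressed as a rule's consequent and fuzzified before storage.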
IV. SEMANTIC WEB MIDDLEWARE
_A. Role of Semantic Middleware_
As mentioned, a critical problem in retrofitting advanced
analytics into existing buildings is the range of heterogeneous data sources and existing BEMS solutions encountered, such as (in our pilot case study) Priva, Controlli,
and EUGENE. This was overcome through a key novelty
of this paper; the knowledge base and accompanying software which served as the integration components of the
proposed energy management system. It integrates heterogeneous data sources required by the system, and also
provides some of the intelligence capabilities through reasoning on the rules and structures contained in the knowledge
base.
Each existing BEMS solution uses a different communication protocol; for example, EUGENE uses Modbus and
Priva uses BACnet. However, they all provide a RESTful Web service
interface, and data are transmitted from these BEMS solutions to the middleware layer through REST Web services. We
developed a program to perform the mapping between the Web
service schemas and our knowledge base model.
The approach of a semantic middleware solution was
adopted over traditional options to facilitate reuse and extensibility in the BEMS domain and the wider domains of smart
cities and the Internet of Things, and to build the BEMS
solution in line with the wider trend toward Web-based software. Through this approach, the proposed solution could be
deployed in further buildings regardless of the proprietary
data schemas and protocols used by their previously installed
sensing, actuation, and BEMS infrastructure, and could be
adapted to integrate buildings together with
Fig. 2. Main concepts, relationships, and IFC mappings in the domain ontology.
management at the district scale, such as where renewables or
microgrids require active and collaborative management [51].
_B. IFC-BEMS Domain Ontology_
The OWL was used to represent the knowledge base in
order to achieve a high degree of expressiveness of the knowledge model. The knowledge domain model consists of classes
representing building physical elements that are observed
and analyzed in energy management activities, and building controls consisting of sensors, controllers, alarm, etc.,
which act as observer and controller of physical building
elements. Furthermore, the knowledge model represents the
human actors and their behaviors that can affect the states
of building physical elements. In the knowledge model, the
states are classified into simple states, for example window or
room states, and complex states, which are built by relating
several simple states. Energy efficiency and comfort degrees
are examples of complex states. This resulted in 145 asserted
classes, 43 object property slots, and 43 data property slots; the
key physical and sensory classes and relationships are shown
in Fig. 2.
In order to provide the possibility to reuse existing industrial standards, the knowledge domain model is aligned to IFC
model, as also shown in Fig. 2. The alignment is done by
defining the explicit IFC-OWL mappings that are stored in the
class annotations. For example, the IFC entity IfcWindow
is mapped to OWL class Window using the annotation
correspondToIfcEntity. The other main IFC concepts
which were reused were the physical building elements and
geometries, such as doors, walls and openings, and the key
extensions included descriptions of the zones, sensors, states,
people, and behaviors in the domain. In total the domain ontology asserted 44 mappings to IFC concepts. This allowed an
automatic IFC to OWL document conversion using SPARQL
queries [23].
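The annotation-driven IFC-to-OWL conversion can be sketched as a lookup over the stored mappings; only two of the 44 asserted mappings are shown here, and the tuple representation is a simplification of the actual SPARQL-based conversion.

```python
# Sketch of the annotation-based IFC-to-OWL conversion: each OWL class
# carries a correspondToIfcEntity annotation, so instances found in an
# IFC document can be converted to individuals of the annotated class.
# Only two of the paper's 44 mappings are shown, as examples.

IFC_TO_OWL = {                 # derived from correspondToIfcEntity annotations
    "IfcWindow": "Window",
    "IfcDoor": "Door",
}

def convert(ifc_instances):
    """Map (ifc_entity, name) pairs to (owl_class, name) individuals."""
    individuals = []
    for entity, name in ifc_instances:
        owl_class = IFC_TO_OWL.get(entity)
        if owl_class is not None:      # entities without a mapping are skipped
            individuals.append((owl_class, name))
    return individuals

out = convert([("IfcWindow", "win_01"), ("IfcBeam", "beam_07")])
```

Keeping the mapping in annotations, rather than in code, is what allows the conversion to be driven by SPARQL queries over the ontology itself.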
_C. Population of Pilot Site Knowledge Base_
The domain ontology model only contained classes, relationships, and the definitions of their semantics. In order
to apply the knowledge base in a specific building, the ontology had to be populated with instances corresponding to the
objects in the building that are considered essential for the
energy management activities. Most current building layouts
are only drawn as 2-D sketches using CAD applications, such
as AutoCAD [52]. They contain only geometrical primitives,
such as lines, curves, points, etc. Therefore, in order to populate the ontology, the semantic information of the sketch had to
be extracted. OntoCAD is an open source tool that was developed to solve the problem. The tool clusters the geometric
primitives in layers. Using the tool, we defined templates representing semantic objects, such as doors, rooms, and chairs,
and selected the areas in the drawing which corresponded to
the to-be-generated ontology instances. The tool updated the
property values of the generated instance automatically, such
as the position and the perimeter. OntoCAD also allowed the
validation and correction of the knowledge population, where
necessary [53].
The knowledge base also embeds SWRL rules, which are
generated automatically using both historical metering (generated through data mining) and simulation data. Each rule
is equipped with a weight indicating the confidence of the
rule. The weight has values between 0 and 1. These are
used by the fuzzy reasoner to evaluate the importance of
the rules. This is necessary to account for the large number
of rules generated by the data mining and simulation modules. As well as these custom rules, the ontology deployment
performs inference through the Jena inference module. This
allows new knowledge to be produced automatically by the
software from the stated axioms, resulting in inferred knowledge being used alongside explicit knowledge. For example,
if a sensor is stated to be connected to a specific element, as
a property of the sensor, then the software infers as a property of the building element, that the element has that sensor
connected.
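The inverse-property inference in the example above can be sketched over a triple set; the property names (`connectedTo`, `hasSensor`) are illustrative rather than taken from the ontology, and Jena's reasoner handles many more axiom types than this.

```python
# Sketch of the inverse-property inference described above: from the
# asserted triple (sensor, connectedTo, element), the reasoner derives
# (element, hasSensor, sensor). Property names here are illustrative.

INVERSES = {"connectedTo": "hasSensor"}

def infer(triples):
    """Return asserted triples plus those derived via inverse properties."""
    inferred = set(triples)
    for s, p, o in triples:
        if p in INVERSES:
            inferred.add((o, INVERSES[p], s))
    return inferred

kb = infer({("tempSensor1", "connectedTo", "window_03")})
```

The effect is that queries over a building element see its attached sensors without those triples ever being asserted explicitly.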
_D. RDF Store and SPARQL Endpoint_
This module is the main communication module between
the knowledge base and the smart GUI. The knowledge base
provides all the data about the building and its control states to
Fig. 3. Example of an SPARQL query.
the BEMS. To enable visualization of the building floor plan,
an existing 2-D DWG file is parsed and converted directly
into RDF and stored on the Fuseki server. The data extraction
tool OntoCAD is used to identify zones in the building and
add additional information such as sensor types and locations. This
information is also stored as an OWL file and uploaded into
a Fuseki server running on a Linux operating system-based virtual platform, maintained by the Knowledge and
Data Engineering Group in Trinity College Dublin [54]. Each
pilot building has its own instance of Fuseki server to store
the building specific knowledge base. The smart GUI queries
the ontology using a combination of AJAX and SPARQL
(SPARQL Protocol and RDF Query Language). When the
FM selects the pilot building through the smart GUI, several SPARQL queries are made to the Fuseki server, one of
which returns JSON objects which are then used to store
a 2-D array of JavaScript zone objects, which describe each
zone in the building. A query example is shown in Fig. 3.
This would be enough to display the zones graphically (using
WebGL), although as each property is returned as a string,
perimeters must be parsed client side to obtain each point given
in Fig. 3.
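The client-side perimeter parsing can be sketched as below; in the smart GUI this runs in JavaScript before the WebGL rendering, and the `"x,y x,y ..."` string format used here is a hypothetical example, since the exact serialization is not given.

```python
# Sketch of the client-side perimeter parsing: each zone's perimeter is
# returned by the SPARQL endpoint as a string, which must be parsed
# into numeric points before rendering. The GUI does this in JavaScript;
# the same logic is shown in Python, with an assumed "x,y x,y ..." format.

def parse_perimeter(perimeter):
    """Parse 'x,y x,y ...' into a list of (float, float) points."""
    points = []
    for pair in perimeter.split():
        x, y = pair.split(",")
        points.append((float(x), float(y)))
    return points

pts = parse_perimeter("0,0 4.5,0 4.5,3 0,3")
```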
V. OPTIMIZED RULE-BASED ANALYTICS
In order to enhance the reasoning capabilities of the knowledge base, we integrated rules from data mining over sensor
data, and rules from thermal simulation-based optimization.
The rules are represented with SWRL in order to allow the
integration into the knowledge base. The data mining rules
are mainly used to identify inconsistent performance and to
predict energy consumption in the building. Conversely, the
simulation-generated rules aim to impose optimal set point
configurations toward the negotiated target energy saving.
Both rule types are critical to the BEMS’s capability to assist
FMs in improving energy efficiency in the building. The main
reason for utilizing the simulation-based rules in the proposed
methodology was the complex behavior of the building
environments, which could not be fully captured by rules
without a simulation model and a robust intelligent solution.
The following sections present the generation approaches of
both rule types. Nevertheless, this paper focuses on simulation
rules and only introduces data mining rules briefly, as they
are described in [23].
_A. Extraction of Rules Through Data Mining on Historical_
_Metering Data_
The objective of the data mining was to identify correlations
between indoor and outdoor sensor data, user behaviors, and
energy consumption data, and to express these as rules. The
rules were then federated into SWRL rules in the knowledge
base to enrich each building’s model. Reasoning on the rules
generated new knowledge that can be utilized for the following
goals.
1) Prediction of the energy consumption of certain user
activities, building zones, and appliances.
2) Detection of energy consumption anomalies in user
activities, zones, and appliances.
3) Inference of user activities in building or zones based
on contextual sensor data.
4) Fault detection in appliances, based on their energy
consumption.
5) Prediction of actuator states or configurations toward
meeting specific comfort levels.
These intelligent capabilities were achieved through the
collection and algorithmic analysis of the following relevant
sensor data.
1) Indoor Sensor Data: Zone temperatures, CO2 concentrations, and door and window states.
2) Outdoor Sensor Data: Dry-bulb temperature, precipitation rate, wind speed, brightness/luminance, and air
humidity.
To allow different analyses at different aggregation levels,
energy consumptions were collected using energy meters at
various levels. At the appliance level, energy meters were
installed at active power sockets. At the zone level, energy
meters were installed at the distribution board for the target
zone. At the building level, energy meters were installed in
the central distribution board.
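The three metering aggregation levels can be sketched as a roll-up from appliance readings to zones and then to the building; the zone names and kWh values below are illustrative, not pilot data.

```python
# Sketch of the three metering aggregation levels: appliance-level
# readings roll up to zone totals, and zone totals roll up to the
# building total. Zone names and values are illustrative only.

appliance_kwh = {
    ("zone_A", "socket_1"): 1.2,
    ("zone_A", "socket_2"): 0.8,
    ("zone_B", "socket_3"): 2.5,
}

def zone_totals(readings):
    """Sum appliance-level readings per zone."""
    totals = {}
    for (zone, _appliance), kwh in readings.items():
        totals[zone] = totals.get(zone, 0.0) + kwh
    return totals

zones = zone_totals(appliance_kwh)
building_kwh = sum(zones.values())
```

In the pilot, the zone and building totals come from their own meters, so this roll-up also serves as a consistency check between levels.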
Behavioral data were then collected; mainly based on the
usage of appliances and zone occupancies. That meant that if
a user undertook multiple activities in a zone without changing the appliance usage, those activities were not considered as
different behaviors. Key daily periods were identified, where
similar behaviors were observed across days: lunch time,
office hours, coffee break time, maintenance/cleaning time,
and nonoffice hours.
The rules reflecting interrelationships between behavior, surrounding parameters (temperature, humidity, etc.), and energy
consumption were generated through decision tree-based classification algorithms, such as C4.5 [43]. Each path in the
decision tree from the root to the leaf constitutes a rule.
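The path-to-rule extraction can be sketched as a tree walk in which every root-to-leaf path becomes one rule; the tiny tree below is hand-made for illustration, not one learned by C4.5 from pilot data.

```python
# Sketch of turning decision-tree paths into rules: each root-to-leaf
# path becomes one rule whose antecedents are the branch tests along
# the path. The tree below is a hand-made illustrative example.

tree = {
    "test": "zone_temp > 22",
    "yes": {"test": "occupied", "yes": "normal", "no": "overheating"},
    "no": "normal",
}

def extract_rules(node, conditions=()):
    """Walk the tree; every leaf yields (conditions, classification)."""
    if isinstance(node, str):                    # leaf node
        return [(list(conditions), node)]
    rules = []
    rules += extract_rules(node["yes"], conditions + (node["test"],))
    rules += extract_rules(node["no"], conditions + ("not " + node["test"],))
    return rules

rules = extract_rules(tree)
```

Each extracted (conditions, class) pair corresponds to one SWRL rule after translation, with the antecedents forming the rule body.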
_B. Simulation-Based Optimized Rule Generation_
This system module used a six-stage process to produce
energy saving rules based on thermal simulations of the building, as shown in Fig. 4. This approach uses preprocessing
to produce optimization scenarios and simulation data, and
to identify sensitive variables, then trains an ANN based on
this data. This ANN is then used as the cost function in
a GA optimization to output actionable rules, which are then
selected for efficiency.
Fig. 4. Simulation-based rule generation method.
TABLE I
PROPOSED SCENARIO FOR FORUM BUILDING
Fig. 5. Thermal model for forum building’s atrium zone (pilot zone).
_1) Building Thermal Simulation and Sensitivity Analysis:_
The preprocessing stage consists of scenario definition,
simulated data generation, sensitivity analysis, and variable mapping. The scenario defines the objectives of the
optimization and the available control variables, actors, and
sensors. Thermal simulation and data generation involves thermal model development and utilization for each building.
Sensitivity analysis and variable mapping then determines the
most sensitive variables, and maps them with the building’s
artefacts, as expressed in the knowledge base.
In this paper, a public residential care home in
The Netherlands, named “the Forum,” was used as a case
study, based on the scenario shown in Table I. A thermal simulation model of the building was created in DesignBuilder, as
shown in Fig. 5, which includes detailed material, occupancy,
and actuation data.
TABLE II
MAPPED SENSORS AND COEFFICIENTS FOR EACH OBJECTIVE
EnergyPlus was used to produce simulated data across the
permutations of the scenario’s independent variables. In the
Forum building, the four actuators resulted in 32 permutations,
so the annual simulation was repeated to produce 32 datasets.
PCA and MRA were then used to reduce the simulation
model’s 954 reported variables. The ideal reduction was determined by PCA, and then MRA was used to rank the variables’
sensitivity according to the scenario’s objectives. This process
was modeled as (1), where $F_j$ denotes either thermal energy
consumption or predicted mean vote (PMV) in this case [7]:
$$F_j\left(\overrightarrow{Var}\right) = \sum_{i=1}^{numvar} coef_{ji}\, Var_i. \quad (1)$$
In (1), $\overrightarrow{Var}$ denotes the vector of variables generated from the simulation, $coef_{ji}$ denotes the coefficient of variable $Var_i$ for $F_j$, and
$numvar$ is the available number of variables.
The identified variables are then mapped with the existing
sensors installed in the target building. Variables which cannot
be mapped to sensors can inform the acquisition of additional
sensors or can be excluded from subsequent stages of the process. The list of mapped sensors for the Forum building is
given in Table II; these sensors were used in the following ANN-GA rule
generation.
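The MRA step of the sensitivity ranking can be sketched as a least-squares fit of $F_j$ against the candidate variables, whose coefficient magnitudes then rank sensitivity as in (1); the synthetic data below stands in for the EnergyPlus output, and the three-variable setup is illustrative.

```python
# Sketch of the MRA sensitivity step: fitting F_j (e.g., energy
# consumption) against the candidate variables by least squares yields
# the coefficients coef_ji of (1), whose magnitudes rank the variables'
# sensitivity. Synthetic data stands in for the EnergyPlus output here.
import numpy as np

rng = np.random.default_rng(0)
Var = rng.normal(size=(200, 3))           # 200 samples of 3 candidate variables
true_coef = np.array([5.0, 0.1, 2.0])     # variable 0 is most sensitive
F = Var @ true_coef                       # F_j = sum_i coef_ji * Var_i, as in (1)

coef, *_ = np.linalg.lstsq(Var, F, rcond=None)
ranking = np.argsort(-np.abs(coef))       # most sensitive variable first
```

In the actual pipeline, PCA first determines how far the 954 reported variables should be reduced before this regression-based ranking is applied.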
_2) ANN-Based Learning Process: ANNs predict the behav-_
ior of highly nonlinear systems, such as building energy
systems [29], by conducting machine learning over training
data. ANNs have been researched in energy management
systems for the last two decades [55], yet they continue
to perform competitively [56], and as such are still the
most widely used type of data-driven model for building
energy prediction [57] in research. Hence, this paper also utilizes an ANN-based learning method, where the novelty of
the proposed system is the use of this traditional method
in a unique way alongside GA, behavioral data mining,
fuzzy rules, and ontology technologies, within an end-to-end
BEMS. Following experimentation, a traditional multilayer
perceptron-based ANN approach was found to perform adequately, although there is room for further investigation into
deep architectures and other types of data-driven models,
which could be interchanged with the 3-layer MLP used if
found to perform better. The proposed ANN design used
the sensitive variables identified previously as inputs, as well as
Fj(Var) = Σ_{i=1}^{numvar} coefji Vari.  (1)
Fig. 6. Proposed ANN topology for the pilot zone.
the four actuator states at the current timestep, and time
information. The outputs were then the zone’s PMV and
energy consumption at the subsequent timestep, as shown
in Fig. 6.
For an ANN to be effective, it must be well-trained and use
an appropriate topology. To ensure this, the learning algorithm,
number of hidden layers (and their number of process elements) and transfer function have to be determined robustly. In
this paper, several experiments were designed and conducted
to determine the optimum ANN parameters. In the experimental design, an iterative parameter tuning approach is utilized.
The initial configuration set is selected as; single hidden layer
with five neurons, gradient descent-based learning algorithm,
tangent-sigmoid transfer functions in hidden and output layers, 0.0001 error rate, 4000 epochs for number of hidden layer,
number of process elements in hidden layer, learning function,
error rate and number of epochs, respectively. The next stage
is changing one of the parameters while keeping others constant, if the error rate with the selected parameter is better
than its constant value will be updated for further parameter
selection. The best parameters were found to be: a single hidden layer of 30 neurons, using a Levenberg–Marquardt-based
learning function, logarithmic sigmoid and tangent sigmoidbased transfer function in hidden and output layers. Using
these parameters, the desired error rate (0.0001) was achieved
at 70th epoch.
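The one-factor-at-a-time tuning procedure described above can be sketched as below. The `evaluate` function is a hypothetical stand-in for training and validating one ANN configuration, and the search-space values are illustrative, not the authors' exact grid.

```python
# Hypothetical search space mirroring the paper's tuning order: hidden-layer
# size, learning function, transfer function, epochs. Values are illustrative.
search_space = {
    "hidden_neurons": [5, 10, 20, 30],
    "learning_fn": ["gradient_descent", "levenberg_marquardt"],
    "transfer_fn": ["tansig", "logsig"],
    "epochs": [1000, 2000, 4000],
}

def evaluate(cfg):
    """Stand-in for training an ANN and returning its validation error.
    A real implementation would train on the 80% split and score the 20%."""
    score = 0.01
    score -= 0.004 * (cfg["hidden_neurons"] == 30)
    score -= 0.003 * (cfg["learning_fn"] == "levenberg_marquardt")
    score -= 0.001 * (cfg["transfer_fn"] == "logsig")
    return score

# Initial configuration from the paper: 5 neurons, gradient descent, tansig.
config = {"hidden_neurons": 5, "learning_fn": "gradient_descent",
          "transfer_fn": "tansig", "epochs": 4000}
best_err = evaluate(config)

# One-factor-at-a-time: vary each parameter in turn, keep the new value only
# if the error improves, then move on to the next parameter.
for param, values in search_space.items():
    for v in values:
        trial = {**config, param: v}
        err = evaluate(trial)
        if err < best_err:
            config, best_err = trial, err

print(config, best_err)
```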
The ANN was trained with 80% of the dataset and tested on
the remaining 20%, within MATLAB. The ANN architecture
and training decisions are described further in [7]. This model
was then used as the cost function of the GA rule production. The univariate hyperparameter search approach yielded
an ANN with sufficient performance within the time and compute limitations of the work; however, further work includes
optimizing the ANN design further through grid search or
a similar technique.
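A minimal sketch of the described topology follows: a single-hidden-layer network mapping 17 inputs (sensed variables, actuator states, and time information are assumed) to next-timestep PMV and energy, trained on an 80/20 split. Plain batch gradient descent on synthetic data is used here for brevity, rather than the Levenberg–Marquardt training of the actual system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic dataset: 17 inputs (sensed variables, 4 actuator states, time info)
# and 2 outputs (next-timestep PMV and energy consumption), both illustrative.
X = rng.normal(size=(400, 17))
W_true = rng.normal(size=(17, 2)) * 0.3
Y = np.tanh(X @ W_true)

# 80/20 train/test split, as in the paper.
X_tr, X_te, Y_tr, Y_te = X[:320], X[320:], Y[:320], Y[320:]

# Single hidden layer of 30 neurons; tanh hidden activation stands in for the
# paper's log-sigmoid/tan-sigmoid pair.
W1 = rng.normal(size=(17, 30)) * 0.1
b1 = np.zeros(30)
W2 = rng.normal(size=(30, 2)) * 0.1
b2 = np.zeros(2)

lr = 0.05
for epoch in range(2000):
    H = np.tanh(X_tr @ W1 + b1)        # hidden layer
    P = H @ W2 + b2                    # linear output layer
    err = P - Y_tr
    # Backpropagation of the mean-squared error.
    gW2 = H.T @ err / len(X_tr)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X_tr.T @ dH / len(X_tr)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

test_mse = float(np.mean((np.tanh(X_te @ W1 + b1) @ W2 + b2 - Y_te) ** 2))
print("test MSE:", test_mse)
```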
_3) GA-ANN-Based Optimized Rule Generation: The rule_
generation is based on finding optimized solutions for the
set of control variable with related environmental variables,
desired optimization level (i.e., 5%, 10%, 15%, 20%, 25%,
and 30%), and time information. Once an optimum solution
is found, it and the related environmental data are stored as a rule, together with the
Fig. 7. General formation of the proposed chromosome string.
date-time info, the achieved improvement level, zone ID, and
a weight based on the achieved and desired improvement in
the target variable.
GA optimization was used with an ANN cost function.
GA is a very popular optimization technique for complex
problems [7], [30]. The proposed approach uses the actuator states alongside sensor data in the chromosome string, and
uses mutation, crossover and fitness evaluation to iteratively
improve the rule in a stochastic manner. The general formation
of a chromosome string is shown in Fig. 7.
The proposed chromosome string includes two groups:
1) variable and 2) constant features. The variable group
includes the control variables (temperature setpoint, window
setpoint, blind setpoint, and shading setpoint). The constant
group of the string consists of the values of the sensitive variables and time information which are denoted from X5 to X17
for month, day, hour, outdoor temperature, wind speed, wind
direction, solar irradiation, solar azimuth angle, solar latitude
angle, zone air temperature, zone heating rate, zone ideal total
cooling rate, and occupancy, respectively.
Only the control variables (X1, . . ., X4) are involved in
the mutation and crossover operations of the GA process,
and the other string elements are kept constant to determine
the optimized value for the control variables. The relationship between cost function variables (inputs and output) is
presented in
Minimize: _Fenergy consumption(X1, X2, X3 . . . X17)_
(2)
Subject to constraints: |FPMV(X1, X2, X3 . . . X17)| < 1 (3)
16 ≤ _X1 ≤_ 24 (4)
0 ≤ _X2 ≤_ 1 (5)
0 ≤ _X3 ≤_ 1 (6)
0 ≤ _X4 ≤_ 1. (7)
Fenergy_consumption is the energy consumption based on
the variation of the control variables X1, X2, X3, and X4 while
keeping the other variables (X5, . . ., X17) constant, and FPMV is
a constraint, named the PMV function, whose value keeps the
thermal comfort between −1 and 1.
The genetic algorithm’s crossover operation used
a multipoint gene exchange within the variation groups
of two parents’ chromosome strings as shown in Fig. 8. The
mutation operation also acted only on the parents’ variation
groups, where it selected one or more elements according to
a probability value, as shown in Fig. 8. Both the crossover
and the mutation operations are implemented on the p and r
worst regions (solution sets), respectively, based on their fitness values, as
shown in Fig. 9.
The algorithm used an elite selection approach, where the
best solutions were kept unchanged, while the crossover
Fig. 8. Crossover operations in the proposed GA.
Fig. 9. Mutation operations in the proposed GA.
Fig. 10. General formation of the elitism process in the proposed GA.
Fig. 11. Example of the generated optimized rules.
operation acted on the remaining p chromosome strings, and
the mutation probability was α, using the roulette technique.
The mutation operation was also implemented on r worst individuals with a β probability rate. Hence, the best solutions are
kept in the solution pool as shown in Fig. 10.
The primary stopping condition of the optimization was the
target improvement decided by the FM. The FM negotiates
an acceptable set of actuations by choosing a target, such as
30% energy reduction, then observing the optimized actuations
required, and either accepting these or adjusting the target. An
example of the generated rule is shown in Fig. 11, and the
overall GA-ANN-based optimized rule generation process is shown in Fig. 12.
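The GA loop can be sketched as below, under stated assumptions: the `energy` and `pmv` functions are hypothetical stand-ins for the trained ANN cost function, the PMV constraint (3) is enforced as a penalty, and the population sizes and rates are illustrative rather than the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Fixed environmental/time context X5..X17 for the rule being generated.
context = rng.normal(size=13)

def energy(x):
    """Stand-in for the trained ANN's energy output; minimized at a low
    setpoint with window/blind/shading open."""
    t, w, b, s = x
    return (t - 16.0) ** 2 + 2.0 * (1 - w) + (1 - b) + 0.5 * (1 - s) + 0.01 * context.sum()

def pmv(x):
    """Stand-in for the ANN's PMV output; comfort degrades at low setpoints."""
    return (x[0] - 21.0) / 4.0

def fitness(x):
    # Constraint (3): |PMV| < 1, enforced here via a penalty term.
    return energy(x) + (1e3 if abs(pmv(x)) >= 1.0 else 0.0)

LO = np.array([16.0, 0, 0, 0])   # bounds (4)-(7) on X1..X4
HI = np.array([24.0, 1, 1, 1])

pop = LO + rng.random((40, 4)) * (HI - LO)
for gen in range(60):
    order = np.argsort([fitness(x) for x in pop])
    pop = pop[order]
    elite = pop[:10].copy()                      # elitism: best kept unchanged
    children = []
    for _ in range(30):
        p1, p2 = pop[rng.integers(10, size=2)]
        mask = rng.random(4) < 0.5               # multipoint crossover on X1..X4 only
        child = np.where(mask, p1, p2)
        if rng.random() < 0.3:                   # mutation with an assumed rate
            i = rng.integers(4)
            child[i] = LO[i] + rng.random() * (HI[i] - LO[i])
        children.append(np.clip(child, LO, HI))
    pop = np.vstack([elite, children])

best = pop[np.argmin([fitness(x) for x in pop])]
print("optimized setpoints:", best, "PMV:", pmv(best))
```

Note that, as in the paper, only the control variables mutate; the context X5..X17 stays fixed, so the best chromosome directly yields the consequent of one rule.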
Fig. 12. GA-ANN optimized rule generation process.
Fig. 13. Fuzzy reasoning module architecture.
_C. Fuzzy Reasoner_
The rules produced by the GA-ANN process are stored
as SWRL rules, but are used by a fuzzy reasoner. Fuzzy
logic is inspired by the human, approximation-based reasoning process [58], which rationalizes an appropriate output from inaccurate and incomplete information. The
proposed fuzzy reasoner communicates with the GUI through
the mapper module, and the knowledge base through the
Java expert system shell [59], as shown in Fig. 13. In this
paper, a Mamdani fuzzy inference system was utilized: despite
this approach’s simplicity, it was found to provide adequate
performance.
Although the rules are generated automatically through
machine learning, the user ultimately decides which rules
should be applied. The weights are initially set automatically
according to the confidence of the rules, but the user is
TABLE III
RULE VARIABLES FOR THE FORUM BUILDING
able to change the weights in accordance with their needs
or context, such that the most appropriate rules have higher
weights.
The fuzzy reasoner consists of five modules: 1) fuzzification
module; 2) SWRL bridge; 3) rule engine; 4) defuzzification module; and 5) rule matching module. This reasoner is
used when the FM requests decision support. The first step
involves comparing the dynamic sensor data to the antecedent
parts of the semantic rules; shown for the Forum building in Table III. The consequent part of the fuzzy rules
then defines the optimized actuator states, similar to the
approach of Wang and Mendel [64]. However, in this paper,
the antecedents consist of a wide variety of range-based
formations, instead of predefined membership functions.
As well as accuracy, interpretability is an important and
often conflicting performance metric in fuzzy rule-based
systems. Casillas et al. [60] defined interpretability in this
context as the capacity to express the qualities of the real
system as subjective properties based on experts’ assessments.
Another comprehensive survey on the topic is presented by
Lughofer [61], who suggests that one of the prerequisites for
interpretability is complexity reduction, which is part of the
distinguishability and simplicity of the fuzzy rule partitioning process. This is supported by the similar sentiment of
Gacto et al. [62], who presented a detailed review about interpretability and complexity. Hence, to promote interpretability,
the fuzzy reasoner of the proposed system uses a small number of relatively simple triangular membership functions with
predefined ranges, as illustrated in Fig. 14. Also, prior work
suggests that this approach promotes greater performance and
is computationally inexpensive [63]. This was valuable given
the large number of SWRL rules used in the proposed system.
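A minimal sketch of such a triangular partition follows, assuming an illustrative three-label split of outdoor temperature (in the actual system, the ranges are predefined per variable):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical partition of outdoor temperature into three simple labels
# with predefined ranges, as favoured for interpretability.
labels = {
    "cold": (-10.0, 0.0, 10.0),
    "mild": (0.0, 10.0, 20.0),
    "warm": (10.0, 20.0, 30.0),
}

def fuzzify(x):
    """Return each label's membership degree for a crisp sensor reading."""
    return {name: float(tri(x, *abc)) for name, abc in labels.items()}

degrees = fuzzify(7.5)
best_label = max(degrees, key=degrees.get)
print(degrees, "->", best_label)
```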
These rules are then converted into fuzzy rules with a constant number of membership functions. The inference engine
then implements the membership conversion given in (9).
The fuzzy reasoner incorporates fuzzy rules which are made
by mapping between the crisp variables of the theoretical
rules and fuzzified variables. This fuzzification process uses
fuzzy membership function labels and the related membership
Fig. 14. Fuzzy membership function example.
degrees for each corresponding crisp variable. The optimized
rule antecedent part consists of the rule weight, desired
optimization level, optimization objective level, outdoor temperature, wind speed, wind direction, solar radiation, solar
azimuth and altitude angles, indoor temperature, zone sensible heating and total cooling rates, and occupancy levels,
respectively. The consequent part consists of the control variable values. The fuzzification process converts the variables
in both the antecedent and consequent parts of the crisp rules.
The rule conversion is based on the Wang–Mendel approach [64],
which consists of: 1) identifying the membership degrees in
every fuzzy partition of inputs and output variables and 2)
associating the existing crisp rules with a fuzzy rule which
has a linguistic label with maximum degree. Hence, the rules
are presented in the form of “IF in1 is labelin1 and in2 is
labelin2 and · · · and inn is labelinn THEN out1 is labelout1,”
where labelini is the best covered linguistic label in each input
subspace and labelout1 is the best covered output label. The
membership degree of the rule in each subspace is μlabelini
and μlabelout1, respectively. To avoid conflicting rules, we have
utilized importance degree, where if there are multiple rules
which have the same antecedent and consequent labels then
the one with the greatest importance degree will remain in the
rule base. The importance degree for each rule is computed
according to (8), which also serves as an interpretability measure:
ID(Rule) = μlabelin1 · μlabelin2 · · · μlabelinn · μlabelout1.  (8)
Nine hundred and fifty-eight rules were generated for each
objective, resulting in 3882 rules in total. The inference
engine then implements the membership-based conversion
given in (9), where μVk is the membership value of the output
variable. An example of converted fuzzy rule is presented in
Fig. 15. The inference engine's design is based on the experts'
experience.
μVk = min( max(μA11, . . ., μAm1), . . ., max(μA12, . . ., μAm2) ).  (9)
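The Wang–Mendel-style conversion and the importance-degree conflict resolution of (8) can be sketched as follows. The single shared partition and the example crisp rules are illustrative assumptions; the actual system defines partitions per variable.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# One shared three-label partition on [0, 1] for every (normalised) variable;
# real partitions would be defined per variable.
partition = {"low": (-0.5, 0.0, 0.5), "mid": (0.0, 0.5, 1.0), "high": (0.5, 1.0, 1.5)}

def best_label(x):
    """Best-covered linguistic label for a crisp value, with its degree."""
    degs = {name: tri(x, *abc) for name, abc in partition.items()}
    name = max(degs, key=degs.get)
    return name, degs[name]

def to_fuzzy_rule(crisp_inputs, crisp_output):
    """Wang-Mendel step: label each crisp value with its best-covered
    linguistic term, and attach the importance degree of (8)."""
    labels, degree = [], 1.0
    for x in list(crisp_inputs) + [crisp_output]:
        name, mu = best_label(x)
        labels.append(name)
        degree *= mu
    return tuple(labels), degree

# Two crisp rules that fuzzify to the same labels: keep the higher-degree one.
rule_base = {}
for inputs, output in [((0.1, 0.55), 0.9), ((0.05, 0.5), 0.95)]:
    labels, deg = to_fuzzy_rule(inputs, output)
    if labels not in rule_base or deg > rule_base[labels]:
        rule_base[labels] = deg

print(rule_base)
```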
Fig. 15. Example fuzzy rules presented in inference engine.
The defuzzification process then determines the selected
output values for any given input set. This operation uses a rule
weighting method, which increases the accuracy of the fuzzy
system [65]. The weight given to each rule before the fuzzification is also included as a coefficient in the reasoning process,
as shown in
Vcrisp = Σ wi μV(Y) Y / Σ wi μV(Y).  (10)
The rule weight is determined by the closeness of the expected
target value to its desired value, evaluated by simulation during
the rule generation process. The weights are calculated according to (11), where wi, yi, and ˆyi are the ith rule’s percentage
weight, best solution found, and expected target optimization
level, respectively
Fig. 16. WebGL view of the building’s zones.
Fig. 17. Zone energy monitoring interface.
Second, the role of the GUI in presenting the knowledge from the solution's various analytics components, in the form of suggested
actions, is presented.
_A. Building Zone View and Performance Monitoring_
A 3-D visualization of the building’s thermal zones was
seen as a key requirement of an engaging tool, so this was
enabled by converting 2-D CAD plans into semantic models
in the knowledge base. As well as showing an extruded floor
plan of the building, each zone is described in the knowledge
base by its geometric properties, function (kitchen, atrium,
etc.), ID, and its connected sensors, and these are all displayed after clicking the zone, which triggers a query of the
knowledge base.
After choosing a zone and a sensor type (or multiple types),
the energy monitoring interface shown in Fig. 17 allows the
FM to view the current and historic performance of the zone.
This is achieved through a histogram of sensed data values
and a traffic light graphic which indicates the acceptability
of the current performance, relative to its mean value. The
historical sensor data is retrieved from the SQL database using
a combination of AJAX and PHP server-side scripting. SQL
was chosen due to the speed at which it can handle queries
for large amounts of historical data.
wi = 100 |(yi − ŷi) / yi|.  (11)
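The weighted defuzzification of (10) can be sketched as below. The two fired rules, their weights, and the output partition are illustrative assumptions, with each rule's output membership clipped at its firing strength as in a Mamdani system.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Candidate output values (e.g., temperature setpoints in degrees C).
Y = np.linspace(16.0, 24.0, 81)

# Per fired rule: its weight w_i and its clipped output membership mu(Y).
fired_rules = [
    {"w": 0.9, "mu": np.minimum(tri(Y, 16, 18, 20), 0.8)},  # strong "low setpoint" rule
    {"w": 0.4, "mu": np.minimum(tri(Y, 20, 22, 24), 0.6)},  # weaker "high setpoint" rule
]

# Weighted defuzzification of (10): rule weights scale each rule's
# membership before the centroid over Y is taken.
num = sum(r["w"] * r["mu"] * Y for r in fired_rules).sum()
den = sum(r["w"] * r["mu"] for r in fired_rules).sum()
V_crisp = num / den
print("crisp setpoint:", round(float(V_crisp), 2))
```

The heavier weight of the first rule pulls the crisp setpoint toward its peak, illustrating how the weights of (11) bias the final recommendation.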
To summarize, SWRL rules are generated through an optimized ANN-based approach (ANN-genetic algorithm) for
different reduction levels, which forms the theoretical rule
generation process. The generated rules are then converted,
via linguistic transformations, into fuzzy rules to create the
rule base of the fuzzy inference system. Once a user's desired
reduction level for a chosen objective is received, the fuzzy
inference engine uses these inputs to determine the most
suitable outcomes among the existing post-processed SWRL rules.
After determining the consequent, the rule engine searches for
rules with the same actuator states and sorts them according
to their weights stored in the knowledge base. The highest-weighted
rule is selected as the response for the user.
VI. SMART GUI
This section describes the implementation and features of
the front-end tool and how it accesses the different data sources
to enable monitoring and visualization of the relevant static
and dynamic data, and the display of suggestions to the
FM. The interface has been evaluated to determine its
usability; the findings showed that, across five demonstration
objects (one of which forms the core of the evaluation
presented here), the FMs were supported in the task of identifying and applying suggestions [66]. The BEMS interface was
implemented using modern Web languages and the bootstrap
framework [6]. The interface contains three main windows;
Fig. 16 shows the WebGL view of the building’s zones and
Fig. 17 shows the energy monitoring and actuation suggestion
window. The interface also has a menu “choose building,” so
that the FM can select different buildings, if they are responsible for more than one. First, the ability to view the static
properties and the historic and current energy KPIs of each
building under the FM's remit is discussed.
Fig. 18. Actuation suggestions query window.
The suggestions are tailored to a particular zone, which is
generally a room. The tool also shows 3-D coordinates for
sensors, based on industry foundation class data, or manual
data entry. Incorporating IFC Cartesian locations represents
ongoing work. Also, the color of each sensor’s icon represents
its current state.
_B. Optimized Suggestion and Negotiation Interface_
To deliver the knowledge produced by the solution’s analytics, and hence support the FM in reducing the energy
consumption of the building, the interface displays suggested
actions as part of a negotiation interface based on the data mining, theoretical rules, and their fuzzification. Fig. 18 shows
how the FM configures these criteria using drop down menus
and slider bars, generated by jQuery selectors. Once the FM
has selected a zone, chosen a goal type (e.g., reduce electricity consumption) and moved the slider to the target energy
saving (e.g., 20%) they press the “query suggestions” button.
This uses AJAX and PHP to query and return suggestions
based on the rules produced by the back-end analytics and
displays a number of recommended actions, such as adjusting
the blinds or heating temperature set point.
Critically, the FM’s expert knowledge is then utilized to
determine if the suggested actions are appropriate, as the simulated implications on the building are then displayed in the
energy monitoring histograms, and the FM chooses whether
or not to act on the suggestions. If they deem the savings
to have negative implications on comfort, or otherwise, they
can adjust the query criteria and view more suitable suggested
actions. This means of control was a requirement of the solution, as FMs during the aforementioned usability evaluations
indicated that they wished to have final say on whether to enact
changes.
VII. RESULTS
To evaluate the performance of the developed solution, the
system’s intelligence was tested in the EnergyPlus simulation
environment and the full system was deployed in a real pilot
building, so as to validate the entire system, including the
semantic middleware and GUI components. The pilot building was a public residential care home in The Netherlands
(the Forum), and the decision support capabilities of the
system were tested in
Fig. 19. One-day simulation temperature setpoint profiles.
Fig. 20. One-day simulation energy consumption profiles.
the main energy consuming space of the building. As the
Forum building is primarily an elderly care home, maintaining
thermal comfort was critical whilst attempting to reduce the
building’s energy consumption by using the suggested actions
of the system.
Initially, the proposed solution was tested by simulating the
zone’s energy consumption over a day and then repeating the
simulation with the optimized energy saving rules, applied at
the start of each timestep. This reduced the energy consumption from 258 to 201 kWh, whilst maintaining an absolute
PMV of less than 1, which was deemed an acceptable level
of occupant comfort. In contrast, the well-known rule-based
systems RULE5, RULE3, and C4.5 only achieved energy consumptions of 258 kWh, 259 kWh, and 259 kWh, respectively,
with the absolute PMV values increasing to 1.7, which represents greater discomfort. The generated set points and resulting
energy consumption profiles from these experiments are shown
in Figs. 19 and 20.
Following preliminary success, the simulation was extended
to a two-month period. Using the proposed method the energy
consumption was reduced from 14 600 to 11 400 kWh during the months of October and November, whereas RULE5,
RULE3, and C4.5 achieved 13 500 kWh, 13 900 kWh, and
15 400 kWh, respectively, as shown in Fig. 21. Again, the
proposed approach maintained an absolute PMV of less than 1.
The full retrofit BEMS solution was then deployed in the
pilot building, initially for a single day and subsequently for
an extended period from October 1, 2014 to January 20, 2015.
In each of these tests the FM utilized the system’s decision
support to receive suggested actions for energy saving, and
after negotiating the severity of these, actioned them through
local control systems. Based on the single day experiment, the
daily energy consumption was reduced from 77 to 58 kWh as
illustrated in Fig. 22. Over the two month period, the total
energy consumption reduced from 7500 to 5600 kWh, when
adjusted for degree-day statistics, as shown in
Fig. 21. Two-month simulation average energy consumption per day.
Fig. 22. One-day real pilot site energy consumption.
Fig. 23. Two-month real pilot site energy consumption.
Fig. 23. Finally, the FM was satisfied with the thermal comfort achieved and no negative feedback was received from the
occupants.
VIII. DISCUSSION AND CONCLUSION
This paper has presented a retrofit BEMS capable of
delivering energy savings through analytics across existing
data sources and actuators in a building, by using semantic
middleware to integrate heterogeneous devices within a cloud-based, service-oriented architecture. As well as the novelty of
the semantic approach, the solution represents a step change by
encouraging the use of AI by FMs, by respecting the FM’s role
in the decision process and using an engaging GUI, and the
solution has been successfully deployed in a public building
in The Netherlands.
In this paper, the state of the art and previous research was
discussed within each of the conceptual layers of a retrofit
BEMS. A novel BEMS was then introduced and the components and methodology of each of its layers were discussed
in turn. First, the semantic middleware layer was introduced
as a key novelty, and its benefits of interoperating a building's devices and systems in an extensible, replicable, and
affordable manner were explained. The methodology of instantiating a domain ontology aligned with international standards
was presented through the use of OntoCAD to populate an
extended version of the IFC data model. Second, the solution’s
intelligence was explained as a combination of intelligent rule
generation techniques and a fuzzy reasoner. The combined use
of rules generated through data mining and simulation-based
optimization through SWRL ontology integration was shown.
Finally, the GUI of the solution was explored; its interactions
with the back-end to present zone-based performance monitoring and optimized rule suggestions were explained. Also, the
client-side software decisions of WebGL and HTML5 were
discussed as a means to enable cross platform deployment
without requiring additional user downloads, whilst still providing a 3-D interface and many developer benefits toward
further maturing the solution. Through a simple traffic light
graphic, FMs can determine the zones requiring attention, and
the pop ups alert the FM when a new energy saving suggestion is made. This type of feature would be ideal for mobile
integration, so that FMs can be alerted in the field.
The solution was tested within both simulated and real
buildings, with encouraging results in both cases. Both cases
showed significant energy savings over both a single day and
a period of several winter months, with the real building displaying circa 25% energy savings on average. Whilst these
results are highly positive and serve as a proof of concept,
further work is now required to demonstrate the solution’s
replicability across other buildings. Other features which are
of interest for development include the use of a wizard to help
the FM with tasks, and providing multilingual support to allow
deployment across countries; as driving FM engagement with
the tool through an attractive and intuitive interface is a key
contribution of the work.
Whilst the individual components used in the proposed
system delivered sufficient performance, key ongoing work
includes further optimization of each. For example, the
ANN model implemented could be interchanged with a more
advanced deep learning model, and its hyperparameters
could be further optimized via a dense grid search or
similar.
Given the successful deployment of the solution and the
key novelties identified, this paper demonstrates the potential of a cloud-based approach to a retrofit BEMS solution by using semantic middleware as a system integration
component alongside a human–computer negotiation process,
advanced AI and an engaging user interface. The BEMS
presented can therefore act as a reference point for similar solutions in terms of the energy saving potential, upfront
investment reduction through system integration, and logistics
and liability issue mitigation regarding AI control of building
systems.
ACKNOWLEDGMENT
The authors would like to thank the indirect contributions of
the research partners, especially CETMA for their contribution
toward the fuzzy aspects of the KnoholEM project and Kiril
Tonev for his role in the software implementation of the
interoperability layer.
REFERENCES
[1] L. Pérez-Lombard, J. Ortiz, and C. Pout, “A review on buildings energy
consumption information,” Energy Build., vol. 40, no. 3, pp. 394–398,
Mar. 2008.
[2] A. DePaola, M. Ortolani, G. L. Re, G. Anastasi, and S. K. Das,
“Intelligent management systems for energy efficiency in buildings: A survey,” ACM Comput. Surveys, vol. 47, no. 1, pp. 1–38,
Jul. 2014.
[3] P. Hoes, J. L. M. Hensen, M. G. L. C. Loomans, B. de Vries, and
D. Bourgeois, “User behavior in whole building simulation,” Energy
_Build., vol. 41, no. 3, pp. 295–302, Mar. 2009._
[4] Y. Agarwal et al., “Occupancy-driven energy management for smart
building automation,” in Proc. 2nd ACM Workshop Embedded Sens.
_Syst. Energy Efficiency Build., Zürich, Switzerland, Nov. 2010, pp. 1–6._
[5] O. T. Masoso and L. J. Grobler, “The dark side of occupants’ behaviour
on building energy use,” Energy Build., vol. 42, no. 2, pp. 173–177,
Feb. 2010.
[6] M. Dibley, H. Li, Y. Rezgui, and J. Miles, “Software agent reasoning supporting non-intrusive building space usage monitoring,” Comput.
_Ind., vol. 64, no. 6, pp. 678–693, Aug. 2013._
[7] B. Yuce and Y. Rezgui, “An ANN-GA semantic rule-based system to
reduce the gap between predicted and actual energy consumption in
buildings,” IEEE Trans. Autom. Sci. Eng., vol. 14, no. 3, pp. 1351–1363,
Jul. 2017.
[8] P. H. Shaikh, N. B. M. Nor, P. Nallagownden, I. Elamvazuthi,
and T. Ibrahim, “A review on optimized control systems for
building energy and comfort management of smart sustainable
buildings,” _Renew._ _Sustain._ _Energy_ _Rev.,_ vol. 34, pp. 409–429,
Jun. 2014.
[9] M. Dibley, H. Li, Y. Rezgui, and J. Miles, “An ontology framework
for intelligent sensor-based building monitoring,” Autom. Construct.,
vol. 28, pp. 1–14, Dec. 2012.
[10] S. Howell, Y. Rezgui, and B. Yuce, “Knowledge-based holistic energy
management of public buildings,” in Proc. Int. Conf. Comput. Civil
_[Build. Eng., 2014, pp. 1667–1674, doi: 10.1061/9780784413616.207.](http://dx.doi.org/10.1061/9780784413616.207)_
[11] A. H. Kazmi, M. J. O’grady, D. T. Delaney, A. G. Ruzzelli, and
G. M. P. O’hare, “A review of wireless-sensor-network-enabled building energy management systems,” ACM Trans. Sensor Netw., vol. 10,
no. 4, pp. 1–43, Jun. 2014.
[12] R. V. P. Yerra and P. Rajalakshmi, “Context aware building energy management system with heterogeneous wireless network architecture,” in
_Proc. Wireless Mobile Netw. Conf. (WMNC), Dubai, UAE, Apr. 2013,_
pp. 1–8.
[13] K. Park et al., “Building energy management system based on smart
grid,” in Proc. IEEE 33rd Int. Telecommun. Energy Conf. (INTELEC),
Oct. 2011, pp. 1–4.
[14] IEEE Standards Committee, “Communication technology interoperability,” in IEEE Guide for Smart Grid Interoperability of Energy
_Technology and Information Technology Operation With the Electric_
_Power System (EPS), End-Use Applications and Loads, New York, NY,_
USA: Inst. Elect. Electron. Eng., Sep. 2011, pp. 42–63.
[15] BSI British Standards, _Smart_ _City_ _Concept_ _Model—Guide_ _to_
_Establishing a Model for Data Interoperability. London, U.K.: BSI_
Stand., Oct. 2014.
[16] W. Shang, Q. Ding, A. Marianantoni, J. Burke, and L. Zhang, “Securing
building management systems using named data networking,” IEEE
_Netw., vol. 28, no. 3, pp. 50–56, May/Jun. 2014._
[17] T. E. El-Diraby and J. Zhang, “A semantic framework to support corporate memory management in building construction,” Autom. Construct.,
vol. 15, no. 4, pp. 504–521, Jul. 2006.
[18] H. Wicaksono, K. Aleksandrov, and S. Rogalski, “Knowledgebased intelligent energy management using building automation
system,” in Automation. London, U.K.: InTechOpen, 2012. [Online].
Available: https://www.intechopen.com/books/automation/an-intelligent-system-for-improving-energy-efficiency-in-building-using-ontology
[19] W3C. Semantic Web. Accessed: Dec. 14, 2015. [Online]. Available:
http://www.w3.org/standards/semanticweb/
[20] A. Pal, A. Mukherjee, and B. Purushothaman, “Model-driven development for Internet of Things: Towards easing the concerns of
application developer,” in _Internet_ _of_ _Things._ _User-Centric_ _IoT,_
vol. 150, R. Giaffreda et al., Eds. Cham, Switzerland: Springer, 2015,
pp. 339–346.
[21] W3C. (2015). _Linked_ _Data_ _Platform_ _1.0._ Accessed: Dec. 30,
2015. [Online]. Available: http://www.w3.org/TR/2015/REC-ldp-20
150226/
[22] “Industrial strategy: Government and industry in partnership,”
HM Govt., London, U.K., Govt. Rep. BIS/13/955, 2012.
[23] H. Wicaksono, P. Dobreva, P. Häfner, and S. Rogalski, “Methodology
to develop ontological building information model for energy management system in building operational phase,” in Knowledge Discovery,
_Knowledge Engineering and Knowledge Management Communications_
_in Computer and Information Science, vol. 454. Heidelberg, Germany:_
Springer, Sep. 2015, pp. 168–181.
[24] M. Kadolsky et al., ISES Deliverable D3. 1: Ontology Specification,
ISES, Brussels, Belgium, Jan. 2014.
[25] J. Ploennigs, H. Diblowski, A. Roder, K. Kabitzsch, and B. Hensel,
_HESMOS Deliverable D4.1: Ontology Specification for Model-Based_
_ICT System Integration, HESMOS, Brussels, Belgium, Nov. 2011._
[26] R. Gulbinas, A. Khosrowpour, and J. Taylor, “Segmentation and classification of commercial building occupants by energy-use efficiency and
predictability,” IEEE Trans. Smart Grid, vol. 6, no. 3, pp. 1414–1424,
May 2015.
[27] J. S. Byun and S. Park, “Development of a self-adapting intelligent system for building energy saving and context-aware smart services,” IEEE Trans. Consum. Electron., vol. 57, no. 1, pp. 90–98,
Feb. 2011.
[28] U. Rutishauser, J. Joller, and R. Douglas, “Control and learning of ambience by an intelligent building,” IEEE Trans. Syst., Man, Cybern. A,
_Syst., Humans, vol. 35, no. 1, pp. 121–132, Jan. 2005._
[29] B. Yuce et al., “Utilizing artificial neural network to predict energy
consumption and thermal comfort level: An indoor swimming pool case
study,” Energy Build., vol. 80, pp. 45–56, Sep. 2014.
[30] C. Yang et al., “High throughput computing based distributed genetic
algorithm for building energy consumption optimization,” Energy Build.,
vol. 76, pp. 92–101, Jun. 2014.
[31] H. Grzybek, S. Xu, S. Gulliver, and V. Fillingham “Considering the
feasibility of semantic model design in the built-environment,” Buildings,
vol. 4, no. 4, pp. 849–879, Nov. 2014.
[32] K. K. Rohitha, G. K. Hewawasam, K. Premaratne, and M. L. Shyu,
“Rule mining and classification in a situation assessment application:
A belief-theoretic approach for handling data imperfections,” IEEE
_Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 6, pp. 1446–1459,_
Dec. 2007.
[33] L. Cao, H. Zhang, Y. Zhao, D. Luo, and C. Zhang, “Combined
mining: Discovering informative knowledge in complex data,” IEEE
_Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 3, pp. 699–712,_
Jun. 2011.
[34] J. Casillas, O. Cordon, and F. Herrera, “COR: A methodology to improve
ad hoc data-driven linguistic rule learning methods by inducing cooperation among rules,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 32,
no. 4, pp. 526–537, Aug. 2002.
[35] L. M. Fu, “Rule generation from neural networks,” IEEE Trans.
_Syst.,_ _Man,_ _Cybern.,_ _Syst.,_ vol. 24, no. 8, pp. 1114–1124,
Aug. 1994.
[36] T.-P. Wu and S.-M. Chen, “A new method for constructing membership functions and fuzzy rules from training examples,” IEEE
_Trans. Syst., Man, Cybern. B, Cybern., vol. 29, no. 1, pp. 25–40,_
Feb. 1999.
[37] P. Maji and P. Garai, “Fuzzy–rough simultaneous attribute selection
and feature extraction algorithm,” IEEE Trans. Cybern., vol. 43, no. 4,
pp. 1166–1177, Aug. 2013.
[38] T. Pal and N. R. Pal, “SOGARG: A self-organized genetic algorithm-based rule generation scheme for fuzzy controllers,” IEEE Trans. Evol.
_Comput., vol. 7, no. 4, pp. 397–415, Aug. 2003._
[39] C.-F. Juang and P.-H. Chang, “Designing fuzzy-rule-based systems using
continuous ant-colony optimization,” IEEE Trans. Fuzzy Syst., vol. 18,
no. 1, pp. 138–149, Feb. 2010.
[40] S. Mitra and Y. Hayashi, “Neuro-fuzzy rule generation: Survey in
soft computing framework,” IEEE Trans. Neural Netw., vol. 11, no. 3,
pp. 748–768, May 2000.
[41] E. Lughofer, Evolving Fuzzy Systems—Methodologies, Advanced Concepts and Applications. Heidelberg, Germany: Springer, 2011.
[42] E. Lughofer, “Evolving fuzzy systems—Fundamentals, reliability, interpretability and useability,” in Handbook of Computational Intelligence,
P. P. Angelov, Ed. New York, NY, USA: World Sci., 2016, pp. 67–135.
[43] J. R. Quinlan, C4.5: Programs for Machine Learning. San Francisco,
CA, USA: Morgan Kaufmann, 1993.
[44] W. Heidl, S. Thumfart, E. Lughofer, C. Eitzinger, and E. P. Klement,
“Machine learning based analysis of gender differences in visual
inspection decision making,” _Inf._ _Sci.,_ vol. 224, pp. 62–76,
Mar. 2013.
[45] E. Lughofer et al., “Explaining classifier decisions linguistically for stimulating and improving operators labeling behavior,” Inf. Sci., vol. 420,
pp. 16–36, Dec. 2017.
[46] R. Senge and E. Hüllermeier, “Top-down induction of fuzzy pattern
trees,” IEEE Trans. Fuzzy Syst., vol. 19, no. 2, pp. 241–252, Apr. 2011.
[47] J. Fürnkranz, “Round robin classification,” J. Mach. Learn. Res., vol. 2,
pp. 721–747, Mar. 2002.
[48] B. Soua, A. Borgi, and M. Tagina, “Attributes regrouping in fuzzy rule
based classification systems,” in Proc. IEEE Conf. Signals Circuits Syst.
_(SCS), Nov. 2009, pp. 1–6._
[49] Mona Visa. Mona Visa Dashboard. Accessed: Dec. 30, 2015. [Online].
Available: http://www.monavisa.info/home/dashboard.jsp
[50] G. Anzaldi et al., “KnoHolEM: Knowledge-based energy management
for public building through holistic information modeling and 3-D visualization,” in Proc. 2nd INTERA Conf. Int. Technol. Robot. Appl., 2014,
pp. 47–58.
[51] M. Dankers, F. van Geel, and N. M. Segers, “A Web-platform for linking
IFC to external information during the entire lifecycle of a building,”
_Procedia Environ. Sci., vol. 22, pp. 138–147, Dec. 2014._
[52] K. Krahtov, S. Rogalski, D. Wacker, D. Gutu, and J. Ovtcharova, “A
generic framework for life-cycle-management of heterogenic building
automation system,” in Proc. Flexible Autom. Intell. Manuf. Conf.,
Jul. 2009, pp. 461–468.
[53] P. Häfner, V. Häfner, H. Wicaksono, and J. Ovtcharova, “Semiautomated ontology population from building construction drawing,” in Proc. Knowl. Eng. Ontol. Develop. Conf., Sep. 2013,
pp. 379–386.
[54] K. McGlinn, K. Jones, A. Cochero, A. Gerdelan, and D. Lewis,
“Usability evaluation of an activity modelling tool for improving accuracy of predictive energy simulations,” in Proc. IBPSA Build. Simulat.
_Conf., Dec. 2015, pp. 1798–1805._
[55] S. A. Kalogirou, “Applications of artificial neural-networks for energy
system,” Appl. Energy, vol. 67, nos. 1–2, pp. 17–35, Sep. 2000.
[56] M. W. Ahmad, M. Mourshed, and Y. Rezgui, “Trees vs neurons:
Comparison between random forest and ANN for high-resolution
prediction of building energy consumption,” Energy Build., vol. 147,
no. 15, pp. 77–89, 2017.
[57] K. Amasyali and N. M. El-Gohary, “A review of data-driven building
energy consumption prediction studies,” Renew. Sustain. Energy Rev.,
vol. 81, pp. 1192–1205, Jan. 2018.
[58] L. A. Zadeh, “A theory of approximate reasoning,” in Machine
_Intelligence, vol. 9, J. E. Hayes, D. Mitchie, and L. I. Mikulich, Eds._
New York, NY, USA: Wiley, 1979, pp. 149–194.
[59] R. Orchard, “Fuzzy reasoning in JESS: The FuzzyJ toolkit and
FuzzyJess,” in Proc. 3rd Int. Conf. Enterprise Inf. (ICEIS), Jul. 2001,
pp. 533–542.
[60] J. Casillas, O. Cordón, F. Herrera, and L. Magdalena, Interpretability
_Issues in Fuzzy Modeling. Heidelberg, Germany: Springer-Verlag, 2003._
[61] E. Lughofer, “On-line assurance of interpretability criteria in evolving
fuzzy systems—Achievements, new concepts and open issues,” Inf. Sci.,
vol. 251, pp. 22–46, Dec. 2013.
[62] M. J. Gacto, R. Alcalá, and F. Herrera, “Interpretability of Linguistic
fuzzy rule-based systems: An overview of interpretability measures,” Inf.
_Sci., vol. 181, no. 20, pp. 4340–4360, Oct. 2011._
[63] W. Pedrycz, “Why triangular membership functions?” Fuzzy Sets Syst.,
vol. 64, no. 1, pp. 21–30, 1994.
[64] L.-X. Wang and J. M. Mendel, “Generating fuzzy rules by learning
from examples,” IEEE Trans. Syst., Man, Cybern., vol. 22, no. 6,
pp. 1414–1427, Nov./Dec. 1992.
[65] S.-M. Chen and W.-C. Hsin, “Weighted fuzzy interpolative reasoning
based on the slopes of fuzzy sets and particle swarm optimization
techniques,” IEEE Trans. Cybern., vol. 45, no. 7, pp. 1250–1261,
Jul. 2015.
[66] K. McGlinn, B. Yuce, H. Wicaksono, S. Howell, and Y. Rezgui,
“Usability evaluation of a Web-based tool for supporting holistic building energy management,” Autom. Construct., vol. 84, pp. 154–165,
[Dec. 2017, doi: 10.1016/j.autcon.2017.08.033.](http://dx.doi.org/10.1016/j.autcon.2017.08.033)
**Shaun K. Howell received the Ph.D. degree in**
applied AI and semantic Web systems from Cardiff
University, Cardiff, U.K., in 2017.
He has worked on nine research projects at
national and international levels and produced over
30 publications. He continues his research into
applied AI with Vivacity Labs, London, U.K., as
a Senior Machine Learning Researcher. His current research interests include deep neural networks,
multiagent systems, and semantic Web technologies.
**Hendro Wicaksono received the Dr.-Ing. degree**
from the Karlsruhe Institute of Technology,
Karlsruhe, Germany, in 2016.
He is currently a Professor of industrial engineering with the Jacobs University Bremen, Bremen,
Germany. He was a Researcher with the Institute
of Information Management in Engineering,
Karlsruhe Institute of Technology. He is also a
Visiting Professor with the Faculty of Economics
and Business, Airlangga University, Surabaya,
Indonesia. He has been researching and managing
tens of international research projects in energy-management systems
in buildings, production, and cities using semantic technologies and data
mining. He has published over 40 papers (two nominations for Best Papers).
**Baris Yuce (M’15) received the Ph.D. degree from**
Cardiff University, Cardiff, U.K., in 2012.
He was with the School of Engineering,
Cardiff University from 2013 to 2017. He is
a Lecturer with the University of Exeter, Exeter,
U.K. His current research interests include intelligent systems, such as optimization algorithms, ANN,
fuzzy logic, multiagent systems and their applications on smart buildings, energy systems, robotics,
water management systems, scheduling, and supply
chain management. He has published over 35 papers
in the above fields.
Dr. Yuce is a member of the IEEE Robotics and Automation Society and
IEEE Power and Energy Society.
**Kris McGlinn received the Ph.D. degree from**
Trinity College Dublin, Dublin, Ireland, in 2013.
He is a Research Fellow with Adapt Centre,
Trinity College Dublin. He has been conducting
research for over ten years in knowledge engineering, building automation, and ubiquitous computing.
His research has been in the exploration of BIM and
Linked Data Technologies, to address issues for IT
related to interoperability and integration of building
and building related data models.
Dr. McGlinn is also the Chair of the W3C Linked
Building Data Community Group.
**Yacine Rezgui received the M.Eng. and Ph.D.**
degrees from ENPC, Marne-la-Vallée, France.
He is a Professor of building systems and
informatics with Cardiff University, Cardiff, U.K.,
and the Founding Director of the BRE Centre
in Sustainable Engineering. He conducts research
in informatics, including ontology engineering and
artificial intelligence applied to the built environment. He has over 200 refereed publications in the
above areas and has successfully completed over
40 research projects at a national and international
(European Framework Programs) levels.
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TCYB.2018.2839700?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TCYB.2018.2839700, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://ieeexplore.ieee.org/ielx7/6221036/8732614/08412214.pdf"
}
| 2,019
|
[
"JournalArticle"
] | true
| 2019-09-01T00:00:00
|
[
{
"paperId": "7877ed98e3772881ca3c56f68e4e7877c2109ddd",
"title": "Explaining classifier decisions linguistically for stimulating and improving operators labeling behavior"
},
{
"paperId": "310a37f3628cdc8343215417efc0be75e6b9c968",
"title": "Trees vs Neurons: Comparison between random forest and ANN for high-resolution prediction of building energy consumption"
},
{
"paperId": "d03218a076ad0edbe6b718d47b103032615865fd",
"title": "An ANN-GA Semantic Rule-Based System to Reduce the Gap Between Predicted and Actual Energy Consumption in Buildings"
},
{
"paperId": "866aa2f357087b917ff74a96e2d5d276fcda58e1",
"title": "Weighted Fuzzy Interpolative Reasoning Based on the Slopes of Fuzzy Sets and Particle Swarm Optimization Techniques"
},
{
"paperId": "078cb6e8119560d3241fe96d4c7792e7cb4ea8e0",
"title": "Segmentation and Classification of Commercial Building Occupants by Energy-Use Efficiency and Predictability"
},
{
"paperId": "1b621e197e78c95939d1fad6ecb33a44e99391db",
"title": "Considering the Feasibility of Semantic Model Design in the Built-Environment"
},
{
"paperId": "d94f31b464e8e958e7b5b0d57fb000e199fe312c",
"title": "Model-Driven Development for Internet of Things: Towards Easing the Concerns of Application Developers"
},
{
"paperId": "1a450b0a551627181eeb6b121aa004e876cf007f",
"title": "Utilizing artificial neural network to predict energy consumption and thermal comfort level: an indoor swimming pool case study"
},
{
"paperId": "c8df74f4d80bfbefe21dae2eb4e88f526a4cfbbc",
"title": "Securing building management systems using named data networking"
},
{
"paperId": "f1e80bbc2a75192174848e79304831bad74f1014",
"title": "Knowledge-Based Holistic Energy Management of Public Buildings"
},
{
"paperId": "36f0308f6d6a01ae3fa242e014b6d81cb55bf775",
"title": "Intelligent Management Systems for Energy Efficiency in Buildings"
},
{
"paperId": "d39f488af5a276010adf8b98cbb1eb50cb025b3b",
"title": "A Review of Wireless-Sensor-Network-Enabled Building Energy Management Systems"
},
{
"paperId": "ef1d66762cca4bb8fbdf9eb061afc6105240d6bd",
"title": "A review on optimized control systems for building energy and comfort management of smart sustainable buildings"
},
{
"paperId": "a44e773eb768edd2e366a3281884ab9beb97eb3e",
"title": "High throughput computing based distributed genetic algorithm for building energy consumption optimization"
},
{
"paperId": "151b72dc8400dab7d64f00844f6865e7529fb294",
"title": "On-line assurance of interpretability criteria in evolving fuzzy systems - Achievements, new concepts and open issues"
},
{
"paperId": "88c7f5a3561899c07fb12f577708d051f19bd4f2",
"title": "Methodology to Develop Ontological Building Information Model for Energy Management System in Building Operational Phase"
},
{
"paperId": "f9808269f4b1b5cc3200676b25b5d4c136f1d3bf",
"title": "Fuzzy–Rough Simultaneous Attribute Selection and Feature Extraction Algorithm"
},
{
"paperId": "3cd09dd6f8e37836dd2ec82df23673bf17472bbe",
"title": "Software agent reasoning supporting non-intrusive building space usage monitoring"
},
{
"paperId": "5b296009649feb95006c5e41ab8a6274bce9b9c3",
"title": "Context aware building energy management system with heterogeneous wireless network architecture"
},
{
"paperId": "32f5626f19ac123d65496efb87c99a46776287df",
"title": "Machine learning based analysis of gender differences in visual inspection decision making"
},
{
"paperId": "d741f2da6bc417625a9bc52c73ae210362f2abb2",
"title": "An ontology framework for intelligent sensor-based building monitoring"
},
{
"paperId": "378a002b73ed675b0520254f51601bc38da4b74c",
"title": "Building Energy Management System based on Smart Grid"
},
{
"paperId": "2c04bc1b7c91af0d3dc58afe24f8f5f2381f5423",
"title": "Interpretability of linguistic fuzzy rule-based systems: An overview of interpretability measures"
},
{
"paperId": "8aa22c5e6eb8a96dd4b0434c11c22ab85e634b3c",
"title": "Combined Mining: Discovering Informative Knowledge in Complex Data"
},
{
"paperId": "a4ced72ccb8e926c40455bbc83ca42bee9fee5ed",
"title": "Top-Down Induction of Fuzzy Pattern Trees"
},
{
"paperId": "7c10863a5a34a337e18656d50e1d5b2e8d8861ef",
"title": "Development of a self-adapting intelligent system for building energy saving and context-aware smart services"
},
{
"paperId": "cef7b99ffb893937f62c097ac5d33712e58792f3",
"title": "Evolving Fuzzy Systems - Methodologies, Advanced Concepts and Applications"
},
{
"paperId": "fa37dad7dbb14c796811f501132065f82803b29b",
"title": "Occupancy-driven energy management for smart building automation"
},
{
"paperId": "ca10ca753aff094b91f51785dbe7b387e1c50275",
"title": "Designing Fuzzy-Rule-Based Systems Using Continuous Ant-Colony Optimization"
},
{
"paperId": "472a28054e4aeef8c43c311f8751ab1fbce26743",
"title": "The dark side of occupants’ behaviour on building energy use"
},
{
"paperId": "cd25b17c638f4060a6181fdd0c597159d22b1b35",
"title": "Attributes Regrouping in Fuzzy Rule Based Classification Systems: An Intra-Classes Approach"
},
{
"paperId": "22215c5cf301efb333855a99790f98454d5f2702",
"title": "User behavior in whole building simulation"
},
{
"paperId": "f1642e4ca659559ca027f59e97117bab3b678f5d",
"title": "Rule Mining and Classification in a Situation Assessment Application: A Belief-Theoretic Approach for Handling Data Imperfections"
},
{
"paperId": "3b9cf1f98ee2b4e59a7e94b60b4bbd945908af93",
"title": "A semantic framework to support corporate memory management in building construction"
},
{
"paperId": "eca32c870c03142f078f4a06bc92631522f00c72",
"title": "COR: a methodology to improve ad hoc data-driven linguistic rule learning methods by inducing cooperation among rules"
},
{
"paperId": "ca6fe1cc235db94b651be25d577b5f41158ebf9d",
"title": "Applications of artificial neural-networks for energy systems"
},
{
"paperId": "04ab811930a86ffef0968c8d8555d833364a9af0",
"title": "Neuro-fuzzy rule generation: survey in soft computing framework"
},
{
"paperId": "b03a4786c0d483f52f9791bad4a1eb9bb386cc4d",
"title": "A new method for constructing membership functions and fuzzy rules from training examples"
},
{
"paperId": "936806337a56ddead1e1968479fe4670e7a71236",
"title": "Rule Generation from Neural Networks"
},
{
"paperId": "19725b64efd2fc52bb0603be8eb4945f9e0cc52c",
"title": "Why triangular membership functions"
},
{
"paperId": "12056d523e8445f9dfedf7dc113dfcfcae706ec3",
"title": "Generating fuzzy rules by learning from examples"
},
{
"paperId": "1f9eacccd173e4994b742162fe0ebd452b3742f3",
"title": "A review of data-driven building energy consumption prediction studies"
},
{
"paperId": "71ff9d054e8f28444d6b73761914eed8ccd96317",
"title": "A Web-platform for Linking IFC to External Information during the Entire Lifecycle of a Building"
},
{
"paperId": "0c218d8102b671b8d28d70034be445b107c3dfa4",
"title": "A review on buildings energy consumption information"
},
{
"paperId": "c8e0982a6e83617c518b8e7e4a623b500607cb13",
"title": "Control and learning of ambience by an intelligent building"
},
{
"paperId": "d4f4348760d436f1cf7d1c456ecbdae8c94dcbdb",
"title": "Interpretability issues in fuzzy modeling"
},
{
"paperId": "83d40f5e09b1143464d9c297bdb8c571e9a5ad4c",
"title": "Round Robin Classification"
},
{
"paperId": "ad0be2cf7b8fb4ae7502919f541bec9141941ab6",
"title": "Fuzzy Reasoning in JESS: The Fuzzyj Toolkit and Fuzzyjess"
},
{
"paperId": "7feb0fc888cd55360949554db032d7d1cba9e947",
"title": "Programs for Machine Learning"
},
{
"paperId": "c678c2ca0a22a7b8417ff989954a09273b5d3d82",
"title": "A Theory of Approximate Reasoning"
}
] | 18,411
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/03170b65b4fdd3f113e86fd5b85e86f506b61fbd
|
[
"Computer Science"
] | 0.867862
|
An Effective Method for Combating Malicious Scripts Clickbots
|
03170b65b4fdd3f113e86fd5b85e86f506b61fbd
|
European Symposium on Research in Computer Security
|
[
{
"authorId": "1941400",
"name": "Yanlin Peng"
},
{
"authorId": "2107901585",
"name": "Linfeng Zhang"
},
{
"authorId": "2107318648",
"name": "J. M. Chang"
},
{
"authorId": "144133738",
"name": "Y. Guan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ESORICS",
"Eur Symp Res Comput Secur"
],
"alternate_urls": null,
"id": "0bddd5d7-2897-495a-a961-465abe6e04de",
"issn": null,
"name": "European Symposium on Research in Computer Security",
"type": "conference",
"url": "http://www.wikicfp.com/cfp/program?id=923"
}
| null |
# An Effective Method for Combating Malicious
Scripts Clickbots
Yanlin Peng, Linfeng Zhang, J. Morris Chang, and Yong Guan
Iowa State University, Ames IA 50011, USA
_{kitap,zhanglf,morris,guan}@iastate.edu_
**Abstract. Online advertising has been suffering from a serious click fraud**
problem. Fraudulent publishers can generate false clicks using malicious
scripts embedded in their web pages. Even widely-used security techniques like iframe cannot prevent such an attack. In this paper, we propose a framework and associated methodologies to automatically and
quickly detect and filter false clicks generated by malicious scripts. We
propose to create an impression-click identifier which is able to link corresponding impressions and clicks together with a predefined lifetime.
The impression-click identifiers are stored in a special data structure
and can later be validated when a click is received. The framework has
the nice features of constant-time inserting and querying, a low false positive rate, and a low, quantifiable false negative rate. In our experimental
evaluation on a primitive PC machine, our approach achieves a false
negative rate of 0.00008 using 120 MB of memory, with average inserting and
querying times of 3 and 1 microseconds, respectively.
**Keywords: Online Advertising Networks, Click Fraud, Network Foren-**
sics, Attack Detection.
## 1 Introduction
Recent-year rapid development of the Internet has led to a new, billion-dollar
online advertising market. Using new web technologies, online advertising has
many appealing features. Firstly, online advertising has the capability to target
potential customers more quickly and more accurately than traditional broadcast advertisements, which potentially improves return on investment (ROI).
Besides, direct response from potential customers is available, thus the performance of advertising campaigns can be tracked more easily. Online advertising
also requires much less effort and cost to set up and maintain. Hence, more
and more companies have invested in online advertising campaigns. In 2008, online advertising revenues in the United States totaled $23.4 billion, with a 10.6
percent increase from 2007 [1].
Online advertising typically involves three parties: advertisers, publishers and
syndicators. An advertiser provides advertisement (we use ad for short) information and pays for advertising. A publisher displays ads on her web sites and
gets paid. A syndicator acts as a commissioner who gets ads from advertisers and
M. Backes and P. Ning (Eds.): ESORICS 2009, LNCS 5789, pp. 523–538, 2009.
_⃝c_ Springer-Verlag Berlin Heidelberg 2009
-----
524 Y. Peng et al.
distributes them to publishers, and earns commission fees. Some large publishers, e.g. ESPN.com, have their own advertising system and deal with advertisers
directly. But many small advertisers and small publishers depend on syndicator’s
professional service for advertising and billing.
Advertisers may be charged per thousand displays of ads (pay per mille,
PPM), per click on ads (pay per click, PPC), or per conversional action (pay per
action, PPA). Of course, advertisers would prefer paying according to sales by
using the PPA model, while publishers would prefer paying according to their traffic
load by using the PPM model. As a result of balancing risks between advertisers
and publishers, the PPC model has been the most prevalent payment model in
the online advertising market [2].
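As a quick arithmetic illustration of how the three pricing models charge differently (the rates and volumes below are hypothetical assumptions, not figures from the paper):

```python
# Hypothetical campaign figures -- illustrative assumptions only,
# not values taken from the paper.
impressions = 1_000_000
clicks = 20_000           # 2% click-through rate
conversions = 400         # 2% conversion rate on clicks

cpm_rate = 2.00    # dollars per thousand impressions (PPM)
cpc_rate = 0.50    # dollars per click (PPC)
cpa_rate = 25.00   # dollars per conversion/action (PPA)

cost_ppm = impressions / 1000 * cpm_rate
cost_ppc = clicks * cpc_rate
cost_ppa = conversions * cpa_rate

print(cost_ppm, cost_ppc, cost_ppa)  # 2000.0 10000.0 10000.0
```

Under PPC, every fraudulent click translates directly into advertiser cost, which helps explain why this model in particular attracts click fraud.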
However, the PPC model has been suffering from a serious click fraud problem. Click
fraud is a type of Internet crime that occurs in online advertisement models
when an ad is being clicked for the purpose of generating a charge without
having actual interest in the target of the ad’s link. Typically, two types of motivations are behind click frauds. Malicious advertisers may click on competitors’
ads in order to increase their advertising expense. Since current advertising systems usually use an auction scheme, such an attack may deplete competitors’ daily
advertising budgets and remove them from the competing list. Fraudulent publishers often inflate the number of clicks on ads displaying on their own web sites
in order to get more commissions. A survey indicates that honest Internet advertisers paid $1.3 billion for click fraud in 2006 [3]. The overall industry average
click fraud rate for Q4 2008 is estimated at 17.1% [4]. Because of the large number
of fraudulent clicks, some syndicator companies (e.g. Google and Yahoo!) have
been facing lawsuits recently [5,6]. Hence, preventing click fraud is a critical task
for keeping the online advertising market healthy.
Fraudulent clicks could be generated by different entities using different techniques. Humans, such as cheap laborers, could generate fraudulent clicks manually.
Clickbots [7] can generate a large number of fraudulent clicks automatically and
quickly. A clickbot can be a special program on a virus/Trojan infected computer or a malicious script embedded in a publisher’s web page. The latter one
does not even require breaking into someone’s computers. Whenever an innocent
user visits the web site, the malicious script, which exploits vulnerabilities of online advertising models, is executed in the visitor’s browser and may click ads
automatically and stealthily. An experiment using malicious scripts has been conducted and accumulated thousands of dollars in the publisher’s account [8]. In
this paper, we focus on fraudulent clicks generated by such malicious scripts.
Several existing solutions have the capability to address some types of fraudulent clicks. However, none of them is able to prevent fraudulent clicks generated
by malicious scripts as effectively as the solution proposed in this paper.
Anomaly-based methods are industry-wide solutions to detect fraudulent
clicks by detecting abnormal features in clicking streams. As Tuzhilin, Daswani
et al. discussed in [9, 10, 11], fraudulent clicks, whether committed by human
beings or bots, will show anomalies if enough data are collected. For example,
duplicate click is one well-known anomaly, which indicates that clicks with the
An Effective Method for Combating Malicious Scripts Clickbots 525
same identifier appearing within a short time period are likely to be fraudulent
clicks. Efficient algorithms for detecting duplicate clicks are proposed by Metwally et al. in [12] and Zhang et al. in [13]. In online advertising systems, a
number of such online or offline filters are applied to identify anomalies. These
filters are trade secrets, hence the details are not disclosed. The primary limitation for anomaly-based detection is the data limitation. When too little data
are available, it may be hard to identify anomalies. Another limitation is the
difficulty of distinguishing meaningless (but non-fraudulent) clicks from fraudulent
clicks. That’s why syndicators such as Google claim that they detect invalid
clicks.
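The duplicate-click anomaly can be sketched as a sliding-window filter. This is only a minimal illustration of the idea, not the memory-efficient streaming algorithms of [12] and [13]:

```python
from collections import deque

class DuplicateClickFilter:
    """Flag clicks whose identifier was already seen within `window` seconds.

    Minimal sketch: the structures in [12] and [13] are far more
    memory-efficient for high-rate click streams.
    """

    def __init__(self, window: float):
        self.window = window
        self.seen = {}          # click id -> timestamp of latest sighting
        self.order = deque()    # (timestamp, click id) in arrival order

    def is_duplicate(self, click_id: str, now: float) -> bool:
        # Evict identifiers that have fallen out of the time window.
        while self.order and now - self.order[0][0] > self.window:
            ts, old_id = self.order.popleft()
            if self.seen.get(old_id) == ts:
                del self.seen[old_id]
        dup = click_id in self.seen
        self.seen[click_id] = now
        self.order.append((now, click_id))
        return dup

f = DuplicateClickFilter(window=10.0)
print(f.is_duplicate("ad-42", now=0.0))   # False: first sighting
print(f.is_duplicate("ad-42", now=3.0))   # True: repeat within 10 s
print(f.is_duplicate("ad-42", now=20.0))  # False: outside the window
```

The filter keeps one dictionary entry per identifier still inside the window, so memory tracks the click rate rather than the total click volume.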
Another solution proposed by Juels et al. tries to authenticate valid clicks.
In [14], they propose a credential-based approach to identify premium clicks
(i.e. good clicks) instead of excluding invalid clicks. If a user has performed
legitimate actions (e.g. purchases), the clicks from her browser are marked as
premium clicks and cryptographic credentials are stored in the browsing cache for
authentication. This approach, however, is still subject to the attack presented in
this paper, where click fraud may be committed in a browser used by a legitimate
user. If credentials have been stored due to legitimate actions by that
user, fraudulent clicks will also be identified as premium clicks.
As the carrier of ads, the advertising client must also be secured. Many syndicators, like Google and Yahoo!, wrap their ads in iframes and utilize the same-origin policy to protect their advertising clients
[15,11]. Another approach to protecting the advertising client is to use spiders to visit
publisher’s web sites and try to discover misuse of advertising clients [15]. However, both approaches could be circumvented by malicious publishers, which will
be further discussed in Section 2.
In this paper, we propose a framework and associated methodologies to detect and prevent fraudulent clicks that are generated by malicious scripts embedded in fraudulent publishers’ web sites. We propose to create a one-time
impression-click identifier with a predefined lifetime for each impression. At the
syndicator’s server, the impression-click identifiers are stored in a special data
structure and are later validated against received clicks. Compared to naïve data
structures (e.g. linked list) which result in high costs to store and query items,
the proposed data structure has the characteristics of constant-time query, low
memory space requirement, low false negative, and low false positive. Compared
to general Bloom Filters [17], the proposed data structure has the capability of
automatically deleting the outdated identifiers and those that have been clicked. Thus,
the proposed framework can be used to detect click fraud effectively.
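One way to combine Bloom-filter-like space efficiency with automatic expiry and one-time validation is to rotate generations of counting Bloom filters, as sketched below. This is our own illustration of the idea, not the authors' actual data structure, which differs in detail:

```python
import hashlib

class ExpiringCountingBloom:
    """Counting Bloom filter split into time generations.

    Inserted identifiers expire automatically when their generation
    rotates out, and a validated identifier is removed so it is
    accepted only once. Illustrative sketch only.
    """

    def __init__(self, size=1 << 16, hashes=4, lifetime=600.0, generations=2):
        self.size, self.hashes = size, hashes
        self.slot = lifetime / generations
        self.gens = [dict() for _ in range(generations)]  # index -> count
        self.epoch = 0

    def _indexes(self, ident: str):
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{ident}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def _rotate(self, now: float):
        current = int(now // self.slot)
        while self.epoch < current:
            self.gens.pop(0)            # oldest generation expires wholesale
            self.gens.append(dict())
            self.epoch += 1

    def insert(self, ident: str, now: float):
        self._rotate(now)
        g = self.gens[-1]               # new identifiers go in the newest generation
        for ix in self._indexes(ident):
            g[ix] = g.get(ix, 0) + 1

    def validate_and_remove(self, ident: str, now: float) -> bool:
        self._rotate(now)
        for g in self.gens:
            if all(g.get(ix, 0) > 0 for ix in self._indexes(ident)):
                for ix in self._indexes(ident):
                    g[ix] -= 1          # one-time use: a second click fails
                return True
        return False

f = ExpiringCountingBloom(lifetime=600.0)
f.insert("imp-123", now=0.0)
print(f.validate_and_remove("imp-123", now=5.0))  # True: first click accepted
print(f.validate_and_remove("imp-123", now=6.0))  # False: identifier already used
```

Rotating whole generations gives coarse-grained expiry (an identifier survives between one and two slot lengths here) at constant cost per operation, while the counters allow individual removals, which plain Bloom filters cannot do.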
Click fraud detection may be performed using online or offline filters [9]. However, offline detection is often used to catch sophisticated click frauds that appear only after some sort of data integration and are hard to detect at runtime. In contrast, simple and fast detections are preferable as online filters that weed out invalid clicks quickly. Since the framework proposed in this paper can be executed efficiently, we propose to apply the detection method presented in this paper at runtime. Using a primitive PC machine
to process 3,328,587 impressions and 277,633 clicks, our approach achieved a
false negative rate of 0.00008, an average of 3 microseconds for inserting an identifier,
and an average of 1 microsecond for validating an identifier.
Contributions of this research: (1) We propose a framework which has the
capability to correlate genuine impressions and clicks, thus preventing the fraudulent clicks that are generated by malicious scripts embedded in publishers’ web
pages. (2) The proposed framework has the capability of automatically deleting
the outdated identifiers and the identifiers that have been clicked. (3) The proposed framework can achieve constant processing time, a low false negative rate, and
a low false positive rate.
Note that the solution proposed in this paper is not meant to be a complete
solution for all types of click fraud. Rather, it provides client-side and server-side methods to prevent a type of click fraud that is committed by sophisticated
malicious scripts in publisher’s web pages. This solution can be seamlessly combined with other click fraud detection methods to provide better protection.
In this paper, we discuss only the scenario in which the publisher and the syndicator
are from different origins. In the case that they are from the same origin, the
publisher has no motivation to exploit the advertising clients.
The rest of the paper is organized as follows. We describe malicious-script-generated click fraud and define the problem in Section 2. In Section 3, we
propose and analyze a framework to address the problem. Experimental results
are discussed in Section 4. We conclude our paper in Section 5.
## 2 Problem Definition
In this section, we first present a general framework of online advertising. Then,
we discuss how malicious scripts can be used to launch click fraud attacks even
though iframe has been used. At the end of the section, we specify the objectives
of this research.
**2.1** **A Framework for Advertising Networks**
In general, a typical advertising network involves three parties: advertisers, syndicators, and publishers. A visitor interacts with all of them. A visitor is an information consumer who visits web sites via a browser and may click on ads of
interest. In ad networks, visitors are the targets of advertising, and visitors’ browsers
transfer ad handling messages between publishers, syndicators and advertisers.
Figure 1 shows a typical ad network working process (we call it Ad Handling
_Process) consisting of ten steps. In the following description, we assume that_
ads are wrapped with iframes, which is a widely-adopted security technique to
protect advertising clients. We provide a pseudo form of the messages that
are exchanged between the visitor V, the publisher P, the syndicator S and the
advertiser A at each step, and provide a corresponding brief description. In the
description, HTTPreq denotes an HTTP request and HTTPresp denotes an
HTTP response.
[Figure 1 diagram: the Publisher, Syndicator, Advertiser, and the visitor’s Browser exchange messages — 1-2: get a web page; 3-4: get the show-ad code; 5: impression request; 6: impression response; 7: clicking request; 8: clicking response; 9-10: get the landing page.]
**Fig. 1. A framework for advertising networks**
Step 1: V → P : HTTPreq{URLpub}. A visitor requests a publisher’s web page at URLpub via her browser.
Step 2: P → V : HTTPresp{Pagepub, Codeconf}. The publisher’s web server sends back the content of the web page Pagepub, with the embedded config-ad code Codeconf. We call Pagepub a referring page, since it may refer the visitor to an advertiser’s web site. The config-ad code contains configuration information about the publisher and a link URLshow to a show-ad code on the syndicator’s server.
Step 3: V → S : HTTPreq{URLshow}. The visitor’s browser requests the show-ad code from the syndicator’s server at URLshow.
Step 4: S → V : HTTPresp{Codeshow}. The syndicator’s server returns the show-ad code Codeshow, a snippet of script code, whose primary task is to construct an iframe which points to the real ad page URLimp. For example, URLimp may look like http://syndicator.com/ads?client=publisher-id&referrer=http://publisher.com/. The iframe may look like <iframe src="URLimp" id="ads_frame"></iframe>.
Step 5: V → S : HTTPreq{URLimp}. The visitor’s browser sends an HTTP request for the ad page to the syndicator’s server at URLimp (impression request).
Step 6: S → V : HTTPresp{Pageimp}. The syndicator’s server composes and returns an HTML document (impression response). The HTML document contains the descriptions and links for ads.
Step 7: V → S : HTTPreq{URLclick}. If the visitor clicks an ad, an HTTP request is sent to the syndicator’s server at URLclick (click request). The important parameters, such as the publisher’s client ID and the URL of the advertiser’s landing page, are embedded as parameters of URLclick. For example, URLclick may look like http://syndicator.com/click?client=publisher-id&adurl=http://advertiser.com/&referrer=http://publisher.com/.
Step 8: S → V : HTTPresp{URLad}. The syndicator validates the click. If valid, the syndicator charges the advertiser and pays the publisher. Otherwise, the advertiser is not charged for an invalid click. For both validation results, the same HTTP response containing the URL of the advertiser’s landing page URLad will be sent back (click response). The
528 Y. Peng et al.
[Figure 2 depicts the framework for malicious-script-generating click fraud: the publisher's crawler program fetches ad copies and stores them into an ad pool on the publisher's server. Step 1: the visitor's browser gets a web page; step 2: the publisher returns the web page with ad copies and a malicious code {Pagepub, Adspub, Codemalicious}; step 7: auto-clicking request; step 8: clicking response; steps 9-10: get the landing page.]

**Fig. 2. A framework for malicious-script-generating click fraud**
syndicator purposely makes no difference between valid and invalid responses, to prevent attackers from probing the click validation scheme.
_Step 9: V → A : HTTPreq{URLad}._ Following the response in Step 8, the
visitor's browser sends an HTTP request to the advertiser's server at URLad.
_Step 10: A → V : HTTPresp{Pagead}._ The advertiser's server returns the landing page.
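The URL formats in Steps 5 and 7 above can be sketched in a few lines of Python. This is an illustration only: the hostnames and parameter names are taken from the examples in the text and are not a specification of any real syndicator's API.

```python
# Illustrative sketch of composing URLimp (Step 5) and URLclick (Step 7).
# Hostnames and parameter names follow the examples in the text above.
from urllib.parse import urlencode

def build_impression_url(client_id: str, referrer: str) -> str:
    """Compose URLimp, the ad page URL requested in Step 5."""
    params = urlencode({"client": client_id, "referrer": referrer})
    return "http://syndicator.com/ads?" + params

def build_click_url(client_id: str, landing_url: str, referrer: str) -> str:
    """Compose URLclick, carrying the client ID and landing page (Step 7)."""
    params = urlencode({"client": client_id,
                        "adurl": landing_url,
                        "referrer": referrer})
    return "http://syndicator.com/click?" + params
```

Note that `urlencode` percent-encodes the embedded URLs, so the landing page appears as `adurl=http%3A%2F%2Fadvertiser.com%2F` in the click request.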
**2.2** **Threat Model**
Many syndicators use iframes to wrap and protect their advertising clients
[15, 11]. With an iframe, the same-origin policy, which is enforced in all modern
browsers, prevents a script from one origin from reading or changing web
content from a different origin. An origin is defined by the protocol, port, and
host fields of a URL [18]. Since the publisher's web site and the syndicator's
web server have different origins, scripts on the publisher's web site cannot click ads in the iframe. However, the same-origin policy can be circumvented.
In this section, we present an attack that circumvents the same-origin policy; such
an attack has been proved effective by Think Digit Magazine [8].
This type of attack is launched by fraudulent publishers. As shown in Figure 2,
before launching attacks the publisher uses a crawler program to visit her own
web site and download ads. The publisher may run this program iteratively,
storing all available ad copies into an ad pool on her web server, and is then ready
to attack. Compared with the typical ad-handling process in Figure 1, the attack
differs in the following steps:
_Step 2:_ After receiving an HTTP request from a visitor, the publisher's server
returns HTTPresp{Pagepub, Adspub, Codemal}, where the additional Adspub
are ad copies selected from the publisher's ad pool and the additional Codemal
is a malicious script that generates automatic clicks on Adspub. Note that
the Codeconf of the normal process is missing.
_Steps 3-6:_ These steps are skipped because Codeconf is missing.
_Step 7:_ The malicious code generates an automatic click on an ad copy in
Adspub. Note that the ad copies in Adspub and the malicious code are
from the same origin (the publisher); hence the automatic false click will
be generated successfully.
There is an obvious shortcoming of this attack: Steps 3-6 of the normal ad-handling process are missing. Hence the syndicator's server can detect the attack
simply by checking whether a corresponding impression request was received before a click request. A smarter script will download the genuine ads and serve
fake ads at the same time. Specifically, the config-ad code Codeconf is still embedded in the response of Step 2, so every click appears to have a corresponding
impression. Without specially designed mechanisms, it is hard for the syndicator
to distinguish such false clicks from genuine clicks.
The smarter script faces a challenge: it must guess which ads the syndicator
returned in the iframe, so that it can click the same copy from the publisher's ad
pool. The challenge arises because the publisher's script cannot read the content
of the iframe, and the ads in the iframe are often selected dynamically by
the auction scheme. However, the smarter script still has a good chance of guessing
correctly using special techniques, such as carefully reducing the number of
available ads for a web page or sending multiple impression requests per visit.
**2.3** **Naïve Solutions**
The PPA model could be used to address the general click fraud problem. However, the
PPA model is less preferred by publishers, since each display of ads increases the
traffic load on the publisher's web site, and publishers bear the risk that visitors
do not convert on their web sites.
A syndicator can also place a nonce into browser cookies each time an ad is
requested, then check that nonce when a click request is received. The problem
with this solution is that users may not click ads right away, and browser cookies
may be deleted before the ads are clicked. For example, Firefox has an option to let a
user delete cookies when closing the browser. Thus, a valid click may be sent
without a cookie. Deleting cookies is not unusual among users: a study of 2,337
users found that 10 percent of them delete cookies daily, and more delete
cookies over longer regular intervals [19]. If a click without a cookie is
not counted as valid, publishers are not fairly paid.
Another possible solution is to encode a time window, an IP address (or a
cookie), and other related information into the click URL, using a secret key
known only to the syndicator [20]. The encoded information is checked when a
click request is received. The problem is similar: users may change IP addresses
or delete cookies before clicking ads, so validation of the encoded information will
fail. There are many scenarios in which an IP address changes, such
as a user in a DHCP domain or roaming to a different subnet. If we classify
those clicks as invalid, publishers are not fairly paid.
A syndicator may employ human investigators to check publishers' web sites for
misuse of advertising clients and for malicious scripts. If malicious scripts that
manipulate ads are found, the publisher's account is suspended. However,
manual investigation cannot monitor all publishers' web sites when the
publisher network becomes large. Hence, automatic spidering programs are often
used to investigate publishers' web sites. However, a cloaking-type attack can circumvent the spidering investigation effectively [15]. In this attack, the publisher
serves a bad version with malicious scripts to normal visitors and a good version with benign scripts to the advertising system's spiders. A hidden forwarder
is used to distinguish normal users from investigation spiders: the forwarder's
URL is distributed to normal visitors via methods such as spam email, and it
redirects them to the real publisher's web site. The publisher checks the referer field in the
HTTP header to distinguish normal visitors from investigation spiders; for visits
whose referrer is not the hidden forwarder, the good version is returned.
Smart investigation spiders or honeyclients may eventually be able to obtain the bad version
with the malicious script. However, the challenge remains to discover the
malicious intention of the script under all kinds of obfuscation techniques [16].
_Objective of the Research._ By analyzing the threat model and the naïve solutions,
we realize that embedding a nonce into the click URL and then validating the
nonce is a viable solution. However, considering the millions or billions of impression
requests received by a large syndicator like Google, querying and validating
nonces is not an easy task. In this research, our goal is to develop effective
solutions against malicious-script-based click fraud attacks that can (1)
distinguish false clicks generated by malicious scripts from clicks generated by
authentic advertising clients; (2) resist replay attacks; (3) run efficiently enough
to be deployed on heavily loaded advertising-system servers; and (4) achieve a
low false positive rate and a low, quantifiable false negative rate.
## 3 The Proposed Approach
We propose a framework to combat malicious-script-generated click fraud. The
proposed framework assumes that iframes are already used to enforce the same-origin policy. For simplicity, we assume that the proposed framework runs on the
syndicator's server, but it can also run on an advertiser's or a third party's server.
On the syndicator's server, we propose to add four operations: _creating_,
_storing_, _validating_, and _deleting_ impression-click identifiers, where an impression-click identifier is a one-time identifier assigned to an impression and
the subsequent clicks on it. After an impression request is received, the creating
operation generates an impression-click identifier and embeds it into
the ad links returned to the visitor. After being created, the identifier is stored
in a special data structure for later validation. The data structure used in this
framework can serve every query in constant time with low false negative
and false positive rates, which is crucial for processing the billions of ad-click requests received every day. The data structure
also has new properties for handling time-based sliding windows and remembering
clicked impressions. After a click request is received, the validating operation
validates the click: if the impression-click identifier of the click is
missing, cannot be found, or has expired, the click is classified as invalid;
otherwise, it is classified as valid. The deleting operation is executed periodically
to delete expired impression-click identifiers.
The proposed framework modifies the ad-handling process only by adding the
creating and storing operations between Step 5 and Step 6 and the validating operation between Step 7 and Step 8. The deleting operation is executed periodically
on the syndicator's server. The modification requires only small changes on the
syndicator's server and no changes for the other involved parties (visitors, publishers, advertisers); hence it is easy to implement and deploy.
**3.1** **Definition and Terminology**
Here we present the definitions and terms used in this paper.
**Definition 1.** _An impression-click identifier is assigned to each authentic impression and the authentic clicks on it. We define the impression-click identifier
as a one-time identification vector ⟨IDpub, URLR, IPv, S⟩, where IDpub is the
publisher's ID, URLR is the URL of the referring page (described in Step 2 of
Figure 1) which displays the ad content generated by the syndicator, IPv is the
visitor's IP address, and S is a one-time random identifier generated by a cryptographically secure pseudo-random number generator._
**Definition 2.** _The lifetime of an impression-click identifier is defined as a time
period T. If an ad impression is not clicked within T, the syndicator should
expect to receive no more meaningful clicks on that impression._
**Definition 3.** _A time-based sliding window is defined as a window which contains the impression-click identifiers that have arrived in the last T time units.
For any time t, the impression-click identifiers that arrived within (t − T, t] are
valid, while all identifiers arriving before t − T are expired (i.e., invalid)._
**Definition 4.** _A timestamp used in our framework is defined as a finite,
wraparound integer that is associated with a time point. The timestamp starts at
0 and is increased by 1 at each new time point (clock tick). When the timestamp
reaches the wraparound value W, it returns to 0. Hence, a timestamp is an integer in [0, W − 1]. We assume that a sliding window of length T contains
N time points. Then W must satisfy W ≥ N._
**Definition 5.** _An active timestamp in our framework is defined as a timestamp
which is less than N older than the current timestamp. Let ts denote the current
timestamp and ts′ the timestamp to be checked. If (ts − ts′) mod W < N,
ts′ is active. Similarly, an expired timestamp is defined as a timestamp which
is at least N older than the current timestamp: if (ts − ts′) mod W ≥ N, ts′ is expired._
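Definition 5's wraparound test can be sketched in a few lines; `W` and `N` are the wraparound value and window length from Definition 4. This is an illustration, not code from the paper.

```python
# Active/expired check for wraparound timestamps (Definition 5).
# Requires W >= N, as stated in Definition 4.
def is_active(ts: int, ts_prime: int, W: int, N: int) -> bool:
    """True if ts_prime lies within the last N time points of ts."""
    return (ts - ts_prime) % W < N
```

The modulo handles the wraparound case where `ts` has already cycled past 0 while `ts_prime` has not.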
**3.2** **Creating Impression-Click Identifiers**
When a syndicator's server receives an impression request, an impression-click
identifier is created. IDpub, URLR, and IPv are first extracted from the HTTP
header and the IP header. Then the syndicator's server generates a one-time
random identifier S, for example a random number produced by a
cryptographically secure random number generator. The syndicator's server has now
constructed an impression-click identifier ⟨IDpub, URLR, IPv, S⟩. The random
number S is embedded into the ad links of the ad page that is returned to the
visitor. In a legitimate clicking scenario, the ad link will be clicked by the same
visitor on the same web page, hence S will be sent back to the syndicator with the
same IDpub, URLR, and IPv as the corresponding impression request. A valid
click request must therefore carry the same impression-click identifier as the corresponding
impression request. In this way, we tie an impression and the subsequent valid
clicks on it together.
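The creating operation can be sketched as follows; the `secrets` module plays the role of the paper's cryptographically secure pseudo-random number generator, and the field names mirror Definition 1. The 128-bit length of S is our own assumption, not specified in the paper.

```python
import secrets

# Sketch of the creating operation (Sec. 3.2): build the one-time
# vector <IDpub, URL_R, IPv, S> for an incoming impression request.
def create_identifier(id_pub: str, url_r: str, ip_v: str):
    s = secrets.token_hex(16)  # assumed 128-bit one-time identifier S
    return (id_pub, url_r, ip_v, s)
```

Two impressions from the same visitor on the same page still receive distinct identifiers, because S is freshly generated for each impression request.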
**3.3** **Storing Impression-Click Identifiers**

After the impression-click identifier is created, it must be stored for later validation. In a large ad network, it is a challenge to store and validate
impression-click identifiers efficiently, because billions of impression and click
requests may be received each day. We propose to use a special data structure
to accomplish these tasks, together with a time-based sliding window
to maintain the active and expired statuses of impression-click identifiers.

The data structure is represented as an array of m entries P[0], P[1], ..., P[m − 1],
where each entry contains an E-bit integer (called a timestamp-integer and denoted E[i]) and a bit (called a click-bit and denoted B[i]), where
E = ⌈log2(N + C + 1)⌉; the parameters N and C are described below. All timestamp-integers are initialized to the invalid timestamp (all 1s) and all click-bits are initialized
to 1. The data structure also has k hash functions which assist the inserting
and querying operations.
Our framework uses a sliding window to hold the items that arrived within the
last T time period. The period contains N timestamps. We let the wraparound
value W for the timestamps equal N + C, where C ≥ 0 is a parameter
that adjusts the overhead of the deleting operation and is explained further
when we present that operation. Put simply, the array may hold
N + C different timestamps, and the sliding window contains the N most recent
ones. The timestamps in the sliding window are active, and those outside
the sliding window are expired. A timestamp-integer of the data structure must
be able to represent one invalid timestamp (all 1s) and N + C active or expired timestamps.
Hence, a timestamp-integer must have at least E = ⌈log2(N + C + 1)⌉ bits.

Assume that the impression request arrives at time t, with corresponding
timestamp ts ∈ [0, N + C − 1]. To store an impression-click identifier, the syndicator's server hashes the impression-click identifier ID with the k hash functions and
obtains k hash results hi(ID) (1 ≤ i ≤ k). The corresponding k timestamp-integers,
whose indices equal the k hash results, are set to the current timestamp
ts, and the corresponding k click-bits are set to 1.
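A minimal sketch of the array and its inserting operation might look like the following. The k hash functions are illustrative stand-ins built from salted SHA-256 (the paper does not prescribe particular hash functions), and `INVALID = -1` stands in for the all-1s invalid timestamp.

```python
import hashlib

class ImpressionClickStore:
    """Sketch of the m-entry array of (timestamp-integer, click-bit)
    pairs from Sec. 3.3, with k illustrative salted-SHA-256 hashes."""
    INVALID = -1  # stands in for the all-1s invalid timestamp

    def __init__(self, m: int, N: int, C: int, k: int):
        self.m, self.N, self.W, self.k = m, N, N + C, k
        self.ts_int = [self.INVALID] * m  # timestamp-integers E[i]
        self.click_bit = [1] * m          # click-bits B[i]

    def hashes(self, ident) -> list:
        # k hash results h_i(ident), each reduced modulo m
        key = repr(ident).encode()
        return [int.from_bytes(
                    hashlib.sha256(bytes([i]) + key).digest(), "big") % self.m
                for i in range(self.k)]

    def insert(self, ident, ts: int) -> None:
        # Storing operation: stamp the k hashed entries with the
        # current timestamp and set their click-bits to 1.
        for idx in self.hashes(ident):
            self.ts_int[idx] = ts % self.W
            self.click_bit[idx] = 1
```

The structure is essentially a Bloom-filter-like array in which each cell carries a timestamp instead of a single bit, which is what allows expiration without rebuilding the filter.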
**3.4** **Validating Impression-Click Identifiers**

When a click request is received, the syndicator's server validates the impression-click identifier of the request. The server tries to extract IDpub,
URLR, IPv, and S from the HTTP header and the IP header. If S is missing, the
click is marked as invalid immediately. Otherwise, we construct an impression-click identifier ID = ⟨IDpub, URLR, IPv, S⟩ for the click request.

Then, the syndicator's server queries ID in the data structure. Assume that
the click request arrives at time t, with corresponding timestamp ts ∈ [0, N + C − 1]. The syndicator's server hashes ID with the k hash functions and checks the k corresponding entries E[hi(ID)] and B[hi(ID)]. If any of the k timestamp-integers
is invalid (all 1s) or expired ((ts − E[hi(ID)]) mod (N + C) ≥ N), the corresponding impression request was undoubtedly never received or has already
expired. If all of the k click-bits are 0, the corresponding impression has
been clicked with very high probability. In either case, the click is classified as
_invalid_. Otherwise, the click is classified as _valid_.
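The validating decision can be sketched as a function over the two arrays. The indices are assumed to come from the k hash functions, `INVALID` stands in for the all-1s value, and the final step of clearing the click-bits to 0 reflects the paper's remark (Sec. 3.6) that an identifier is invalidated once it has been clicked; the paper does not spell out that step here, so treat it as our reading.

```python
INVALID = -1  # stands in for the all-1s invalid timestamp

def validate_click(ts_int, click_bit, indices, ts, N, C):
    """Sketch of the validating operation (Sec. 3.4):
    classify a click as valid (True) or invalid (False)."""
    W = N + C
    for i in indices:
        t = ts_int[i]
        if t == INVALID or (ts - t) % W >= N:
            return False  # impression never received, or expired
    if all(click_bit[i] == 0 for i in indices):
        return False      # impression already clicked (replay)
    for i in indices:     # remember the click to limit replays to one
        click_bit[i] = 0
    return True
```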
**3.5** **Deleting Expired Impression-Click Identifiers**

The deleting operation first runs at the beginning of the (N + 1)th time
point (when the timestamp is N), and is then invoked once at the beginning of each
successive time point (after the timestamp is updated). Each invocation
scans ⌈m/(C + 1)⌉ consecutive entries. If an entry contains an expired timestamp, the
timestamp-integer is reset to invalid (all 1s) and the click-bit is reset to 1.

We denote the starting entry of a deleting operation as P[i] and the ending
entry as P[j]. The first deleting operation starts from the head of the array and
has P[i] = P[0]. Each subsequent operation starts from the entry after the last
scanned entry and has P[i] = P[(j + 1) mod m]. Whenever the operation
reaches the end of the array, it wraps around to the head P[0].

The proposed framework uses the parameter C to adjust the number of entries
scanned by a deleting operation. If C = 0, the whole array is scanned at
the beginning of each time point and the expired timestamps are cleaned; each
operation must scan m entries. With C > 0, the number of entries scanned
per deleting operation is reduced to ⌈m/(C + 1)⌉. For example, when C = 1,
only half of the entries are scanned in each deleting operation.

Compared with the traditional sliding-window technique, our framework delays the deletion of an expired timestamp by at most C time points. The benefit
is a reduced number of scanned entries, and thus a lower running-time overhead,
per deleting operation. Note that the wraparound value is N + C, while a
sliding window contains only N timestamps. Hence, the expired timestamps that
have not yet been cleaned are temporarily stored in the array. If a validating operation reads an expired timestamp, it immediately recognizes the expiration,
hence introduces no error.

The analysis of the false negative and false positive rates is straightforward; to save space,
we omit the proofs here. More details can be found in [21].
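The incremental deleting sweep can be sketched as follows; each call scans ⌈m/(C+1)⌉ consecutive entries starting where the previous call stopped, wrapping around the array. This is an illustration of Sec. 3.5, with `INVALID` standing in for the all-1s timestamp.

```python
import math

INVALID = -1  # stands in for the all-1s invalid timestamp

def delete_expired(ts_int, click_bit, start, ts, N, C):
    """One deleting invocation (Sec. 3.5); returns the next start index."""
    m, W = len(ts_int), N + C
    span = math.ceil(m / (C + 1))  # entries scanned per invocation
    for off in range(span):
        i = (start + off) % m
        t = ts_int[i]
        if t != INVALID and (ts - t) % W >= N:  # expired timestamp
            ts_int[i] = INVALID
            click_bit[i] = 1
    return (start + span) % m
```

With C = 0 the span is the whole array, matching the paper's observation that every time point then triggers a full scan.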
**3.6** **Security Analysis**
**Effectiveness of the Proposed Approach Against Script-Generating Click Fraud Attacks.** In our proposed framework, we use a special data
structure to validate the impression-click identifier sent along with the visitor's
ad click request. The unique feature of this approach is its very low
false positive rate: we are unlikely to classify a valid impression-click identifier
as "invalid". Theoretical analysis also shows that the false negative rate can be held to a low, acceptable value by carefully choosing
the system parameters, such as the size of the space and the number of hash
functions. This means the probability that an invalid impression-click identifier is accepted as "valid" can be kept low enough to be acceptable
to the advertisers and the other online-advertising business parties. Under our
proposed framework, there is only one way for an attack to succeed:
correctly guessing and generating an active and valid impression-click identifier.
However, this is practically infeasible, since it is hard for a malicious script
to read the impression-click identifier embedded in an iframe without more sophisticated attacks, and we use a cryptographically secure pseudo-random number
generator to generate the impression-click identifiers.
**Effectiveness of the Proposed Approach Against Cross-Site Scripting Attacks.** Cross-site scripting, a popular attack on web applications, cannot
work under our proposed framework. Such an attack requires that malicious
scripts be injected into the ad page that is generated by the syndicator and
viewed by visitors; by doing so, the malicious script, and the malicious publisher behind it, could gain access to the impression-click identifiers. But this is
infeasible, because the syndicator does not accept input from the publishers for
inclusion in the ad page. Hence malicious scripts cannot be injected into
the ad page: the syndicator would not do it, and no other party could.
**Effectiveness of the Proposed Approach Against Replay Attacks.** If an
attacker is able to sniff or retrieve the impression-click identifiers embedded
in an iframe, it is possible to replay the identifiers and generate false clicks.
However, the proposed framework invalidates an identifier once it has been clicked. Hence,
a replay attack on each identifier succeeds at most once.
**Effectiveness of the Proposed Approach Against Man-in-the-Middle Attacks.** It is possible to launch a sophisticated man-in-the-middle attack that
intercepts valid impression-click identifiers, so that a malicious publisher could
generate automatic ad clicks with the intercepted valid
impression-click identifiers. But a very simple defense is effective
against such man-in-the-middle attacks: use HTTPS instead of
HTTP. With HTTPS, the man-in-the-middle attacker can no longer read valid impression-click identifiers from the ad page sent by the syndicator.
**Limitation of the Proposed Approach.** Although the proposed framework
can effectively prevent fraudulent clicks generated by malicious scripts, it
addresses only this type of click fraud. The framework cannot
prevent click fraud generated by humans or by bot machines.
## 4 Experimental Evaluation
We evaluate the performance of the proposed framework using two data sets:
an HTTP data set and a synthetic data set. The HTTP data set is transformed
from a publicly available HTTP traffic data set[1] covering 2 weeks in 1995,
and contains 3,326,797 impression requests and 277,633 click requests. The
synthetic data set is generated by us according to general rules of web traffic
and ad clicks, and contains 20,971,520 impression requests and 2,023,813 click
requests. Although real click data are not available for evaluation, these two
data sets can still test the performance of the proposed framework: the
HTTP data set captures the characteristics of real web traffic, while the synthetic
data set contains far more data, which tests the scalability of the framework.
**4.1** **Experimental Setup**
The original HTTP data set contains a total of 3,328,587 HTTP requests. Each
HTTP request records the host that made the request, the time the request was
received, and other information. We transform each HTTP request in the data
set into one impression request. The impression-click identifier of an impression
request simply consists of the host of the request and a random number; this
simplification does not affect the evaluation of performance. The arrival
time of the impression request is the same as that of the HTTP request. In total,
we have 3,326,797 impression requests after removing out-of-order requests.

We generate clicks using a typical click-through rate of 0.1. That is, each
impression request generated above has probability 0.1 of producing a click
request. All clicks are generated as invalid clicks. Hence, in our evaluation, the false negative rate is approximated by the fraction of total clicks that
are classified as valid. To evaluate the capability of the proposed
framework to handle different fraudulent clicks, we purposely generate
three types of invalid clicks. The first type has the same identifiers as impressions, but arrives more than T time later (i.e., expired). The second type
is generated with invalid hosts (but not expired). The third type
is generated with different random numbers (also not expired).
Different fractions of the three types of invalid clicks have no detectable
impact on the evaluation results; in the following, the fractions of the
three types of invalid clicks are 0.2, 0.3, and 0.5.

We run our evaluations on a PC with a 3 GHz Pentium 4 CPU and 1 GB of
memory. The other parameters for the HTTP data set are: T = 1 week (604,800
seconds), N = C = 604,800, and E = 32 bits.
The synthetic data set is generated as follows. We generate impression requests that arrive at random time intervals. Click requests are generated
by the same method used for the HTTP data set. The data set
contains 20,971,520 impression requests and 2,023,813 click requests in total.
The other parameters for the synthetic data set are: T = 4 months,
N = C = 1,048,576, and E = 32 bits.

1 ClarkNet HTTP traffic, http://ita.ee.lbl.gov/html/contrib/ClarkNet-HTTP.html

**Fig. 3. False negative rate vs. space usage and number of hash functions**
**4.2** **Experimental Results**
We have evaluated the proposed framework using both the HTTP data set and
the synthetic data set. Since their results are similar, we show and discuss the
results for the HTTP data set only; more details about the synthetic results can be
found in [21].

First, we evaluate the false negative rates for different space usages and
numbers of hash functions. The result is shown in Figure 3; the surface is shaped
like a gorge, whose bottom marks the minimum values under each specific
space usage. For a given number of hash functions, the false negative rate
decreases as the space usage increases, because a larger memory reduces
collisions between the hash results, and hence the false negative rate.
In this experiment, we achieve a false negative rate as low as 0.00008 when
the space usage is 120 MB and k = 13.
The second experiment evaluates the time used by an inserting operation.
In Figure 4(a), we observe that the inserting time increases linearly with k. We
also observe that the inserting time is similar for different memory sizes, i.e.,
the inserting overhead is almost unaffected by the space usage. With
120 MB of memory and 13 hash functions, the average inserting time is
less than 3 microseconds.
The third experiment evaluates the time used by a querying operation.
In Figure 4(b), we observe an interesting result: the querying time increases
non-linearly while k is small, and then increases linearly once k is large enough.
The reason for this observation is as follows. When k is relatively small, active
timestamps occupy a small portion of the entries. A querying operation is likely to
meet an invalid or expired timestamp before checking all k entries, and stops early. A
small increase of k causes a large increase in active timestamps; hence many
more entries have to be checked, and the querying time grows at an exponential-like speed. When k is large enough, most of the entries are occupied by active
timestamps, so a querying operation has to check almost all k entries and
the querying time increases linearly with k. We also observe that a querying
operation costs less time when a larger memory is used: with a larger space,
more entries hold invalid or expired timestamps, so a query operation checks
fewer entries on average. With 120 MB of memory and 13 hash functions,
the average querying time is less than 1 microsecond.

**Fig. 4. Average inserting or querying time vs. number of hash functions: (a) average inserting time; (b) average querying time**
## 5 Conclusions
In this paper, we propose an effective solution to validate and filter fraudulent clicks
generated by malicious scripts from fraudulent publishers. We propose a set of
operations that create a one-time impression-click identifier for each ad
impression request and validate it later. Our proposed solution is proved
to achieve constant-time inserting and querying, a low false positive
rate, and a low, quantifiable false negative rate.
## Acknowledgments
This work was partially supported by NSF under grants No. CNS-0644238, CNS-0626822, and CNS-0831470. We thank the anonymous reviewers for their valuable suggestions and comments.
## References
1. PricewaterhouseCoopers: IAB Internet Advertising Revenue Report, 2008 full-year results, http://www.iab.net/media/file/IAB_PwC_2008_full_year.pdf
2. Mitchell, S.P., Linden, J.: Click Fraud: What Is It and How Do We Make It Go Away (December 2006), http://www.kowabunga.com/white-papers.aspx
3. Outsell Survey: Hot Topics: Click Fraud Reaches $1.3 Billion, Dictates End of "Don't Ask, Don't Tell" Era, http://www.outsellinc.com/store/products/243
4. Click Forensics, Inc.: Industry Click Fraud Rate Higher Than Ever, Reaching 17.1% in Q4 (2008), http://www.clickforensics.com/newsroom/press-releases/120-click-fraud-index.html
5. Mills, E.: Google Click Fraud Settlement Given Go-Ahead (July 2006), http://news.cnet.com/Google-click-fraud-settlement-given-go-ahead/2100-1024_3-6099368.html
6. Liedtke, M.: Yahoo Settles Click Fraud Lawsuit (June 2006), http://www.msnbc.msn.com/id/13601951/
7. Daswani, N., Stoppelman, M.: The Anatomy of Clickbot.A. In: Proceedings of the First Workshop on Hot Topics in Understanding Botnets (HotBots 2007), p. 11 (2007)
8. Think Digit Magazine: Clickety-clack: Googlewhack! (November 2007), http://www.thinkdigit.com/details.php?article_id=1983
9. Tuzhilin, A.: The Lane's Gifts v. Google Report. Tech. Rep. (2006), http://googleblog.blogspot.com/pdf/Tuzhilin_Report.pdf
10. Metwally, A., Agrawal, D., El Abbadi, A., Zheng, Q.: On Hit Inflation Techniques and Detection in Streams of Web Advertising Networks. In: ICDCS 2007, p. 52 (2007)
11. Daswani, N., Mysen, C., Rao, V., Weis, S., Gharachorloo, K., Ghosemajumder, S.: Crimeware: Understanding New Attacks and Defenses, 1st edn., vol. 11, pp. 325–354. Addison-Wesley, Reading (2008)
12. Metwally, A., Agrawal, D., El Abbadi, A.: Duplicate Detection in Click Streams. In: WWW 2005, pp. 12–21 (2005)
13. Zhang, L., Guan, Y.: Detecting Click Fraud in Pay-Per-Click Streams of Online Advertising Networks. In: ICDCS 2008 (June 2008)
14. Juels, A., Stamm, S., Jakobsson, M.: Combating Click Fraud via Premium Clicks. In: 16th USENIX Security Symposium, pp. 17–26 (2007)
15. Gandhi, M., Jakobsson, M., Ratkiewicz, J.: Badvertisements: Stealthy Click-Fraud with Unwitting Accessories. Journal of Digital Forensic Practice 1(2), 131–142 (2006)
16. Chellapilla, K., Maykov, A.: A Taxonomy of JavaScript Redirection Spam. In: AIRWeb 2007: Proceedings of the 3rd International Workshop on Adversarial Information Retrieval on the Web, pp. 81–88 (2007)
17. Broder, A., Mitzenmacher, M.: Network Applications of Bloom Filters: A Survey. Internet Mathematics 1, 485–509 (2004)
18. The Same Origin Policy, http://www.mozilla.org/projects/security/components/same-origin.html
19. McGann, R.: Study: Consumers Delete Cookies at Surprising Rate (March 2005), http://www.clickz.com/3489636
20. Daswani, N., Kern, C., Kesavan, A.: Foundations of Security: What Every Programmer Needs to Know. Apress (February 2007)
21. Peng, Y., Zhang, L., Chang, J.M., Guan, Y.: An Effective Method for Combating Malicious Scripts Clickbots. Tech. Report, http://www.ece.iastate.edu/~kitap/docs/clickfraud.pdf
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-642-04444-1_32?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-642-04444-1_32, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "CLOSED",
"url": ""
}
| 2009
|
[
"JournalArticle",
"Conference"
] | false
| 2009-09-21T00:00:00
|
[] | 11,775
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/031722db6618fd5c8ceab1ab639e890aab0ac624
|
[
"Computer Science",
"Mathematics"
] | 0.804361
|
New Definitions and Separations for Circular Security
|
031722db6618fd5c8ceab1ab639e890aab0ac624
|
International Conference on Theory and Practice of Public Key Cryptography
|
[
{
"authorId": "145551562",
"name": "David Cash"
},
{
"authorId": "144885069",
"name": "M. Green"
},
{
"authorId": "1808829",
"name": "S. Hohenberger"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int Conf Theory Pract Public Key Cryptogr",
"Public Key Cryptogr",
"Public Key Cryptography",
"PKC"
],
"alternate_urls": null,
"id": "85615766-2ae6-4727-b0dc-5985bdf9eb74",
"issn": null,
"name": "International Conference on Theory and Practice of Public Key Cryptography",
"type": "conference",
"url": "https://en.wikipedia.org/wiki/Public-Key_Cryptography_(conference)"
}
| null |
# New Definitions and Separations for Circular Security

David Cash^{1,⋆}, Matthew Green^{2,⋆⋆}, and Susan Hohenberger^{2,⋆⋆⋆}
1 IBM T.J. Watson Research Center
2 Johns Hopkins University
**Abstract.** Traditional definitions of encryption security guarantee secrecy for any plaintext that can be computed by an outside adversary. In some settings, such as anonymous credential or disk encryption systems, this is not enough, because these applications encrypt messages that depend on the secret key. A natural question to ask is: do standard definitions capture these scenarios? One area of interest is n-circular security, where the ciphertexts E(pk_1, sk_2), E(pk_2, sk_3), ..., E(pk_{n-1}, sk_n), E(pk_n, sk_1) must be indistinguishable from encryptions of zero. Acar et al. (Eurocrypt 2010) provided a CPA-secure public-key cryptosystem that is not 2-circular secure due to a distinguishing attack.

In this work, we consider a natural relaxation of this definition. Informally, a cryptosystem is n-weak circular secure if an adversary given the cycle E(pk_1, sk_2), E(pk_2, sk_3), ..., E(pk_{n-1}, sk_n), E(pk_n, sk_1) has no significant advantage in the regular security game (e.g., CPA or CCA), where ciphertexts of chosen messages must be distinguished from ciphertexts of zero. Since this definition is sufficient for some practical applications and the Acar et al. counterexample no longer applies, the hope is that it would be easier to realize, or perhaps even implied by, standard definitions. We show that this is unfortunately not the case: even this weaker notion is not implied by standard definitions. Specifically, we show:

– For symmetric encryption, under the minimal assumption that one-way functions exist, n-weak circular (CPA) security is not implied by CCA security, for any n. In fact, it is not even implied by authenticated encryption security, where ciphertext integrity is guaranteed.
⋆ This work was performed at the University of California, San Diego, supported in part by NSF grant CCF-0915675.

⋆⋆ Supported in part by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under contract FA8750-11-2-0211, the Office of Naval Research under contract N00014-11-1-0470, NSF grant CNS-1010928 and HHS 90TR0003/01. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the HHS.

⋆⋆⋆ Supported in part by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL) under contract FA8750-11-2-0211, the Office of Naval Research under contract N00014-11-1-0470, NSF CNS 1154035, a Microsoft Faculty Fellowship and a Google Faculty Research Award. Applying to all authors, the views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.

M. Fischlin, J. Buchmann, and M. Manulis (Eds.): PKC 2012, LNCS 7293, pp. 540–557, 2012.
© International Association for Cryptologic Research 2012
– For public-key encryption, under a number-theoretic assumption, 2-weak circular security is not implied by CCA security.

In both of these results, which also apply to the stronger circular security definition, we actually show for the first time an attack in which the *adversary can recover the secret key of an otherwise-secure encryption scheme after an encrypted key cycle is published*. These negative results are an important step in answering deep questions about which attacks are prevented by commonly-used definitions and systems of encryption. They say to practitioners: if key cycles may arise in your system, then even if you use CCA-secure encryption, your system may break catastrophically; that is, a passive adversary might be able to recover your secret keys.

**Keywords:** Encryption, Definitions, Circular Security, Counterexamples.
## 1 Introduction
Encryption is one of the most fundamental cryptographic primitives. Most definitions of encryption security [22,19,35] follow the seminal notion of Goldwasser
and Micali which guarantees indistinguishability of encryptions for messages chosen by the adversary [22]. However, Goldwasser and Micali wisely warned to be
careful when using a system proven secure within this framework on messages
that the adversary cannot derive himself.
Over the past several years, there has been significant interest
in designing schemes secure against _key-dependent_ _message_ _attacks,_
e.g., [15,11,31,3,27,29,13,14,5,2], where the system must remain secure even
when the adversary is allowed to obtain encryptions of messages that depend on
the secret keys themselves. In this work, we are particularly interested in circular
security [15]. A public-key cryptosystem is n-circular secure if the ciphertexts E(pk_1, sk_2), E(pk_2, sk_3), ..., E(pk_{n-1}, sk_n), E(pk_n, sk_1), as well as ciphertexts of chosen messages, cannot be distinguished from encryptions of zero, for independent key pairs. Either by design or accident, these key cycles naturally
arise in many applications, including storage systems such as BitLocker [13],
anonymous credentials [15], the study of “axiomatic security” [31,3] and more.
See [13] for a discussion of the applications.
Until recently, few positive or negative results regarding circular security were
known outside of the random oracle model. On one hand, no n-circular secure
cryptosystems were known for n > 1. On the other hand, no counterexamples
existed for n > 1 to separate the definitions of circular and CPA security; that
is, as far as anyone knew the CPA-security definition already captured circular
security for any cycle larger than a self-loop.
Recently, this gap has been closing in two ways. On the positive side, several
circular-secure schemes have been proposed [13,5,14]. The focus of the current
work is on negative results – namely, investigating whether standard notions of
encryption are “safe” for circular applications.
In 2008, Boneh, Halevi, Hamburg and Ostrovsky proved, by counterexample,
that one-way security does not imply circular security [13]. Recently, Acar, Belenkiy, Bellare and Cash [2] proved that, under an assumption in bilinear groups,
CPA-security does not imply circular security.
*Our Results.* We narrow this gap even further by studying the extent to which
standard definitions (e.g., CPA, CCA) imply a weak form of circular security.
Our results are primarily negative.
*1. Relaxing the Circular Security Notion.* Perhaps the current formulation of
circular security is “too strong”; that is, perhaps there is a relaxed notion of this
definition which simultaneously satisfies many practical applications and yet is
also already captured by standard security notions. This is an area worth investigating. We begin by proposing a natural relaxation called *weak circular security*, where the adversary is handed an encrypted cycle E(pk_1, sk_2), E(pk_2, sk_3), ..., E(pk_{n-1}, sk_n), E(pk_n, sk_1) along with the public keys and then proceeds to
play the CPA or CCA security game as normal (where these ciphertexts are also
off-limits for the decryption oracle). We stress here that the encrypted cycle is
*always generated as described, and is never changed to encryptions of zero*. This
definition is intriguing, and perhaps of independent interest, for two reasons.
First, the Acar et al. [2] counterexample does not apply to it. That construction uses the bilinear map to test whether a sequence of ciphertexts contains a
cycle or zeros. Here the adversary knows he’s getting an encrypted cycle, but
then must extract some knowledge from this that helps him distinguish two
messages of his choosing.
Second, this definition appears sufficient for some practical settings. Using
a weak circular secure encryption scheme, Alice and Bob could exchange keys
with each other over an insecure channel knowing that: (1) Eve can detect that
they did so, but (2) Eve cannot learn anything about their other messages.
Similarly, an adversary scanning over a user’s BitLocker storage may detect
that her drive contains an encrypted cycle, but cannot read anything on her
drive. In an anonymous credential system of Camenisch and Lysyanskaya [15],
a user has multiple keys. To participate in the system, the user must encrypt
them in a cycle, provide this cycle to the other users, and prove that she has
done this correctly. Then, if she shares one key, she automatically shares all her
keys. In their application, detection of a cycle is actually desirable, provided that
subsequent encryptions remain secure.
*2. Symmetric-Key Counterexamples.* In the symmetric setting, we show that standard notions do not imply n-circular security for any positive n. Specifically, given any n ≥ 1, we show how to construct a secure authenticated encryption scheme (which is necessarily CCA-secure; see Section 2) that is not n-weak circular secure, under the minimal assumption that secure authenticated encryption schemes exist, which are equivalent to one-way functions.
The main technical ingredient in our counterexample is a lemma showing that
it is provably hard for an adversary to compute an encrypted key cycle itself,
assuming that the symmetric scheme under attack is a secure authenticated
encryption scheme (or CCA secure). We stress that this lemma does not hold if
the encryption scheme is only CPA secure.
Our lemma gives us leverage in constructing a counterexample because it
means the adversary is given strictly more power in the weak circular security
game than in the standard security game. Specifically, the adversary is given an
encrypted key cycle in the weak circular security game that it could not have
computed itself, and we design a scheme to help such an adversary without
affecting regular security.
*3. Public-Key Counterexamples.* We show that neither CPA nor CCA security implies (even) weak circular security for cycles of size 2. That is, we show secure systems that are totally compromised when the independently generated ciphertexts E(pk_A, sk_B) and E(pk_B, sk_A) are released. This is a difficult task, because the system must remain secure if either one, but only one, of these ciphertexts is released. Moreover, this counterexample requires new ideas. We cannot use the common trick in self-loop counterexamples of testing whether the message is the secret key corresponding to the public key, since there is no way for the encryption algorithm with public key pk_A to distinguish, say, sk_B from any other valid message. Specifically, we show that:
If there exists an algebraic setting where the Symmetric External Diffie-Hellman¹ (SXDH) assumption holds, then there exists a CPA-secure cryptosystem which is not 2-weak circular secure. The proposed scheme is particularly interesting in that it breaks catastrophically in the presence of a 2-cycle, revealing the secret keys of both users.

Moreover, if simulation-sound non-interactive zero-knowledge (NIZK) proof systems exist for NP and there exists an algebraic setting where the Symmetric External Diffie-Hellman (SXDH) assumption holds, then there exists a CCA-secure cryptosystem which is not 2-weak circular secure. This is also the first separation of CCA security and (regular) circular security.
These results deepen our understanding of how to define “secure” encryption
and which practical attacks are captured by the standard definitions. They also
provide additional justification for the ongoing effort, e.g. [13,14,5], to develop
cryptosystems which are provably circular secure.
**1.1** **Related Work**
In 2001, Camenisch and Lysyanskaya [15] introduced the notion of *circular security* and used it in their anonymous credential system to discourage users from
delegating their secret keys. They also showed how to construct a circular-secure
cryptosystem from any CPA-secure cryptosystem in the random oracle model.
¹ The SXDH assumption states that there is a bilinear setting e : G_1 × G_2 → G_T where the DDH assumption holds in both G_1 and G_2. It has been extensively studied and used, e.g., [21,38,32,12,8,6,24,9,25], perhaps most notably as a setting of the Groth-Sahai NIZK proof system [25].
Independently, Abadi and Rogaway [1] and Black, Rogaway, Shrimpton [11] introduced the more general notion of key-dependent message (KDM) security,
where the encrypted messages might depend on an arbitrary function of the secret keys. Black et al. showed how to realize this notion in the random oracle
model.
Halevi and Krawczyk [27] extended the work of Black et al. to look at KDM
security for deterministic secret-key functions such as pseudorandom functions
(PRFs), tweakable blockciphers, and more. They give both positive and negative results, including some KDM-secure constructions in the standard model
for PRFs. In the symmetric setting, Hofheinz and Unruh [29] showed how to
construct circular-secure cryptosystems in the standard model under relaxed
notions of security. Backes, Pfitzmann and Scedrov [7] presented stronger notions of KDM security (some in the random oracle model) and discussed the
relationships among these notions.
In the public-key setting, Boneh, Halevi, Hamburg and Ostrovsky [13] presented the first cryptosystem which is simultaneously CPA-secure and n-circular-secure (for any n) in the standard model, based on either the DDH or Decision Linear assumptions. As mentioned earlier, Boneh et al. [13] also proved, by counterexample, that one-way security does not imply circular security. One-way encryption is a very weak notion, which informally states that given (pk, E(pk, m)), the adversary should not be able to recover m. Given any one-way encryption system, they constructed a one-way encryption system that is not n-circular secure (for any n). Their system generates two key pairs from the original and sets PK = pk_1 and SK = (sk_1, sk_2). A message (m_1, m_2) is encrypted as (m_1, E(pk_1, m_2)). In the event of a 2-cycle, the values Enc(pk_A, sk_B) = (sk_{B,1}, E(pk_{A,1}, sk_{B,2})) and Enc(pk_B, sk_A) = (sk_{A,1}, E(pk_{B,1}, sk_{A,2})) provide the critical secret key information (sk_{B,1}, sk_{A,1}) in the clear.
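The break can be seen concretely in a few lines. The sketch below is *not* the paper's actual construction: the underlying one-way scheme is stood in for by a toy XOR cipher of our own (`inner_enc` and friends are illustrative names), since the key leak depends only on the first message component being copied into the ciphertext verbatim.

```python
import os

KLEN = 16  # toy key length in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def inner_keygen():
    # Stand-in for the underlying one-way scheme's key generation.
    # A symmetric XOR "encryption" is used purely for illustration.
    k = os.urandom(KLEN)
    return k, k          # (pk, sk): identical in this toy stand-in

def inner_enc(pk, m):
    return xor(pk, m)    # toy E(pk, m); NOT secure, placeholder only

def keygen():
    # The transformation: generate two inner key pairs,
    # publish PK = pk_1, and keep SK = (sk_1, sk_2).
    pk1, sk1 = inner_keygen()
    pk2, sk2 = inner_keygen()
    return pk1, (sk1, sk2)

def enc(PK, msg_pair):
    # A message (m_1, m_2) is encrypted as (m_1, E(pk_1, m_2)):
    # the first half travels IN THE CLEAR.
    m1, m2 = msg_pair
    return (m1, inner_enc(PK, m2))

# Users A and B publish a 2-cycle of encrypted secret keys.
PK_A, SK_A = keygen()
PK_B, SK_B = keygen()
ct_A = enc(PK_A, SK_B)  # Enc(pk_A, sk_B) = (sk_{B,1}, E(pk_{A,1}, sk_{B,2}))
ct_B = enc(PK_B, SK_A)  # Enc(pk_B, sk_A) = (sk_{A,1}, E(pk_{B,1}, sk_{A,2}))

# A passive observer reads sk_{B,1} and sk_{A,1} directly:
assert ct_A[0] == SK_B[0] and ct_B[0] == SK_A[0]
```

Note that either ciphertext alone already leaks half of the other user's secret key; the cycle merely makes the leak symmetric.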
Subsequently, Applebaum, Cash, Peikert and Sahai [5] adapted the circular-secure construction of [13] into the lattice setting. Camenisch, Chandran and Shoup [14] extended [13] to the first cryptosystem which is simultaneously CCA-secure and n-circular-secure (for any n) in the standard model, by applying the "double encryption" paradigm of Naor and Yung [34]. (Interestingly, we use this same approach in Section 4.4 to extend our public-key counterexample from CPA to CCA security.)
Haitner and Holenstein [26] recently provided strong impossibility results for
KDM-security with respect to 1-key cycles (a.k.a., self-loops.) They study the
problem of building an encryption scheme where it is secure to release E(k, g(k))
for various functions g. First, they show that there exists no fully-black-box reduction from a KDM-secure encryption scheme to one-way permutations (or even
some families of trapdoor permutations) if the adversary can obtain encryptions
of g(k), where g is a poly(n)-wise independent hash function. Second, there exists
no reduction from an encryption scheme secure against key-dependent messages
to, essentially, any cryptographic assumption, if the adversary can obtain an
encryption of g(k) for an arbitrary g, as long as the security reduction treats
both the adversary and the function g as black boxes. These results address
the possibility of achieving strong single-user KDM-security via reductions to
cryptographic assumptions. The results in this paper study a version of KDM
security that is in one sense weaker – we only allow a narrow class of functions g
– but also stronger because it considers multiple users. Our results also address
a different question regarding KDM security. We study whether or not KDM security is always implied by regular security while Haitner and Holenstein study
the possibility of achieving strong single-user KDM security via specialized constructions.
Recently, Acar et al. [2] demonstrated both public and private key encryption
systems that are provably CPA-secure and yet also demonstrably not 2-circular
secure. Their counterexample does not apply to CCA or weak circular security.
There is also a relationship to recent work on *leakage resilient* and *auxiliary input* models of encryption, which mostly falls into the "self-loop" category. In leakage resilient models, such as those of Akavia, Goldwasser and Vaikuntanathan [4] and Naor and Segev [33], the adversary is given some function h of the secret key, not necessarily an encryption, such that it is information-theoretically impossible to recover sk. The auxiliary input model, introduced by Dodis, Kalai and Lovett [18], relaxes this requirement so that it only needs to be difficult to recover sk.
*Self-Loops.* In sharp contrast to all n ≥ 2, the case of 1-circular security is fairly well understood. A folklore counterexample shows that CPA security does not directly imply 1-circular security. Given any encryption scheme (G, E, D), one can build a second scheme (G, E′, D′) as follows: (1) E′(pk, m) outputs E(pk, m)∥0 if m ≠ sk, and m∥1 otherwise; (2) D′(sk, c∥b) outputs D(sk, c) if b = 0, and sk otherwise. It is easy to show that if (G, E, D) is CPA-secure, then (G, E′, D′) is CPA-secure. When E′(pk, sk) = sk∥1 is exposed, there is a complete break. Conversely, given any CPA-secure system, one can build a 1-circular secure scheme in the standard model [13].
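The folklore transformation can be sketched as follows. The base scheme here is a toy XOR stand-in of our own (the argument only uses the trailing tag bit), and since the stand-in is symmetric we pass the matching secret key to E′ explicitly, a detail the real public-key construction handles via pk.

```python
import os

KLEN = 16

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def keygen():
    k = os.urandom(KLEN)
    return k, k              # (pk, sk); symmetric stand-in, illustration only

def enc(pk, m):              # base scheme E (toy placeholder)
    return xor(pk, m)

def dec(sk, c):              # base scheme D (toy placeholder)
    return xor(sk, c)

def enc2(pk, sk_of_pk, m):
    # E': append tag 0 to ordinary ciphertexts, but output m || 1 when the
    # plaintext equals the matching secret key.
    if m != sk_of_pk:
        return enc(pk, m) + b"\x00"
    return m + b"\x01"

def dec2(sk, c):
    body, tag = c[:-1], c[-1:]
    return dec(sk, body) if tag == b"\x00" else sk

pk, sk = keygen()
m = os.urandom(KLEN)
assert dec2(sk, enc2(pk, sk, m)) == m    # normal use is unaffected
assert enc2(pk, sk, sk) == sk + b"\x01"  # a self-loop leaks sk verbatim
```

CPA security of the wrapper follows because a CPA adversary, not knowing sk, triggers the leaking branch only with negligible probability.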
## 2 Definitions of Security
A public-key encryption system Π is a tuple of algorithms (KeyGen, Enc, Dec), where KeyGen is a key-generation algorithm that takes as input a security parameter λ and outputs a public/secret key pair (pk, sk); Enc(pk, m) encrypts a message m under public key pk; and Dec(sk, c) decrypts ciphertext c with secret key sk. A symmetric-key encryption system is a public-key encryption system, except that it always outputs pk = ⊥, and the encryption algorithm computes ciphertexts using sk, i.e. by running Enc(sk, m). In the symmetric case we will sometimes write K instead of sk. As in most other works, we assume that all algorithms implicitly have access to shared public parameters establishing a common algebraic setting.

Our definitions of security will associate a message space, denoted M, with each encryption scheme. Throughout this paper, we assume that the space of possible secret keys output by KeyGen is a subset of the message space M and
IND-CPA(Π, A, λ):
  b ←$ {0,1}
  (pk, sk) ← KeyGen(1^λ)
  (m_0, m_1, z) ← A_1(pk)
  y ← Enc(pk, m_b)
  b̂ ← A_2(y, z)
  Output (b̂ =? b)

AE(Π, A, λ):
  b ←$ {0,1}
  K ← KeyGen(1^λ)
  b̂ ← A^{E^ae_{K,b}(·,·), D^ae_{K,b}(·)}(1^λ)
  Output (b̂ =? b)

**Fig. 1.** Experiments for Definitions 1 and 3
thus any secret key can be encrypted using any public key. For symmetric encryption schemes we will always have M ⊂ {0,1}^*.

By ν(k) we denote some negligible function, i.e., one such that, for all c > 0 and all sufficiently large k, ν(k) < 1/k^c. We abbreviate probabilistic polynomial time as PPT.
**2.1** **Standard Security Definitions**
*Public-key encryption.* We recall the standard notion of indistinguishability of
encryptions under a chosen-plaintext attack due to Goldwasser and Micali [22].
**Definition 1 (IND-CPA).** Let Π = (KeyGen, Enc, Dec) be a public-key encryption scheme for the message space M. For b ∈ {0,1}, A = (A_1, A_2) and λ ∈ N, let the random variable IND-CPA(Π, A, λ) be defined by the probabilistic algorithm described on the left side of Figure 1. We denote the IND-CPA advantage of A by Adv^{cpa}_{Π,A}(λ) = 2 · Pr[IND-CPA(Π, A, λ) = 1] − 1. We say that Π is IND-CPA secure if Adv^{cpa}_{Π,A}(λ) is negligible for all PPT A.
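As a sanity check on the definition, the experiment on the left of Figure 1 can be run literally. The harness and the deliberately broken "identity" scheme below are our own illustrations, not anything from the paper; the point is simply that a deterministic scheme gives an adversary advantage 1.

```python
import secrets

def ind_cpa_game(keygen, enc, adversary):
    # One run of the IND-CPA experiment of Figure 1.
    b = secrets.randbits(1)
    pk, sk = keygen()
    m0, m1, state = adversary.choose(pk)
    y = enc(pk, (m0, m1)[b])
    return adversary.guess(y, state) == b   # did A recover b?

# A deliberately broken "identity" scheme: deterministic and keyless.
def keygen():
    return None, None

def enc(pk, m):
    return m

class Adversary:
    # Since encryption is deterministic and pk is public, A simply
    # re-encrypts m0 itself and compares -- a textbook CPA distinguisher.
    def choose(self, pk):
        return b"m0", b"m1", pk
    def guess(self, y, state):
        return 0 if y == enc(state, b"m0") else 1

wins = sum(ind_cpa_game(keygen, enc, Adversary()) for _ in range(100))
assert wins == 100   # advantage 1: the toy scheme is trivially insecure
```

For a secure scheme the win rate would hover around 1/2, i.e., advantage negligibly close to 0.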
We also consider the indistinguishability of encryptions under chosen-ciphertext
attacks [34,35,19].
**Definition 2 (IND-CCA).** Let Π = (KeyGen, Enc, Dec) be a public-key encryption scheme for the message space M. Let the random variable IND-CCA(Π, A, λ) be defined by an algorithm identical to IND-CPA(Π, A, λ) above, except that both A_1 and A_2 have access to an oracle Dec(sk, ·) that returns the output of the decryption algorithm, and A_2 cannot query this oracle on input y. We denote the IND-CCA advantage of A by Adv^{cca}_{Π,A}(λ) = 2 · Pr[IND-CCA(Π, A, λ) = 1] − 1. We say that Π is IND-CCA secure if Adv^{cca}_{Π,A}(λ) is negligible for all PPT A.
*Symmetric-key authenticated encryption.* We recall the definition of secure authenticated (symmetric-key) encryption due to [36], except that we will not require pseudorandom ciphertexts. Bellare and Namprempre [10] showed that AE implies IND-CCA, and is in fact strictly stronger. For our counterexample, we target this very strong definition of security in order to strengthen our results by showing that even this does not imply weak circular security.
IND-CIRC-CPA^n(Π, A, λ):
  b ←$ {0,1}
  For i = 1 to n: (pk_i, sk_i) ← KeyGen(1^λ)
  If b = 1 then y ← EncCycle(pk, sk)
  Else y ← EncZero(pk, sk)
  b̂ ← A(pk, y)
  Output (b̂ =? b)

IND-WCIRC-CPA^n(Π, A, λ):
  b ←$ {0,1}
  For i = 1 to n: (pk_i, sk_i) ← KeyGen(1^λ)
  y ← EncCycle(pk, sk)
  (j, m_0, m_1, z) ← A_1(pk, y)
  y ← Enc(pk_j, m_b)
  b̂ ← A_2(y, z)
  Output (b̂ =? b)

EncCycle(pk, sk):
  For i = 1 to n: y_i ← Enc(pk_i, sk_{(i mod n)+1})
  Output y

EncZero(pk, sk):
  For i = 1 to n: y_i ← Enc(pk_i, 0^{|sk_{(i mod n)+1}|})
  Output y

**Fig. 2.** Experiments for Definitions 4 and 5. Each is defined with respect to a message space M, and we assume that m_0, m_1 ∈ M always. We write **pk**, **sk**, and **y** for (pk_1, ..., pk_n), (sk_1, ..., sk_n) and (y_1, ..., y_n) respectively.
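The EncCycle procedure in Figure 2 is just a loop that encrypts each key under its predecessor, wrapping around at index n. The sketch below (with a toy XOR stand-in of our own for Enc) also illustrates the "share one key, share them all" property exploited in the Camenisch-Lysyanskaya credential application.

```python
import os

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def enc(k, m):
    return xor(k, m)   # stand-in for Enc; illustration only

def enc_cycle(keys):
    # y_i = Enc(K_i, K_{(i mod n)+1}) with 1-based indices as in Fig. 2;
    # with 0-based lists this is simply "encrypt the next key, wrapping".
    n = len(keys)
    return [enc(keys[i], keys[(i + 1) % n]) for i in range(n)]

keys = [os.urandom(16) for _ in range(4)]
y = enc_cycle(keys)

# Whoever holds any one K_i can walk the published cycle and recover
# every key, which is exactly the delegation deterrent described above.
k = keys[0]
recovered = []
for c in y:
    k = xor(k, c)      # toy "decryption" of the XOR stand-in
    recovered.append(k)
assert recovered == keys[1:] + keys[:1]
```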
**Definition 3 (AE).** Let Π = (KeyGen, Enc, Dec) be a symmetric-key encryption scheme for the message space M. Let the random variable AE(Π, A, λ) be defined by the probabilistic algorithm described on the right side of Figure 1. In the experiment, the oracle E^ae_{K,b}(·,·) takes as input a pair of equal-length messages (m_0, m_1) and computes Enc(K, m_b). The oracle D^ae_{K,b}(·) takes as input a ciphertext c and computes Dec(K, c) if b = 1 and always returns ⊥ if b = 0. The adversary is not allowed to submit any ciphertext to D^ae_{K,b}(·) that was previously returned by E^ae_{K,b}(·,·). We denote the AE advantage of A by Adv^{ae}_{Π,A}(λ) = 2 · Pr[AE(Π, A, λ) = 1] − 1. We say that Π is AE secure if Adv^{ae}_{Π,A}(λ) is negligible for all PPT A.
**2.2** **Circular Security Definitions**
We next give definitions for circular security of public-key and symmetric-key
encryption. These definitions are variants of the Key-Dependent Message (KDM)
security notion of Black et al. [11]. By restricting the adversary’s power, we make
it significantly harder for us to devise a counterexample and thus prove a stronger
negative result.²
**Definition 4 (IND-CIRC-CPA^n).** Let Π = (KeyGen, Enc, Dec) be a public-key encryption scheme for the message space M. For b ∈ {0,1}, integer n > 0, adversary A and λ ∈ N, let the random variable IND-CIRC-CPA^n(Π, A, λ) be
² If we allowed the adversary to obtain encryptions of any affine function of the secret keys, as is done in [27,13], then we could devise a trivial counterexample where the adversary uses 1-cycles to break the system.
defined by the probabilistic algorithm on the left side of Figure 2. We denote the IND-CIRC-CPA^n advantage of A by

Adv^{n-circ-cpa}_{Π,A}(λ) = 2 · Pr[IND-CIRC-CPA^n(Π, A, λ) = 1] − 1.

We say that Π is IND-CIRC-CPA^n secure if Adv^{n-circ-cpa}_{Π,A}(λ) is negligible for all PPT A.
One could augment this definition by modifying the IND-CIRC-CPA^n experiment
to allow for a challenge “left-or-right” query as in IND-CPA. While this is a quite
natural modification, it only strengthens the definition, and we are interested in
studying the weakest notions for which we can give a separation. Next we give
a definition of weak circular security of public-key encryption.
**Definition 5 (IND-WCIRC-CPA^n).** Let Π = (KeyGen, Enc, Dec) be a public-key encryption scheme for the message space M. For b ∈ {0,1}, integer n > 0, adversary A and λ ∈ N, let the random variable IND-WCIRC-CPA^n(Π, A, λ) be defined by the probabilistic algorithm in the center of Figure 2. We denote the IND-WCIRC-CPA^n advantage of A by

Adv^{n-wcirc-cpa}_{Π,A}(λ) = 2 · Pr[IND-WCIRC-CPA^n(Π, A, λ) = 1] − 1.

We say that Π is IND-WCIRC-CPA^n secure if the function Adv^{n-wcirc-cpa}_{Π,A}(λ) is negligible for all PPT A.
Finally, we give a definition of weak circular security for symmetric encryption. We will abuse notation and also call this IND-WCIRC-CPA^n security, since it will be clear from the context whether we mean public-key or symmetric-key.

**Definition 6 (IND-WCIRC-CPA^n).** Let Π = (KeyGen, Enc, Dec) be a symmetric-key encryption scheme for the message space M. For b ∈ {0,1}, integer n > 0, adversary A and λ ∈ N, let IND-WCIRC-CPA^n(Π, A, λ) be defined by the following probabilistic algorithm:

IND-WCIRC-CPA^n(Π, A, λ):
  b ←$ {0,1}
  For i = 1 to n: K_i ← KeyGen(1^λ)
  y ← EncCycle(K)
  b̂ ← A^{Ẽnc(·,·,·)}(y)
  Output (b̂ =? b)

EncCycle(K):
  For i = 1 to n: y_i ← Enc(K_i, K_{(i mod n)+1})
  Output y

Ẽnc(j, m_0, m_1):
  Return Enc(K_j, m_b)

We denote the IND-WCIRC-CPA^n advantage of A by

Adv^{n-wcirc-cpa}_{Π,A}(λ) = 2 · Pr[IND-WCIRC-CPA^n(Π, A, λ) = 1] − 1.

We say that Π is IND-WCIRC-CPA^n secure if Adv^{n-wcirc-cpa}_{Π,A}(λ) is negligible for all PPT A.
*Discussion.* In both the IND-CPA and IND-CIRC-CPA notions, the adversary must distinguish an encryption (or encryptions) of a special message from the encryption of zero. This choice of the message zero is arbitrary. We keep it in the statement of our definition to be consistent with [13]; however, it is important to note, for systems such as ours where zero is not in the message space, that zero can be replaced by any constant message for an equivalent definition. Acar et al. [2] use an equivalent definition where zero is replaced by a fresh random message.

We will not need to define a notion of security to withstand circular and chosen-ciphertext attacks, because we are able to show a stronger negative result. In Section 4.4, we provide an IND-CCA-secure cryptosystem which is provably not IND-CIRC-CPA-secure. In other words, we are able to devise a peculiar cryptosystem: one that withstands all chosen-ciphertext attacks, and yet breaks under a weak circular attack which does not require a decryption oracle.
## 3 Counterexample for Symmetric Encryption
*Encryption Scheme Π_ae.* Let Π′_ae = (KeyGen′, Enc′, Dec′) be a secure authenticated encryption scheme. To simplify our results, we assume that KeyGen′(1^λ) outputs a uniformly random key K in {0,1}^λ, that the message space is M′ = {0,1}^*, and that ciphertexts output by Enc′(K, m) are always in {0,1}^{p(|m|)}, where p is some polynomial that depends on λ. We also assume that the first λ bits of a ciphertext are never equal to K. All of these assumptions can be removed via straightforward and standard modifications to our arguments below.

Fix a positive integer n. We now construct our counterexample scheme, denoted Π_ae = (KeyGen, Enc, Dec). We will take KeyGen = KeyGen′, i.e., Π_ae also uses keys randomly chosen from {0,1}^λ. The message space of Π_ae will consist of M = {0,1}^λ ∪ {0,1}^{np(λ)}, bit strings of length either λ or np(λ). The algorithms Enc and Dec are defined as follows.
Enc(K, m):
  If IsCycle(K, m) then output K ∥ m
  Else output Enc′(K, m)

Dec(K, c):
  If c = K ∥ m̃ then output m̃
  Else output Dec′(K, c)

IsCycle(K, m):
  If |m| ≠ np(λ), return false
  Parse m as (c_1, ..., c_n)
  K_2 ← Dec′(K, c_1)
  For i = 2 to n: K_{(i mod n)+1} ← Dec′(K_i, c_i)
  Return (K_1 =? K)
Decryption is always correct. This follows from our assumption that Enc′ will never output a ciphertext that contains K as a prefix. We first establish the AE security of our scheme.
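A runnable sketch of the whole construction may help. The base AE scheme Π′_ae is stood in for by a toy encrypt-then-MAC scheme of our own (SHA-256 keystream plus HMAC tag; illustrative only, not the scheme the proofs assume), with n = 3 and 16-byte keys standing in for λ bits.

```python
import hashlib, hmac, os

KLEN = 16   # stands in for λ bits of key
N = 3       # cycle length n

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def prf(k, label, nbytes):
    # Expandable keystream from SHA-256; part of the toy base scheme.
    out, i = b"", 0
    while len(out) < nbytes:
        out += hashlib.sha256(k + label + i.to_bytes(4, "big")).digest()
        i += 1
    return out[:nbytes]

# Toy authenticated encryption (Enc', Dec'): random IV, XOR pad, HMAC tag.
def enc_base(k, m):
    iv = os.urandom(16)
    body = xor(m, prf(k, iv, len(m)))
    tag = hmac.new(k, iv + body, hashlib.sha256).digest()
    return iv + body + tag

def dec_base(k, c):
    iv, body, tag = c[:16], c[16:-32], c[-32:]
    if not hmac.compare_digest(tag, hmac.new(k, iv + body, hashlib.sha256).digest()):
        return None                      # authentication failure
    return xor(body, prf(k, iv, len(body)))

def ct_len(mlen):                        # p(|m|) for the toy base scheme
    return 16 + mlen + 32

def is_cycle(k, m):
    if len(m) != N * ct_len(KLEN):
        return False
    step = ct_len(KLEN)
    cur = k
    for i in range(N):                   # chain the decryptions
        cur = dec_base(cur, m[i * step:(i + 1) * step]) if cur is not None else None
    return cur == k                      # does the chain close at K_1?

def enc(k, m):
    if is_cycle(k, m):
        return k + m                     # catastrophic: K leaks in the clear
    return enc_base(k, m)

keys = [os.urandom(KLEN) for _ in range(N)]
cycle = b"".join(enc_base(keys[i], keys[(i + 1) % N]) for i in range(N))
leaked = enc(keys[0], cycle)
assert leaked[:KLEN] == keys[0]          # the published cycle triggers the leak
# A random message of the same length encrypts normally, and the two
# outputs even differ in length:
assert len(enc(keys[0], os.urandom(len(cycle)))) != len(leaked)
```

Note how the final assertion already exhibits the length gap (λ + np(λ) versus p(np(λ)) bits) that the proof of Theorem 2 turns into a distinguisher.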
**Theorem 1.** _Encryption scheme Πae is AE secure whenever Π′ae is AE secure. (Proof in the full version of this work [17].)_

The proof proceeds by showing that computing an encrypted key-cycle during the AE game is equivalent to recovering the secret key. From there we can reduce the security of Πae to Π′ae easily.
Curiously, Theorem 1 is no longer true if one replaces AE security with a symmetric version of IND-CPA security for both Πae and Π′ae. Namely, some type of chosen-ciphertext security is required on Π′ae to prove even chosen-plaintext security of Πae. Intuitively, this is because it might be possible for an adversary to compute an encrypted key-cycle on its own if the scheme is only IND-CPA-secure, but not if the scheme is AE-secure. In fact, the work of Boneh et al. [13] gives an explicit example of a scheme where the adversary can compute a cycle himself.
_The Attack._ We now show that Πae is not circular-secure for n-cycles, even in a weak sense.

**Theorem 2.** _Πae is not IND-WCIRC-CPA^n secure._
_Proof._ We give an explicit adversary A that has advantage negligibly close to 1. The adversary takes as input the encrypted key-cycle ȳ in the IND-WCIRC-CPA^n game. It queries its oracle on (1, m_0, m_1), where m_0 = ȳ and m_1 is a random message of the same length. Let y be the ciphertext returned by the oracle.

At this point, there are many ways to proceed; perhaps the simplest is to observe that the length of y depends on the challenge bit b. This is because, if b = 0, then m_0 = ȳ was encrypted, resulting in y = K ∥ ȳ, which is λ + np(λ) bits long. If b = 1, then y was computed by running Enc′(K, m_1), which will be p(|m_1|) = p(np(λ)) bits long if IsCycle(K, m_1) returns false. Thus, as long as IsCycle(K, m_1) returns false, A2 can compute the value of b by measuring y's length.

But why should IsCycle(K, m_1) return false? This follows from the AE security of Π′ae. Let us parse m_1 into (c_1, ..., c_n), where each c_i ∈ {0,1}^{p(λ)} is random. When IsCycle(K, m_1) returns true, it must be that Dec′(K, c_1) did not return ⊥. But if this happens, then we can construct an adversary to break the AE security of Π′ae. That adversary simply queries D_{K,b}^{ae}(·) at a random point, observes whether it returns ⊥ or not, and outputs b̂ = 0 or 1 depending on this observation.
We note that we could design an encryption scheme that does not have this
type of ciphertext-length behavior by giving a different attack that abuses the
fact that K is present in the ciphertext in one case, but not the other. We have
chosen to present the attack this way for simplicity only.
## 4 Counterexamples for Public-Key Encryption
**4.1** **Preliminaries and Algebraic Setting**
_Bilinear Groups._ We work in a bilinear setting where there exists an efficient mapping function e : G1 × G2 → GT involving groups of the same prime order p.
New Definitions and Separations for Circular Security 551
Two algebraic properties required are that: (1) if g generates G1 and h generates G2, then e(g, h) ≠ 1, and (2) for all a, b ∈ Zp, it holds that e(g^a, h^b) = e(g, h)^{ab}.
**Decisional Diffie-Hellman Assumption (DDH):** Let G be a group of prime order p ∈ Θ(2^λ). For all PPT adversaries A, the following probability is 1/2 plus an amount negligible in λ:

    Pr[ g, z_0 ← G; a, b ← Zp; z_1 ← g^{ab}; d ← {0,1}; d′ ← A(g, g^a, g^b, z_d) : d = d′ ].
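For illustration only, the DDH experiment above can be written out in a toy group: the subgroup of squares in Z*_p for a tiny prime p (far too small to be secure, which is exactly what lets a brute-force discrete-log distinguisher win). All names here are ours:

```python
import random

p = 1019                  # tiny prime; (p - 1) / 2 = 509 is also prime
q = (p - 1) // 2          # order of the subgroup of squares
g = 4                     # 2^2: a generator of that order-q subgroup

def ddh_sample(d):
    # One DDH instance: (g, g^a, g^b, z_d) with z_1 = g^(ab), z_0 random in G.
    a, b = random.randrange(1, q), random.randrange(1, q)
    z0 = pow(g, random.randrange(1, q), p)
    z1 = pow(g, a * b % q, p)
    return g, pow(g, a, p), pow(g, b, p), (z1 if d else z0)

def dlog(h):
    # Brute-force discrete log -- feasible only because the group is tiny.
    x = 1
    for e in range(q):
        if x == h:
            return e
        x = x * g % p

def guess(sample):
    # Where discrete logs are easy, DDH is easy: test z =? g^(ab).
    _, ga, gb, z = sample
    return 1 if z == pow(g, dlog(ga) * dlog(gb) % q, p) else 0

assert guess(ddh_sample(1)) == 1    # the d = 1 case is always recognised
```

In a cryptographically large group the dlog step is infeasible, and the assumption asserts that no PPT strategy does noticeably better than guessing.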
**Strong External Diffie-Hellman Assumption (SXDH):** Let e : G1 × G2 → GT be bilinear groups. The SXDH assumption states that the DDH problem is hard in both G1 and in G2. This implies that there does not exist an efficiently computable isomorphism between these two groups. The SXDH assumption appears in many prior works, such as [21,38,32,12,8,6,24,9,25,2].
_Indistinguishability and Pseudorandom Generators._

**Definition 7 (Indistinguishability).** Two ensembles of probability distributions {X_k}_{k∈N} and {Y_k}_{k∈N} with index set N are said to be computationally indistinguishable if for every polynomial-size circuit family {D_k}_{k∈N}, there exists a negligible function ν such that

    |Pr[x ← X_k : D_k(x) = 1] − Pr[y ← Y_k : D_k(y) = 1]|

is less than ν(k). We denote such sets {X_k}_{k∈N} ≈_c {Y_k}_{k∈N}.
**Definition 8 (Pseudorandom Generator [30]).** Let U_x denote the uniform distribution over {0,1}^x. Let ℓ(·) be a polynomial and let G be a deterministic polynomial-time algorithm such that for any input s ∈ {0,1}^n, algorithm G outputs a string of length ℓ(n). We say that G is a pseudorandom generator if the following two conditions hold:

**–** (Expansion:) For every n, it holds that ℓ(n) > n.
**–** (Pseudorandomness:) For every n, {U_{ℓ(n)}}_n ≈_c {s ← U_n : G(s)}_n.
The constructions of Section 4.2 use a PRG where the domain of the function is
an exponentially-sized cyclic group.
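The PRG F used below maps group elements to bit strings. As a heuristic illustration of such an expander (our own sketch; treating SHA-256 output as pseudorandom is an assumption, and this is not the DDH-based construction the later theorem requires), one can hash a serialized group element with a counter:

```python
import hashlib

def expand(elem_bytes, out_len):
    # Stretch a serialized group element into out_len bytes, counter-mode style.
    out, ctr = b"", 0
    while len(out) < out_len:
        out += hashlib.sha256(elem_bytes + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:out_len]

seed = (123456789).to_bytes(8, "big")   # stand-in for a serialized G_T element
pad = expand(seed, 64)
assert len(pad) == 64                   # expansion: 8 input bytes -> 64 output bytes
assert pad == expand(seed, 64)          # deterministic in the seed, as a PRG must be
```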
**4.2** **Encryption Scheme Πcpa**
We now describe an encryption scheme Πcpa = (KeyGen, Enc, Dec). It is set in asymmetric bilinear groups e : G1 × G2 → GT of prime order p, where we assume that the groups G1 and G2 are distinct and that the DDH assumption holds in
both. We assume that a single set of group parameters (e, p, G1, G2, GT, g, h),
where G1 = ⟨g⟩, G2 = ⟨h⟩, will be shared across all keys generated at a given
security level and are implicitly provided to all algorithms.
The message space is M = {0,1} × Z*_p × Z*_p. Let encode : M → {0,1}^{ℓ(λ)} and decode : {0,1}^{ℓ(λ)} → M denote an invertible encoding scheme, where ℓ(λ) is the polynomial length of the encoded message. Let F : GT → {0,1}^{ℓ(λ)} be a pseudorandom generator secure under the Decisional Diffie-Hellman assumption. (Recall that pseudorandom generators can be constructed from any one-way function [28].)
KeyGen(1^λ). The key generation algorithm selects a random bit β ← {0,1} and random values a1, a2 ← Z*_p. The secret key is set as sk = (β, a1, a2). We note that sk ∈ M. The public key is set as:

    pk = (0, e(g,h)^{a1}, g^{a2}) ∈ {0,1} × GT × G1   if β = 0
         (1, e(g,h)^{a1}, h^{a2}) ∈ {0,1} × GT × G2   if β = 1.
Encrypt(pk, M). The encryption algorithm parses the public key pk = (β, Y1, Y2), where Y2 may be in G1 or G2 depending on the structure of the public key, and the message M = (α, m1, m2) ∈ M. Note that m1 and m2 cannot be zero, but these values can be easily included in the message space by a proper encoding.

    Select random r ← Zp and R ← GT. Set I = F(R) ⊕ encode(M). Output the ciphertext C as:

    C = (g^r, R · Y1^r, Y2^{r·m2} · g^{m1}, I)   if β = 0;
        (h^r, R · Y1^r, Y2^{r·m2}, I)            if β = 1.

    We note that in the first case, C ∈ G1 × GT × G1 × {0,1}^{ℓ(λ)}, while in the second C ∈ G2 × GT × G2 × {0,1}^{ℓ(λ)}.
Decrypt(sk, C). The decryption algorithm parses the secret key sk = (β, a1, a2) and the ciphertext C = (C1, C2, C3, C4). Next, it computes:

    R = C2 / e(C1, h)^{a1}   if β = 0;
        C2 / e(g, C1)^{a1}   if β = 1.

    Then it computes M′ = F(R) ⊕ C4 ∈ {0,1}^{ℓ(λ)} and outputs the message M = decode(M′).
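As a consistency check (our own derivation, for the β = 0 case; β = 1 is symmetric with e(g, C1)), decryption recovers R because C1 = g^r and C2 = R · e(g,h)^{a1·r}:

```latex
\frac{C_2}{e(C_1, h)^{a_1}}
  = \frac{R \cdot e(g,h)^{a_1 r}}{e(g^r, h)^{a_1}}
  = \frac{R \cdot e(g,h)^{a_1 r}}{e(g,h)^{r a_1}}
  = R,
```

so M′ = F(R) ⊕ C4 = F(R) ⊕ F(R) ⊕ encode(M) = encode(M), and decode(M′) returns M.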
_Discussion._ Like the circular-secure scheme of Boneh et al. [13], the above cryptosystem is a variation on El Gamal [20]. It is a practical system, which on first glance might be somewhat reminiscent of schemes the readers are used to seeing in the literature. The scheme includes a few “artificial” properties: (1) placing a public key in either G1 or G2 at random and (2) the fact that the ciphertext value C3 is unused in the decryption algorithm. We will shortly see that these features are “harmless” in a semantic-security sense, but very useful for recovering the secret keys of the system in the presence of a two-cycle. While it is not unusual for counterexamples to have artificial properties (e.g., [16,23]), we
can address these points as well.[3] In the full version of this work [17], we show
that property (1) can be removed by doubling the length of the ciphertext. For
property (2), we observe that many complex protocols such as group signatures
(e.g., [12]) combine ciphertexts with other components that are unused in decryption but are quite important to the protocol as a whole. Thus, we believe
our counterexample is not that far-fetched. It is possible that such an attack
could exist on one of today’s commonly-used encryption algorithms.
We first show that Πcpa meets the standard notion of CPA security.
**Theorem 3.** _Encryption scheme Πcpa is IND-CPA secure under the Decisional Diffie-Hellman Assumption in G1 and G2 (SXDH)._
The proof is given in the full version of this work [17]. It is relatively standard
and involves repeated applications of the DDH assumption and PRG security.
**4.3** **The Attack**
Despite being IND-CPA-secure, cryptosystem Πcpa is not even weakly circular
secure for 2-cycles. Specifically, given a circular encryption of two keys, we show
that an adversary can distinguish another ciphertext with advantage 1/2. Our
adversary actually does much more than this: with probability 1/2 over the coins
used in key generation, it can recover both secret keys.
This is the first circular attack that allows the adversary to recover the secret
keys. (In the full version of this work [17], we discuss how to improve these
probabilities to almost 1.) Our attack combines elements of both ciphertexts in
an attempt to recover sk_A, which can then be used to decrypt the first ciphertext and obtain sk_B. It is counterintuitive that this is possible, given that it is easy
to see that IND-CPA-security guarantees that it is safe for one of them to send
their message.
**Theorem 4.** _Πcpa is not IND-WCIRC-CPA^2-secure._
_Proof._ We give a PPT adversary A = (A1, A2) such that Adv^{2-wcirc-cpa}_{Πcpa,A}(λ) is equal to 1/2. Since IND-WCIRC-CPA security requires that this advantage be negligible, this attack breaks security. The adversary proceeds as follows. The first stage of the adversary, A1, obtains the two public keys, which we will write as pk_A and pk_B, and an encrypted cycle, which we will write as (C_A, C_B).

If both keys have β = 0 or β = 1 (call this event E1), the adversary aborts and instructs the second stage (A2) to output a random bit. Since the two keys are independently generated by the challenger, this event will occur with probability exactly 1/2. Below we will condition on E1 not happening, and wlog assume that pk_A = (0, e(g,h)^{a1}, g^{a2}) and pk_B = (1, e(g,h)^{b1}, h^{b2}). The corresponding secret keys sk_A = (0, a1, a2), sk_B = (1, b1, b2) are not known to the adversary.
3 While our scheme is different from that of Acar et al. [2], that scheme also has similar
artificial properties such as the presence of values that are not used in decryption.
We write the given ciphertexts C_A = (c_{A,1}, c_{A,2}, c_{A,3}, c_{A,4}) and C_B = (c_{B,1}, c_{B,2}, c_{B,3}, c_{B,4}). A1 will output two arbitrary distinct messages, and request that the challenge use pk_A. For the state passed to A2, it now computes:

    X := c_{B,2} · e(c_{A,1}, c_{B,3}) / e(c_{A,3}, c_{B,1}).

A1 sets ŝk_A = decode(c_{B,4} ⊕ F(X)) and passes this with the challenge messages as state to A2.

A2 receives a ciphertext y and the passed state. It parses ŝk_A as a secret key for Πcpa and computes Dec(ŝk_A, y), and tests if this is equal to either of the challenge messages. If so, it outputs the corresponding bit. Otherwise it outputs a random bit.
Let’s explore why this test works. Write C_A = Enc(pk_A, sk_B) and C_B = Enc(pk_B, sk_A). Then:

    C_A = (c_{A,1}, c_{A,2}, c_{A,3}, c_{A,4})
        = (g^r, R · e(g,h)^{r·a1}, g^{r·a2·b2 + b1}, F(R) ⊕ encode(sk_B))
    C_B = (c_{B,1}, c_{B,2}, c_{B,3}, c_{B,4})
        = (h^s, S · e(g,h)^{s·b1}, h^{s·a2·b2}, F(S) ⊕ encode(sk_A))

for some r, s ∈ Zp and R, S ∈ GT. Then we have that:

    X := c_{B,2} · e(c_{A,1}, c_{B,3}) / e(c_{A,3}, c_{B,1})
       = S · e(g,h)^{s·b1} · e(g^r, h^{s·a2·b2}) / e(g^{r·a2·b2 + b1}, h^s)
       = S · e(g,h)^{s·b1} · e(g,h)^{r·s·a2·b2} / (e(g,h)^{r·s·a2·b2} · e(g,h)^{s·b1})
       = S.

Thus, A1 recovers ŝk_A = sk_A as decode(c_{B,4} ⊕ F(S)), and A2 will correctly guess bit b in this case.
Write b̂ for the output of A2. We have

    Adv^{2-wcirc-cpa}_{Πcpa,A}(λ) = 2 Pr[b̂ = b] − 1
        = 2 (Pr[b̂ = b | E1] Pr[E1] + Pr[b̂ = b | ¬E1] Pr[¬E1]) − 1
        = 2 ((1/2)·(1/2) + 1·(1/2)) − 1
        = 1/2.

This completes the proof.
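The exponent arithmetic of this attack can be sanity-checked mechanically. Below is a toy Python model (entirely ours): group elements are represented by their discrete logs modulo the group order and the pairing by multiplication of exponents, so it verifies only the algebra of the attack, not any security property (in particular, exponents are visible by construction here). F is a SHA-256 expander, encode packs (β, a1, a2) into bytes, and key generation is conditioned on the good event (β_A = 0, β_B = 1):

```python
import hashlib, random

P = (1 << 127) - 1            # prime group order; elements live as exponents mod P
ELEN = 1 + 16 + 16            # |encode(beta, a1, a2)| in bytes

def pair(x, y):               # e(g^x, h^y) = e(g,h)^(x*y): multiply the exponents
    return x * y % P

def F(t):                     # PRG from a G_T exponent to ELEN pseudorandom bytes
    out = b"".join(hashlib.sha256(t.to_bytes(16, "big") + bytes([c])).digest()
                   for c in range(2))
    return out[:ELEN]

def encode(sk):
    beta, x1, x2 = sk
    return bytes([beta]) + x1.to_bytes(16, "big") + x2.to_bytes(16, "big")

def decode(b):
    return b[0], int.from_bytes(b[1:17], "big"), int.from_bytes(b[17:33], "big")

def xor(a, b):
    return bytes(u ^ v for u, v in zip(a, b))

def encrypt(pk, M):
    beta, Y1, Y2 = pk         # Y1: exponent of a G_T element; Y2: of a G_1/G_2 element
    _, m1, m2 = M
    r, R = random.randrange(1, P), random.randrange(P)
    C3 = (Y2 * r * m2 + (m1 if beta == 0 else 0)) % P   # extra g^{m1} only when beta = 0
    return r, (R + Y1 * r) % P, C3, xor(F(R), encode(M))

a1, a2, b1, b2 = (random.randrange(1, P) for _ in range(4))
sk_A, pk_A = (0, a1, a2), (0, a1, a2)   # exponents of (beta, e(g,h)^{a1}, g^{a2})
sk_B, pk_B = (1, b1, b2), (1, b1, b2)

C_A = encrypt(pk_A, sk_B)               # the encrypted 2-cycle
C_B = encrypt(pk_B, sk_A)

# X = c_{B,2} * e(c_{A,1}, c_{B,3}) / e(c_{A,3}, c_{B,1}); division in G_T is
# subtraction of exponents in this model.
X = (C_B[1] + pair(C_A[0], C_B[2]) - pair(C_A[2], C_B[0])) % P
assert decode(xor(C_B[3], F(X))) == sk_A   # sk_A recovered; it then decrypts C_A
```

The cancellation in the last step is exactly the S · e(g,h)^{s·b1} · e(g,h)^{r·s·a2·b2} / (e(g,h)^{r·s·a2·b2} · e(g,h)^{s·b1}) = S computation from the proof.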
**4.4** **Extension: A Counterexample for CCA Security**
We show that there exists an IND-CCA-secure cryptosystem, which suffers a
complete break when Alice and Bob trade secret keys over an insecure channel;
i.e., transmit the two-key cycle E(pk_A, sk_B) and E(pk_B, sk_A). Our construction
follows the “double-encryption” approach to building IND-CCA systems from
IND-CPA systems as pioneered by Naor and Yung [34] and refined by Dolev,
Dwork and Naor [19] and Sahai [37]. Our building blocks will be:
1. The IND-CPA-secure cryptosystem Πcpa = (G, E, D) from Section 4. Let E(pk, m; r) be the encryption of m under public key pk with randomness r.
2. An adaptively non-malleable non-interactive zero-knowledge (NIZK) proof
system with unpredictable simulated proofs and uniquely applicable proofs
for the language L of consistent pairs of encryptions, defined as:
    L = { (e0, e1, c0, c1) : ∃ m, r0, r1 ∈ {0,1}* s.t. c0 = E(e0, m; r0) and c1 = E(e1, m; r1) }.
A proof system for L can be realized under relatively mild assumptions, such
as the difficulty of factoring Blum integers (e.g., [37]). One complication is that
the secret keys for this cryptosystem now change and the construction must be
adapted accordingly, so that the secret key can still be recovered by the adversary
during a circular attack. We show that this is possible.
## 5 Conclusion and Open Problems
In this work, we presented a natural relaxation of the circular security definition,
which may prove interesting for positive results in its own right. We demonstrated that its guarantees are not already captured by standard definitions
of encryption. To do this, we presented symmetric and public-key encryption
systems that are secure in the IND-CPA and IND-CCA sense, but fail catastrophically in the presence of an encrypted cycle. This provides the first answer to the
foundational question on whether IND-CCA-security captures (weak or regular)
circular security for all cycles larger than self-loops. In either case, it does not.
Our work leaves open the interesting problem of finding a public-key counterexample for cycles of size ≥ 3. Secondly, while our symmetric counterexample depended only on the existence of AE-secure symmetric encryption, our public-key counterexample, like that of Acar et al. [2], required a specific bilinear map assumption. It would be highly interesting to find a counterexample assuming only that IND-CPA- or IND-CCA-secure systems exist.
Finally, we observe that our public-key counterexample contains a novel and curious property: certain combinations of independently generated ciphertexts trigger the release of their underlying plaintext. From Rabin's 1/2-OT system to DH-DDH gap groups, the cryptographic community has a strong history of turning such oddities to an advantage. If we view a cryptosystem with this property as a new primitive, what new functionalities can be realized using it?
**Acknowledgments. The authors thank Ronald Rivest for the suggestion to**
view the public key counterexample in Section 4 as a potential building block
for other functionalities.
## References
1. Abadi, M., Rogaway, P.: Reconciling two views of cryptography (the computational
soundness of formal encryption). J. Cryptology 15(2), 103–127 (2002)
2. Acar, T., Belenkiy, M., Bellare, M., Cash, D.: Cryptographic Agility and Its Relation to Circular Encryption. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS,
vol. 6110, pp. 403–422. Springer, Heidelberg (2010)
3. Adão, P., Bana, G., Herzog, J., Scedrov, A.: Soundness of Formal Encryption in the Presence of Key-Cycles. In: de Capitani di Vimercati, S., Syverson, P.F., Gollmann, D. (eds.) ESORICS 2005. LNCS, vol. 3679, pp. 374–396. Springer, Heidelberg (2005)
4. Akavia, A., Goldwasser, S., Vaikuntanathan, V.: Simultaneous Hardcore Bits and
Cryptography against Memory Attacks. In: Reingold, O. (ed.) TCC 2009. LNCS,
vol. 5444, pp. 474–495. Springer, Heidelberg (2009)
5. Applebaum, B., Cash, D., Peikert, C., Sahai, A.: Fast Cryptographic Primitives
and Circular-Secure Encryption Based on Hard Learning Problems. In: Halevi, S.
(ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 595–618. Springer, Heidelberg (2009)
6. Ateniese, G., Camenisch, J., de Medeiros, B.: Untraceable RFID tags via insubvertible encryption. In: CCS 2005, pp. 92–101 (2005)
7. Backes, M., Pfitzmann, B., Scedrov, A.: Key-dependent message security under active attacks – BRSIM/UC-soundness of Dolev-Yao-style encryption with key cycles. J. of Comp. Security 16(5), 497–530 (2008)
8. Ballard, L., Green, M., de Medeiros, B., Monrose, F.: Correlation-resistant storage. Technical Report TR-SP-BGMM-050705, Johns Hopkins University, CS Dept. (2005), http://spar.isi.jhu.edu/~mgreen/correlation.pdf
9. Belenkiy, M., Chase, M., Kohlweiss, M., Lysyanskaya, A.: P-signatures and Non-interactive Anonymous Credentials. In: Canetti, R. (ed.) TCC 2008. LNCS, vol. 4948, pp. 356–374. Springer, Heidelberg (2008)
10. Bellare, M., Namprempre, C.: Authenticated encryption: Relations among notions
and analysis of the generic composition paradigm. J. Cryptology 21(4), 469–491
(2008)
11. Black, J., Rogaway, P., Shrimpton, T.: Encryption-Scheme Security in the Presence
of Key-Dependent Messages. In: Nyberg, K., Heys, H.M. (eds.) SAC 2002. LNCS,
vol. 2595, pp. 62–75. Springer, Heidelberg (2003)
12. Boneh, D., Boyen, X., Shacham, H.: Short Group Signatures. In: Franklin, M. (ed.)
CRYPTO 2004. LNCS, vol. 3152, pp. 41–55. Springer, Heidelberg (2004)
13. Boneh, D., Halevi, S., Hamburg, M., Ostrovsky, R.: Circular-Secure Encryption from Decision Diffie-Hellman. In: Wagner, D. (ed.) CRYPTO 2008. LNCS,
vol. 5157, pp. 108–125. Springer, Heidelberg (2008)
14. Camenisch, J., Chandran, N., Shoup, V.: A Public Key Encryption Scheme Secure against Key Dependent Chosen Plaintext and Adaptive Chosen Ciphertext
Attacks. In: Joux, A. (ed.) EUROCRYPT 2009. LNCS, vol. 5479, pp. 351–368.
Springer, Heidelberg (2009)
15. Camenisch, J.L., Lysyanskaya, A.: An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation. In: Pfitzmann, B. (ed.)
EUROCRYPT 2001. LNCS, vol. 2045, pp. 93–118. Springer, Heidelberg (2001)
16. Canetti, R., Goldreich, O., Halevi, S.: The random oracle methodology, revisited.
J. of the ACM 51(4), 557–594 (2004)
17. Cash, D., Green, M., Hohenberger, S.: New definitions and separations for circular security. Cryptology ePrint Archive, Report 2010/144 (2012), http://eprint.iacr.org/2010/144
18. Dodis, Y., Kalai, Y.T., Lovett, S.: On cryptography with auxiliary input. In: STOC
2009, pp. 621–630 (2009)
19. Dolev, D., Dwork, C., Naor, M.: Nonmalleable cryptography. SIAM J. Computing 30(2), 391–437 (2000)
20. El Gamal, T.: A Public Key Cryptosystem and a Signature Scheme Based on
Discrete Logarithms. In: Blakely, G.R., Chaum, D. (eds.) CRYPTO 1984. LNCS,
vol. 196, pp. 10–18. Springer, Heidelberg (1985)
21. Galbraith, S.D.: Supersingular Curves in Cryptography. In: Boyd, C. (ed.) ASIACRYPT 2001. LNCS, vol. 2248, pp. 495–513. Springer, Heidelberg (2001)
22. Goldwasser, S., Micali, S.: Probabilistic encryption. J. Comput. Syst. Sci. 28(2),
270–299 (1984)
23. Goldwasser, S., Kalai, Y.T.: On the (In)security of the Fiat-Shamir Paradigm. In:
FOCS 2003, p. 102 (2003)
24. Green, M., Hohenberger, S.: Universally Composable Adaptive Oblivious Transfer.
In: Pieprzyk, J. (ed.) ASIACRYPT 2008. LNCS, vol. 5350, pp. 179–197. Springer,
Heidelberg (2008)
25. Groth, J., Sahai, A.: Efficient Non-interactive Proof Systems for Bilinear Groups.
In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 415–432. Springer,
Heidelberg (2008)
26. Haitner, I., Holenstein, T.: On the (Im)Possibility of Key Dependent Encryption.
In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 202–219. Springer, Heidelberg (2009)
27. Halevi, S., Krawczyk, H.: Security under key-dependent inputs. In: ACM CCS
2007, pp. 466–475 (2007)
28. Hastad, J., Impagliazzo, R., Levin, L.A., Luby, M.: A pseudorandom generator
from any one-way function. SIAM J. Computing 28(4), 1364–1396 (1999)
29. Hofheinz, D., Unruh, D.: Towards Key-Dependent Message Security in the Standard Model. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 108–
126. Springer, Heidelberg (2008)
30. Katz, J., Lindell, Y.: Introduction to Modern Cryptography. Chapman &
Hall/CRC (2008)
31. Laud, P., Corin, R.: Sound Computational Interpretation of Formal Encryption
with Composed Keys. In: Lim, J.-I., Lee, D.-H. (eds.) ICISC 2003. LNCS, vol. 2971,
pp. 55–66. Springer, Heidelberg (2004)
32. McCullagh, N., Barreto, P.S.L.M.: A New Two-Party Identity-Based Authenticated
Key Agreement. In: Menezes, A. (ed.) CT-RSA 2005. LNCS, vol. 3376, pp. 262–
274. Springer, Heidelberg (2005)
33. Naor, M., Segev, G.: Public-Key Cryptosystems Resilient to Key Leakage. In:
Halevi, S. (ed.) CRYPTO 2009. LNCS, vol. 5677, pp. 18–35. Springer, Heidelberg
(2009)
34. Naor, M., Yung, M.: Public-key cryptosystems provably secure against chosen ciphertext attacks. In: STOC 1990, pp. 427–437 (1990)
35. Rackoff, C., Simon, D.R.: Non-interactive Zero-Knowledge Proof of Knowledge
and Chosen Ciphertext Attack. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS,
vol. 576, pp. 433–444. Springer, Heidelberg (1992)
36. Rogaway, P., Shrimpton, T.: A Provable-Security Treatment of the Key-Wrap
Problem. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 373–390.
Springer, Heidelberg (2006)
37. Sahai, A.: Non-malleable non-interactive zero knowledge and adaptive chosen-ciphertext security. In: FOCS 1999, pp. 543–553 (1999)
38. Scott, M.: Authenticated id-based key exchange and remote log-in with simple token and pin number (2002), http://eprint.iacr.org/2002/164
DOI 10.1186/s13174-016-0052-x

#### Journal of Internet Services and Applications

### RESEARCH Open Access

# Object-NoSQL Database Mappers: a benchmark study on the performance overhead

##### Vincent Reniers*, Ansar Rafique, Dimitri Van Landuyt and Wouter Joosen
**Abstract**
In recent years, the hegemony of traditional relational database management systems (RDBMSs) has declined in
favour of non-relational databases (NoSQL). These database technologies are better adapted to meet the
requirements of large-scale (web) infrastructures handling Big Data by providing elastic and horizontal scalability.
Each NoSQL technology however is suited for specific use cases and data models. As a consequence, NoSQL adopters
are faced with tremendous heterogeneity in terms of data models, database capabilities and application
programming interfaces (APIs). Opting for a specific NoSQL database poses the immediate problem of vendor or
technology lock-in. A solution has been proposed in the shape of Object-NoSQL Database Mappers (ONDMs), which
provide a uniform abstraction interface for different NoSQL technologies.
Such ONDMs however come at a cost of increased performance overhead, which may have a significant economic
impact, especially in large distributed setups involving massive volumes of data.
In this paper, we present a benchmark study quantifying and comparing the performance overhead introduced by
Object-NoSQL Database Mappers, for create, read, update and search operations. Our benchmarks involve five of the
most promising and industry-ready ONDMs: Impetus Kundera, Apache Gora, EclipseLink, DataNucleus and Hibernate
OGM, and are executed both on a single node and a 9-node cluster setup.
Our main findings are summarised as follows: (i) the introduced overhead is substantial for database operations
in-memory, however on-disk operations and high network latency result in a negligible overhead, (ii) we found
fundamental mismatches between standardised ONDM APIs and the technical capabilities of the NoSQL database, (iii)
search performance overhead increases linearly with the number of results, (iv) DataNucleus and Hibernate OGM’s
search overhead is exceptionally high in comparison to the other ONDMs.
**Keywords: Object-NoSQL Database Mappers, Performance evaluation, Performance overhead, MongoDB**
**1** **Introduction**
Online systems have evolved into the large-scale web and
mobile applications we see today, such as Facebook and
Twitter. These systems face a new set of problems when
working with a large number of concurrent users and
massive data sets. Traditionally, Internet applications are
supported by a relational database management system
(RDBMS). However, relational databases have shown
key limitations in horizontal and elastic scalability [1–3].
Additionally, enterprises employing RDBMS in a
*Correspondence: vincent.reniers@cs.kuleuven.be
Department of Computer Science, KU Leuven, Celestijnenlaan 200A, B-3001
Heverlee, Belgium
distributed setup often come at a high licensing cost,
and per CPU charge scheme, which makes scaling over
multiple machines an expensive endeavour.
Many large Internet companies such as Facebook,
Google, LinkedIn and Amazon identified these limitations
[1, 4–6] and in-house alternatives were developed, which
were later called non-relational or NoSQL databases.
These provide support for elastic and horizontal scalability by relaxing the traditional consistency requirements
(the ACID properties of database transactions), and offering a simplified set of operations [3, 7, 8]. Each NoSQL
database is tailored for a specific use case and data model,
and distinction is for example commonly made between
column stores, document stores, graph stores, etc. [9].
© The Author(s). 2017 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
[International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and](http://creativecommons.org/licenses/by/4.0/)
reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the
Creative Commons license, and indicate if changes were made.
This is a deviation from the traditional “one-size-fits-all”
paradigm of RDBMS [2], and leads to more diversity and
heterogeneity in database technology. Due to their specific nature and their increased adoption, there has been
a steep rise in the creation of new NoSQL databases.
In 2009, there were around 50 NoSQL databases [10],
whereas today we see over 200 different NoSQL technologies [11]. As a consequence, there is currently large heterogeneity in terms of interface, data model, architecture
and even terminology across NoSQL databases [7, 12].
Picking a specific NoSQL database introduces the risk
of vendor or technology lock-in, as the application code
has to be written exclusively to its interface [7, 13]. Vendor lock-in hinders future database migrations, which in
the still recent and volatile state of NoSQL is undesirable,
and additionally makes the creation of hybrid and cross-technology or cross-provider storage configurations [14]
more challenging.
Fortunately, a solution has been proposed in the shape
of Object-NoSQL Database Mappers (ONDM) [7, 12, 13].
ONDMs provide a uniform interface and standardised
data model for different NoSQL databases or even relational databases. Even multiple databases can be used
interchangeably, a characteristic called as polyglot or
_cross-database persistence [13, 15]. These systems sup-_
port translating a common data model and operations to
the native database driver. Despite these benefits, several
concerns come to mind with the adoption of such middleware, and the main drawback would be the additional
performance overhead associated with mapping objects
and translating APIs. The performance impact potentially
has serious economic consequences as NoSQL databases
tend to run in large cluster environments and involve massive volumes of data. As such, even the smallest increase
in performance overhead on a per-object basis can have a
significant economic cost.
In this paper, we present the results of an extensive
and systematic study in which we benchmark the performance overhead of five different open-source Java-based ONDMs: Impetus Kundera [16], EclipseLink [17],
Apache Gora [18], DataNucleus [19] and Hibernate
OGM [20]. These were selected on the basis of industry relevance, rate of ongoing development activity and
comparability. We benchmarked the main operations of
write/insert, read, update and a set of six distinct search
queries on MongoDB. MongoDB is currently one of the most widely adopted and mature NoSQL document databases; in addition, it is the only database supported by all five ONDMs. The benchmarks presented
in this paper are obtained in a single-node MongoDB
setup and in a distributed MongoDB cluster consisting of
nine nodes.
The main contribution of this paper is that it quantifies
the performance cost associated with ONDM adoption,
as such allowing practitioners and potential adopters to
make informed trade-off decisions. In turn, our results
inform ONDM technology providers and vendors about
potential performance issues, allowing them to improve
their offerings where necessary. In addition, this is to our
knowledge the first study that involves an in-depth performance overhead comparison for search operations. We
specifically focus on six distinct search queries of varying
complexity.
In addition, the study is a partial replica study of an
earlier performance study [21], which benchmarked three
existing frameworks. We partially confirm the previous
findings, yet in turn strengthen this study by: (i) adopting
an improved measurement methodology, with the use of
Yahoo!’s Cloud Serving Benchmark (YCSB) [3], an established benchmark for NoSQL systems, and (ii) focusing
on an updated set of promising ONDMs.
Our main findings first and foremost confirm that current ONDMs do introduce an additional performance overhead that may be considered substantial. As these ONDMs follow a similar design, the introduced overhead is roughly comparable: the write, read and update overhead ranges between 4-14%, 4-21% and 60-194% respectively (on a cluster setup). The overhead on update
performance is significant due to interface mismatches,
i.e. situations in which discrepancies between the uniform API and the NoSQL database capabilities negatively
impact performance.
Regarding search, we found that query performance
overhead can become substantial, especially for search
queries involving many results, and secondly, that
DataNucleus and Hibernate OGM’s search overhead is
exceptionally high in comparison to the other ONDMs.
The remainder of this paper is structured as follows: Section 2 discusses the current state and background of Object-NoSQL Database Mappers. Section 3
states the research questions of our study and Section 4
discusses the experimental setup and motivates the selection of ONDMs. Section 5 subsequently presents the
results of our performance evaluation on write, read, and
update operations, whereas Section 6 presents the performance results of search operations. Section 7 discusses the
overall results, whereas Section 8 connects and contrasts
our work to related studies. Finally, Section 9 concludes
the paper and discusses our future work.
**2** **Object-NoSQL Database Mappers**
This section provides an overview of the current state of
Object-NoSQL Database Mappers (ONDMs) and motivates their relevance in the context of NoSQL technology.
**2.1** **Object-mapping frameworks for NoSQL**
In general, object mapping frameworks convert in-memory data objects into database structures (e.g. database rows) before persisting these objects in the
database. In addition, such frameworks commonly provide a uniform, technology-independent programming
interface and as such enable decoupling the application
from database specifics, facilitating co-evolution of the
application and the database, and supporting the migration towards other databases.
In the context of relational databases, such frameworks
are commonly referred to as “Object-Relational Mapping” (ORM) tools [22], and these tools are used extensively in practice. In a NoSQL context, these frameworks
are referred to as “Object-NoSQL Database Mapping”
(ONDM) tools [12] or “Object-NoSQL Mapping (ONM)”
tools [23].
In the context of NoSQL databases, data mapping frameworks are highly compelling because of the
increased risk of vendor lock-in associated to NoSQL
technology: without such platforms, the application has to
be written for each specific NoSQL database and due to
the heterogeneity in technology, programming interface
and data model [7, 13], later migration becomes difficult. As shown in an earlier study, the use of ONDMs significantly simplifies porting an application to another NoSQL database [21].
An additional benefit is the support for multiple
databases, commonly referred to as database interoperability or cross-database and polyglot persistence [13, 15].
Cross-database persistence facilitates the use of multiple NoSQL technologies, each potentially optimised for
specific requirements such as fast read or write performance. For example, static data such as logs can be stored
in a database that provides very fast write performance,
while cached data can be stored in an in-memory key-value database. Implementing such scenarios without an
object-database mapper comes at the cost of increased
application complexity.
However, ONDM technology only emerged fairly
recently, and its adoption in industry is rather modest. Table 1 outlines the benefits and disadvantages of
using ONDM middleware. The main argument against
the adoption of ONDMs is the additional performance
overhead. The study presented in this paper focuses on
quantifying this overhead. In the following section, we
outline the current state of ONDM middleware.
**2.2** **Current state of ONDMs**
In this paper, we focus on object-database mappers that support application portability over multiple
NoSQL databases. Examples are Hibernate OGM [20],
EclipseLink [17], Impetus Kundera [16] and Apache
Gora [18].
Table 2 provides an overview of the main features of several ONDMs such as: application programming interfaces
(APIs), support for query languages and database support.
The API is the predominant characteristic as it determines the used data model and the features that are made
accessible to application developers. A number of standardised persistence interfaces exist, such as the Java Persistence API (JPA) [24], Java Data Objects (JDO) [25] and
the NPersistence API [26] for .NET. Some products, such as Apache Gora [18], offer custom, non-standardised development APIs.
Many of the currently-existing ONDMs (for Java) implement JPA. Examples are EclipseLink [17], DataNucleus
[19] and Impetus Kundera [16]. Some of these products
support multiple interfaces. For example, DataNucleus
supports JPA, JDO and REST. JPA relies extensively on
annotations. Classes and attributes are annotated to indicate that their instances should be persisted to a database.
The annotations can cover aspects such as the relationships, actual column name, lazy fetching of objects,
predefined query statements and embedding of entities.
Associated with JPA is its uniform query language
called the Java Persistence Query Language (JPQL) [24].
It is a portable query language which works regardless
of the underlying database. JPQL defines queries with
complex search expressions on entities, including their
relationships [24].
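As a concrete illustration (the `Person` entity and its fields below are hypothetical, not taken from the paper's benchmark schema), JPQL expresses queries against the object model, so the same query strings remain valid regardless of which database the ONDM targets:

```java
// Illustrative JPQL query strings against a hypothetical Person entity.
// JPQL targets entities and their attributes rather than native database
// structures, which is what makes it portable across databases.
public class JpqlSamples {
    // Lookup on primary key; functionally comparable to a JPA find().
    static final String BY_ID =
        "SELECT p FROM Person p WHERE p.id = :id";

    // Search expression navigating a relationship, as JPQL permits.
    static final String BY_CITY =
        "SELECT p FROM Person p WHERE p.address.city = :city";
}
```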
The uniform interface (e.g. JPA) and query language (e.g.
JPQL) allow users to abstract their application software from the specific database. However, this abstraction
comes at a performance overhead cost, which stems from
translating operations and data objects to the intended
native operations and data structures and vice versa. For
example, on write, the object is translated to the intended
data structure of the underlying NoSQL database, while
on read, the query operation is translated to the native
query. Once the result is retrieved, the retrieved data
structure is converted back into an object.
**Table 1 Advantages and disadvantages of adopting ONDM middleware**
| Advantages | Disadvantages |
|---|---|
| Unified interface, query language and data model for multiple databases | Performance overhead incurred from translating the uniform interface and data model to its native counterparts |
| Increased application maintainability | Potential loss of database-specific features due to the abstraction level of the ONDM |
| Cross-database persistence and database portability | |
| Third-party functionality (e.g. caching) | |
**Table 2 Features and database support for the evaluated ONDMs**
| | Hibernate OGM | Kundera | Apache Gora | EclipseLink | DataNucleus |
|---|---|---|---|---|---|
| Evaluated version | 4.1.1 Final | 2.15 | 0.6 | 2.5.2 | 5.0.0.M5 |
| Interface | JPA | JPA, REST | Gora API | JPA | JPA, JDO, REST |
| Query languages | JPQL, Native Queries | JPQL, Native Queries | Query interface | JPQL, Expressions, Native Queries | JPQL, JDOQL, Native Queries |
| RDBMS | ✕ | ✓ | ✕ | ✓ | ✓ |
| NoSQL databases | MongoDB, Neo4j, Ehcache, CouchDB, Infinispan | MongoDB, Neo4j, CouchDB, Cassandra, ElasticSearch, HBase, Redis, Oracle NoSQL | MongoDB, HBase, Cassandra, Apache Solr, Apache Accumulo | MongoDB, JMS, XML, Oracle AQ, Oracle NoSQL | MongoDB, HBase, Cassandra, Neo4j, JSON, XML, Amazon S3, GoogleStorage, NeoDatis |
Database support for such mapping and translation
operations varies widely. For example, EclipseLink is a
mature ORM framework which has introduced NoSQL
support only gradually over time, and it currently only
supports Oracle NoSQL and MongoDB. While Kundera
was intended specifically for NoSQL databases, it now
also provides RDBMS support by using Hibernate ORM.
Despite the heterogeneity between RDBMS and NoSQL,
a combination of both can be used.
The following section introduces our main research
questions, upon which we have built this benchmark
study.
**3** **Research questions**
Our study is tailored to address the following research
questions:
**RQ1** What is the overhead (absolute and relative) of a write, read and update operation in the selected ONDMs?
**RQ2** What is the significance of the performance overhead in a realistic database deployment?
**RQ3** What is the impact of the development API on the performance overhead?
**RQ4** How does the performance overhead of a JPQL search query (search on primary key) compare to that of the JPA read operation (find on primary key)?
**RQ5** What is the performance overhead of JPQL query translation, and does the nature/complexity of the query play a role?
**Expectations and initial hypotheses.** We summarise
our expectations and up-front hypotheses below:
- RQ1: Although earlier studies [21, 23] have yielded
mixed results, in general, the performance overhead
has been shown to be rather substantial: ranging
between 10 and 70% depending on the operation for a
single-node setup. DataNucleus in particular is
shown to have tremendous overhead [23]. We expect
to confirm such results and thus increase confidence
in these findings.
- RQ2: ONDMs are by design independent of the
underlying database, and therefore, we expect the
absolute overhead not to be affected by the setup or
the complexity of the database itself. As a
consequence, we expect the overhead to be potentially more significant (i.e. a higher relative overhead) for low-latency setups (e.g. a single node
setup or an in-memory database), in comparison to
setups featuring more network latency or disk I/O
(e.g. a database cluster or a disk-intensive setup).
- RQ3: We expect to find that the programming
interface does have a certain impact on performance.
For example, the JPA standard relies heavily on code
annotations; we expect the extensive use of reflection
on these objects and their annotations within the
ONDM middleware to substantially contribute to the
overall performance overhead.
- RQ4: This is in fact an extension to RQ3, focusing on
which development API incurs the highest
performance overhead. On the one hand, JPA is
costly due to its reliance on annotation-based
reflection, while on the other hand, query translation
can become costly as well. To our knowledge, this is
the first benchmark study directly comparing the JPA
and JPQL performance overhead over NoSQL search
queries.
- RQ5: We expect complex queries to be more costly
in query translation. Additionally, queries retrieving
multiple results should have increased overhead as
each result has to be mapped into an object.
The following section presents the design and setup of
our benchmarks that are tailored to provide answers to the
above questions.
**4** **Benchmark setup**
This section discusses the main design decisions involved
in the setup of our benchmark study. Section 4.1 first
discusses the overall architecture of an ONDM framework, and then Section 4.2 discusses the measurement
methodology for the performance overhead. Section 4.3
subsequently motivates our selection of Object-NoSQL
Database Mapping (ONDM) platforms for this study,
whereas Section 4.4 elaborates further on the benchmarks we have adopted and extended for our study. Next,
Section 4.5 discusses the different deployment configurations in which we have executed these benchmarks.
Finally, Section 4.6 summarises how our study is tailored
to provide answers to the research questions introduced
in the previous section.
**4.1** **ONDM Framework architecture**
The left-hand side of Fig. 1 depicts the common, layered architecture of Object-NoSQL Database Mappers (ONDMs). As shown at the top of Fig. 1, an
ONDM platform supports a Uniform Data Model
in the application space. In the Java Persistence API
(JPA) for example, these are the annotated classes. In
Apache Gora however, mapping classes are generated
from user specifications. An ONDM provides a Uniform
Interface based on the Uniform Data Model. The
Middleware Engine implements the operations of the
Uniform Interface and delegates these operations to
the correct Database Mapper.
The Database Mapper is a pluggable module that
implements the native Database Driver’s API.
Different Database Mapper modules are created for
different NoSQL databases. The Database Mapper
converts the uniform data object to the native data structure, and calls the corresponding native operation(s). The
Database Driver executes these native operations
and handles all communication with the database.
The right hand side of Fig. 1 illustrates the situation
in which no ONDM framework is employed, and the
application directly uses the native client API to communicate with the database.
Comparing both alternatives in Fig. 1 clearly illustrates
the cost of object mapping as a key contributor to the
performance overhead introduced by ONDM platforms.
Both write requests (which involve translating in-memory
objects or API calls to native API calls) and read requests
or search queries (which involve translating database
objects to application objects) rely extensively on database
mapping. Our benchmark study, therefore, focuses on
measuring this additional performance overhead.
In addition, Fig. 1 clearly shows that an ONDM is
designed to be maximally technology-agnostic: other than
the Database Mapper which makes abstraction of a
specific database technology, the inner workings of the
ONDM do not take the specifics of the selected database
technology into account.
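The mapping step performed by the Database Mapper can be sketched as follows; the entity and document layout are illustrative stand-ins (not taken from any of the evaluated ONDMs) for the translation towards a document store such as MongoDB:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the Database Mapper role in Fig. 1: translating a
// uniform in-memory entity to a native document structure (a Map standing
// in for a MongoDB document) and back. All names are hypothetical.
public class PersonMapper {
    static class Person {
        final String id, firstName, lastName;
        Person(String id, String firstName, String lastName) {
            this.id = id; this.firstName = firstName; this.lastName = lastName;
        }
    }

    // Entity -> native document (executed on every write).
    static Map<String, Object> toDocument(Person p) {
        Map<String, Object> doc = new HashMap<>();
        doc.put("_id", p.id);
        doc.put("firstName", p.firstName);
        doc.put("lastName", p.lastName);
        return doc;
    }

    // Native document -> entity (executed on read, once per result).
    static Person fromDocument(Map<String, Object> doc) {
        return new Person((String) doc.get("_id"),
                          (String) doc.get("firstName"),
                          (String) doc.get("lastName"));
    }
}
```

Both directions of this translation are executed on the hot path of every operation, which is precisely where the overhead measured in this study originates.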
**4.2** **Measurement methodology**
In order to measure the overhead of ONDMs, we first
measure tONDM, the total time it takes to perform a
database operation (read, write, update, search), which
is the sum of time spent by the ONDM components
depicted on the left-hand side of Fig. 1.
In addition, we measure tDB, the total time it takes
to execute the exact same database operations using the native client API (right-hand side of Fig. 1). By subtracting both measurements, we can characterise the performance overhead introduced by the ONDM framework as tOverhead = tONDM − tDB. This is exactly the additional overhead incurred by deciding to adopt an ONDM framework instead of developing against the native client API.
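As a sketch, the two overhead measures used throughout the paper reduce to:

```java
// Overhead characterisation from Section 4.2: the absolute overhead is
// tOverhead = tONDM - tDB, and the relative overhead divides by the
// native-driver baseline (the percentages reported in Tables 3 and 4).
public class Overhead {
    static double absoluteMicros(double tOndm, double tDb) {
        return tOndm - tDb;
    }
    static double relativePercent(double tOndm, double tDb) {
        return 100.0 * (tOndm - tDb) / tDb;
    }
}
```

With the single-node write latencies of Table 3 (EclipseLink at 446μs against the native driver's 403μs), this yields an absolute overhead of 43μs and a relative overhead of about 10.7%, consistent with the 10.8% the table reports from the full, unrounded averages.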
To maintain comparability between different ODNMs,
we must: (i) select a specific database and database version
that is supported by the selected ONDM frameworks (our
baseline for comparison), (ii) ensure that each ONDM
framework uses the same database driver to communicate with the NoSQL database, (iii) run the exact same
benchmarks in our different setups. These decisions are
explained in the following sections.
**4.3** **ONDM selection**
Our benchmark study includes the following five
ONDMs: EclipseLink [17], Hibernate OGM [20], Impetus
Kundera [16], DataNucleus [19] and Apache Gora [18].
Table 2 lists these ONDMs and summarises their main
characteristics and features.
As mentioned above, to maintain comparability of
our benchmark results, it is imperative to ensure that
the selected ONDMs employ the exact same NoSQL
database, and database driver version as our baseline.
Driven by Table 2, we have selected MongoDB version 2.6
as the main baseline for comparison. In contrast to other
NoSQL technologies such as Cassandra for which many
alternative client APIs and drivers are available, MongoDB provides only a single Java driver which is used by
all of the selected frameworks. Furthermore, MongoDB
can be used in various deployment configurations such
as a single node or cluster setup, which will allow us to
address RQ2.
In addition to MongoDB support as the primary selection criterion, we have also taken into account other comparability and industry relevance criteria: (i) JPA support,
(ii) search support via JPQL, (iii) maturity and level of
ongoing development activity. For example, we have deliberately excluded frameworks such as KO3-NoSQL [27] as
their development seems to have been discontinued.
Although Apache Gora [18] is not JPA-compliant, it is
included for the purpose of exploring the potential impact
of the development API on the performance overhead
introduced by these systems (RQ3).
**4.4** **Benchmark design**
Our benchmarks are implemented and executed on top
of the Yahoo! Cloud Serving Benchmark (YCSB) [3], an
established benchmark framework initially developed to
evaluate the performance of NoSQL databases. YCSB provides a number of facilities to accurately measure and
control the benchmark execution of various workloads on
NoSQL platforms.
**Read, write, update.** YCSB comes with a number of predefined workloads and is extensible, in the sense that
different database client implementations can be added
(by implementing the com.yahoo.ycsb.DB interface,
which requires implementations for read, update, insert
and delete (CRUD) operations on primary key).
Our implementation provides such extensions for
the selected ONDMs (Hibernate OGM, DataNucleus
EclipseLink, Kundera and Apache Gora). Especially the
implementations for the JPA-compliant ONDMs are
highly similar. To avoid skewing the results and to ensure
comparability of the results, we did not make use of
any performance optimisation strategies offered by the
ONDMs, such as caching, native queries and batch operations.
Furthermore, since YCSB already includes client implementations for NoSQL databases, we simply reused its client implementation for MongoDB for obtaining our
baseline measurements.
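In essence, a YCSB binding reduces to implementing the CRUD-on-primary-key operations behind a common interface. The sketch below uses a local stand-in interface rather than the real com.yahoo.ycsb.DB class, whose exact signatures are not reproduced here; only the operation set mirrors the description above:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the kind of client YCSB drives: one implementation per
// platform under test (each ONDM, plus the native driver as baseline),
// all exposing the same operations on primary key.
public class YcsbStyleBinding {
    interface Client {
        void insert(String key, Map<String, String> fields);
        Map<String, String> read(String key);
        void update(String key, Map<String, String> fields);
        void delete(String key);
    }

    // Trivial in-memory implementation; it only illustrates the contract
    // that every benchmarked client must satisfy.
    static class InMemoryClient implements Client {
        private final Map<String, Map<String, String>> store = new HashMap<>();
        public void insert(String key, Map<String, String> fields) {
            store.put(key, new HashMap<>(fields));
        }
        public Map<String, String> read(String key) { return store.get(key); }
        public void update(String key, Map<String, String> fields) {
            store.get(key).putAll(fields);
        }
        public void delete(String key) { store.remove(key); }
    }
}
```

Because every platform is exercised through the same four operations, the only variable between runs is the client implementation, which is what makes the overhead measurements comparable.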
**Search.** YCSB does not support benchmarking search
queries out of the box. Therefore, we have defined a set
of 6 read queries, which we execute on each platform
in YCSB. These queries differ in both complexity and
number of results. In support of these benchmarks, we
populate our existing objects with more realistic values
such as firstName and lastName, instead of YCSB’s default behaviour of generating lengthy strings of random characters.
Note that we do not benchmark query performance for
Apache Gora, since it has no support for JPQL and lacks
support for basic query operators such as AND, OR[1].
**4.5** **Deployment setup**
To address RQ2 and assess the impact of the database
deployment configuration on the performance overhead
introduced by ONDMs, we have executed our benchmarks over different deployment configurations. Figure 2
depicts these different configurations graphically. The
client node labeled YCSB Benchmark runs the ONDM
framework or the native driver which are driven by the
YCSB benchmarks discussed above.
The single-node setup (cf. Fig. 2a) involves two commodity machines, one executing the YCSB benchmark,
and the other hosting a single MongoDB database
instance.
The MongoDB cluster (cf. Fig. 2b) consists of a single router server, 3 configuration servers and 5 database
shards. Each database is sharded and all of the inserted
entities in each database are load balanced across all 5
database shards without replication.
Each node consists of a Dell Optiplex 755 (Intel® Core™
2 Duo E6850 3.00GHz, 4GB DDR2, 250GB hard disk).
In both cases, the benchmarks were executed in a local
lab setting, and the average network latency between
nodes in our lab setup is quite low: around 135μs. As
**Fig. 2 Deployment setups: (a) single-node setup and (b) 9-node cluster**
a consequence, our calculations of the relative overhead
often represent the absolute worst case.
**4.6** **Setup: research questions**
Below, we summarise how we address the individual
research questions introduced in Section 3:
- RQ1: Create, read, update. We answer RQ1 by
running the benchmarks discussed above for the
create, read and update operations. Our benchmarks
are sequential: in the load phase, 20 million entities
(20GB) are written to the database. In the transaction
phase, the desired workload is executed on the data
set (involving read and update). The inserted entity is
a single object.
- RQ2: Significance of performance overhead. To
put the absolute performance overhead
measurements into perspective, we have executed
our benchmarks in two different environments: (i) a
remote single-node MongoDB instance, and (ii) a
9-node MongoDB cluster. These concrete setups are
depicted in Fig. 2. In both cases, the actual execution
of the benchmark is done on a separate machine to
avoid CPU contention. The inserted data size
consumes the entire memory pool of the single node
and cluster shards. Read requests are not always able
to find the intended record in-memory, resulting in
lookup on disk. Based on the two types of responses
we determine the general impact of ONDMs on
overhead for deployments of varying data set sizes
and memory resources.
- RQ3: Impact of development API. By comparing
the results for the JPA middleware (Kundera,
Hibernate ORM, DataNucleus and EclipseLink) to
the results for Apache Gora (which offers custom,
non-JPA compliant developer APIs), we can at least
exploratively assess the potential performance impact
of the interface.
- RQ4: JPA vs JPQL. To answer RQ4, we compare the
basic JPA find on primary key (read lookup) to a
JPQL query on primary key. By comparing both, we
can assess the extra overhead cost of JPQL query
translation.
- RQ5: Search query performance overhead. We
have benchmarked queries on secondary indices in
increasing order of query complexity for the ONDMs
and compare the results to the benchmarks of the
native MongoDB client API.
The next two sections present and discuss our findings
in relation to these five research questions.
**5** **Write, read and update performance results**
This section presents the results of our benchmarks that
provide answers to questions RQ1-3. Research questions
**RQ4-5 regarding search performance are discussed in**
Section 6.
The next sections first determine the overhead introduced by the selected ONDMs on the three operations
(write, read, and update) in the context of the single
remote node setup. In order to understand how the
ONDMs introduce overhead, the default behaviour of
MongoDB (our baseline for comparison) must be taken
into account, which we discuss next in Section 5.1.
**5.1** **Database behaviour**
In our benchmarks, twenty million records (which corresponds to roughly 20GB) are inserted into the single node
MongoDB database. Considering the machine only has
4GB RAM, it is clear that not all of the records will fit
in-memory. As a consequence, read operations will read
a record from memory around 5% of the time, but mainly
require disk I/O. In-memory operations are, on average,
30 times as fast as operations requiring disk I/O. Similarly,
the update operations will only be able to update a subset of objects in-memory. This, however, does not apply
to the write operation: on write, the database regularly
flushes records to disk, which also influences the baseline. Figure 3 shows the distribution in latency for each
type of operation. We can clearly identify a bimodal distribution for read and update operations. Write operations
are normally distributed, however skewed to the right, as
expected.
The aim of this study is to identify the overhead introduced by ONDMs. However, the variance on latency for
objects on-disk is quite high (±25ms) and in this case, the
behaviour of the ONDM frameworks may no longer be
the main factor determining the overhead. Therefore, we have analysed the read and update distributions separately, comparing the in-memory and on-disk data sets independently.
**5.2** **RQ1 Impact on write, read and update performance**
**on a single node**
Table 3 shows the overhead for write, read and update
operations. Read and update operations are divided
according to the overhead for objects in-memory and
on-disk. We first discuss the results for operations in-memory. The write and read overhead of ONDMs ranges
respectively between [9.9%, 36.5%] and [6.7%, 42.2%] and
as such may be considered significant. However, the
update operation is considerably slower and introduces
twice as much latency for a single update operation in
comparison to the native MongoDB driver[2]. The main
reason for this is that update operations in the ONDM
frameworks first perform a read operation before actually updating a certain object. This is in contrast to the
native database’s capabilities: for example MongoDB can
update records without requiring a read. Surprisingly enough, each of the observed frameworks requires a read
before update, resulting in the addition of read latency on
update and thus significant overhead. Moreover, DataNucleus executes the read again, even though the object provided on update has already been read, thus executing the read twice. This is a result of DataNucleus's consistency mechanisms, which verify local objects against the database. The requirement of a read on update in the ONDMs is a clear mismatch between the uniform interface and the native database's capabilities.
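The mismatch can be made concrete by counting client-database round trips per logical update; the toy store and the two strategies below are our own model of the behaviour just described, not real driver or ONDM code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy store counting client-database round trips, to make the
// read-before-update interface mismatch concrete.
public class UpdateModel {
    static class CountingStore {
        final Map<String, String> data = new HashMap<>();
        int roundTrips = 0;
        String read(String key) { roundTrips++; return data.get(key); }
        void write(String key, String value) { roundTrips++; data.put(key, value); }
    }

    // Native MongoDB-style update: the record is updated in place,
    // without a prior read (one round trip).
    static void nativeUpdate(CountingStore db, String key, String value) {
        db.write(key, value);
    }

    // Observed ONDM behaviour: the object is read first, then written
    // back, so one logical update costs roughly read + write latency.
    static void ondmUpdate(CountingStore db, String key, String value) {
        db.read(key);
        db.write(key, value);
    }
}
```

This doubling of round trips is consistent with the roughly doubled update latencies in Tables 3 and 4, where the extra read is paid on every ONDM update.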
While operations on in-memory data structures show
consistent overhead results, this is not the case for operations which trigger on-disk lookup. It may seem that the
ONDM frameworks in some cases outperform the native
database driver, but this is mainly due to the variance of
database latency. The ordering in performance is not preserved for on-disk operations, and Kundera in particular
experienced a higher latency. Considering the small overhead of around [15μs, 300μs] which ONDMs introduce
for operations in-memory, this is only a minimal contributor in the general time for on-disk operations. For
example, MongoDB takes on average 15.9ms ± 5.2ms for
read on-disk. This is an increase in latency of 2 to 3 orders
of magnitude. In other words, the relative overhead introduced by ONDMs is insignificant, when data needs to be
searched for on-disk.
**5.3** **RQ2: Impact of the database topology**
As shown for a single remote node, the overhead on write,
read or update is significant for in-memory data. In case
of the cluster, we expect the absolute overhead to be comparable to the single-node setup. Table 4 shows the results
for write, read and update. As shown, the relative overhead percentages are substantially smaller in comparison
to the single node. EclipseLink has only a minor write
and read overhead of respectively 2.5 and 3.6%, which can
be explained by considering that the absolute overhead
remains more or less constant, while the baseline latency
does increase. For example, EclipseLink’s absolute read
overhead is 15μs for the single node, and identically 15μs
on the cluster. However, the write overhead decreases
from 43μs to 29μs. This is attributed to the fact that MongoDB experienced more outliers, as its standard deviation
for write is 12μs higher. The behaviour of each run is
always slightly different, therefore the standard deviation,
and thus behaviour of the database must be taken into
account when interpreting these results. The ideal case is
read in-memory, where the standard deviation is almost
identical for all four frameworks and the native MongoDB
driver. In general, the write and read overhead is still quite
significant and ranges around [4%, 9%] for EclipseLink and
Kundera, which are clearly more optimised than the other
frameworks.
**Table 3 Average latency and relative overhead for each platform on a single node**
| Platform | Write (μs), n = 20.000.000 | Read in-memory (μs), n = 45.000 | Read on-disk (ms), n = 750.000 | Update in-memory (μs), n = 39.000 | Update on-disk (ms), n = 750.000 |
|---|---|---|---|---|---|
| MongoDB | 403 ± 110 (–) | 217 ± 34 (–) | 15.9 ± 5.2 (–) | 298 ± 106 (–) | 19.3 ± 9.1 (–) |
| EclipseLink | 446 ± 105 (+10.8%) | 232 ± 41 (+6.7%) | 14.2 ± 5.0 (−10.45%) | 579 ± 91 (+93.9%) | 16.9 ± 8.0 (−12.0%) |
| Kundera | 442 ± 96 (+9.9%) | 256 ± 57 (+17.7%) | 17.1 ± 5.6 (+8.0%) | 338 ± 56 (+13.3%) | 20.7 ± 9.8 (+7.6%) |
| Hibernate OGM | 452 ± 72 (+12.3%) | 289 ± 42 (+32.8%) | 15.1 ± 6.5 (−4.7%) | 620 ± 53 (+107.6%) | 16.8 ± 8.0 (−12.8%) |
| Apache Gora | 495 ± 92 (+22.9%) | 282 ± 65 (+29.8%) | 14.5 ± 5.0 (−8.5%) | 570 ± 108 (+91.0%) | 17.4 ± 8.2 (−9.5%) |
| DataNucleus | 550 ± 76 (+36.5%) | 309 ± 64 (+42.2%) | 14.3 ± 5.0 (−9.8%) | 882 ± 49 (+194.8%) | 17.7 ± 8.3 (−8.0%) |
**Table 4 Average latency and relative overhead for each platform on a cluster**
| Platform | Write (μs), n = 20.000.000 | Read in-memory (μs), n = 360.000 | Read on-disk (ms), n = 610.000 | Update in-memory (μs), n = 300.000 | Update on-disk (ms), n = 600.000 |
|---|---|---|---|---|---|
| MongoDB | 694 ± 90 (–) | 434 ± 26 (–) | 11.7 ± 3.8 (–) | 534 ± 122 (–) | 14.6 ± 6.7 (–) |
| EclipseLink | 723 ± 78 (+4.1%) | 449 ± 27 (+3.6%) | 11.0 ± 3.5 (−5.4%) | 1052 ± 72 (+97.1%) | 15.2 ± 6.8 (+3.6%) |
| Kundera | 725 ± 79 (+4.4%) | 471 ± 27 (+8.7%) | 11.2 ± 3.5 (−4.2%) | 858 ± 57 (+60.8%) | 15.9 ± 7.4 (+8.9%) |
| Hibernate OGM | 764 ± 68 (+10.1%) | 505 ± 28 (+16.4%) | 11.2 ± 3.6 (−3.6%) | 1083 ± 67 (+102.9%) | 14.9 ± 6.6 (+2.1%) |
| Apache Gora | 791 ± 62 (+14.0%) | 506 ± 26 (+16.7%) | 11.5 ± 3.7 (−1.2%) | 1034 ± 75 (+93.7%) | 15.7 ± 7.2 (+7.5%) |
| DataNucleus | 788 ± 54 (+13.6%) | 526 ± 27 (+21.2%) | 11.4 ± 3.6 (−2.2%) | 1567 ± 40 (+193.8%) | 15.4 ± 6.5 (+5.5%) |
In case of update, the frameworks again introduce a substantial overhead, because they perform a read operation
before an update. The cost of the additional read is even
higher in the cluster context, considering that a single read
takes around 434μs.
When operations occur on-disk, it may seem that the frameworks outperform the baseline. Once again, this is attributable to the general behaviour of the MongoDB cluster: the standard deviation for on-disk reads with the baseline is, for example, 10% higher than that of the frameworks. The results of each workload execution may also vary because records are load-balanced at run-time. However, the cluster allows for a more precise determination of the overhead, as more memory resources are available, which in turn results in less variable database behaviour such as on-disk lookups. In addition, the write performance is less affected by the regular flush operation of a single node.
**5.4** **RQ3: Impact of the interface on performance**
In contrast to the four JPA-compliant frameworks, we now
include Apache Gora in our benchmarks, which offers a
non-standardised, REST-based programming interface.
Tables 3 and 4 present the average latency of Apache Gora for write, read and update on the two database topologies. Even though the interface and data model are quite different from JPA, the overhead is very similar.
Surprisingly, we also do not see a large difference in update performance, as we observe the same behaviour for Apache Gora's update operation: Apache Gora's API specifies no explicit update operation, but instead uses the same write method put(K key, T object) for updating records. As a result, the object has to be read before updating. If an object has not yet been read and needs to be updated, it may be best to perform an update query instead.
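The cost structure of such a put-based update can be illustrated with a toy in-memory store (a sketch for illustration only, not Apache Gora's actual implementation): every update of a field necessarily pays for one read plus one write.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.UnaryOperator;

// Toy store with a put(K, T)-only write interface, mirroring why a
// Gora-style update costs a read plus a write. Purely illustrative;
// this is not Apache Gora code.
class PutOnlyStore<K, T> {
    private final Map<K, T> backend = new HashMap<>();
    final AtomicInteger reads = new AtomicInteger();
    final AtomicInteger writes = new AtomicInteger();

    void put(K key, T obj) {            // the only write operation available
        writes.incrementAndGet();
        backend.put(key, obj);
    }

    T get(K key) {
        reads.incrementAndGet();
        return backend.get(key);
    }

    /** Update via read-modify-put: one read plus one write per update. */
    void update(K key, UnaryOperator<T> change) {
        T current = get(key);           // the extra read the paper measures
        put(key, change.apply(current));
    }
}
```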
**5.5** **Conclusions**
In summary, the following conclusions are made from
the results regarding RQ1-3 about the performance of
ONDMs:
- The write, read and update performance overhead
can be considered significant. Overheads are
observed between [4%, 14%] for write, [4%, 21%] for
read and [60%, 194%] for update, on the cluster.
- The relative overhead becomes insignificant as the
database latency increases. Examples are cases which
trigger on-disk lookups or even when a higher
network latency is present.
- Interface mismatches can exist between the uniform
interface and the native database’s capabilities which
decrease performance.
The next section discusses our benchmark results
regarding the performance overhead introduced by the
uniform query language JPQL for the JPA ONDMs.
**6** **JPQL search performance**
Contrary to the name, NoSQL databases often do feature
a query language. In addition, ONDMs provide a uniform
SQL-like query language on top of these heterogeneous
languages. For example, JPA-based object-data mappers
provide a standardised query language called JPQL. We
have evaluated the performance of JPQL for the JPA-based platforms: EclipseLink, Kundera, DataNucleus and
Hibernate OGM.
While it is clear that a create, read or update operation can carry considerable overhead, the question (RQ4) remains whether the overhead of a JPQL search is similar to that of a JPA read. Section 6.1 therefore first compares two different ways to retrieve a single object: using a JPQL search query, or with a JPA lookup.
Then, Section 6.2 addresses RQ5 by considering how
the performance overhead of a JPQL query is affected by
its nature and complexity.
**6.1** **RQ4: Single object search in JPA and JPQL**
We compare a read for a single object using the JPA interface, to the same read in JPQL query notation. This allows
us to determine the exact difference in read overhead
between JPA and JPQL for RQ4.
In order to be able to compare the results from the
earlier JPA read to the JPQL search on the same object
for RQ4, we have re-evaluated the read performance by
inserting 1 million entities (roughly 1GB of data). The
data set is completely in-memory for the single-node and
cluster setup, allowing for a consistent measurement of
the performance overhead. More specifically, our benchmarks compare the performance overhead incurred by
Query A (JPA code) with the overhead incurred by
Query B (JPQL equivalent code) in Listing 1.
**Listing 1 JPQL and JPA search on primary key**
A) entityManager.find(Person.class, id);
B) SELECT p FROM Person p WHERE p.id = :id
Table 5 shows the average latency for a find in JPA and a search in JPQL for the same object. We can clearly see that, in general, a query in JPQL comes at a higher performance cost (RQ4). Additional observations:
- Kundera and EclipseLink both perform similarly in
JPA and JPQL single entity search performance.
- Interestingly, DataNucleus and Hibernate OGM are
drastically slower for JPQL queries.
In DataNucleus the additional JPQL overhead stems
from the translation of the query to a generic expression tree, which is then translated to the native MongoDB
query.
Additionally, DataNucleus makes use of a lazy query
loading approach to avoid memory conflicts. As a result,
it executes a second read call to verify if there are any
records remaining.
Code inspection in Hibernate OGM revealed that this
platform extensively re-uses components from the Hibernate ORM engine, which may result in additional overhead due to architectural legacy.
JPQL provides more advanced search functionality than
JPA’s single find on primary key. The next section discusses the performance benchmark results on a number
of JPQL queries of increasing complexity.
**Table 5 The average latency on single object search in JPA, JPQL, and MongoDB's native read**

| Platform | JPA (1-node) | JPQL (1-node) | JPA (9-node) | JPQL (9-node) |
|---|---|---|---|---|
| MongoDB (native read) | 197μs | – | 434μs | – |
| Kundera | 243μs | 285μs | 478μs | 520μs |
| EclipseLink | 218μs | 291μs | 448μs | 520μs |
| Hibernate OGM | 270μs | 1.804μs | 521μs | 2.098μs |
| DataNucleus | 288μs | 811μs | 492μs | 1.236μs |
**6.2** **RQ5: Relation between the nature and complexity of**
**the query and its overhead**
This section discusses the results of our search benchmarks, and more specifically how the overhead of a search
query is related to the complexity of the query for RQ5.
Queries which retrieve multiple results incur more performance overhead, as all the results have to be mapped to
objects.
The benchmarked search queries are presented in
Listing 2. The respective queries are implemented in
JPQL and executed in the context of all four ONDM
platforms. Our baseline measurement is the equivalent
MongoDB native query. The actual search arguments are
chosen randomly at runtime by YCSB and are marked as
:variable.
The queries are ordered according to the average results
retrieved per query. Query C is a query on secondary
indices using the AND operator and always retrieves a single result. By comparison to Query B, which retrieves a
single object on the primary key, we can determine the
impact of a more complex query text translation.
In contrast, Queries D, E and F retrieve on average 1.35, 94 and 2864 objects, respectively. When we compare the performance of Queries D, E and F, we can assess what impact the number of results has on the overhead. First, we evaluate the case where we retrieve a single result with a more complex query.
**Listing 2 JPQL search queries**
C) SELECT p FROM Person p WHERE
   (p.email = :email) AND
   (p.personalnumber = :personalnumber)

D) SELECT p FROM Person p WHERE
   p.email = :email

E) SELECT p FROM Person p WHERE
   (p.personalnumber < :upperBound) AND
   (p.personalnumber > :lowerBound)

F) SELECT p FROM Person p WHERE
   (p.firstName = :firstName) OR
   (p.lastName = :lastName)
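To make the semantics of Queries C–F concrete, they can be modelled as filters over an in-memory list. This is a hedged sketch: the field names follow the queries above, but the `Person` class and the data used here are invented for demonstration.

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only: what result sets Queries C-F select, expressed as
// Java stream filters. The field names mirror the JPQL queries; the
// class itself is a made-up stand-in for the paper's Person entity.
class QuerySemantics {
    static class Person {
        final String email, firstName, lastName;
        final int personalnumber;
        Person(String e, String f, String l, int p) {
            email = e; firstName = f; lastName = l; personalnumber = p;
        }
    }

    // Query C: AND over two secondary indices, always one result here
    static List<Person> queryC(List<Person> ps, String email, int nr) {
        return ps.stream()
                 .filter(p -> p.email.equals(email) && p.personalnumber == nr)
                 .collect(Collectors.toList());
    }

    // Query D: single secondary index, a handful of results
    static List<Person> queryD(List<Person> ps, String email) {
        return ps.stream()
                 .filter(p -> p.email.equals(email))
                 .collect(Collectors.toList());
    }

    // Query E: exclusive range on personalnumber
    static List<Person> queryE(List<Person> ps, int lower, int upper) {
        return ps.stream()
                 .filter(p -> p.personalnumber > lower && p.personalnumber < upper)
                 .collect(Collectors.toList());
    }

    // Query F: OR over two fields, typically the largest result set
    static List<Person> queryF(List<Person> ps, String first, String last) {
        return ps.stream()
                 .filter(p -> p.firstName.equals(first) || p.lastName.equals(last))
                 .collect(Collectors.toList());
    }
}
```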
**_6.2.1_** **_JPQL search using the AND operator_**
Table 6 presents the results for Query C, the JPQL
search using AND on secondary indices. The query always
returns a single object in our experiment. In comparison to the results from JPQL search on a primary key in
Table 5, we observe an increase in baseline latency due to
the use of secondary indices and the AND operator.
Additionally, for the ONDMs we observe an increase in read overhead for the more complex query on the single node for Kundera and EclipseLink. As it turns out, EclipseLink is less efficient than Kundera in handling the more complex query. Furthermore, DataNucleus shows a
higher increase in performance overhead, as the query is translated to a more complex expression tree first, and secondly due to the additional read from its lazy loading approach.

**Table 6 The average latency and overhead for Query C, which retrieves a single object**

| Platform | Latency (1-node) | Overhead (1-node) | Latency (9-node) | Overhead (9-node) |
|---|---|---|---|---|
| MongoDB (native driver) | 281μs | – | 621μs | – |
| Kundera | 408μs | 127μs | 743μs | 122μs |
| EclipseLink | 453μs | 172μs | 783μs | 162μs |
| Hibernate OGM | 590μs | 309μs | 921μs | 301μs |
| DataNucleus | 1.010μs | 729μs | 1.581μs | 960μs |
Surprisingly, Hibernate OGM's absolute overhead on the single node is only 309μs for the more complex Query C, while for the simple search (Query B) on the primary key it was 1.607μs. Clearly, Hibernate OGM has some inefficiencies regarding its query performance.
**_6.2.2_** **_JPQL search on a secondary index_**
Query D is a simple search on a secondary index of
a person. The query retrieves on average 1.35 objects.
Therefore, multiple records can be retrieved on search
which have to be mapped into objects.
Table 7 shows the average latency and overhead of Query D for the four JPA platforms, as well as the latency of the similar query implemented in MongoDB's native query language.
Again, we conclude that Kundera and EclipseLink are
most efficient at handling the query.
**Table 7 The average latency and overhead for Query D, which retrieves on average 1.35 objects**

| Platform | Latency (1-node) | Overhead (1-node) | Latency (9-node) | Overhead (9-node) |
|---|---|---|---|---|
| MongoDB (native driver) | 250μs | – | 576μs | – |
| Kundera | 347μs | 97μs | 677μs | 100μs |
| EclipseLink | 396μs | 146μs | 729μs | 152μs |
| Hibernate OGM | 553μs | 304μs | 883μs | 306μs |
| DataNucleus | 957μs | 707μs | 1.520μs | 944μs |

**_6.2.3_** **_JPQL search on a range of values_**

Table 8 shows the average latency for the JPQL search Query E. The performance overhead introduced by the ONDM platforms increases, as on average 94 results have to be mapped into objects, and ranges between [453μs, 3.615μs] on the single node and [473μs, 3.988μs] on the cluster.

**Table 8 The average latency and overhead for Query E, which retrieves on average 94 objects**

| Platform | Latency (1-node) | Overhead (1-node) | Latency (9-node) | Overhead (9-node) |
|---|---|---|---|---|
| MongoDB (native driver) | 943μs | – | 1.901μs | – |
| Kundera | 1.396μs | 453μs | 2.374μs | 473μs |
| EclipseLink | 1.556μs | 613μs | 2.550μs | 649μs |
| Hibernate OGM | 4.558μs | 3.615μs | 5.889μs | 3.988μs |
| DataNucleus | 3.831μs | 2.888μs | 4.786μs | 2.885μs |
**_6.2.4_** **_JPQL search using the OR operator_**
The average latency of Query F is presented in Table 9. Again, the performance overhead introduced by the ONDMs increases, as this query retrieves on average 2.864 records; it now ranges between [7.6ms, 56.6ms] and [10.2ms, 42ms] on the respective database topologies. These results allow us to highlight the specific object-mapping cost of each ONDM. Kundera appears to have significantly more efficient object-mapping than EclipseLink. The average overhead per object retrieved ranges between [3μs, 17μs].
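The per-object figure quoted above is simply the total mapping overhead divided by the average number of retrieved objects (2.864 for Query F). A small sketch using the 1-node latencies from Table 9 (the method name `perObjectOverheadUs` is ours, for illustration):

```java
// Sketch of the per-object overhead figure quoted for Query F: total
// overhead divided by the average number of retrieved objects.
// Latencies are the 1-node values from Table 9, in microseconds.
class PerObjectCost {
    static double perObjectOverheadUs(double frameworkUs, double baselineUs,
                                      double avgResults) {
        return (frameworkUs - baselineUs) / avgResults;
    }

    public static void main(String[] args) {
        double baseline = 20226.0;  // MongoDB native driver, Query F, 1-node
        // Kundera: (27989 - 20226) / 2864, roughly 2.7 us per object
        System.out.printf("Kundera: %.1f us/object%n",
                perObjectOverheadUs(27989.0, baseline, 2864.0));
        // EclipseLink: (33640 - 20226) / 2864, roughly 4.7 us per object
        System.out.printf("EclipseLink: %.1f us/object%n",
                perObjectOverheadUs(33640.0, baseline, 2864.0));
    }
}
```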
**Table 9 The average latency and overhead for Query F, which retrieves on average 2.864 objects**

| Platform | Latency (1-node) | Overhead (1-node) | Latency (9-node) | Overhead (9-node) |
|---|---|---|---|---|
| MongoDB (native driver) | 20.226μs | – | 39.689μs | – |
| Kundera | 27.989μs | 7.763μs | 49.889μs | 10.210μs |
| EclipseLink | 33.640μs | 13.414μs | 56.059μs | 16.370μs |
| Hibernate OGM | 58.806μs | 38.580μs | 75.234μs | 35.545μs |
| DataNucleus | 77.093μs | 56.587μs | 81.628μs | 41.993μs |

**6.3** **Search performance conclusion**

In summary, several conclusions can be made from the results regarding RQ4-5 about the query search performance of ONDMs:

- JPQL search on a primary key has a higher overhead than JPA's find for the same object (RQ4).
- The performance overhead of a JPQL query is closely related to the complexity of its translation and the amount of results retrieved (RQ5), and there are large differences between the ONDMs in terms of the performance cost associated with search queries.

Finally, the additional performance overhead per search result in general decreases for queries involving large amounts of results, which motivates the use of JPQL for large result sets.
The next section discusses our benchmark results in
further detail.
**7** **Discussion**
First, Section 7.1 discusses the main threats to validity. Then, we provide a more in-depth discussion about
some of the more surprising results of our benchmarks, more specifically about Kundera’s fast update
performance (Section 7.2), and the observed mismatch
between standards such as JPA and NoSQL technology
(Section 7.3). Finally, we discuss the significant overhead
in search performance for Hibernate OGM and DataNucleus (Section 7.4).
**7.1** **Threats to validity**
As with any benchmark study, a number of threats to
validity apply. We outline the most notable topics below.
**External validity is discussed further below. Internal validity.** We discuss a number of threats:
- Throughput rate control. A possible threat to
validity is related to the method of measurement.
Although YCSB allows specifying a fixed throughput
rate, we did not make use of this function. Limiting
the throughput ensures that no platform is
constrained by the resources of the server or client.
For example, the MongoDB native database driver
can process create, read and update operations at a
faster rate than the ONDMs, as shown. In such a
case, the MongoDB driver may reach its threshold of
maximum performance, as dictated by its
deployment constraints. In contrast, the ONDMs
work at a slower rate and are less likely to reach this
threshold. Consequently, the computing resources of the MongoDB node will not be as much of an issue. When applying throughput rate control, the possibility of reaching this threshold is excluded, and the average latency would be a more truthful depiction of the individual performance.
To increase our confidence in the obtained results,
we did run a smaller-scale additional evaluation in
which we applied throughput rate control (limited to
10.000 operations per write, read and update) and did
not notice any deviations from our earlier results.
Furthermore, during our main experiment we have
measured CPU usage, I/O wait time and memory
usage. From these measurements[3] we gather that no
cluster node used more than 10% CPU usage on
average. Although the single-node database setup
experienced the heaviest load, during workload
execution, it was still idling 50% of the time.
As such, we conclude that the MongoDB cluster and
single-node setup did not reach their limits during
our benchmarks.
- Choice of the baseline. In this study, we implicitly
assume that the choice for MongoDB as the back-end
database has no significant impact on the
performance overhead of ONDMs, because we
subtract the MongoDB latency in our performance
overhead calculations. Furthermore, the
database-specific mapper is a modularly pluggable
module which is independent of the core middleware
engine responsible for data mapping. Each
database-specific implementation only varies in its
implementation of these engine interfaces. These
arguments lead us to believe that there will be
minimal variation in overhead between NoSQL
technologies. We can confirm this by referring to a
previous study on the performance overhead [21], in
which Cassandra and MongoDB were used as the
baseline for comparison. The study shows similar
relative overheads despite using a different database
technology as the baseline for comparison.
**External validity.** There are a number of ways in which the
results may deviate from realistic deployments of ONDM
systems. Specifically, our benchmark is designed to quantify the worst-case performance overhead in a number of
ways.
- Entity relationships. For simplicity, we chose to
work with single entities containing no relationships.
There are a number of different ways relationships
can be persisted in NoSQL databases: denormalizing
to a single entity, storing them as separate entities,
etc. This may have a drastic effect on the object-data
mapper’s performance. A single entity containing no
relationships allows us to monitor the overhead of
each platform without unnecessary complexity. The
performance overhead of an application that relies
extensively on associations between entities may vary
from the results obtained in our study.
- Optimization strategies. The studied ONDMs offer
various caching strategies and transaction control
mechanisms. EclipseLink even supports
cross-application cache coordination, which may
improve performance significantly. As already
discussed in Section 4.4, to maximally ensure
comparability of our results, we disabled these
mechanisms in our benchmarks. In the case of
Object-Relational Mappers (ORMs), the impact of
performance optimizations has already been studied
[28, 29]. A similar study can prove useful for ONDMs
and should be considered future work.
- Database deployment. We have shown that
although these frameworks introduce more or less a
constant absolute performance overhead, the
significance of this performance overhead may
depend highly on the nature and complexity of the
overall database setup and the application case. For
example, in the context of an in-memory database
featuring a high-bandwidth and low-latency
connection, the introduced overhead may be deemed
significant. In contrast, general database deployments
often read from disk and feature a higher network
latency, and in such a context, the introduced
overhead may be considered minimal or negligible.
It is therefore important to stress that, for the above reasons, different and in many cases better performance characteristics can be expected in realistic ONDM deployments.
**7.2** **Kundera’s update performance**
Looking at the update performance results of Impetus
Kundera in Tables 3 and 4, one might conclude that Kundera significantly outperforms EclipseLink and Hibernate
OGM when it comes to updating. However, upon closer
inspection, we discovered that in the tested version of
Kundera an implementation mistake was made.
More specifically, Kundera’s implementation does not
make use of the MongoDB property WriteConcern.
ACKNOWLEDGED, which forces the client to actively wait
until MongoDB acknowledges issued update requests (a
default property in MongoDB since version 2.6 [30]). By
not implementing this, Kundera’s implementation gains
an unfair advantage since some of the network latency is
not included in the measurement.
We have reported this bug in the Kundera bug reporting
system [31].
**7.3** **JPA-NoSQL interface mismatch**
One remarkable result is the observation that update
operations consistently introduce more performance
overhead when compared to read or write operations
(cf. Table 3). The main cause for this is that the JPA standard imposes that updates can only be done on _managed entities_, i.e. it forces the ONDM to read the object prior to update. This causes the update operation to be significantly costlier than a read operation[4]. As
pointed out by [21], similar drawbacks are associated to
delete operations (which were not benchmarked in this
study).
In the context of Object-Relational Mappers (ORMs), this problem is commonly referred to as the _object-relational impedance mismatch_ [32], and one may argue that in a NoSQL context such mismatch problems may be more significant due to the technological heterogeneity among NoSQL systems and the wide range of features and data models supported in NoSQL.
Similar drawbacks apply to JPQL search operations,
especially when there is a discrepancy between the native
search capabilities and the features assumed by JPQL.
Future work is required to determine whether other existing standardised interfaces, such as REST-based APIs or Java Data Objects (JDO), are better suited, and more in-depth research is required toward dedicated, NoSQL-specific abstraction interfaces that can further reduce the cost inherent to database abstraction.
**7.4** **JPQL search performance**
When comparing the results of our query benchmarks
(cf. Section 6), it becomes clear that the performance overhead results for DataNucleus and Hibernate OGM are
drastically worse than those of EclipseLink and Impetus
Kundera: in some cases, Hibernate OGM introduces up to
383% overhead whereas the overhead introduced by the
other two ONDMs never exceeds 66%.
According to the Hibernate OGM Reference Guide [20],
the search implementation is a direct port of the search
implementation of Hibernate’s Object-Relational Mapper (ORM). Architectural legacy could therefore be one
potential explanation for these surprising results.
Similarly to Hibernate OGM, DataNucleus shows a
more consistent overhead of around 300%. In this case,
the overhead is mainly attributed to the fact that it executes additional and unnecessary reads. Furthermore, the
queries are translated first into a more generic expression tree, and then to the native database query. Various
optimization strategies are provided to cache these query
compilations, which might in turn provide more optimal
performance. However, it is clear that the compilation of
queries to generic expression trees, independent of the
data store, takes a toll on performance.
**8** **Related work**
This section addresses three domains of related work:
(i) performance studies on Object-relational Mapper
(ORM) frameworks, (ii) academic prototypes of ObjectNoSQL Database Mappers and (iii) (performance) studies
on ONDMs.
**8.1** **Performance studies on ORM frameworks**
In the Object-relational Mapper (ORM) space, several
studies have evaluated the performance of ORM frameworks, mainly focused on a direct comparison between
frameworks [33–37]. Performance studies were mainly
conducted on Java-based ORM frameworks, however,
some studies also evaluated ORM in .NET based frameworks [38, 39]. However, few studies actually focused
on the overhead, but more on the differences between
the frameworks. The benchmark studies of Sembera [40]
and Kalotra [35] suggest that EclipseLink is slower than
Hibernate. However, a study by ObjectDB actually lists
EclipseLink as faster than Hibernate OGM [41]. The
methods used in each study differ and the results are not
directly applicable to NoSQL. Since none of these studies quantify the exact overhead of these ORM systems,
comparison to our results is difficult.
The studies by Van Zyl et al. [42] and Kopteff [34] compare the performance of Java ORM-frameworks to the
performance of Object-databases. These studies evaluate
whether object databases can be used instead of ORM
tools and traditional relational databases, reducing the
mapping cost.
Although executed in a different technological context
(.NET), the studies of Gruca et al. [38] and Cvetkovic et al.
[39] seem to indicate that there is less overhead associated
to translating abstraction query languages (such as Entity
SQL, LINQ or Hibernate HQL) to SQL in the context of
relational databases, when compared to our results. The
relatively high search overhead in our results is caused by
the larger abstraction gap between NoSQL query interfaces and JPQL (which is a SQL-inspired query language
by origin).
**8.2** **Academic prototypes**
Our study focused mainly on Object-NoSQL Database
Mappers (ONDMs) with a certain degree of maturity and
industry-readiness. Apart from these systems, a number
of academic prototypes exist that provide a uniform API
for NoSQL data stores. This is a very wide range of systems, and not all of them perform object-data mapping.
ODBAPI, presented by Sellami et al. [13], provides a unified REST API for relational and NoSQL data stores.
Dharmasiri et al. [43] have researched a uniform query
implementation for NoSQL. Atzeni et al. [7] and Cabibbo
[12] have presented Object-NoSQL Database Mappers
which employ object entities as the uniform data model.
Cabibbo [12] is the first to coin the term “Object-NoSQL
Datastore Mapper”.
We have excluded such systems as most of these implementations are proof-of-concepts, and few of them are
readily available.
**8.3** **Studies on ONDMs**
Three existing studies have already performed an evaluation and comparison of Object-NoSQL Database
Mappers. Wolf et al. [44] extended Hibernate, the ORM
framework, to support RIAK, a NoSQL Key-Value data
store. In support of this endeavour, they evaluated the
performance and compared it with the performance of
Hibernate ORM configured to use with MySQL. The
study provides valuable insights as to how NoSQL technology can be integrated into object-relational mapping
frameworks.
Störl et al. [23] conducted a comparison and performance evaluation of Object-NoSQL Database Mappers
(ONDMs). However, the study does not quantify the
overhead directly, making a comparison difficult. Moreover, these benchmarks were obtained on a single node,
and as a consequence, the results may be affected by CPU
contention. Highly surprising in their results is the read performance of DataNucleus, which is shown to be at least 40 times as slow as EclipseLink. We only measured similar results when entity enhancement was left enabled at runtime, which recompiles entity classes to a meta model on each read. As a result, this may indicate fundamental flaws in the study's measurement methodology.
Finally, our study is a replica study of an earlier performance study by Rafique et al. [21], and we confirm many
of these results. Our study differs in the sense that: (i) we adopted an improved measurement methodology, providing more insight on the correlation between the overhead and the database's behaviour and setup; (ii) we conducted our evaluation using YCSB (an established NoSQL benchmark); (iii) we focus on a more mature set of ONDMs which have less overhead; and finally (iv) we evaluated the performance impact of ONDMs on search operations.
**9** **Conclusions and future work**
Object-NoSQL Database Mapper (ONDM) systems have
large potential: firstly, they allow NoSQL adopters to abstract away heterogeneous storage technology by making source code independent of specific NoSQL client APIs, and enable them to port their applications relatively easily to different storage technologies. In addition, they are key enablers for novel trends such as federated storage systems, in which the storage tier of the application is composed of a combination of different heterogeneous storage technologies, potentially even hosted by different providers (cross-cloud and federated storage solutions).
There are however a number of caveats, such as the
potential loss of NoSQL-specific features (due to the
mismatch between APIs), and most notably, the additional performance overhead introduced by ONDM systems. The performance benchmarks presented in this
paper have quantified this overhead for a standardised
NoSQL benchmark, the Yahoo! Cloud Serving Benchmark
(YCSB), specifically for create, read and update, and most
notably search operations. In addition, we have explored
the effect of a number of dimensions on the overhead: the storage architecture deployment setup, the number of operations involved, and the impact of the development API on performance.
Future work, however, is necessary on a survey study or gap analysis of existing ORM and ONDM frameworks with support for NoSQL and its features, specifically in the context of e.g. security and cross-database persistence. Additionally, we identify the need for a NoSQL search benchmark, as we have seen YCSB used for these
purposes, although it is not supported by default. In addition, we aim to provide an extended empirical validation
of our results on top of additional NoSQL platform(s).
The results obtained in this study inform potential adopters of ONDM technology about the cost associated with such systems, and provide some indications as to the maturity of these technologies. Especially in the area of search, we have observed large differences among ONDMs in terms of the performance cost.
This work fits in our ongoing research on policy-based
middleware for multi-storage architectures in which these
ONDMs represent a core layer.
**Endnotes**
1 Furthermore, Apache Gora implements most query
functionality based on client-side filtering, which can be
assumed quite slow.
2 The results indicate that this is however not the case for Kundera, which is attributable to an implementation mistake in Kundera's update mechanism (see Section 7.2).
3 Our resource measurements indicate that factors such
as I/O and CPU play a negligible role in the results. For
example, the utilization of ONDM platforms required
only limited additional CPU usage at the client side for
read (Additional file 1).
4 Kundera’s update strategy is slightly different: the
merge( object ) update operation in Kundera reads
the object only when it is unmanaged, whereas in the
other platforms this is explicitly done by the developer.
The solution in Kundera therefore avoids the cost of
mapping the result of the read operation to an object.
**Authors’ information**
The authors are researchers of imec-DistriNet-KU Leuven at the Department of
Computer Science, KU Leuven, 3001 Heverlee, Belgium.
**Competing interests**
The authors declare that they have no competing interests.
Received: 24 February 2016 Accepted: 2 December 2016
**References**
1. Băzăr C, Iosif CS, et al. The transition from RDBMS to NoSQL. A comparative analysis of three popular non-relational solutions: Cassandra, MongoDB and Couchbase. Database Syst J. 2014;5(2):49–59.
2. Stonebraker M, Madden S, Abadi DJ, Harizopoulos S, Hachem N, Helland P. The end of an architectural era: (it's time for a complete rewrite). In: Proceedings of the 33rd International Conference on Very Large Data Bases. Vienna: VLDB Endowment; 2007. p. 1150–1160. http://dl.acm.org/citation.cfm?id=1325851.1325981.
3. Cooper BF, Silberstein A, Tam E, Ramakrishnan R, Sears R. Benchmarking cloud serving systems with YCSB. In: Proceedings of the 1st ACM Symposium on Cloud Computing - SoCC '10. ACM; 2010. p. 143–154. doi:10.1145/1807128.1807152.
4. Lakshman A, Malik P. Cassandra: a decentralized structured storage system. ACM SIGOPS Oper Syst Rev. 2010;44(2):35–40.
5. Chang F, Dean J, Ghemawat S, Hsieh WC, Wallach DA, Burrows M, Chandra T, Fikes A, Gruber RE. Bigtable: a distributed storage system for structured data. ACM Trans Comput Syst (TOCS). 2008;26(2):4.
6. DeCandia G, Hastorun D, Jampani M, Kakulapati G, Lakshman A, Pilchin A, Sivasubramanian S, Vosshall P, Vogels W. Dynamo. ACM SIGOPS Oper Syst Rev. 2007;41(6):205–220. doi:10.1145/1323293.1294281.
7. Atzeni P, Bugiotti F, Rossi L. Uniform access to NoSQL systems. Inform Syst. 2014;43:117–133.
8. Stonebraker M. SQL databases v. NoSQL databases. Commun ACM. 2010;53(4):10–11. doi:10.1145/1721654.1721659.
9. Cattell R. Scalable SQL and NoSQL data stores. ACM SIGMOD Rec. 2011;39(4):12–27.
10. Stonebraker M. Stonebraker on NoSQL and enterprises. Commun ACM. 2011;54(8):10–11.
11. NoSQL databases. http://www.nosql-database.org. Accessed 22 Feb 2016.
12. Cabibbo L. ONDM: an Object-NoSQL Datastore Mapper. Faculty of Engineering, Roma Tre University; 2013. Retrieved June 15th. http://cabibbo.dia.uniroma3.it/pub/ondm-demo-draft.pdf.
13. Sellami R, Bhiri S, Defude B. ODBAPI: a unified REST API for relational and NoSQL data stores. In: 2014 IEEE International Congress on Big Data. IEEE; 2014. p. 653–660. doi:10.1109/bigdata.congress.2014.98.
14. Rafique A, Landuyt DV, Lagaisse B, Joosen W. Policy-driven data management middleware for multi-cloud storage in multi-tenant SaaS. In: 2015 IEEE/ACM 2nd International Symposium on Big Data Computing (BDC); 2015. p. 78–84. doi:10.1109/BDC.2015.39.
15. Fowler M. Polyglot Persistence. 2015. http://martinfowler.com/bliki/PolyglotPersistence.html. Accessed 22 Feb 2016.
16. Impetus: Kundera Documentation. https://github.com/impetus-opensource/Kundera/wiki. Accessed 28 May 2016.
17. EclipseLink: Understanding EclipseLink 2.6. 2016. https://www.eclipse.org/eclipselink/documentation/2.6/concepts/toc.htm. Accessed 27 May 2016.
18. Apache Gora. http://gora.apache.org/. Accessed 28 May 2016.
19. DataNucleus: DataNucleus AccessPlatform. 2016. http://www.datanucleus.org/products/accessplatform_5_0/index.html. Accessed 28 May 2016.
20. Red Hat: Hibernate OGM Reference Guide. 2016. http://docs.jboss.org/hibernate/ogm/5.0/reference/en-US/pdf/hibernate_ogm_reference.pdf. Accessed 28 May 2016.
21. Rafique A, Landuyt DV, Lagaisse B, Joosen W. On the performance impact of data access middleware for NoSQL data stores. IEEE Transactions on Cloud Computing. 2016;PP(99):1–1. doi:10.1109/TCC.2015.2511756.
**[Additional file 1: CPU Metric. (TXT 2 kb)](http://dx.doi.org/10.1186/s13174-016-0052-x)**
**Acknowledgements**
This research is partially funded by the Research Fund KU Leuven (project
GOA/14/003 - ADDIS) and the DeCoMAdS project, which is supported by
VLAIO (government agency for Innovation by Science and Technology).
**Availability of data and materials**
The datasets supporting the conclusions are included within the article. The
[benchmark, which is an extension of YCSB, can be found at: https://github.](https://github.com/vreniers/ONDM-Benchmarker)
[com/vreniers/ONDM-Benchmarker The software is distributed under the](https://github.com/vreniers/ONDM-Benchmarker)
Apache 2.0 license. The project is written in Java and is therefore platform
independent.
**Authors’ contributions**
VR conducted the main part of this research with guidance from AR, who has
done earlier research in this domain. DVL supervised the research and
contents of the paper, and WJ conducted a final supervision. All authors read
and approved the final manuscript.
-----
22. Barnes JM. Object-relational mapping as a persistence mechanism for
object-oriented applications: PhD thesis, Macalester College; 2007.
23. Störl U, Hauf T, Klettke M, Scherzinger S, Regensburg O. Schemaless
nosql data stores-object-nosql mappers to the rescue? In: BTW;
[2015. p. 579–599. http://www.informatik.uni-rostock.de/~meike/](http://www.informatik.uni-rostock.de/~meike/publications/stoerl_btw_2015.pdf)
[publications/stoerl_btw_2015.pdf.](http://www.informatik.uni-rostock.de/~meike/publications/stoerl_btw_2015.pdf)
[24. Oracle Corporation: The Java EE6 Tutorial. 2016. http://docs.oracle.com/](http://docs.oracle.com/javaee/6/tutorial/doc/)
[javaee/6/tutorial/doc/. Accessed 22 Feb 2016.](http://docs.oracle.com/javaee/6/tutorial/doc/)
[25. Apache JDO: Apache JDO. https://db.apache.org/jdo/. Accessed 22 Feb](https://db.apache.org/jdo/)
2016.
[26. NET Persistence API. http://www.npersistence.org/. Accessed 22 Feb 2016.](http://www.npersistence.org/)
[27. Curtis N. KO3-NoSQL. 2007. https://github.com/nichcurtis/KO3-NoSQL.](https://github.com/nichcurtis/KO3-NoSQL)
Accessed 22 Feb 2016.
28. van Zyl P, Kourie DG, Coetzee L, Boake A. The influence of optimisations
on the performance of an object relational mapping tool. 2009150–159.
[doi:10.1145/1632149.1632169.](http://dx.doi.org/10.1145/1632149.1632169)
29. Wu Q, Hu Y, Wang Y. Research on data persistence layer based on
[hibernate framework. 20101–4. doi:10.1109/IWISA.2010.5473662.](http://dx.doi.org/10.1109/IWISA.2010.5473662)
[30. MongoDB: MongoDB Documentation. 2016. https://docs.mongodb.com/](https://docs.mongodb.com/v2.6/)
[v2.6/. Accessed 22 Feb 2016.](https://docs.mongodb.com/v2.6/)
[31. Kundera bug regarding MongoDB’s WriteConcern. https://github.com/](https://github.com/impetus-opensource/Kundera/issues/830)
[impetus-opensource/Kundera/issues/830. Accessed 22 Feb 2016.](https://github.com/impetus-opensource/Kundera/issues/830)
32. Ireland C, Bowers D, Newton M, Waugh K. A classification of
object-relational impedance mismatch. In: Advances in Databases,
Knowledge, and Data Applications, 2009. DBKDA ’09. First International
[Conference On; 2009. p. 36–43. doi:10.1109/DBKDA.2009.11.](http://dx.doi.org/10.1109/DBKDA.2009.11)
33. Higgins KR. An evaluation of the performance and database access
strategies of java object-relational mapping frameworks. ProQuest
[Dissertations and Theses. 82. http://gradworks.umi.com/14/47/1447026.](http://gradworks.umi.com/14/47/1447026.html)
[html.](http://gradworks.umi.com/14/47/1447026.html)
34. Kopteff M. The Usage and Performance of Object Databases compared
[with ORM tools in a Java environment. Citeseer. 2008. http://citeseerx.ist.](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.205.8271&rank=1&q=kopteff&osm=&ossid=)
[psu.edu/viewdoc/summary?doi=10.1.1.205.8271&rank=1&q=kopteff&](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.205.8271&rank=1&q=kopteff&osm=&ossid=)
[osm=&ossid=.](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.205.8271&rank=1&q=kopteff&osm=&ossid=)
35. Kalotra M, Kaur K. Performance analysis of reusable software systems.
[2014773–778. doi:10.1109/CONFLUENCE.2014.6949308.](http://dx.doi.org/10.1109/CONFLUENCE.2014.6949308)
36. Ghandeharizadeh S, Mutha A. An evaluation of the hibernate
object-relational mapping for processing interactive social networking
[actions. 201464–70. doi:10.1145/2684200.2684285.](http://dx.doi.org/10.1145/2684200.2684285)
37. Yousaf H. Performance evaluation of java object-relational mapping tools.
Georgia: University of Georgia; 2012.
38. Gruca A, Podsiadło P. Beyond databases, architectures, and structures:
10th international conference, bdas 2014, ustron, poland, may 27–30,
2014. proceedings. 201440–49. Chap. Performance Analysis of .NET Based
Object–Relational Mapping Frameworks.
[doi:10.1007/978-3-319-06932-6_5.](http://dx.doi.org/10.1007/978-3-319-06932-6_5)
39. Cvetkovi´c S, Jankovi´c D. Objects and databases: Third international
conference, icoodb 2010, frankfurt/main, germany, september 28–30,
2010. proceedings. 2010147–158. Chap. A Comparative Study of the
Features and Performance of ORM Tools in a .NET Environment.
[doi:10.1007/978-3-642-16092-9_14.](http://dx.doi.org/10.1007/978-3-642-16092-9_14)
40. Šembera L. Comparison of jpa providers and issues with migration.
[Masarykova univerzita, Fakulta informatiky. 2012. http://is.muni.cz/th/](http://is.muni.cz/th/365414/fi_m/)
[365414/fi_m/.](http://is.muni.cz/th/365414/fi_m/)
[41. JPA Performance Benchmark (JPAB). http://www.jpab.org/. Accessed 22](http://www.jpab.org/)
Feb 2016.
42. Van Zyl P, Kourie DG, Boake A. Comparing the performance of object
databases and ORM tools. In: Proceedings of the 2006 annual research
conference of the South African institute of computer scientists and
information technologists on IT research in developing couuntries [SAICSIT ’06; 2006. p. 1–11. doi:10.1145/1216262.1216263.](http://dx.doi.org/10.1145/1216262.1216263)
43. Dharmasiri HML, Goonetillake MDJS. A federated approach on
heterogeneous nosql data stores. 2013234–23.
[doi:10.1109/ICTer.2013.6761184.](http://dx.doi.org/10.1109/ICTer.2013.6761184)
44. Wolf F, Betz H, Gropengießer F, Sattler KU. Hibernating in the
cloud-implementation and evaluation of object-nosql-mapping. Citeseer.
-----
# Managing very-large distributed datasets
Miguel Branco, Ed Zaluska, David de Roure, Pedro Salgado,
Vincent Garonne, Mario Lassnig, and Ricardo Rocha
CERN - European Organization for Nuclear Research,
University of Southampton, UK,
University of Innsbruck, Austria
**Abstract. In this paper, we introduce a system for handling very large**
datasets, which need to be stored across multiple computing sites. Data
distribution introduces complex management issues, particularly as computing sites may make use of different storage systems with different internal organizations. The motivation for our work is the ATLAS Experiment for the Large Hadron Collider (LHC) at CERN, where the authors
are involved in developing the data management middleware. This middleware, called DQ2, is charged with shipping petabytes of data every
month to research centers and universities worldwide and has achieved
aggregate throughputs in excess of 1.5 Gbytes/sec over the wide-area network. We describe DQ2’s design and implementation, which builds upon
previous work on distributed file systems, peer-to-peer systems and Data
Grids. We discuss its fault tolerance and scalability properties and briefly
describe results from its daily usage for the ATLAS Experiment.
**Key words: Data Management, Data Grids, Distributed Systems, Grid**
Computing, Datasets
## 1 Introduction
Our work addresses the problem of managing very large datasets. The motivation
is the LHC project at CERN, which is expected to start operation during the
summer of 2008 and continue in production for about twenty years. The LHC
particle accelerator, extending around a 27 km ring buried 100 meters underground, is illustrated in Figure 1, along with the various LHC detectors. The raw data
produced by just one of the LHC detectors (the ATLAS Experiment [1]) exceeds
ten petabytes per year. ATLAS is a worldwide collaboration that will produce
petabytes of data during its lifetime. These data need to be distributed and
stored globally for access by a large number of scientists.
In this paper we start with a review of contributions in the area of distributed
file systems, peer-to-peer and data grids. Based on this prior work, we propose
a new architecture, improving the previous contributions in several respects. We
describe important properties of the system and initial experiences running a
real-world production infrastructure for the ATLAS Experiment.
**Fig. 1. Schematic overview of the LHC accelerator.**
## 2 Existing Work
A number of different Computer Science areas have devised architectures and
software systems to address the problem of very large datasets. Some of the most
relevant areas are distributed file systems, peer-to-peer (P2P) systems and Data
Grids. In this section we introduce the major contributions from these areas.
**2.1** **Distributed File Systems**
NFS [10] is one of the early distributed file systems which continues to be widely
used. It is based on a stateless (up to version 4) client/server protocol implemented using remote procedure calls and supports POSIX-like semantics. With
large numbers of users or under bandwidth constraints, the POSIX-like semantics
hinder the performance and scalability, resulting in NFS being an unattractive
choice to manage datasets at the petabyte scale.
AFS [9] was the first distributed file system to introduce client-side caching.
This property increases the scalability of AFS but introduces additional complexity when handling updates. UNIX "last file write wins" semantics are hard to implement in a scalable manner. AFS introduced "last file close wins" semantics. This limits the universal applicability of AFS but increases its scalability for the common cases of multiple reads with infrequent writes. We are not aware of any AFS-based system handling petabytes of data over a wide-area network. We believe this is due to fundamental AFS design decisions, such as supporting POSIX-like semantics. Coda [15], a successor of AFS, introduced support for disconnected operations when the network connection is lost, but at the petabyte scale Coda suffers from exactly the same issues as AFS.
Another modern cluster file-system is the block-based IBM General Parallel
File System (GPFS) [16]. GPFS provides high-performance I/O due to its ability
to stripe blocks of data from individual files over multiple disks. GPFS has been
demonstrated to work over a wide-area network [17] but under strict deployment
constraints. Block-based systems can be expected to have difficulties scaling to
very large user numbers, particularly in a shared environment.
The Lustre [21] distributed file system was originally developed by Cluster
File Systems. Lustre is inspired by the architecture devised for the Digital VAXclusters, which were built on top of a local file system by requiring data access to interact closely with a distributed lock manager. The core components of Lustre
are the distributed lock manager, the metadata servers, and the object storage targets. Lustre scales to the data-handling requirements discussed previously: tens of thousands of nodes and petabytes of storage. Lustre was designed as a cluster file system for a closed network but has since been expanding to accommodate multi-site and multi-cluster deployments. The Lustre developers have briefly described plans for a "Lustre Router Control Panel" to allow quality of service to be adjusted within a cluster and across the wide-area network.
An important lesson from Lustre is on how scalability is achieved by moving
from a block-based approach to an object-based approach, which changes the
fundamental mechanisms used to access data. Lustre, contrary to traditional
block-based devices, assumes that storage devices are intelligent devices and
makes use of more advanced protocols to access data. Lustre clients do not
talk directly to the block-based device but rather to a component called an Object Storage Target (OST). This approach eliminates many of the bottlenecks of traditional block-based I/O in the communication between clients and block-based storage devices.
Google has designed and implemented the Google File System (GFS) [11],
which provides a scalable system for distributed data-intensive applications.
It is designed for applications handling very large files with many reads and
few writes. GFS drops some of the assumptions of the earlier systems, such as
POSIX-like semantics. It consists of a master node (the ’metadata server’) and
multiple chunkservers. The master node maps a user file to multiple chunks (each
of 64 Mbytes), which are placed in various chunkservers. The file system supports parallel read, write and update operations and has built-in fault-tolerance
features.
GFS can scale to large clusters while running on inexpensive commodity
hardware. Hadoop[1], a top-level Apache project, is a system inspired by the design of GFS but open source and therefore considerably better documented.
A core lesson from these systems is that scalability is achieved by taking advantage of environment constraints: for example, GFS eliminates the complex
distributed locking models of earlier systems by allowing append operations only
and adopting simple mechanisms for fault-tolerance.
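The chunk-based layout described above can be made concrete with a small sketch. The function and server names below are invented for illustration: GFS splits a file into fixed-size 64-Mbyte chunks and the master records which chunkservers hold each chunk (real placement is load- and rack-aware, not round-robin as here).

```python
CHUNK_SIZE = 64 * 1024 * 1024  # GFS uses fixed 64-Mbyte chunks

def chunk_layout(file_size, chunkservers, replicas=3):
    """Map a file of file_size bytes to chunk indices and, for each
    chunk, the chunkservers holding its replicas (simple round-robin
    placement, purely for illustration)."""
    n_chunks = max(1, -(-file_size // CHUNK_SIZE))  # ceiling division
    return {
        i: [chunkservers[(i + r) % len(chunkservers)]
            for r in range(min(replicas, len(chunkservers)))]
        for i in range(n_chunks)
    }

# A 150-Mbyte file occupies three chunks, each replicated on three servers:
layout = chunk_layout(150 * 1024 * 1024, ["cs1", "cs2", "cs3", "cs4"])
```

Because clients cache this mapping and then talk to chunkservers directly, the master stays off the data path, which is one of the reasons the design scales on commodity hardware.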
**2.2** **Peer-to-Peer Systems**
Peer-to-peer (P2P) systems are particularly interesting for their scalability and
ability to cope with heterogeneous environments. This has been an area of research with many contributions in the past years. Work described in [4] provides
an excellent analysis of the search aspects for P2P systems.
There are several different architectures for P2P networks, which may be classified as 'centralized', 'decentralized but structured', or 'decentralized and unstructured' [23]. While decentralized and unstructured systems were commonly used
1 Refer to http://hadoop.apache.org/core/.
given their scalability, most have evolved to have associated structures, usually by relying on super-peer nodes or DHT algorithms [4]. This is an important
lesson for our own architecture. P2P research has also analyzed the searching
aspect in conjunction with storage and replication of data. Work by [23] and [14]
has shown how to minimize the exchange of messages between peers whilst providing effective mechanisms to locate data and to decide on replication in peer nodes other than the data requestor, so as to optimize future requests.
**2.3** **Data Grids**
GASS [18] or ”Global Access to Secondary Storage” is one of the early Grid
Computing contributions to the large datasets problem. It consists of a system
designed to manage secondary caches, which is a logical evolution of the client-side caches built into distributed file systems such as AFS. GASS claims to support bandwidth management rather than latency management as in distributed
file systems, but its functionality is very limited.
GDMP (Grid Data Mirroring Package) [20] is a file and object replication
tool. It introduced the concept of a storage system subscribing to collections of
files that were then moved using GridFTP [7]. GDMP was envisaged as a limited
prototype system for file movement and its scalability was not investigated.
Ann Chervenak et al. [2] introduced the "Data Grid" in an architecture paper
which defines a specialized Grid architecture for handling large data volumes.
The architecture is loosely defined to accommodate various models of operation
but is tightly integrated with ”Grid dynamics”: security, awareness of virtual
organizations and access to fast-changing large sets of resources. The ”Data
Grid architecture” consists of two main components: one responsible for storing
and retrieving data and another for bookkeeping. The paper also introduces
higher-level services to integrate all the individual lower-level services into a coherent set, defining a Replica Management service capable of moving files
between Grid sites and doing all the necessary bookkeeping. In addition it also
defined the Replica Selection and Filtering service that would decide on-demand
replication.
OptorSim [3] is an example of a simulator built mainly to study how to
optimize access to data from Grid jobs, e.g. devising models on how best to
replace replicas when storage space is limited. In the context of the OptorSim
work, an economic model was introduced. Most of this work had a strong focus on
coupling job scheduling with data replication. These simulators introduced novel
research but were not a comprehensive approach to data management, focusing
only on the data placement aspect and not on the search or bookkeeping aspects.
Giggle [22] is the reference work on replica location services. It consisted
of catalogues mapping logical names to physical replicas so that users could
reference data by a logical name independently of its physical location. These
catalogues could be layered. As the scale increased, the authors moved to P2P-based approaches for searching data.
A recent example of an integrated replica management service is by Houda Lamehamedi [19]. It consists of a P2P-based system for replica location and an 'intelligent' framework for replication based on user demand and calculations of replication cost. This work, despite being the most comprehensive approach to managing large datasets on the Grid to date, still does not address real-life problems such as bandwidth management, nor issues such as replica consistency or support for tertiary storage, which were modeled as arbitrary file access penalties.
The SDSC SRB (Storage Resource Broker) [25] supports shared collections
that can be distributed across multiple organizations and heterogeneous storage
systems. It is the system most similar to our working environment, but we require a system that scales further than SRB: to hundreds of sites,
thousands of users and tens of petabytes of data.
**2.4** **Summary**
There are several common architectural design decisions adopted in the systems
discussed above. One is that metadata is handled by a separate service (e.g.
Lustre, GFS or the Data Grid architecture). Even though a central metadata
service is sufficient for most usages, such a design has limited scalability when
compared with a P2P-based implementation. Another observation is that the
more recent systems do not store user files as individual files on the storage.
Finally, most distributed file systems maintain at most POSIX-like semantics
and systems such as GFS or Hadoop are not POSIX compliant at all due to
scalability issues.
Data Grids aimed from the start to support heterogeneous environments.
This trend is now being adopted by distributed file systems: Hadoop already
supports more than one backend. Lustre is working on a Control Panel to support bandwidth management on the WAN, enabling complex setups that span multiple sites (a "heterogeneous" network environment).
Nonetheless, none of the existing systems matches the exact needs or environment constraints we will be addressing in our architecture. In the next section,
we will look into these differences in more detail.
## 3 A Data Grid Architecture
Very large datasets at the terabyte or petabyte-scale often need to be hosted
across multiple sites with different storage systems. Presenting a uniform, scalable data management layer is the scope of the current work.
Unlike distributed file systems that require fairly uniform setups across sites
and complex network configurations, we require a management layer that can
scale to hundreds of sites. Unlike commonly used P2P systems, we need to have
reasonably stable associations between sites and well-established security
policies. Unlike all systems presented, we need to be able to impose global policies
based on data properties.
Our system needs to make opportunistic usage of volunteered resources without any centralized administration, while maintaining the expected quality of services overall. As such, we seek to combine the interesting properties of Data
Grids, distributed file systems and P2P for our Data Grid Architecture.
**3.1** **System requirements**
Even though we propose a general-purpose data management system, clearly
one data handling system cannot be applicable to all possible domains. We have
therefore made certain assumptions about our environment:
- For accessibility and cost reasons, data needs to be distributed among multiple computing sites rather than hosted in a single site;
- Most files are large by traditional standards, with each file being hundreds of megabytes or several gigabytes;
- Data is rarely modified after it has been produced, where the most common case is to append data rather than replace existing data;
- The production of data occurs in highly parallel environments where multiple batch nodes are producing part of a large data sample in parallel;
- There are multiple computing sites with different 'service-level agreements'. These include professionally-managed computing centers down to university clusters managed by students in their spare time. This implies very different quality of service and scale of resources;
- There are volunteered contributions of computation and data storage resources that should be supported in an opportunistic way, while taking into account their expected quality of service and size;
- There is no centralized administration of all available resources, which requires a coexistence of global and local policies;
- Volunteer contributions of resources require the ability to adapt to different implementations, e.g. supporting different storage systems. It is expected that such a need will introduce higher failure rates and overall instability.
**3.2** **Core design principles**
The principal design decision we took was not to depend on direct access to
the servers where the files are stored. Our architecture does not replace the
storage system at a site. Instead, it is layered on top of the existing storage
middleware (e.g. on top of a data center-wide Lustre installation or an NFS
server at a university campus). This is a completely different approach from
the systems previously discussed. This considerably extends our ability to make
opportunistic use of storage resources, but can lead to many more potential
inconsistencies. Our design tackles these inconsistency issues. To make efficient
use of the storage systems, we have defined an abstract layer to interact with the storage.
We decided to provide greater flexibility by not enforcing POSIX semantics,
following a trend observed in other distributed systems. Users of our system
require specific tools to access and manipulate data. Another important design
decision is on the unit of data handling. While files are the underlying unit, all
user requests are for datasets (groups of files). This matches our observation that
users rarely use a single file in isolation but almost always make use of groups
of files (grouped statically by some semantic meaning). To further increase our
flexibility and optimize storage and network usage, we have decided to decouple
the units of data location from the unit of storage and the unit of transfer. Later
in this paper we will look in detail into this decision.
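As a rough illustration of this layering, the sketch below shows an abstract, non-POSIX interface that the management layer could use against any site's storage system. The interface and the toy in-memory backend are our own illustration under these design principles, not the actual DQ2 code.

```python
from abc import ABC, abstractmethod

class StorageAdapter(ABC):
    """Hypothetical abstraction over a site's existing storage
    middleware (e.g. a data center-wide Lustre installation or an
    NFS server at a university campus). The data management layer
    talks only to this interface, never to the storage servers."""

    @abstractmethod
    def put(self, local_path: str, remote_name: str) -> None: ...

    @abstractmethod
    def get(self, remote_name: str, local_path: str) -> None: ...

    @abstractmethod
    def exists(self, remote_name: str) -> bool: ...

class InMemoryStorage(StorageAdapter):
    """Toy backend used here only to exercise the interface."""
    def __init__(self):
        self.files = {}
    def put(self, local_path, remote_name):
        self.files[remote_name] = local_path
    def get(self, remote_name, local_path):
        if remote_name not in self.files:
            raise KeyError(remote_name)
    def exists(self, remote_name):
        return remote_name in self.files
```

Because no POSIX semantics are promised by this interface, detecting and repairing inconsistencies with the underlying storage remains the responsibility of the layer above, as discussed in the text.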
**3.3** **Datasets**
Datasets are natively supported by our architecture. A dataset is a collection
of files, typically containing more than one physical file, which are processed
together and usually comprise the input or output of a computation or data
acquisition process. Datasets are always produced at a single storage system
and later replicated to other storage systems.
A dataset is, at the lowest level, file metadata: a file is assigned as being part
of one or more datasets. This attribution provides very useful properties, even
if other systems do not make use of it. Knowing that a dataset represents files
that are used together, the system can optimize its units of data transfer and
discovery. Locating datasets as opposed to files implies storing far fewer entries
in a database, hence improving overall scalability. Similarly, when transferring
data, the dataset provides very good ordering of requests: if there is a long queue
of files to replicate, it makes the most sense to replicate those files that will allow
users to advance with their analysis as soon as possible - and these are typically
the few files of a dataset still missing at a site. Additionally, there is often the need to
assign metadata attributes (e.g. ’software version used to produce the output’)
to a set of files. Again, in this case it makes the most sense to assign a single
metadata attribute to the dataset rather than assigning it individually to each
file.
Creating a dataset is typically a highly parallel task, where jobs in a batch
system each produce the constituent files. To facilitate the iterative process
of constructing a dataset, which often lasts weeks, we allow users to create
'versions' of a dataset. Versions let users reference a static set of
files at a moment in time. Later versions can add or remove files from the dataset.
Nonetheless, for dataset integrity, datasets can only be replicated to other sites
when they are 'frozen' - when no further changes are allowed. Our model can
be compared to the 'last close wins' semantics of distributed file systems,
applied to a much higher-level concept.
**3.4** **User Functionality**
DQ2 provides the following functionality to the users:
**– A user can create a dataset. A dataset is assigned a storage at creation time.**
The dataset can then be modified by adding or removing files, using specific
tools to handle the physical movement of data from the user’s file system
to the storage system. The user does not control or manage the physical
location of the files within the storage system; this is done internally as we
shall describe later. The storage system is seen as a black-box from the user
perspective and all interactions involve DQ2 tools.
**– A user can replicate datasets between storages across the wide-area network**
or within a site. After a dataset has been fully defined but before it can
be replicated, the user must freeze it. This guarantees the dataset can no
longer change. Afterwards, the user may subscribe the dataset to another
storage. The subscription, similar to the principles described in [20], is used
for asynchronous replication. There are multiple subscription options: e.g.
restricting data flows to specific source sites (the default is for DQ2
to choose the best sources), or setting the transfer priority, among other options.
**– A user can receive events during the replication process. As replication is**
one of the primary functions of DQ2, the user can choose to receive notifications whenever certain events happen. For instance, when the dataset
has been fully replicated, the system can send a notification to an endpoint
specified by the user at subscription time. This is used to link the data
transfer system with the job submission system: when data is available at
a storage, the production management system gets a notification and automatically launches jobs to process these data. We found this mechanism to
be routinely used. Subscriptions can be cancelled at any time, triggering a
clean-up of any ongoing transfers.
**– Users can retrieve an entire dataset or some of its files to a local file system.**
This allows synchronous downloading of data from the best available sources.
**– Users can query DQ2 for replicas of a dataset to locate data.**
**– Users can also request deletion of replicas. Deletion requests are dealt with**
asynchronously but users are informed when querying for replicas. Similarly,
a user may request the deletion of a dataset in the system: this triggers
deletion of all its replicas.
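The subscription-with-notification flow from the list above can be sketched as follows; `Subscription` and the callback are illustrative stand-ins for DQ2's actual endpoint mechanism.

```python
class Subscription:
    """Illustrative subscription: replicate a dataset to a destination and
    notify an endpoint once every file has arrived (names are invented)."""
    def __init__(self, dataset, files, destination, notify):
        self.dataset = dataset
        self.pending = set(files)      # files still missing at the destination
        self.destination = destination
        self.notify = notify           # callback standing in for an endpoint

    def on_file_transferred(self, filename):
        self.pending.discard(filename)
        if not self.pending:           # dataset fully replicated: notify
            self.notify(self.dataset, self.destination)

events = []
sub = Subscription("raw.run1234", ["a.raw", "b.raw"], "SITE-B",
                   notify=lambda ds, site: events.append((ds, site)))
sub.on_file_transferred("a.raw")       # not complete yet, no notification
sub.on_file_transferred("b.raw")       # completion triggers the notification
```

In the production setup described in the text, the notification endpoint would be the production management system, which launches jobs as soon as the data is available.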
**3.5** **Architecture**
Figure 2 describes the overall architecture. To implement the functionality previously described, DQ2 uses a combination of local and global services.
**Fig. 2. DQ2 Architecture.**
**Local services. The local services are called storage area services (a storage**
area is loosely defined as a subset of a storage system). These local services are
associated with the storage at a site and may require privileged access, depending on whether the storage-specific plugins need such local access. There
may be more than one storage area service per storage system. For DQ2, these
areas are independent, each with its own set of services and dedicated disk space.
Local services are designed to be minimal and hold no global information. This
decision improves overall robustness by making components more autonomous.
Storage area services have two distinct roles: to hold dataset definitions and
to handle files in the storage. It is the responsibility of the storage area services where a dataset is created to hold its definition, even if other replicas are
created and the master copy deleted. This information is permanent and needs
to be stored in a reliable way. The other role is to physically move files to the
storage from a remote location (import), to delete files from the storage, to stage
files (preparing a file for export) and to lookup files (to find if a storage has a
certain file). These activities are executed by local agents that rely on transient
information (available only during the lifetime of the request). Coordination of
which files to transfer, delete, stage or lookup is handled by a global component
we describe next. Nonetheless, decisions can be overridden locally (by denying
or re-ordering requests) given site or storage-specific policies.
Note that DQ2 does not include any database with full knowledge of a storage
namespace. When DQ2 needs to know whether a file is present, it will query
the storage system, avoiding the need to maintain and synchronize a separate
database (which could cause both scalability and consistency problems).
**Global services. The global services have an important role in our system.**
Each global service acts as the master for all activities with a dataset. When a
user asks for a dataset to be replicated, the request is redirected to a master,
using a dataset redirection service that guarantees unique mapping between a
dataset and a master service. The master will then queue and schedule the
dataset request. The master does not execute any of the activities: it simply
assigns work to local agents. Work assignments include looking up and staging
source files, doing wide-area transfers or deleting files. The local agents are not
dataset-aware: these only deal with bulk requests of files. It is up to the master
to optimize work assignments based on dataset knowledge.
Throughout its activity, the master will gradually build a cache of dataset
(and file) replica information. This cache serves as the mechanism for users
to locate dataset replicas. Dataset locations are never absolutely correct in a
distributed system: it is always possible that a request fails because data was
lost unexpectedly. Nonetheless, to avoid constant hits to the storage systems to
locate data, we maintain a cache that is gradually renewed from the master's
activity.
**Quotas and accounting. In DQ2, quotas are handled in the master. DQ2**
can only guarantee that within each master a user stays within his quota (or
e.g. a replication request is denied). Global accounting is possible by gathering
statistics from all masters. In practice we have found the model to be sufficiently
flexible: the service that assigns datasets to masters may take into account the
ownership of the dataset and ensure that all datasets belonging to the same user
are mapped to a single master.
## 4 Implementation
This section describes implementation details for DQ2. We describe the implementation of the various components and their interactions. Common across all
components is the usage of HTTP as the communication protocol for client/server
requests.
**4.1** **Dataset Redirection Service**
A user specifies a dataset in every request to DQ2. The request is first sent
to a central dataset redirection service. This service redirects the request, using
HTTP, to the appropriate master. Our current implementation statically assigns
datasets to masters based on a set of rules derived from the dataset name. To avoid
single points of failure, multiple instances of the service can be set up, sharing the
same set of rules. Other rules could be foreseen, e.g. to balance load evenly across
all masters, but this would require some form of coordination among them. In
practice, our experience shows this is not required, and we have opted to strictly
partition datasets among masters.
This redirect mechanism provides partitioning of requests among multiple
masters. Masters are deployed to serve a single activity. Examples of activities
defined in our production system are raw data taking, when the ATLAS detector
is recording raw data; regional monte-carlo production, encompassing simulation
activities from a regional group that are not of interest to the whole collaboration; and official
_monte-carlo production, including simulation activities that passed strict physics_
validation and hence are available for use across physics groups.
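A rough sketch of this static, name-based redirection; the prefix rules and master names below are invented for illustration.

```python
# Every redirector instance shares the same rule table, so any instance
# maps a given dataset name to the same master (no coordination needed).
RULES = [
    ("raw.", "master-rawdata"),        # raw data taking
    ("mc.",  "master-official-mc"),    # official monte-carlo production
]
DEFAULT_MASTER = "master-regional"     # everything else

def redirect(dataset_name):
    """Return the master responsible for a dataset, by name prefix."""
    for prefix, master in RULES:
        if dataset_name.startswith(prefix):
            return master
    return DEFAULT_MASTER
```

Because the mapping is a pure function of the dataset name, multiple redirector instances stay consistent without talking to each other, which is why the service avoids being a single point of failure.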
**4.2** **Storage Area Services**
**Dataset definitions. Storage area services contain dataset definitions in a relational**
database. At creation time, each dataset is assigned a logical name by the
user and a globally unique identifier (UUID [6]) by the system. This guarantees
that datasets are uniquely identified when replicated to other storages. Each file
is also assigned a unique identifier by the system in addition to its logical file
name.
**Agents. There are different agents with distinct roles: to look up files on the**
storage, to stage files (from a tape system or from the storage to an export
disk buffer), and to transfer or delete files. Agents use in-memory structures and hold
minimal state. To interact with the storage, each agent makes use of storage-specific plugins for executing the task. For instance: the mechanism to stage files
from tape depends on the tape system being used; similarly, to locate files in a
storage system a POSIX stat command may be sufficient; in other cases, storage
dependent tools are required.
**Interactions with master for scheduling. Agents have a list of masters on**
which to poll for work. Each agent will poll for work, doing round-robin requests
across its masters. In some cases, DQ2 also implements a simple fair-share
_mechanism. In this case, the agent will poll a master, specifying a maximum_
response size. The agent can then maintain a share allocation to each master,
guaranteeing that each master gets an allocation of the agent’s work (e.g. the site
administrator can dedicate half its resources to serving requests from a specific
master: this is regularly used in ATLAS to guarantee that raw data gets shipped
in due time to all sites).
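The fair-share allocation can be approximated as below; the integer split of an agent's per-cycle capacity across masters is a sketch, not DQ2's actual algorithm.

```python
def split_capacity(capacity, shares):
    """Split an agent's per-cycle work capacity across masters according
    to administrator-defined shares (illustrative)."""
    total = sum(shares.values())
    alloc = {m: (capacity * s) // total for m, s in shares.items()}
    left = capacity - sum(alloc.values())
    # hand out any integer remainder, largest shares first
    for m in sorted(shares, key=shares.get, reverse=True):
        if left == 0:
            break
        alloc[m] += 1
        left -= 1
    return alloc

# e.g. half of a site's resources dedicated to raw-data distribution
alloc = split_capacity(8, {"master-rawdata": 2,
                           "master-mc": 1,
                           "master-regional": 1})
```

With shares 2:1:1 and a capacity of 8 work items per poll cycle, the raw-data master receives half of the agent's capacity, matching the administrator policy mentioned above.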
**Wide-area transfers. DQ2 can support a variety of transfer protocols with**
its plugin approach. GridFTP is commonly used due to its broad support by
storage systems but other protocols (HTTP) are also supported.
To interact with the storage systems we make use, where possible, of a common mass storage interface called SRM [8]. The SRM interface (v2.2) is implemented by several storage vendors. In some cases, direct access is still required,
as not all required information is exposed through SRM.
**4.3** **Dataset Master**
**Web service. The web service handles user requests to stage, transfer, delete or**
verify consistency of datasets. Requests may be denied immediately if quotas are
exceeded. Users also contact the web service to get the status of asynchronous
requests (e.g. replication status). The web service implements authentication
through Grid X509 proxies [5]. User read requests are unauthenticated to avoid the
overhead of proxy verification. All other user requests are secure.
There is a second web service endpoint used by agents to request work. The
security on this web service may also be based on grid proxies but is usually configured at the firewall level, avoiding the overhead of proxy verification (reducing
CPU usage on the server).
DQ2 makes use of HTTP URLs providing a simple REST [12] interface.
We have found the REST interface useful for linking external systems to DQ2
(e.g. metadata catalogues refer to the dataset status by using our public dataset
URLs).
**Dataset-based brokering. The dataset master is responsible for assigning**
work to the local agents. When handling a dataset request, the master makes
use of its knowledge about current replica status for decision making. Work
assignments are made synchronously as the agents ask for more work. This synchronous decision-making is an important property of our implementation, enabling a feedback approach: more work is given to a local agent as it finishes its
current set of work. The work assignments are done just in time and thus rely
on the most up-to-date information.
When transferring a dataset, the master will assign work giving higher priority to the datasets that have the most files already present at the destination.
Therefore, dataset transfers will likely take less time and the completion rate is
higher. This means the master scans the list of active dataset transfers
to that storage, replying to the local agent with the set of files that will likely
complete the most datasets in the shortest time.
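A minimal sketch of this ordering, assuming the master tracks, per active dataset, the files still missing at the destination (the function and data names are invented).

```python
def pick_files(pending, n):
    """Choose up to n files for an agent, favouring datasets with the
    fewest files still missing at the destination (illustrative sketch)."""
    chosen = []
    # datasets closest to completion first
    for ds in sorted(pending, key=lambda d: len(pending[d])):
        for f in pending[ds]:
            if len(chosen) == n:
                return chosen
            chosen.append(f)
    return chosen

# dsB needs one file and will complete first, so its file is sent first
work = pick_files({"dsA": ["a1", "a2", "a3"], "dsB": ["b1"]}, 2)
```

Because agents pull work synchronously, each call sees the current pending state, which is what makes the just-in-time assignment described above possible.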
Other optimizations are possible in the master. An important one concerns
tape backends. The latency to recall a file from tape is usually very high. These
requests can be optimized by minimizing the number of tape mounts.
When transferring data that resides on tape at a source to another storage across
the wide area, good coordination is required. DQ2 is able to coordinate such
transfers, making bulk lookup requests at the source, segmenting stage requests
per tape (based on information provided by the lookup agent) and scheduling
the transfer at the destination as soon as sets of files are made available from
the source. The feedback-based model and the synchronous decision making are
critical properties for this mechanism.
**Caches. To implement the dataset-based brokering, a very fast response to**
agents' requests is required. As such, we have implemented various caches on the
master. One cache holds contents of datasets: the first time a request comes for a
dataset, the master does not know its constituent files. It must therefore ask the
storage area services for the dataset definition and caches back this information.
Further usages of the dataset will use this cache.
Another cache contains dataset and file replicas. DQ2 does not have ultimate
knowledge of where data is located: all it can do is act based on previously
known information and expect that data has not been lost in the meantime[2]. As
DQ2 is notified of the state of lookup, stage and transfer requests, it caches this
information. Future scheduling decisions rely on this cached information (if the
cache is recent) and the system will gradually renew the information as required.
Users make use of this cache to locate dataset replicas, including additional
information such as whether a particular file is staged (if DQ2 had to stage the
file for some other request).
**In-memory structures. The master data structures are kept in-memory to**
guarantee better performance, which is important for scheduling decisions (e.g.
choose files to transfer next out of the list of pending requests). In addition to
2 In our early prototypes, we exercised mechanisms such as having a storage notify
DQ2 of data losses but this approach was not possible to implement in practice due
to various issues interfacing with storage systems.
in-memory structures, all data structures are written to a log on disk. This log
is asynchronously fed onto a relational database. When the master process is
restarted, the log is read and the state is re-initialized before the master starts
serving new requests.
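The log-then-apply pattern with replay on restart can be sketched as follows; the operation format here is invented for illustration.

```python
def apply(state, op):
    """Apply one replica operation to the in-memory state."""
    kind, ds, site = op
    if kind == "add":
        state.setdefault(ds, set()).add(site)
    elif kind == "del":
        state.get(ds, set()).discard(site)

def start_master(log, new_ops):
    """Replay the on-disk log, then serve new operations (log first)."""
    state = {}
    for op in log:                     # recovery: replay checkpointed ops
        apply(state, op)
    for op in new_ops:                 # normal operation: log, then apply
        log.append(op)
        apply(state, op)
    return state

log = [("add", "dsA", "SITE-A")]       # pretend this survived a restart
state = start_master(log, [("add", "dsA", "SITE-B")])
```

Appending to the log before mutating memory is what makes the restart replay safe; feeding the log asynchronously into a relational database, as the text describes, adds durability without slowing the hot path.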
## 5 Fault Tolerance
**Interactions between Master and Storage Agents. The dataset master**
interacts with storage area services when it needs to resolve a dataset and schedule
work to a local agent. All requests are subject to timeouts and are retried by
the master. Requests to resolve a dataset will be retried indefinitely until a valid
response is retrieved (as we expect to eventually have a valid response). Other
requests, such as staging or transferring a file, will be attempted a maximum
number of times, with an exponential back-off.
The agents also have a retrial policy when contacting the master. In case
a master goes down, each agent will retry reconnecting with a truncated
exponential back-off, giving the master time to recover if the service is unavailable.
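A truncated exponential back-off schedule looks like the following; the base, cap and retry counts are illustrative, not DQ2's configuration.

```python
def backoff_delays(base=1.0, cap=300.0, retries=10):
    """Delays double per attempt but are capped ('truncated'), giving a
    failing master time to recover without intervals growing forever."""
    return [min(base * (2 ** i), cap) for i in range(retries)]

delays = backoff_delays()
```

The cap keeps reconnect attempts frequent enough that agents resume work promptly once the master is back, while the exponential growth avoids hammering a service that is down.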
**Early validation of datasets. DQ2 implements early validation of user datasets.**
In our early prototypes, we did not explicitly validate a user dataset before attempting to transfer it. As a result, resources were being wasted trying to transfer
a dataset whose data was badly uploaded, missing or lost. We found that the
majority of cases corresponded to problems uploading files to the storage that
were not detected.
Given that DQ2 is layered on top of existing storage systems, we decided to
shield DQ2 from these errors by having a mandatory validation step before a
dataset can be replicated to other sites. This step is coordinated by the master.
It also serves as the mechanism for the master to be informed of the existence
of the new dataset and to store it in its cache. The step is triggered automatically
when the user marks the dataset as frozen (its contents are immutable).
**Data corrupted or lost. In a distributed system with hundreds of storage sys-**
tems, there are frequent occurrences of data corruption or data loss. The master
is able, in many cases, to detect and automatically correct these occurrences.
When files repeatedly cannot be accessed, the master requests the storage with
the suspect data to copy over the files again from another available source. At the
same time, it blacklists those replicas so that other interested parties avoid them.
The mechanism is efficient given that the master possesses global knowledge of
the file replicas for a dataset.
**Master availability. The master relies on in-memory structures. Operations on**
the master are checkpointed to disk. There is an asynchronous system feeding the
checkpoint log onto a relational database for increased redundancy. Additional
redundancy mechanisms are possible, such as having master/slave replication of
**Fig. 3. Dominant error classes (over a 1-month period).**
the database holding the log information or splitting the web server from the
process executing the requests.
**Transfer reliability. Figure 3 shows failure rates observed in our production**
instance of DQ2. The majority of errors are reading data from storage systems.
For this reason, DQ2 validates source files prior to transfer, by doing a source
storage lookup. Only when files are reported as found and staged at the source
can the transfer start.
For wide-area transfers, DQ2 implements a retrial strategy that takes into account previous transfer history and channel performance. A channel is a virtual
unidirectional link[3] between a source and a destination storage. Best-performing channels will serve more transfer requests, given the feedback-based model
(polling for more work when work is done) implemented between the agents and
master. Therefore, given more than one possible source for a file, it is likely that
the channel performing the best will serve the file first. If a transfer between a
source and destination is persistently failing, the agent responsible for collecting
work at the destination side will back-off and not request more work for some
time. If a specific file transfer is permanently failing, the master will temporarily blacklist the source, allowing other sources to be used. If the failure is very
frequent for a single file, the file is marked as corrupted and there is an attempt
to copy it over from another location.
To validate transfers, we chose the ADLER32 checksum because of its rolling
hash property, which allows the checksum to be computed incrementally as the
input moves through a window. This eases the checksum computation without
introducing significant overheads (such as having to re-read the file to compute
the checksum). Tape drives also often compute ADLER32 at the hardware level
when writing files to tape, which provides another verification step in an
optimized manner.
3 e.g. ’CERN to BNL’ is the channel serving requests from CERN to the Brookhaven
National Laboratory in the US.
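The incremental property can be demonstrated with Python's standard `zlib.adler32`, which accepts a running value so the checksum is updated as chunks stream through.

```python
import zlib

def adler32_stream(chunks):
    """Compute ADLER32 incrementally over a sequence of chunks, so a file
    never has to be re-read just to produce its checksum."""
    value = 1                          # Adler-32 initial value
    for chunk in chunks:
        value = zlib.adler32(chunk, value)
    return value

data = b"managing very-large distributed datasets"
# chunking does not change the result
streamed = adler32_stream([data[:10], data[10:25], data[25:]])
whole = zlib.adler32(data)
```

However a file is split for transfer, the running checksum over its chunks matches the checksum of the whole file, which is exactly what makes single-pass validation possible.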
**Fig. 4. Overview of scalability properties of DQ2.** (a) Distribution of files copied per share. (b) Comparing usage of export buffers with original and chunked files.
## 6 System Scalability and Data Availability
In this section we describe the mechanisms in DQ2 that guarantee high availability of data and scalability of the system. We also describe the scale on which
DQ2 is operating on a daily basis for the ATLAS Experiment, moving petabytes
of data every month.
**Separation between global and local services. The separation between**
_storage area services and dataset masters has multiple advantages. One is avoiding_
having global knowledge present on the local services. Another advantage is
partitioning the system for scalability while maintaining global knowledge about
replicas of a dataset. This knowledge allows the master to make better decisions
about how to handle dataset requests, as it knows the state of the various replicas (if on disk, if being garbage collected, etc). The storage agents continue to
have the ability to throttle access to their storages by simply not asking one of
the masters for more work.
**Tracing dataset popularity. As the number of datasets increases in the**
system, we expect older datasets to become less interesting over time. These
are usually kept for archival only (often on tape storage) but are no longer regularly used. DQ2 includes a tracer service that records all usages of a dataset.
This is used for monitoring purposes but could also be used for internal optimizations of the system. Thus, we can detect which datasets are more popular so as to
predict hot-spots and implement automatic replication. Similarly, if the master
is unable to keep up with the number of datasets it needs to manage, dataset usage
information could be used to rebalance the load, ignoring unused datasets.
**Competition between transfers. When transferring datasets between sites,**
the local agent needs to decide between competing requests, as it needs to serve
multiple masters. Fair-sharing is used to guarantee a fair split of resource usage.
Figure 4(a), from a simulation run, illustrates the results of our algorithm, with
high-priority transfers taking over the channel as needed and according to shares.
Each of these shares maps to a different master, serving different datasets. There
are obvious limitations with this model, such as assuming that all file transfers
are equal within a channel (regardless of file sizes). This is being addressed in
newer versions but has served us well in practice.
**Improving data availability with import/export buffers. After a file is**
staged at the source but before being transferred to a remote storage, DQ2 can
optionally copy it to an export buffer managed by DQ2. Similarly, when importing
data, the destination storage may first place the file onto an import buffer before
writing it to its final location.
These buffers allow DQ2 to split the units of storage from the unit of transfer:
the file may be artificially split or merged for transfer and/or storage. This can be
used to improve storage and transfers and protect DQ2 from storage instabilities.
If a storage has a tape backend, there is a high cost in the mechanical process
of mounting a tape for reading back the data. If a dataset is sufficiently large but
its constituents are small files, it is convenient to aggregate files of a dataset into
larger units so as to improve later recalls from tape. As a dataset in DQ2 is usually
read in its entirety, this leads to increased performance and avoids scattering
datasets across different tapes.
Figure 4(b) also illustrates improvements achieved in throughput with this
technique. The figure shows results from (HTTP-based) wide-area transfers
where we compare simple HTTP file transfers with a mode where each file is split
into smaller chunks. In these tests, 64 MByte chunks were used. When transfers
are chunked, a slow read of a big file from a server has less performance impact
on concurrent transfers, because each HTTP request has a shorter lifetime as
it transmits less data. Transmitting long files in a single request blocks a server
’slot’ for a long time, affecting parallel transfers. Our tests were conducted on
loaded servers (10 to 15 clients) with samples of real ATLAS data. The ’slow
stream’ clients were artificially slowed down to match typical competition patterns we observe in our production system, where transfer rates to destination
storages with good connectivity get affected by concurrent transfers from the
same servers to destinations with bad connectivity.
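Splitting a file into fixed-size transfer chunks can be sketched as follows (e.g. as inclusive byte ranges suitable for HTTP Range requests); this is an illustration, not DQ2's transfer code.

```python
CHUNK = 64 * 1024 * 1024               # 64 MByte chunks, as in the tests above

def chunk_ranges(size, chunk=CHUNK):
    """Inclusive byte ranges covering a file of the given size."""
    return [(off, min(off + chunk, size) - 1)
            for off in range(0, size, chunk)]

ranges = chunk_ranges(150 * 1024 * 1024)   # a 150 MB file -> 3 requests
```

Each range becomes its own short-lived HTTP request, so a slow consumer holds a server slot for at most one chunk's worth of data instead of the whole file.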
**Real World Usage. The ATLAS production instance of DQ2 currently hosts**
over 1.6 Million datasets. There are over 50 Million unique files with a total of
_80 Million replicas (the average replication factor for ATLAS data is relatively_
small due to lack of disk space). The system is now hosting ~7.4 PetaBytes of
data over 60 distinct computing centers. One observation is the scale difference
between number of datasets and files, which motivates our choice for natively
supporting datasets.
Figure 5 shows results of large-scale transfer tests using DQ2. In these tests,
we transferred datasets from CERN to our major data centers. The figure covers
**Fig. 5. Overview of large-scale tests with DQ2 to multiple computing centers during a 6-hour period.** (a) Aggregate throughput in MBytes/sec. (b) Data copied in GigaBytes.
a large-scale test where the system maintains an average throughput of over
1.5 Gigabytes/s. During this period, storage areas went down and came back
online later, showing the resilience of the system to the frequent occurrence of
temporary failures. The system achieved a rate of 7 TeraBytes of data exported
per hour.
## 7 Future Work and Conclusion
In this paper we addressed the problem of managing very large datasets in a
distributed environment. After presenting and discussing the state-of-the-art as
well as recent trends, we introduced a new system developed using the results of
previous research on P2P, Data Grids and distributed file systems. Our major
contribution is the provision of a more comprehensive feature set for managing
very large distributed datasets in a heterogeneous environment. DQ2 is managing
over 7 PetaBytes of data and has achieved transfer throughput in excess of 1.5
Gigabytes/s. Future work will focus on increasing the scalability of DQ2 and
protecting the system from lower-level middleware instabilities. We will also
conduct dedicated reliability tests to demonstrate the robustness of the system.
**Acknowledgments. We would like to acknowledge the many contributions to**
the design by Torre Wenaus and David Cameron and the help of David and
Benjamin Gaidioz in implementing DQ2.
## References
1. The ATLAS Collaboration, http://atlasexperiment.org/ (1999)
2. A. Chervenak et al., ”The Data Grid: Towards an architecture for the distributed
management and analysis of large scientific datasets,” J. Network and Comp. App.,
Vol. 23, 187-200 (2001)
3. W. H. Bell et al., ”Simulation of dynamic grid replication strategies in OptorSim,”
in Proc. Grid Computing - GRID 2002 : 3rd Int. Workshop, USA (2002)
4. J. Risson et al., ”Survey of research towards robust peer-to-peer networks: search
methods,” in Computer Networks, Vol. 50, Iss. 17 (2006)
5. I. Foster et al., ”A security architecture for computational grids,” in CCS 98: Proc.
of the 5th ACM conference on Computer and communications security, ACM Press,
NY, USA, 83-92 (1998)
6. International Standard ”Generation and registration of Universally Unique Identifiers (UUIDs) and their use as ASN.1 Object Identifier components” (ITU-T Rec.
X.667 — ISO/IEC 9834-8)
7. W. Allcock et al., ”GridFTP protocol specification,” Technical report, GGF
GridFTP WG (2002)
8. A. Shoshani et al. ”Storage resource managers: Middleware components for grid
storage,” in Proc. of Nineteenth IEEE Symposium on Mass Storage Systems (2002)
9. J. H. Howard et al., ”Scale and performance in a distributed file system,” ACM
Trans. Comput. Syst., Vol. 6, No. 1, pp. 51-81 (1988)
10. R. Sandberg et al., ”Design and implementation of the Sun Network Filesystem,”
in Proc. of the Summer 1985 USENIX Conference, pp. 119-130, Portland, OR (USA)
(1985)
11. S. Ghemawat et al., ”The Google File System”, 19th ACM Symp. on Op. Sys.
Princ., NY (2003)
12. R. Fielding, ”Architectural Styles and the Design of Network-based Software Architectures,” Ph.D. Thesis, University of California (2000)
13. R. Rocha et al., ”Monitoring the ATLAS Distributed Data Management System,”
in Proc. of Computing in High Energy and Nuclear Physics (CHEP) (2007)
14. E. Cohen et al., ”Replication Strategies in Unstructured Peer-to-Peer Networks,”
in Proc. of the 2002 conference on Applications, technologies, architectures, and
protocols for computer communications, USA (2002)
15. M. Satyanarayanan et al., ”Coda: a highly available file system for a distributed
workstation environment,” in IEEE Trans. on Comp., Vol. 39, No. 4 (1990)
16. F. Schmuck et al., ”GPFS: A Shared-Disk File System for Large Computing Clusters,” in Proc. of the 1st USENIX Conference on File and Storage Technologies,
(2002)
17. P. Andrews et al., ”Massive High-Performance Global File Systems for Grid Computing,” in IEEE Conference on High Perf. Net. and Comp. (2005)
18. J. Bester et al., ”GASS: a data movement and access service for wide area computing systems,” in Proc. of the 6th workshop on I/O in parallel and dist. systems
(1999)
19. H. Lamehamedi et al., ”Data replication strategies in grid environments,” in Algorithms and Architectures for Parallel Processing (2002)
20. A. Samar et al., ”Grid Data Management Pilot (GDMP): A Tool for Wide Area
Replication,” in IASTED International Conference on Applied Informatics (2001)
21. P. Schwan, ”Lustre: Building a file system for 1000-node clusters”, in Proc. of the
2003 Linux Symposium (2003)
22. A. Chervenak et al., ”Giggle: a framework for constructing scalable replica location
services”, in SC2002, Baltimore, USA (2002)
23. Q. Lv et al., ”Search and Replication in Unstructured Peer-to-Peer Networks,” in
Proc. of the 16th international conference on Supercomputing, NY, USA (2002)
24. P. Kunszt et al., ”Data storage, access and catalogs in gLite,” Local to Global
Data Interoperability - Challenges and Technologies, pp. 166-170 (2005)
25. C. Baru et al., ”The SDSC storage resource broker”, in Proc. of the 1998 conference
of the Centre for Advanced Studies on Collaborative research, pp. 5 (1998)
# ProNet: Network-level Bandwidth Sharing among Tenants in Cloud
### Zhewen Yang
Johns Hopkins University
Baltimore, USA
zyang122@jh.edu
### Changrong Wu
Nanjing University
Nanjing, China
### Chen Tian
Nanjing University
Nanjing, China
### Zhaochen Zhang
Nanjing University
Nanjing, China
### Abstract
In today's private cloud, the resources of the datacenter are shared by multiple tenants. Unlike storage and computing resources, bandwidth is challenging to allocate among tenants in private datacenter networks. State-of-the-art approaches are not effective or practical enough
to meet tenants’ bandwidth requirements. In this paper, we
propose ProNet, a practical end-host-based solution for bandwidth sharing among tenants to meet their various demands.
The key idea of ProNet is byte-counter, a mechanism to collect the bandwidth usage of tenants on end-hosts to guide the
adjustment of the whole network allocation, without putting
much pressure on switches. We evaluate ProNet both in
our testbed and large-scale simulations. Results show that
ProNet can support multiple allocation policies such as network proportionality and minimum bandwidth guarantee.
Accordingly, the application-level performance is improved.
### 1 Introduction
Cloud resources (e.g., storage, computing, and network) are shared among multiple tenants according to their individual requirements and payments. In cloud resource management, a Service Level Agreement (SLA) is usually established between the cloud provider and the cloud user to specify the allocation of resources in terms of performance (e.g., availability, resource amount, and cost). These SLAs are chiefly intended to guarantee Quality of Service (QoS) in the cloud. QoS metrics vary widely with the context (e.g., edge computing, mobile cloud, IoT). Most SLAs can successfully measure and optimize physical computing and storage resources such as CPU, memory, and disk. At the same time, network resources also play an essential role in QoS: network conditions determine the latency, transmission efficiency, and quality of each service in the cloud through the bandwidth it receives. Bandwidth management is therefore important for satisfying tenants' bandwidth requirements and achieving good application performance in datacenter networks.
However, network bandwidth allocation differs from storage and computing resource allocation. Unlike storage and computing resources, which can be allocated at a fixed ratio, network resources are usually shared among tenants dynamically. It is difficult to guarantee bandwidth allocation via a simple static end-to-end reservation.
A bandwidth allocation approach should meet several design requirements [2–4, 13, 18, 21, 22, 29]. First, multiple allocation policies should be supported simultaneously. Tenants' bandwidth demands vary, including minimum guarantees (what most previous allocation systems try to achieve), proportional allocation, and even allocation according to specific utility functions (§ 2.2), a more complex allocation strategy than the former two. Second, in datacenters, bandwidth is usually shared by different tenants (workgroups) with different requirements. Hence, high bandwidth utilization, i.e., work conservation, is required to save cost. Besides, application-level performance should not be compromised while enforcing bandwidth allocation. Last but not least, the approach should be deploy-friendly and suitable for the cloud environment.
State-of-the-art approaches are not effective or practical enough to allocate bandwidth according to tenants' requirements in cloud environments. Most bandwidth allocation approaches only target a minimum bandwidth guarantee [6, 14, 19, 26]. We argue that achieving only a minimum guarantee among tenants is not enough for today's complex cloud environments and specific SLA requirements. Besides, most of them cannot achieve work conservation, so network resources are wasted most of the time. For proportional allocation, Google designed BwE [20] for its WAN. However, BwE operates in an SDN scenario where all routing information and flow transmission plans are known to the manager. This is impossible in the cloud environment, so BwE cannot be used in today's datacenters. A natural way to achieve proportional allocation is to leverage in-network priority queues to ensure weighted fair queuing and isolation. HCSFQ [31] can approximate weighted priority queues, but it cannot meet tenants' other requirements. In addition, its proactive packet drops can harm the application-level
-----
Conference’17, July 2017, Washington, DC, USA Trovato and Tobin, et al.
performance of the network. PS-N [25] aims to provide proportional allocation by leveraging a weight model to distribute weights among flows, and it delivers an ideal proportional allocation model. Unfortunately, PS-N requires at least as many weighted fair-sharing queues as the number of tenants (e.g., several thousand) on each switch, which is impractical. In addition, PS-N only takes effect in specific scenarios, i.e., with identical (or proportional) link capacities and tenant weights.
To this end, we propose ProNet, a practical end-host-based network bandwidth allocation protocol that does not burden switches. In our view, the defining feature of bandwidth inside a cloud environment is that it is limited in local areas but effectively unlimited in the aggregate. Therefore, the goal of ProNet is not to achieve instantaneous bandwidth fairness among tenants, but to maintain the intended allocation over a period of time. The key design of ProNet is _byte-counter_, a simple yet effective mechanism to dynamically adjust the bandwidth allocation among tenants based on their real usage, without knowing the actual networking situation inside the cloud. The intuition behind byte-counter is that a tenant's usage can be measured by counting the volume of data it transmits: by collecting each tenant's sent bytes, the usage over the corresponding period of time can be derived. If the total number of transmitted bytes can be controlled appropriately at end-hosts, the bandwidth over the whole network can be allocated correctly. Byte-counter makes no assumptions about tenants' bandwidth or targets, so it is flexible enough to be applied to a wide range of scenarios. By leveraging byte-counter, ProNet does not rely on switches to support a large number of queues, and it can meet multiple allocation policies in a deploy-friendly way (§ 3.1).
To meet all the requirements above, ProNet introduces several design points beyond byte-counter. To satisfy tenants' utility bandwidth functions, ProNet leverages bandwidth functions to allocate bandwidth hierarchically (§ 3.2), which also makes the allocation flexible. To achieve work conservation, ProNet proposes the Congestion-Aware Work-Conserving (CAWC) mechanism, which perceives the network congestion state on receiver end-hosts, guides senders in adjusting tenants' transmission rates, and converges the allocation to a work-conserving state in a short time. In addition, a tenant-counter is leveraged to differentiate intra-tenant congestion (i.e., congestion caused by flows of the same tenant) from inter-tenant congestion (§ 4.3).
We implement a prototype of ProNet using 10 servers and Tofino1 [15] switches. Testbed experiments and large-scale NS-3 [24] simulations validate that ProNet converges to the predefined bandwidth allocation targets. Taking the ideal PS-N as the baseline, ProNet achieves all the properties mentioned above and converges at millisecond scale. Compared with HCSFQ [31], ProNet achieves 29% better average throughput and reduces the flow completion time (FCT) by 24%, benefiting from reduced packet loss and improved network utilization.
### 2 Background and Motivation
**2.1** **Tenant-based Bandwidth Allocation**
In modern datacenters, cloud service providers deliver infrastructure services to multiple tenants simultaneously through virtualization. Beyond computing and storage, bandwidth is an essential network resource: the bandwidth available to each tenant's tasks determines the performance and quality of its transmissions, and whether the SLAs are satisfied. Neither a flow-based nor a source-destination-based allocation is suitable for providing cloud services to tenants. Traditional bandwidth allocation methods usually distribute bandwidth per flow, so a tenant can increase its bandwidth share by maliciously increasing the number of its flows without paying more. Allocating bandwidth per source-destination pair aggregates the bandwidth used by flows of the same source-destination pair, but likewise cannot handle cases where a tenant increases the number of destinations it connects to, obtaining a larger bandwidth share than tenants that do not.

In order to provide bandwidth according to tenants' payments, a per-tenant bandwidth allocation approach is necessary. Meanwhile, flows belonging to the same tenant can have different bandwidth demands; their individual demands should be satisfied without violating the bandwidth allocation among tenants.
**2.2** **Properties Required**
To meet the bandwidth demands of tenants and ensure high application-level performance, bandwidth allocation approaches should satisfy the following properties, which are valued in industry.
- Support Multiple Policies. Different tenants usually have multiple specific bandwidth demands. A bandwidth allocation approach must satisfy multiple policies simultaneously by providing flexible run-time reconfiguration. Here, we introduce some of the most vital properties for network allocation among tenants.
_Network Proportionality._ When sharing bandwidth between tenants, bandwidth should be allocated proportionally based on their payments or priorities. In other words, when two tenants compete for the same network resource, the allocation should follow a given ratio or priority.
_Minimum Bandwidth Guarantee._ The bandwidth allocated to tenants should at least equal the minimum bandwidth according to their demands. This property is essential for tasks that are sensitive to bandwidth or flow completion time [8, 11].
_Utility bandwidth functions._ Tenants could specify their bandwidth demands in the form of a utility bandwidth function. A bandwidth allocation approach that supports allocation according to tenants' utility bandwidth functions is more flexible.
- Work Conservation. Links in a datacenter (usually the congested links) should be fully utilized or should meet all demands. Being work-conserving means an efficient network and high bandwidth utilization.
- Application-level Performance. The application-level performance should not be compromised while providing the allocation policies [28]. Hence, flow latency should be kept low while ensuring high throughput and a low packet loss rate.
- Deploy-friendly. An allocation approach that can be easily deployed is preferred for datacenter networks. Hence, the requirements placed on switches should be minimized.
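To make the proportionality property concrete, the short sketch below (our illustration, not ProNet code) splits a contested link's capacity among tenants in proportion to their weights:

```python
def proportional_split(capacity, weights):
    """Split a contested link's capacity among tenants in proportion
    to their weights (e.g., derived from payments or priorities)."""
    total = sum(weights.values())
    return {tenant: capacity * w / total for tenant, w in weights.items()}

# Two tenants with a 3:1 weight ratio competing for an 8 Gbps link:
print(proportional_split(8.0, {"A": 3, "B": 1}))   # {'A': 6.0, 'B': 2.0}
```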
**2.3** **Related Works**
The sharing of cloud infrastructure inevitably causes widespread competition for resources, among which network bandwidth allocation is one of the most critical and challenging issues. Almost no state-of-the-art bandwidth allocation approach achieves all of the properties simultaneously with a practical and deployable design.

We first introduce approaches to the minimum bandwidth guarantee for tenants (or flows). Generally, these protocols seek spare bandwidth in the network for tenants with inadequate allocations, or simply limit the bandwidth of over-using flows at end-hosts. EyeQ [19] provides tenants with bandwidth guarantees, but it is based on a non-blocking switch model and does not handle in-network congestion. Oktopus [3] provides predictable bandwidth guarantees but does not achieve work conservation, resulting in low network efficiency; it is also not scalable. ElasticSwitch [26] relies on end-to-end rate control and cannot achieve fine-grained bandwidth allocation. PS-P [25] achieves the bandwidth guarantee across tenants perfectly with the help of weighted fair queuing on switches.
Meanwhile, other approaches focus on different types of fairness in bandwidth allocation, typically by leveraging rate monitors and transmission control on switches. Seawall [27] provides per-source fairness on congested links, which is of little use in multi-tenant scenarios. NUMFabric [23] provides a flexible and configurable allocation framework that can achieve different allocation targets, such as weighted allocation. However, it places a heavy computational burden on programmable switches, and WFQ is still required for some particular jobs. FairCloud [25] proposes two methods of weighted bandwidth allocation: PS-L targets link proportionality and PS-N targets congestion proportionality. The link proportionality of PS-L keeps the allocation fair for each sender-receiver link pair, which is unnecessary in cloud environments. Both proposals require as many queues as the number of tenants, which is impractical (§ 2.4). Moreover, the FairCloud methods can only achieve specific allocation weights and plans rather than a flexible, more complex allocation strategy. BwE [20] is a bandwidth allocation system developed by Google that achieves proportional allocation with a flexible, hierarchical structure. However, BwE is designed for Google's WAN, an SDN-like environment where all task plans and flow information are known and set in advance by the administrator, and real-time flow status is visible to the network managers. BwE relies on these strong prerequisites, which differ completely from the cloud environment; therefore, it cannot solve the allocation problem inside the datacenter.
Table 1 summarizes the state-of-the-art bandwidth allocation approaches and compares them according to whether
they satisfy the properties mentioned in § 2.2 and their requirements on switches and network topologies.
**2.4** **Motivation**
In private enterprise networks, different types of bandwidth demands should be met (§ 2.2). The most central among them is proportional bandwidth allocation over the whole network, i.e., weighted fair sharing: cloud providers should ensure service proportional to tenants' payments, which is also called fairness among tenants.
**Weighted fair queuing on switches is neither practical nor sufficient.** A natural way to satisfy proportional bandwidth allocation is to provide one weighted fair queue (WFQ) [9] per tenant on switches. However, it is infeasible for every tenant to claim a queue on switches for WFQ: the number of tenants can be orders of magnitude larger than the number of queues a switch can support (e.g., at most 32/128 queues per port in the latest programmable switches [16]). Many queue-scheduling designs have appeared in recent years that use the relatively limited resources on programmable switches and other newly developed programmable devices. AIFO [30] tries to use a single FIFO queue to approximate a priority queue, but it can hardly achieve weighted bandwidth fairness. Gearbox [10] tries to approximate WFQ with a hierarchical FIFO-based scheduler. However, its process is quite complex and cannot be applied to this generation's programmable switches; instead, Gearbox is implemented on smart NICs that support FPGAs. WFQ applied only on hosts' NICs is obviously not enough for
|Prior Works|Min Bandwidth Guarantee|Proportionality|Work Conservation|Low Latency and High Throughput|Hardware Support|Topo and Flow Situation Requirement|
|---|---|---|---|---|---|---|
|Oktopus [3]|✓|×|×|×|None|None|
|Seawall [27]|×|×|✓|×|None|None|
|EyeQ [19]|✓|×|✓|×|None|Congestion-free core|
|FairCloud PS-P [25]|✓|×|✓|×|None|Tree|
|ElasticSwitch [26]|✓|×|✓|×|None|None|
|FairCloud PS-L/N [25]|×|✓|✓|×|Per-VM queues on switch|None|
|BwE [20]|✓|✓|/|/|None|Topo, flow infos and allocation plans|
|Trinity [14]|✓|×|✓|✓|Priority queues on switch|None|
|HCSFQ [31]|/|✓|/|×|Approximate queue model on programmable switches|/|
|AIFO [30]|/|×|/|×|Approximate queue model on programmable switches|/|
|GearBox [10]|/|✓|/|×|FPGAs on smart NIC|/|
|ProNet|✓|✓|✓|✓|None or little|None|

**Table 1. Properties and requirements comparison of state-of-the-art approaches.**
the bandwidth allocation needs of a datacenter network. Moreover, Gearbox's scheduling time scale is limited by the number of FIFO queues. The most practical WFQ design is HCSFQ [31]. However, HCSFQ must drop packets proactively until fairness is reached, which can cause a large number of retransmissions; this degrades application-level performance and is usually undesirable, especially for tasks aiming at low latency. Moreover, solutions that approximate WFQs are unable to support tenants' other requirements, such as a minimum guarantee.
**End-host-based approaches to meeting bandwidth allocation demands are challenging.** Naively controlling bandwidth allocation on end-hosts does not take in-network congestion into account. However, the condition of datacenter networks is complex and varies over time. In-network congestion is not uncommon: the network topology can be oversubscribed, and incast traffic occurs intermittently [1]. A flow can encounter congestion and compete for bandwidth with other traffic inside the network. Hence, the actual bandwidth used by a flow does not equal the bandwidth allocated on the end-hosts.
**Bandwidth allocation in the cloud is hard, and so is achieving work conservation.** In the cloud environment, it is impossible to predict or learn flow information ahead of the allocation job. The datacenter uses multiple load-balancing mechanisms such as ECMP; flows are dynamic and vary greatly in real time. The only information the manager can know about a task or flow is its source and destination, not the actual links and path of each flow as in a WAN or LAN. Hence, unlike BwE, allocation must be performed without knowing the flow plans or even the topology of the cloud. It is also important to achieve work conservation, meaning that each task's bottleneck is used without waste. Work conservation likewise implies high bandwidth utilization, and bandwidth is usually linked with latency and QoS. However, flows and tasks are unpredictable in the cloud, so the location of bottlenecks and the allocation situation at those bottlenecks are also unknown and variable. Achieving work conservation is therefore another challenge to overcome.
**Applicable scenarios are limited.** The state-of-the-art approach aiming at a proportional fair share is a bandwidth allocation model called Proportional Sharing at Network-level (PS-N), proposed in FairCloud [25]. PS-N relies on per-tenant WFQs and can only achieve network-level fairness under restricted conditions, where all bottleneck links have the same capacity and background weight, or all congested links are proportionally loaded. This is impractical: a datacenter network is a complex system with many random factors, e.g., unequal links and transient bursty traffic.
**Infeasible to meet tenants' variable goals.** The demands of tenants in clouds can change along with their application traffic, and cloud providers should cater to these demands in real time. However, due to its constant calculation pattern, PS-N can only maintain a simple fixed allocation ratio between flows, which cannot satisfy tenants' variable demands.
**2.5** **Bandwidth Function**
The Bandwidth Function (BF) has been used in Google's Bandwidth Enforcer (BwE) system in its private WAN [17, 20]. A BF specifies the bandwidth allocation of a flow group as a function mapping a dimensionless measure of the available bandwidth share, called the fair share, to the actual bandwidth allocation value. It is feasible and effective to model tenants' variable allocation demands as BFs. Compared to fixed weighted fair sharing, BFs can represent more complex demands of tenants and flows; in fact, fixed weighted fair sharing and the minimum guarantee can be expressed as the simplest BFs. Moreover, when tenants' demands change, BFs can be reconfigured conveniently, without requesting changes or reconfiguration in the network. ProNet therefore leverages BFs to support run-time reconfiguration and remain flexible to bandwidth demands.
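As a rough illustration of such BFs (function names and units are ours, not BwE's or ProNet's API), fixed weighted fair sharing and a minimum guarantee can both be written as simple functions of the fair share:

```python
def weighted_bf(weight, cap):
    """BF for fixed weighted fair sharing: the allocation grows linearly
    with the fair share s, up to a capacity cap."""
    return lambda s: min(weight * s, cap)

def min_guarantee_bf(guarantee, weight, cap):
    """BF with a minimum guarantee: at least `guarantee` is allocated,
    then the allocation grows with the fair share as usual."""
    return lambda s: min(max(guarantee, weight * s), cap)

bf_a = weighted_bf(weight=2.0, cap=10.0)                     # units: e.g., Gbps
bf_b = min_guarantee_bf(guarantee=3.0, weight=1.0, cap=10.0)
print(bf_a(1.0), bf_a(6.0))   # 2.0 10.0
print(bf_b(1.0), bf_b(5.0))   # 3.0 5.0
```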
### 3 Key Ideas
To achieve the properties discussed in § 2.2, we propose ProNet. In this section, we introduce several key design points of ProNet, including byte-counter and hierarchical control of tenants.

**Figure 1. An example to illustrate how flows' bandwidth functions (BFs) are aggregated with their tenants'.** (a) Bandwidth functions of flows. (b) Bandwidth function of the tenant. (c) Flows' BFs after aggregation.
**3.1** **Byte-counter**
One of the most challenging parts of an allocation system is adapting to bandwidth demands without relying on switches to support multiple queues. ProNet leverages byte-counter to estimate each tenant's bandwidth usage on end-hosts. As the name implies, a byte-counter records the number of bytes a tenant sends over a period of time; ProNet's end-hosts maintain one byte-counter per tenant. Since a tenant's flows can be distributed across multiple end-hosts, byte-counters on different end-hosts must be aggregated per tenant; hence, local byte-counters are reported to the coordinator periodically. From each tenant's byte-counter, the average bandwidth occupied during the corresponding period can be calculated, and this average helps ProNet adjust the tenants' bandwidth allocation in the next cycle. Bandwidth reports from hosts may occasionally be delayed, but this does not affect ProNet's convergence, because the error of each cycle does not accumulate (§ 4.2.2).

Byte-counter helps ProNet strike a balance between accurate bandwidth allocation and practicality. Most prior works overlook that tenants aim for overall good application-level performance rather than instantaneous bandwidth proportionality. Instead of focusing on the instantaneous bandwidth usage of the sending ports (queues), byte-counter targets network proportionality over the whole network in the long term. Although ProNet does not converge to the target weighted fair share instantly, it does not compromise the performance of small flows, since the bytes used by small flows can be counted accurately.
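A minimal end-host sketch of the byte-counter idea might look as follows; the class and method names are illustrative, and a real deployment would ship these reports to the coordinator over the network:

```python
from collections import defaultdict

class ByteCounter:
    """Per-tenant byte counter on one end-host: counts bytes sent per
    tenant and converts them into an average bandwidth per report period."""

    def __init__(self, start_time=0.0):
        self.sent = defaultdict(int)    # tenant_id -> bytes since last report
        self.last_report = start_time

    def on_send(self, tenant_id, nbytes):
        self.sent[tenant_id] += nbytes

    def report(self, now):
        """Return {tenant_id: average bandwidth in bytes/s} for the elapsed
        period and reset the counters for the next cycle."""
        period = max(now - self.last_report, 1e-9)
        usage = {t: b / period for t, b in self.sent.items()}
        self.sent.clear()
        self.last_report = now
        return usage

bc = ByteCounter(start_time=0.0)
bc.on_send("tenantA", 1_000_000)
bc.on_send("tenantB", 500_000)
print(bc.report(now=2.0))   # {'tenantA': 500000.0, 'tenantB': 250000.0}
```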
**3.2** **Hierarchical Control of Tenants**
**Unit-flow.** Applications with different scheduling objectives and bandwidth demands require isolation. To achieve fine-grained management of flows belonging to the same tenant, we create the unit-flow abstraction. A unit-flow is a group of packets sharing the same source and destination pair and the same tenant ID; it is the minimum control unit of data in ProNet.

**Figure 2. System Structure.**
**Bandwidth function aggregation.** To meet the varied and customized demands of tenants, bandwidth functions (BFs) are leveraged to configure the sharing policy. Each tenant uses its own bandwidth functions, and so do the flows (unit-flows) belonging to it. ProNet allocates bandwidth hierarchically: bandwidth is first allocated to tenants according to their BFs, and is then shared among each tenant's flows according to the flows' BFs. To satisfy the bandwidth demands of tenants and flows simultaneously, the flows' bandwidth functions should be able to represent the requirements of the tenants they belong to. Inspired by the hierarchical MultiPath Fair Allocation (MPFA) algorithm [20] targeting WAN distributed computing, ProNet aggregates a flow's bandwidth function with its tenant's BF (§ 4.2.1): lower-level flow BFs are transformed into aggregated BFs, taking the higher-level tenant's BF as input to the aggregation functions.

Figure 1 gives a quick look at how flows' bandwidth functions are aggregated with their tenant's BF. There are two flows, and Figure 1a shows their bandwidth functions. Figure 1b shows the BF of the tenant to which the two flows belong, representing a different overall allocation requirement. Figure 1c shows the flows' BFs after aggregation; they have acquired new features from their tenant. The specific algorithm is given in § 4.2.1.
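Purely to convey the intuition of Figure 1 (the paper's exact MPFA-style algorithm in § 4.2.1 differs), the hedged sketch below composes flow BFs with their tenant's BF by letting the tenant's BF cap the total allocation and splitting that cap in proportion to the flows' own BF values; this simplification is ours, not ProNet's:

```python
def aggregate_flow_bfs(tenant_bf, flow_bfs):
    """Return aggregated BFs for each flow: for a fair share s, the
    tenant's BF bounds the total allocation, which is then divided
    among the flows in proportion to their own BF values at s."""
    def make(idx):
        def bf(s):
            demands = [f(s) for f in flow_bfs]
            total = sum(demands)
            return 0.0 if total == 0 else tenant_bf(s) * demands[idx] / total
        return bf
    return [make(i) for i in range(len(flow_bfs))]

tenant_bf = lambda s: min(3.0 * s, 12.0)
flows = [lambda s: 2.0 * s, lambda s: 1.0 * s]     # 2:1 demand ratio
agg = aggregate_flow_bfs(tenant_bf, flows)
print(agg[0](1.0), agg[1](1.0))   # 2.0 1.0  (tenant total 3.0, split 2:1)
```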
**Figure 3. An example to illustrate different congestion conditions among tenants, i.e., inter-tenant and intra-tenant congestion.** The target weight ratio between tenants A and B is 2:1. (a) Normal inter-tenant congestion. (b) Intra-tenant congestion with poor network utilization. (c) Intra-tenant congestion fixed with tenant-counter.
**conservation accomplish**
The most difficult challenge for an end-host-based network
system is how to detect the in-network situation on hosts. To
detect in-network congestion among tenants ant to achieve
a more efficient work conservation goal, ProNet’s end-hosts
leverage the Congestion-Aware Work-Conserving mechanism (CAWC) to detect in-network congestion. CAWC is
responsible for removing uncongested flows from the control loop of the byte-counter to improve network utilization.
Besides, there are corner cases where congestion is only
induced by flows belonging to the same tenant, i.e., intra**tenant flows. In this paper, we identify network behaviors**
within a tenant as intra-tenant, and those among tenants
as inter-tenant. ProNet should leave the rate adaptation of
those flows to congestion control protocols to avoid bandwidth waste. Figure 3 shows the dilemma. The weight ratio
between tenants A and B is 1:2, and the two links are identical. Figure 3a shows the normal cases where both two links
have traffic from both tenants. With ProNet, tenant A’s flows
get 2/3 of the total bandwidth, and tenant B’s flows get the
rest. But when tenants do not compete for bandwidth, things
are different. Figure 3b shows the scenario where each tenant’s flows occupy a link separately. Due to the proportional
bandwidth allocation, flows of tenant A only get half of the
bandwidth, which results in the bandwidth waste of the upper link. In fact, tenants A and B should not be considered to
be allocated in a weighted manner since they do not compete
for bandwidth. ProNet leverages tenant-counter to record
the number of tenants passing through the links, which can
be easily implemented in programmable switches (§ 4.3.3).
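The tenant-counter decision can be sketched as follows; the set-based counting and the names are our simplification, since in hardware this would be an approximate per-link counter on a programmable switch:

```python
def classify_congestion(tenant_ids_on_link):
    """Classify congestion on a link: 'intra-tenant' if all traffic
    belongs to one tenant (leave it to ordinary congestion control),
    'inter-tenant' otherwise (apply weighted allocation)."""
    return "intra-tenant" if len(set(tenant_ids_on_link)) <= 1 else "inter-tenant"

print(classify_congestion(["A", "A", "A"]))   # intra-tenant
print(classify_congestion(["A", "B", "A"]))   # inter-tenant
```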
### 4 System Design
Built from the key ideas (§ 3), we propose ProNet, a practical host-based network bandwidth allocation protocol for tenants. Figure 2 shows the basic framework of ProNet, which is composed of three parts: sender, coordinator, and receiver. When new flows arrive, the sender's traffic controller groups them into unit-flows and initializes their states (§ 4.1). The sender's rate controller is responsible for the bandwidth allocation. ProNet allocates bandwidth hierarchically: tenants' bandwidth is allocated according to the fair share calculated by the coordinator and their bandwidth functions, and the bandwidth of flows within a tenant is then allocated accordingly (§ 4.2). In addition, to guide the rate adaptation of flows and achieve work conservation, the network congestion state must be perceived (§ 4.3).
**4.1** **Allocation Behavior Initiation**
When new flows arrive, the traffic controller groups them into unit-flows. Each unit-flow is assigned a bandwidth function. Generally, a flow's bandwidth function is initialized according to the weights of the source and destination pair and the rate limit set on the sender host. Equation (1) gives the bandwidth function of flow 𝑥, where 𝑠𝑟𝑐𝑊𝑒𝑖𝑔ℎ𝑡 and 𝑑𝑠𝑡𝑊𝑒𝑖𝑔ℎ𝑡 denote the weights of the source and destination hosts, and _𝑑𝑒𝑣𝑖𝑐𝑒𝑅𝑎𝑡𝑒𝐿𝑖𝑚𝑖𝑡_ denotes the rate limit on the sender host. The BF takes the fair share 𝑠 as its input.

_𝐵𝑥_ (𝑠) = min (𝑠𝑟𝑐𝑊𝑒𝑖𝑔ℎ𝑡 + 𝑑𝑠𝑡𝑊𝑒𝑖𝑔ℎ𝑡, 𝑑𝑒𝑣𝑖𝑐𝑒𝑅𝑎𝑡𝑒𝐿𝑖𝑚𝑖𝑡)   (1)

This bandwidth function expresses the preference among different source and destination pairs. The bandwidth function also supports customized initialization: by assigning different bandwidth functions to unit-flows, ProNet can realize flexible and versatile bandwidth allocation. For instance, a tenant could have preferences among its sender hosts, e.g., one host may need twice as much bandwidth as the others; this can be achieved by simply initializing the bandwidth function to the minimum of 𝑠𝑟𝑐𝑊𝑒𝑖𝑔ℎ𝑡 and 𝑑𝑒𝑣𝑖𝑐𝑒𝑅𝑎𝑡𝑒𝐿𝑖𝑚𝑖𝑡. In addition, a minimum bandwidth guarantee can be achieved by initializing the bandwidth function to the guaranteed bandwidth. More complex BFs can also be assigned to unit-flows to express more individualized allocation demands.
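To make the initialization concrete, the following minimal Python sketch (not the authors' C++ implementation; all function names are our own) builds the default BF of Equation (1) together with the two customized variants mentioned above:

```python
# Sketch of bandwidth-function (BF) initialization for unit-flows.
# Note: as written in Equation (1), the default BF does not vary with
# the fair share s; customized BFs may.

def make_default_bf(src_weight, dst_weight, device_rate_limit):
    """Equation (1): B_x(s) = min(srcWeight + dstWeight, deviceRateLimit)."""
    def bf(fair_share):
        return min(src_weight + dst_weight, device_rate_limit)
    return bf

def make_host_preference_bf(src_weight, device_rate_limit):
    """Per-host preference: BF initialized to min(srcWeight, deviceRateLimit)."""
    def bf(fair_share):
        return min(src_weight, device_rate_limit)
    return bf

def make_guarantee_bf(base_bf, min_guarantee):
    """Minimum bandwidth guarantee: never allocate below the guarantee."""
    def bf(fair_share):
        return max(base_bf(fair_share), min_guarantee)
    return bf

bf = make_default_bf(src_weight=2, dst_weight=1, device_rate_limit=10)
assert bf(5) == 3                     # capped by srcWeight + dstWeight
guaranteed = make_guarantee_bf(bf, min_guarantee=4)
assert guaranteed(5) == 4             # floor raised to the guarantee
```

A tenant that wants one sender host to obtain twice the bandwidth of its peers would simply double that host's `src_weight` before initialization.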
ProNet: Network-level Bandwidth Sharing among Tenants in Cloud Conference’17, July 2017, Washington, DC, USA
**4.2** **Bandwidth Allocation**
In this subsection, we first introduce the bandwidth function
aggregation of tenants (§ 4.2.1), which is fundamental to
hierarchical bandwidth allocation (§ 4.2.2).
**4.2.1** **Tenants' Bandwidth Function Aggregation.** Generally, a bandwidth function (BF) is a piecewise, monotonically increasing continuous function, so it can conveniently be represented as a group of interesting points (the points at which the gradient of the function changes). It is also easy to obtain a BF's inverse function: taking a bandwidth value as input, we can recover the corresponding fair share from the BF. These properties are leveraged to perform bandwidth function aggregation.

The major process is shown in A. In this way, ProNet obtains an aggregated BF for each unit-flow that satisfies both the BF of the unit-flow and that of the tenant it belongs to, which is essential for ProNet to coordinate between tenants and their network flows. Since BFs are stored as sets of points in ProNet, this algorithm can be applied easily. As shown in Figure 1, the interesting points in the original BFs of flows 1 and 2 are transformed, using the BF of the tenant in Figure 1b, into the points shown in Figure 1c.
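The point-based representation and inverse lookup can be sketched as follows (an illustrative Python reconstruction under the assumption of piecewise-linear, non-decreasing BFs; not the authors' implementation):

```python
# A BF stored as a sorted list of "interesting points" (fair_share,
# bandwidth). Evaluation and inversion are linear interpolation between
# adjacent knots.
import bisect

class BF:
    def __init__(self, points):
        # points: (fair_share, bandwidth) knots of a piecewise-linear,
        # non-decreasing function.
        self.points = sorted(points)

    def value(self, s):
        xs = [p[0] for p in self.points]
        if s <= xs[0]:
            return self.points[0][1]
        if s >= xs[-1]:
            return self.points[-1][1]
        i = bisect.bisect_right(xs, s) - 1
        (x0, y0), (x1, y1) = self.points[i], self.points[i + 1]
        return y0 + (y1 - y0) * (s - x0) / (x1 - x0)

    def inverse(self, bw):
        # Return a fair share s with value(s) == bw; unique on strictly
        # increasing segments.
        for (x0, y0), (x1, y1) in zip(self.points, self.points[1:]):
            if y0 <= bw <= y1 and y1 > y0:
                return x0 + (x1 - x0) * (bw - y0) / (y1 - y0)
        raise ValueError("bandwidth outside the BF's range")

bf = BF([(0, 0), (4, 8), (10, 8)])  # ramps up to 8 Gbps, then stays flat
assert bf.value(2) == 4
assert bf.value(7) == 8
assert bf.inverse(4) == 2
```

Because a BF is fully described by its knots, aggregating a tenant BF with a flow BF amounts to mapping each flow knot through the tenant BF, which is why the set-of-points storage makes the aggregation cheap.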
**4.2.2** **Hierarchical Bandwidth Allocation.** ProNet controls the bandwidth allocation between tenants and their flows in a hierarchical way, i.e., from flows to their tenant, then to the whole-network level. The fair share and the bandwidth functions are calculated hierarchically: the bandwidth usage of flows belonging to the same tenant is first aggregated to calculate a local fair share; then, the network-wide share is calculated by leveraging the coordinator to interact with all end-hosts in the network. The bandwidth function aggregation works in a similar way.
**Intra-tenant bandwidth allocation.** The rate controller adjusts the sending rate of each flow by leveraging whole-network information exchanged with the coordinator. Figure 4 shows the architecture of the rate controller. The rate controller calculates the fair share of the local tenants by taking the usage of unit-flows into account, and exchanges this fair share with the coordinator to obtain synchronized network states.

We now discuss how ProNet allocates bandwidth within a single tenant by leveraging tenant controllers at the tenant level. At the start, each tenant controller is assigned the initial bandwidth function (§ 4.1), which reflects the allocation policy of the tenant. In each cycle, the total bandwidth usage is collected using the byte-counter, together with the bandwidth functions of the unit-flows belonging to the corresponding tenant. The tenant controller then aggregates all these bandwidth functions with the tenant's bandwidth function, producing a new bandwidth function for each of the unit-flows. Therefore, every unit-flow in the tenant obtains a new bandwidth function that satisfies both the allocation target of its tenant and that of the unit-flow itself. This ensures that the tenant's unit-flows are assigned the correct bandwidth, which is necessary for achieving fair allocation across the whole network.

**Figure 4. Information Flow Process**
After unit-flows are allocated their bandwidth, their transmission rate must be limited accordingly. A Token Bucket Filter (TBF) [12] is leveraged as the rate limiter for unit-flows, because a TBF provides relatively stable rate control and its peak rate is controllable. The token bucket accrues tokens according to the bandwidth allocated to the unit-flow. Before a packet is transmitted, the traffic controller checks whether the token bucket contains sufficient tokens. If so, the packet is sent, and a number of tokens equivalent to the bytes of the packet is removed.
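The rate-limiting step above can be sketched as follows (an illustrative Python model of a token bucket, not the in-kernel implementation; parameter names and units are our own assumptions):

```python
# Token bucket used to enforce a unit-flow's allocated rate: tokens
# refill at the allocated bandwidth, and each packet spends its size
# in tokens before it may be transmitted.

class TokenBucket:
    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s   # refill rate = allocated bandwidth
        self.burst = burst_bytes       # bucket capacity bounds the peak rate
        self.tokens = burst_bytes
        self.last = 0.0

    def set_rate(self, rate_bytes_per_s):
        # Called each cycle with the unit-flow's newly allocated bandwidth.
        self.rate = rate_bytes_per_s

    def try_send(self, pkt_bytes, now):
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes   # charge the packet's bytes
            return True                # packet may be transmitted
        return False                   # hold the packet until tokens accrue

tb = TokenBucket(rate_bytes_per_s=1000, burst_bytes=1500)
assert tb.try_send(1500, now=0.0)      # full bucket: send immediately
assert not tb.try_send(1500, now=0.5)  # only 500 tokens refilled so far
assert tb.try_send(1500, now=2.0)      # enough tokens accumulated again
```

The burst size is what makes the peak rate controllable: a small bucket keeps the instantaneous rate close to the configured average.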
**Coordinator for inter-tenant bandwidth allocation.** There are many end-hosts in the datacenter, and ProNet must be able to coordinate all of them to achieve network-level fairness. On every host, the fair share of each tenant should be kept at the same level at all times. The coordinator is leveraged to interact with the end-hosts. To avoid the coordinator becoming a network bottleneck, its logic is kept simple: in each reporting cycle of each host, the coordinator collects the overall bandwidth usage (i.e., the fair share value) of each tenant from the end-hosts' rate controllers and then sends back the updated target fair share.

The update process takes an input array 𝐼𝑛𝑝𝑢𝑡𝐴𝑟𝑟𝑎𝑦 = [𝑠1, 𝑠2, 𝑠3, . . ., 𝑠𝑁] containing all the fair shares collected for each tenant, where _𝑠𝑖_ denotes the fair share of tenant 𝑖. After aggregating these values, the coordinator computes _𝐴𝑣𝑔_ = 𝑎𝑣𝑒𝑟𝑎𝑔𝑒(𝐼𝑛𝑝𝑢𝑡𝐴𝑟𝑟𝑎𝑦) to average out any allocation unfairness from the last cycle. In addition, an accelerating factor 𝑎𝑙𝑝ℎ𝑎 is applied to the fair share: 𝑇𝑎𝑟𝑔𝑒𝑡 = 𝐴𝑣𝑔 ∗ (1 + 𝑎𝑙𝑝ℎ𝑎).
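A minimal sketch of this update rule (illustrative Python; `coordinator_update` is our own name):

```python
# Coordinator update: average the fair shares reported by the hosts and
# apply the accelerating factor alpha to produce the new target.

def coordinator_update(reported_shares, alpha):
    """reported_shares = [s1, ..., sN]; returns the new target fair share."""
    avg = sum(reported_shares) / len(reported_shares)
    return avg * (1 + alpha)

# Three hosts report slightly diverged fair shares for one tenant.
target = coordinator_update([4.0, 5.0, 6.0], alpha=0.1)
assert abs(target - 5.5) < 1e-9   # average 5.0, accelerated by 10%
```

Averaging smooths out per-host divergence from the last cycle, while `alpha` lets the shares grow toward any unused capacity between coordination rounds.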
Conference’17, July 2017, Washington, DC, USA Trovato and Tobin, et al.
After this calculation, the coordinator sends the updated fair share back to all rate controllers. Once a host receives the new target fair share from the coordinator, it calculates the new bandwidth (rate) for each unit-flow using its bandwidth functions, and the updated rate is passed to the TBF for rate limiting. The remaining process is similar to intra-tenant bandwidth allocation. Figure 4 shows the information flow of ProNet.

As for the coordination pattern, because the report times of the byte-counters vary, an asynchronous report-and-coordinate pattern must be supported. To achieve this, the coordinator in our design keeps a report window within which byte-counters report their usage; within one report window, the coordinator balances all the reported usage of unit-flows on the corresponding hosts. To avoid allocation unfairness caused by per-host reporting delays, we impose the rule that a host only continues to the next cycle of balanced allocation after it receives an allocation instruction from the coordinator, i.e., a host will not send its usage report twice within a single report window of the coordinator.
**4.3** **Local Rate Adjustment**
The flows' rates cannot simply be adjusted according to the received fair share. Instead, rate adaptation should take the network congestion state into consideration. There can be bursty, intermittent flows that come and leave quickly. Hence, flows should be allowed a higher bandwidth when no congestion occurs, so as to utilize the network; at the same time, the rate of flows passing through congested paths should be reduced to avoid aggravating congestion in the network. Furthermore, to ensure work conservation, intra-tenant congestion should be left to congestion control protocols rather than handled by ProNet.
**4.3.1** **CAWC Mechanism.** ProNet's receiver is responsible for detecting the in-network congestion state according to the congestion information carried in packets, a scheme we call the Congestion-Aware Work-Conserving (CAWC) mechanism. CAWC detects whether the network bottleneck is fully loaded, which is how ProNet achieves its work-conservation goal. ProNet's receiver maintains a scoreboard that counts the bytes of packets received within a given period of time _𝑠𝑙𝑖𝑑𝑖𝑛𝑔_𝑡𝑖𝑚𝑒_ (e.g., 10 𝜇s), separated by whether they are marked with ECN or not. When the ratio of ECN-marked packets exceeds a given threshold 𝑐𝑜𝑛𝑔𝑒𝑠𝑡𝑖𝑜𝑛_𝑡ℎ𝑟𝑒, the receiver sends a congestion notification back to the sender. Note that the CAWC signal uses the highest priority to guarantee timely delivery. When the sender receives the CAWC signal, it adjusts the flows' fair shares according to Algorithm 1. The control loop of uncongested flows is separated from that of congested flows, i.e., the transmission of uncongested flows is not affected by in-network congestion that they do not contribute to. This avoids unnecessary bandwidth waste and ensures work conservation. Besides, the congestion-ratio threshold makes the transmission control of congested flows more robust to scenarios where bursty small flows leave the network quickly. The allocation converges quickly, normally on a sub-millisecond scale.
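The receiver-side detection can be sketched as follows (an illustrative Python model; class and parameter names are our own, and the byte-weighted ratio is an assumption consistent with the scoreboard description above):

```python
# CAWC receiver sketch: count bytes of ECN-marked vs. unmarked packets
# over a sliding window and signal congestion when the marked ratio
# exceeds congestion_thre.

class CAWCDetector:
    def __init__(self, sliding_time, congestion_thre):
        self.sliding_time = sliding_time      # e.g., 10 microseconds
        self.thre = congestion_thre           # e.g., 0.5
        self.events = []                      # (timestamp, bytes, ecn_marked)

    def on_packet(self, ts, nbytes, ecn_marked):
        self.events.append((ts, nbytes, ecn_marked))
        # Drop packets that fell out of the sliding window.
        self.events = [e for e in self.events if ts - e[0] <= self.sliding_time]
        marked = sum(b for _, b, m in self.events if m)
        total = sum(b for _, b, _ in self.events)
        # True => send a (highest-priority) congestion signal to the sender.
        return total > 0 and marked / total > self.thre

d = CAWCDetector(sliding_time=10e-6, congestion_thre=0.3)
assert not d.on_packet(0.0, 1500, False)    # 0% marked
assert d.on_packet(1e-6, 1500, True)        # 50% marked > 30%
assert not d.on_packet(2e-6, 3000, False)   # 1500/6000 = 25% marked
```

The ratio test, rather than a single-mark trigger, is what filters out transient marks left by short bursty flows.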
**Algorithm 1 Distributed Rate Adaptation Algorithm**
**Input:** 𝑇𝑎𝑟𝑔𝑒𝑡𝐹𝑆 // updated fair share received from the coordinator
_𝑘_ // the rate attenuation factor when congested
_𝑟𝑎𝑡𝑒𝐶𝑜𝑛𝑡𝑟𝑜𝑙𝐶𝑦𝑐𝑙𝑒_ // the execution period of this algorithm
_𝑟𝑒𝑝𝑜𝑟𝑡𝐶𝑦𝑐𝑙𝑒_ // the communication interval to the coordinator
_𝑜𝑙𝑑𝐹𝑆_ // the fair share of the last cycle
_𝑛𝑒𝑤𝐹𝑆_ // the fair share of the next cycle
1: This algorithm runs periodically every 𝑟𝑎𝑡𝑒𝐶𝑜𝑛𝑡𝑟𝑜𝑙𝐶𝑦𝑐𝑙𝑒
2: **for** each 𝑓𝑙𝑜𝑤 in flowTable **do**
3:   **if** 𝑓𝑙𝑜𝑤 did not transmit in the last cycle **then**
4:     Mark the flow as inactive
5:     **continue**
6:   **if** 𝑜𝑙𝑑𝐹𝑆 ≠ 𝑇𝑎𝑟𝑔𝑒𝑡𝐹𝑆 **then**
7:     𝑛𝐹𝑆 = the compensation fair share according to the BF of 𝑓𝑙𝑜𝑤
8:     𝑜𝑙𝑑𝐹𝑆 += 𝑛𝐹𝑆
9:   **if** congestion is detected **then**
10:    **if** congestion was detected in the prior cycle **then**
11:      𝑛𝑒𝑤𝐹𝑆 = 𝑜𝑙𝑑𝐹𝑆 − 𝑘
12:    **else**
13:      𝑛𝑒𝑤𝐹𝑆 = 𝑜𝑙𝑑𝐹𝑆
14:  **else**
15:    𝑛𝑒𝑤𝐹𝑆 = 𝑜𝑙𝑑𝐹𝑆 ∗ (1 + 𝑟𝑎𝑡𝑒𝐶𝑜𝑛𝑡𝑟𝑜𝑙𝐶𝑦𝑐𝑙𝑒/𝑟𝑒𝑝𝑜𝑟𝑡𝐶𝑦𝑐𝑙𝑒)
16:  Calculate 𝑛𝐵𝑊 with 𝑛𝑒𝑤𝐹𝑆 as the input of the BF of 𝑓𝑙𝑜𝑤
17:  𝑟𝑎𝑡𝑒𝑆𝑢𝑚 += 𝑛𝐵𝑊
18: 𝑠𝑐𝑎𝑙𝑖𝑛𝑔𝐹𝑎𝑐𝑡𝑜𝑟 = 1
19: **if** 𝑟𝑎𝑡𝑒𝑆𝑢𝑚 ≥ 𝑑𝑒𝑣𝑖𝑐𝑒𝑅𝑎𝑡𝑒𝐿𝑖𝑚𝑖𝑡 **then**
20:   𝑠𝑐𝑎𝑙𝑖𝑛𝑔𝐹𝑎𝑐𝑡𝑜𝑟 ∗= (𝑑𝑒𝑣𝑖𝑐𝑒𝑅𝑎𝑡𝑒𝐿𝑖𝑚𝑖𝑡 / 𝑟𝑎𝑡𝑒𝑆𝑢𝑚)
21: **for** each active 𝑓𝑙𝑜𝑤 in flowTable **do**
22:   Set the allocated rate of 𝑓𝑙𝑜𝑤 to 𝑛𝐵𝑊 ∗ 𝑠𝑐𝑎𝑙𝑖𝑛𝑔𝐹𝑎𝑐𝑡𝑜𝑟
**4.3.2** **Distributed Rate Adaptation Algorithm.** The main idea behind ProNet is that while bandwidth is plentiful at the scale of the whole cloud, it is finite locally and instantaneously. In this part we therefore concentrate on adjusting the bandwidth allocation at the granularity of a tenant over a period of time.
The rate controller at the end-host runs a rate adaptation algorithm to optimize the bandwidth allocation in a distributed manner, as shown in Algorithm 1. The host periodically processes each unit-flow recorded in the flowTable to allocate its rate. Each unit-flow is first checked for activity; only active unit-flows are allocated bandwidth (Lines 2–5). ProNet compares the updated fair share TargetFS received from the coordinator with the fair share oldFS of the flow in the last allocation cycle, where oldFS is initialized according to the starting rate of the flow. To preserve overall allocation fairness, ProNet calculates a new fair share for the unit-flow, which compensates for
the unfairness of the last cycle's allocation. The integral of the BF over the compensation 𝑛𝐹𝑆 equals its integral up to TargetFS, i.e.,

∫_{𝑜𝑙𝑑𝐹𝑆}^{𝑜𝑙𝑑𝐹𝑆+𝑛𝐹𝑆} 𝐵𝐹_𝑓 (𝑠) 𝑑𝑠 = ∫_{𝑜𝑙𝑑𝐹𝑆}^{𝑇𝑎𝑟𝑔𝑒𝑡𝐹𝑆} 𝐵𝐹_𝑓 (𝑠) 𝑑𝑠   (Lines 7–8).

Note that 𝑛𝐹𝑆 can be negative when 𝑜𝑙𝑑𝐹𝑆 is larger than 𝑇𝑎𝑟𝑔𝑒𝑡𝐹𝑆. After this, the host adjusts each unit-flow's rate according to the network state and the local bandwidth usage. To avoid being overly sensitive to congestion while still utilizing the network, we use two consecutive rate-control cycles to determine whether a unit-flow takes part in the in-network congestion (Lines 9–15). When congestion is detected for the first time, the allocation is kept at the old fair share 𝑜𝑙𝑑𝐹𝑆; on the second consecutive detection, the allocation for this unit-flow is decreased by 𝑘 to alleviate the congestion. If no congestion is detected, i.e., the flow does not take part in the congestion, its rate is increased by a fraction (less than double its rate) so that it climbs up to utilize the bandwidth (note that 𝑟𝑎𝑡𝑒𝐶𝑜𝑛𝑡𝑟𝑜𝑙𝐶𝑦𝑐𝑙𝑒 is smaller than 𝑟𝑒𝑝𝑜𝑟𝑡𝐶𝑦𝑐𝑙𝑒). The bandwidth allocation is calculated by feeding the fair share to the bandwidth function (Line 16). When the total allocation on the host reaches the rate limit, ProNet scales down the overall allocation (Lines 19–20). Finally, with the newly calculated fair shares, ProNet assigns the next allocation target bandwidth to each unit-flow (Line 22).
With this algorithm, every host distributedly optimizes the next cycle's allocation goal for each unit-flow: unit-flows that contribute to in-network congestion are forced to adapt their rates, while uncongested unit-flows are driven to utilize the network and achieve work conservation. Evaluations show that the sending hosts converge within a few rounds of rate adaptation (§ 6.3).
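Algorithm 1 can be sketched as runnable Python as follows (an illustrative reconstruction, not the authors' code; BFs are modeled as simple callables, and since a strictly positive BF makes the integral compensation of Lines 7–8 equivalent to snapping oldFS to TargetFS, that is what the sketch does):

```python
# Sketch of Algorithm 1 (distributed rate adaptation) for one host.
# flows: list of dicts with keys bf, old_fs, active, congested,
# prev_congested; line comments map back to the pseudocode.

def adapt_rates(flows, target_fs, k, rate_cycle, report_cycle, device_rate_limit):
    rate_sum = 0.0
    alloc = []
    for f in flows:
        if not f["active"]:                      # Lines 3-5: skip inactive flows
            continue
        if f["old_fs"] != target_fs:             # Lines 6-8: compensate unfairness
            f["old_fs"] = target_fs
        if f["congested"]:                       # Lines 9-13
            if f["prev_congested"]:
                f["new_fs"] = f["old_fs"] - k    # back off on repeated congestion
            else:
                f["new_fs"] = f["old_fs"]        # tolerate a first congestion hit
        else:                                    # Lines 14-15: probe upward
            f["new_fs"] = f["old_fs"] * (1 + rate_cycle / report_cycle)
        bw = f["bf"](f["new_fs"])                # Line 16: BF maps share -> rate
        rate_sum += bw
        alloc.append((f, bw))
    scaling = 1.0
    if rate_sum >= device_rate_limit:            # Lines 18-20: respect host limit
        scaling = device_rate_limit / rate_sum
    for f, bw in alloc:                          # Lines 21-22: hand rates to TBF
        f["rate"] = bw * scaling
    return [f["rate"] for f, _ in alloc]

bf = lambda s: min(s, 8.0)                       # toy BF capped at 8 Gbps
f1 = dict(bf=bf, old_fs=4.0, active=True, congested=False, prev_congested=False)
f2 = dict(bf=bf, old_fs=4.0, active=True, congested=True, prev_congested=True)
rates = adapt_rates([f1, f2], target_fs=4.0, k=1.0,
                    rate_cycle=0.001, report_cycle=0.01, device_rate_limit=100.0)
assert abs(rates[0] - 4.4) < 1e-9                # uncongested flow climbs
assert rates[1] == 3.0                           # congested flow backs off by k
```

Because the increase factor is `rate_cycle / report_cycle` < 1, an uncongested flow grows by only a small fraction per control cycle and at most doubles between two coordinator reports, as the text above notes.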
**4.3.3** **Tenant-counter.** To enable ProNet to distinguish inter-tenant from intra-tenant congestion, we design an optional module called the tenant-counter on the programmable switch. All ProNet requires from the programmable switch is a tenant register table: a table with a preset capacity (we usually set it to 2) whose entries record the tenant information of passing flows. The table keeps at most this number of the most recently encountered tenants' IDs and their latest encounter times. When a packet passes through the switch, the table registers its tenant ID; when a packet from a different tenant passes, a new entry is added. To reduce network overhead, the switch does not transmit the tenant table. Instead, if congestion occurs (i.e., the ECN-marking threshold is exceeded) and the table holds more than one entry, the packet directly carries an inter-tenant congestion flag. The receiver relays the flag back to the sender by adding an extra bit to the congestion signals. In addition, expired tenants are removed from the table when the switch has not seen their flows for a timeout interval, which is computed from the recorded encounter times. Our evaluation also shows that the tenant-counter need not be applied to all switches; it can instead be deployed on switches where congestion is most likely to occur, i.e., the last hop of the network or oversubscribed nodes.
On the host side, a module called the non-competitive pool is added. Initially, all unit-flows are marked non-competitive and placed in this pool. Unit-flows in the pool are not controlled by ProNet. After inter-tenant congestion signals are received from the receiver, the corresponding unit-flows are removed from the pool for further allocation control. The non-competitive pool is a useful abstraction that can be leveraged for user-defined performance optimization. For instance, small flows can be placed in the pool for better performance; flows that require high priority or strict quality of service can likewise be assigned to it.
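The pool's life cycle can be sketched as follows (a minimal Python illustration; class and method names are our own assumptions):

```python
# Host-side non-competitive pool: unit-flows start out uncontrolled, and
# an inter-tenant congestion signal moves them under ProNet's control.

class NonCompetitivePool:
    def __init__(self):
        self.pool = set()          # unit-flows not controlled by ProNet
        self.controlled = set()    # unit-flows under allocation control

    def add_flow(self, flow_id):
        self.pool.add(flow_id)     # all unit-flows start as non-competitive

    def on_inter_tenant_congestion(self, flow_id):
        # Inter-tenant congestion signal from the receiver: hand the flow
        # over to ProNet's rate controller.
        self.pool.discard(flow_id)
        self.controlled.add(flow_id)

    def pin(self, flow_id):
        # User-defined optimization: keep e.g. small or high-priority
        # flows out of allocation control.
        self.controlled.discard(flow_id)
        self.pool.add(flow_id)

p = NonCompetitivePool()
p.add_flow("f1"); p.add_flow("f2")
p.on_inter_tenant_congestion("f1")
assert p.controlled == {"f1"} and p.pool == {"f2"}
p.pin("f1")
assert p.pool == {"f1", "f2"} and p.controlled == set()
```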
### 5 Implementation
We implement a prototype of ProNet both on a real-machine testbed and in ns-3 simulation code. ProNet is built as a user-level process that implements a token bucket filter to rate-limit the transmission rate of flows in the Linux kernel. Our current implementation has around 2000 lines of C++ code. To evaluate ProNet, we build a dumbbell testbed with 16 servers connected to Pronto-3295 48-port Gigabit switches and set up a fat-tree (k=3) topology to simulate the datacenter structure. We also enable ECN and ECMP on our testbed, for congestion control and load balancing respectively, to simulate the cloud environment.
The tenant-counter is implemented on Tofino1 [5]. It requires only a little help from the programmable switch: three registers in the data plane are used. The state machine of our P4 program logic is shown in Figure 5. An output port of ProNet's switch has two main states: competitive and non-competitive. Competitive denotes that flows of multiple tenants are being forwarded to the same output port, and packets are tagged to notify the end hosts. Non-competitive denotes that the output link is occupied by a single tenant's flows, and packets passing through a non-competitive port are forwarded as normal.

The initial state of a switch port is non-competitive. Two conditions must be satisfied for the state transition: first, the arriving packet must belong to a different tenant than the prior packet; second, the time interval between these two packets must be less than a preset timeout threshold. Together, these two conditions mark the beginning of an inter-tenant congestion episode, and the state is transformed into the competitive state. The switch then tags incoming packets to notify the receiver. Also, the switch must be
aware of when the competition at an output port has ended. Therefore, the state is transferred back to the initial state once the following condition is satisfied: the time interval between the arriving packet and the last packet from a different tenant is larger than the timeout threshold. This marks the termination of an inter-tenant congestion episode.

To realize the state machine described above, we developed the P4 program of our testbed, also shown in Figure 5. All modifications are made at the egress port of the switch, which corresponds to the competing congestion on each output link. We use only 3 (of 12) logical stages of a Tofino 1 programmable switch.
The incoming packet's tenant ID (Tid) and timestamp (TS) are taken as input during the packet's forwarding process. In the first stage, we use two registers to record information: the first register records the tenant ID of the last arriving packet (LTid), and the second stores the timestamp of the last arriving packet (LTS). When a packet arrives, the values in these two registers are replaced by the information in the packet, and the old values are passed to the next stage. In stage 2, the switch checks two conditions: whether Tid differs from the last packet's tenant ID (LTid), and whether the time interval between LTS and TS is smaller than the timeout threshold (TH). We use the third register to record the timestamp of the last packet from a different tenant (LDTS). If both conditions are satisfied, this register updates LDTS with LTS and passes the old LDTS to the next stage; otherwise, it simply passes the current LDTS along. In the last stage, the switch compares the time interval between TS and LDTS against TH. If the interval is within the threshold, inter-tenant congestion is determined, and the packet is tagged and sent; otherwise, the packet is sent without modification.
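The three-register pipeline can be simulated on the host as follows (an illustrative Python reconstruction of the Figure 5 state machine, not the P4 code; a packet is tagged while a different tenant's packet has been seen within the timeout):

```python
# Software model of the tenant-counter registers: R1 holds the last
# packet's tenant ID (LTid), R2 its timestamp (LTS), and R3 the
# timestamp of the last packet from a different tenant (LDTS).

class TenantCounter:
    def __init__(self, th=0.1):          # th: timeout threshold (seconds)
        self.th = th
        self.ltid = None                 # R1: tenant ID of the last packet
        self.lts = -1e18                 # R2: timestamp of the last packet
        self.ldts = -1e18                # R3: last different-tenant timestamp

    def on_packet(self, tid, ts):
        """Return True if the packet should carry the inter-tenant flag."""
        ltid, lts = self.ltid, self.lts
        self.ltid, self.lts = tid, ts    # stage 1: swap in the new values
        if ltid is not None and tid != ltid and ts - lts <= self.th:
            self.ldts = lts              # stage 2: a recent, different tenant
        return ts - self.ldts <= self.th # stage 3: competitive => tag

sw = TenantCounter()
assert not sw.on_packet("A", 0.00)   # single tenant: non-competitive
assert not sw.on_packet("A", 0.01)
assert sw.on_packet("B", 0.02)       # second tenant within TH: tagged
assert sw.on_packet("A", 0.05)       # competition still fresh: tagged
assert not sw.on_packet("A", 0.50)   # competition expired: back to normal
```

Each `on_packet` call performs one read-modify-write per register, which is why the logic fits in three sequential pipeline stages on the switch.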
In this way, the programmable switch can easily identify inter-tenant congestion and let the affected packets carry the signal to the receiver for our CAWC congestion control mechanism. Moreover, this is achieved with a very simple implementation and minimal switch resource usage, which hardly influences the performance of ProNet.
### 6 Evaluation
We evaluate ProNet both in large-scale ns-3 network simulations [24] and on a testbed. First, we show that weighted bandwidth allocation among tenants is achievable using ProNet. We then evaluate ProNet's bandwidth guarantee and work-conserving ability, using the ideal PS-N as the baseline. ProNet reduces the packet loss ratio compared with HCSFQ, a state-of-the-art queue scheduling approach; hence, the flow completion time is reduced accordingly. Finally, we use the testbed to evaluate the transmission loss and latency of ProNet's coordination.

**Figure 5. P4 Program State Machine and Procedure. (Tid: tenant ID of the arriving packet; TS: its timestamp; LTid: tenant ID of the last packet; LTS: timestamp of the last packet; LDTS: timestamp of the last packet from a different tenant; TH: timeout threshold. R1, R2, and R3 are stateful memory, i.e., registers, in the switch.)**
**Parameter Setup.** ProNet has a set of default settings. The update cycle of the byte-counter is the period at which ProNet refreshes and collects the bandwidth usage from each host to adjust the allocation for the next period; it determines the time granularity of ProNet. A shorter cycle lets ProNet converge more quickly, but brings higher communication cost between the hosts and the coordinator. The cycle time in our experiments is 0.01 s. The CAWC period determines the frequency of congestion checking and feedback: the higher the frequency, the faster ProNet detects congestion, but the more traffic is occupied by feedback packets. In our evaluation it is set to once per 50 packets. The last parameter is the acceleration ratio of the rate controller for allocation convergence: a higher value may yield faster convergence but also stronger oscillation. We set it to 10% in our tests. The timeout of the tenant register table in the switch is set to 0.1 s in our evaluation, meaning that a unit-flow that has not passed through the switch for 0.1 s is regarded as expired. A lower value gives more precise congestion detection, but places a heavier load on the network and the switch.
**Metrics.** We have four major performance metrics: (i) throughput, (ii) packet drop ratio, (iii) flow completion time (FCT), and (iv) the fairness and accuracy of bandwidth allocation.
**6.1** **Weighted Fairness Allocation Experiments**
First, we show that weighted fair allocation among tenants is guaranteed when using ProNet. We cover both UDP and TCP traffic with equal or different weights. In the experiments, we use a topology based on Clos [7]: a k=4 fat-tree with eight servers as senders and two servers as receivers, and we send packets in two groups. Each sender sends 8 flows (grouped by five-tuple), so a total of 64 flows are sent to the receivers; we select 32 flows for demonstration. All servers and switches are connected with 40 Gbps links, and the bottleneck of our topology is 40 Gbps.

As the control group, we first run the traffic above as normal UDP and TCP traffic without ProNet. For the UDP tests, we set different rates for the UDP flows: 24 flows (flows 1–24) are sent at 2 Gbps, and 8 flows (flows 25–32) are sent at 8 Gbps. As shown in Figure 6, flows 25–32 achieve much higher bandwidth than flows 1–24, which does not meet the fairness requirement. For the TCP tests, as shown in Figure 7, the 32 flows achieve a nearly fair sending rate thanks to the congestion control in the TCP protocol. We also performed experiments with mixed TCP and UDP flows: flows 6, 7, 14, 15, 22, 23, 30, and 31 are UDP flows, and the others are TCP flows, all sent at 3.2 Gbps. As shown in Figure 8, without ProNet, TCP congestion control cannot restrain the UDP flows, so the UDP flows' sending rate is much higher than that of the TCP flows, which does not meet the fairness requirement of multi-tenant networks.

**ProNet can guarantee fair allocation among tenants.** To show ProNet's ability to maintain fairness, we first assign all tenants the same weight. In Figures 6a, 7a, and 8a, although the tenants have the same weight, we set the initial sending rates differently. We set 16 flows of the same weight per tenant: flows 1–16 belong to tenant 1, and flows 17–32 belong to tenant 2. As Figures 6–8 show, with ProNet, fairness among flows is kept at the same level in all of these tests, regardless of the network protocol and communication pattern. These experiments demonstrate ProNet's ability to achieve fair allocation among tenants and flows.

**ProNet can guarantee weighted fair allocation among tenants.** Our experiments also show weighted allocation among tenants. In this setup, flows 1–16 belong to tenant 1, flows 17–32 belong to tenant 2, and the relative weight between tenants 1 and 2 is assigned as 1:2. We evaluate the system with UDP traffic, TCP traffic, and a mix of the two. Figures 6b, 7b, and 8b show the results: the flows of the two tenants are allocated bandwidth at the preset ratio, with flows in tenant 1 keeping a throughput of about half that of the flows in tenant 2 across the whole 40 Gbps bandwidth. In particular, for the mixed TCP and UDP traffic pattern, where each tenant's traffic contains random UDP and TCP flows, Figure 8 shows that the system still achieves the weighted 1:2 allocation between the two tenants.

**(a) Equal weights. (b) Weighted allocation.**
**Figure 6. Allocation experiments with UDP traffic.**

**(a) Equal weights. (b) Weighted allocation.**
**Figure 7. Allocation experiments with TCP traffic.**

**(a) Equal weights. (b) Weighted allocation.**
**Figure 8. Allocation experiments with mixed traffic.**

**(a) Work conservation test for the full usage of the 10 Gbps total bandwidth. (b) A bandwidth guarantee experiment with a minimum bandwidth set to 1 Gbps.**
**Figure 9. Work conservation and bandwidth guarantee experiments.**
**6.2** **Work Conservation and Bandwidth Guarantee**
**Experiments**
**Work Conservation Experiment.** For the work conservation part, we mainly monitor the congested links, which form the critical allocation path in the whole network. In this experiment, we set up 4 flows from 2 different tenants with different arrival and departure times. The tenants' weight ratio is assigned as 1:2; flows 0 and 1 belong to tenant 1, and the other two belong to tenant 2. All four flows share a total 10 Gbps bottleneck, and the weights assigned to the flows within each tenant are equal. For the traffic, we concentrate on the congested link to track the behavior of ProNet. The result is shown in Figure 9a.

For comparison, we also plot the sum of these flows' bandwidth usage in the figure, together with the full bandwidth capacity. As the result in Figure 9a shows, on the congested link each flow immediately acquires the correct bandwidth share, while all flows on the bottleneck link together occupy the total capacity of the link. For example, at 6 s, flow 2 enters the bottleneck and the allocation of the three flows changes instantly: flow 2 occupies about 6 Gbps of bandwidth, and flows 0 and 1 from the other tenant are adjusted to share about 3 Gbps equally. This illustrates the correctness of ProNet. Above all, this test shows that our system achieves work conservation whenever flows arrive or leave.
**Bandwidth Guarantee Experiment.** We also evaluate ProNet's minimum bandwidth guarantee. In this experiment, we set up several flows from different tenants with different allocation weights, and for each tenant's flow we also preset a minimum bandwidth using the bandwidth function and the other mechanisms in ProNet. Flows 1 to 4 are from different tenants, and the weight ratio among these four tenants is 1:2:3:4. As for the minimum bandwidth, all tenants are given a 10 Gbps bandwidth guarantee by setting the corresponding bandwidth functions. Figure 9b shows the result. With the minimum guarantee marked in the figure, we can see that the throughput of each tenant's flows starts at its guaranteed value regardless of the condition of the other flows, and all flows then reach the preset weighted bandwidth allocation ratio. This experiment shows that the minimum guarantee goal can be achieved using ProNet.
**6.3** **Performance comparison with HCSFQ**
As shown in Figure 10, we compare ProNet with HCSFQ, a state-of-the-art system that implements weighted fair queuing through active packet dropping in programmable switches. The results show that ProNet achieves better performance than HCSFQ in weighted share allocation.

**(a) TCP NewReno. (b) TCP BIC.**
**Figure 10. Throughput of flows under HCSFQ and ProNet with different TCP protocols.**

We measure the performance of ProNet and HCSFQ on a simple topology, where
three hosts, A, B, and C, are connected by 30 Gbps, 1 µs-delay links to a single switch with a maximum buffer size of 250 KB per port. Hosts A and B each start 10 TCP flows, which send 0.2 GB and 0.1 GB to host C, respectively. The TCP flows started by host A have twice the weight of those of host B. ECN is enabled in this experiment, with a minimum threshold of 50 KB and a maximum threshold of 200 KB. To avoid pathological behavior of TCP flows under HCSFQ's proactive packet dropping, both the reconnect timeout (the timeout after a SYN packet goes unanswered) and the minimum RTO are set to 10 ms.
As shown in Figure 10a, the throughput of flows with ProNet is consistently higher than that of flows with HCSFQ under TCP NewReno. This is because HCSFQ's proactive packet dropping both wastes network bandwidth and keeps the congestion window too low to utilize the bandwidth. In the experiment, HCSFQ proactively drops about 5% of the packets, which severely impairs the performance of the TCP flows despite the small timeout set to alleviate the impact of packet drops. In contrast, flows with ProNet suffer almost no packet loss because their rates are precisely controlled. As a result, the total throughput of flows with HCSFQ is only 78% of that with ProNet, and the FCT under HCSFQ is on average 31% longer than under ProNet. Another noticeable observation is that ProNet provides fairer bandwidth allocation than HCSFQ: the coefficient of variation¹ of flow throughput with HCSFQ is 36 times higher than with ProNet. Finally, we find that ProNet allocates bandwidth accurately in proportion to the weights, with a throughput ratio of 1.99 between the two groups of flows under ProNet versus 2.11 under HCSFQ. As demonstrated in Figure 10b, we also compare ProNet with HCSFQ under other congestion control protocols, and the aforementioned results still hold, with ProNet outperforming HCSFQ in throughput, fairness, and accuracy of bandwidth allocation.
¹The coefficient of variation is also known as the relative standard deviation, defined as the ratio of the standard deviation to the mean. The higher the coefficient of variation, the greater the dispersion.
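The dispersion metric used above is straightforward to compute; a minimal sketch (the example allocation values are invented for illustration):

```python
import statistics

def coefficient_of_variation(xs):
    """Relative standard deviation: population std dev divided by the mean."""
    mean = statistics.mean(xs)
    return statistics.pstdev(xs) / mean

# Two allocations with the same mean but different dispersion: a perfectly
# fair split has CV 0, and a skewed split has a strictly larger CV.
fair = [1.0, 1.0, 1.0, 1.0]
skewed = [0.5, 0.5, 1.5, 1.5]
print(coefficient_of_variation(fair))    # 0.0
print(coefficient_of_variation(skewed))  # 0.5
```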
-----
ProNet: Network-level Bandwidth Sharing among Tenants in Cloud Conference’17, July 2017, Washington, DC, USA
**(a) Flows’ bandwidth result.** **(b) Tenants’ bandwidth result.**
**Figure 11. Large scale experiments result.**
**6.4** **Large Scale Experiments**
In this part, we evaluate the stability and correctness of ProNet in large-scale simulation experiments. We use a k=10 fat-tree topology containing 250 hosts and tens of switching nodes to approximate a realistic network, and deploy over 2000 Poisson-arrival flows belonging to 48 tenants with different weight ratios. The workload emulates a web-search task, which is commonly seen in datacenters, and contains bursty flows, short flows, and other kinds of flows that might appear in a datacenter. The experiment runs for a relatively long period of time to capture the overall allocation behavior of the network.
**ProNet guarantees fair allocation among tenants and flows in large-scale experiments.** To show the allocation results of ProNet more clearly, we pick some representative flows and tenants. Figure 11 shows the results of this allocation experiment. In Figure 11a, we pick 5 flows: flows 1 and 3 belong to a tenant with a normalized weight of 2, and the other flows are from another tenant with a normalized weight of 1. We can see that all the tenants' flows keep relatively stable shares that remain consistent with their normalized weights. Figure 11b shows the allocation result on a bottleneck link between two tenants with a 2:1 weight ratio, and the allocation among their flows is again fair. As a result, the weight-1 tenants achieve an average throughput of around 1.239 Gbps, and the weight-2 tenants achieve around 2.423 Gbps, so ProNet achieves the weighted bandwidth allocation almost perfectly among tenants. All in all, ProNet performs well in a realistic datacenter environment and is a practical bandwidth management system.
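The reported averages can be checked against the configured weights directly (a quick arithmetic sketch using only numbers from the text):

```python
# Sanity check on the reported large-scale averages: with a 2:1 weight ratio,
# the tenants' average throughputs should be close to a 2:1 ratio as well.
low_weight_avg = 1.239    # Gbps, average throughput of weight-1 tenants
high_weight_avg = 2.423   # Gbps, average throughput of weight-2 tenants

ratio = high_weight_avg / low_weight_avg
print(round(ratio, 2))    # 1.96, close to the configured 2.0
```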
**6.5** **Congestion Awareness Experiments**
In this part, we run experiments on our real-machine testbed with a Tofino 1 switch to evaluate our tenant-counter design with the implementation described in § 5, and to demonstrate its ability to detect the different kinds of congestion discussed in § 4.3.3. The topology
**(a) Intra-tenant Congestion.** **(b) Inter-tenant Congestion.**
**Figure 12. ProNet with tenant-counter Experiments.**
consists of two links between two servers. One of the links has a programmable switch installed in the middle, and the other is installed with an ordinary switch. Here, we mainly show that the tenant-counter described above, deployed on the programmable switch to detect intra-tenant congestion, is feasible and deployable.
We set up two scenarios in this experiment. In the first, one of the links carries flows only from tenant 1 and the other carries flows only from tenant 2, creating an intra-tenant congestion situation: the congestion signal is caused by flows of the same tenant and therefore should not influence the allocation to the flows of other tenants. In the second scenario, the flows on each link are mixed from both tenants 1 and 2, which is an ordinary inter-tenant congestion situation. Figure 12 shows the testing results. In the intra-tenant case (Figure 12a), the allocation of the other tenant's flows is not affected by the congestion. In the inter-tenant case (Figure 12b), where flows from different tenants are congested with each other, the allocation performs as usual with a weighted share.
This experiment proves the correctness and the feasibility
of our design of tenant-counter in the deployment of the
actual programmable switch.
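The distinction the tenant-counter draws can be sketched as a toy classifier (an illustrative model, not the P4 data-plane implementation): a congestion signal raised while the congested queue holds traffic from a single tenant is intra-tenant and should not affect other tenants' allocations.

```python
# Toy sketch: classify a congestion signal as intra- or inter-tenant from
# per-tenant byte counts observed at the congested queue.
def classify_congestion(tenant_bytes):
    """tenant_bytes: dict tenant_id -> bytes queued when congestion fired."""
    active = [t for t, b in tenant_bytes.items() if b > 0]
    return "intra-tenant" if len(active) <= 1 else "inter-tenant"

print(classify_congestion({"t1": 9000, "t2": 0}))     # intra-tenant
print(classify_congestion({"t1": 6000, "t2": 4000}))  # inter-tenant
```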
### 7 Conclusion
In this paper, we propose ProNet, a practical end-host-based bandwidth allocation protocol designed for private datacenters. At its core, ProNet leverages the byte-counter to monitor and adjust bandwidth usage on end-hosts. ProNet supports proportional bandwidth allocation among tenants and minimum bandwidth guarantees while simultaneously achieving work conservation. In addition, flexible bandwidth allocation is supported according to tenant-specified bandwidth functions. ProNet improves application-level performance by reducing the packet loss ratio and improving network throughput. Both our implementation and simulation results indicate that ProNet is a promising bandwidth allocation protocol.
-----
Conference’17, July 2017, Washington, DC, USA Trovato and Tobin, et al.
### References
[1] Mohammad Alizadeh, Albert Greenberg, David A Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari
Sridharan. 2011. Data center TCP (DCTCP). In ACM SIGCOMM.
[2] Sebastian Angel, Hitesh Ballani, Thomas Karagiannis, Greg O’Shea,
and Eno Thereska. 2014. End-to-end performance isolation through
virtual datacenters. In 11th USENIX Symposium on Operating Systems
_Design and Implementation (OSDI 14). 233–248._
[3] Hitesh Ballani, Paolo Costa, Thomas Karagiannis, and Ant Rowstron.
2011. Towards predictable datacenter networks. In Proceedings of the
_ACM SIGCOMM 2011 Conference. 242–253._
[4] Hitesh Ballani, Keon Jang, Thomas Karagiannis, Changhoon Kim,
Dinan Gunawardena, and Greg O’Shea. 2013. Chatty tenants and
the cloud network sharing problem. In 10th USENIX Symposium on
_Networked Systems Design and Implementation (NSDI 13). 171–184._
[5] Pat Bosshart, Dan Daly, Glen Gibb, Martin Izzard, Nick McKeown,
Jennifer Rexford, Cole Schlesinger, Dan Talayco, Amin Vahdat, George
Varghese, et al. 2014. P4: Programming protocol-independent packet
processors. ACM SIGCOMM Computer Communication Review 44, 3
(2014), 87–95.
[6] Mosharaf Chowdhury, Zhenhua Liu, Ali Ghodsi, and Ion Stoica. 2016.
HUG: Multi-Resource Fairness for Correlated and Elastic Demands. In 13th USENIX Symposium on Networked Systems Design and
_Implementation (NSDI 16). 407–424._
[7] Charles Clos. 1953. A study of non-blocking switching networks. Bell
_System Technical Journal 32, 2 (1953), 406–424._
[8] Jeffrey Dean and Sanjay Ghemawat. 2008. MapReduce: simplified data
processing on large clusters. Commun. ACM 51, 1 (2008), 107–113.
[9] Alan Demers, Srinivasan Keshav, and Scott Shenker. 1989. Analysis
and simulation of a fair queueing algorithm. ACM SIGCOMM Computer
_Communication Review 19, 4 (1989), 1–12._
[10] Peixuan Gao, Anthony Dalleggio, Yang Xu, and H Jonathan Chao. 2022.
Gearbox: A Hierarchical Packet Scheduler for Approximate Weighted
Fair Queuing. In 19th USENIX Symposium on Networked Systems Design
_and Implementation (NSDI 22). 551–565._
[11] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. 2003. The
Google file system. In Proceedings of the nineteenth ACM symposium
_on Operating systems principles. 29–43._
[12] J Glasmann, M Czermin, and A Riedl. 2000. Estimation of token bucket
parameters for videoconferencing systems in corporate networks. _Proceedings of SoftCOM 2000_ 10 (2000).
[13] Chuanxiong Guo, Guohan Lu, Helen J Wang, Shuang Yang, Chao Kong,
Peng Sun, Wenfei Wu, and Yongguang Zhang. 2010. Secondnet: a data
center network virtualization architecture with bandwidth guarantees.
In Proceedings of the 6th International COnference. 1–12.
[14] Shuihai Hu, Wei Bai, Kai Chen, Chen Tian, Ying Zhang, and Haitao
Wu. 2018. Providing bandwidth guarantees, work conservation and
low latency simultaneously in the cloud. IEEE Transactions on Cloud
_Computing 9, 2 (2018), 763–776._
[15] Intel. 2020. Intel Tofino. https://www.intel.com/content/www/us/en/products/network-io/programmable-ethernet-switch/tofino-series.html.
[16] Intel. 2020. Intel Tofino2 – A 12.9Tbps P4-Programmable Ethernet Switch. https://ieeexplore.ieee.org/document/9220636.
[17] Sushant Jain, Alok Kumar, Subhasree Mandal, Joon Ong, Leon
Poutievski, Arjun Singh, Subbaiah Venkata, Jim Wanderer, Junlan
Zhou, Min Zhu, et al. 2013. B4: Experience with a globally-deployed
software defined WAN. ACM SIGCOMM Computer Communication
_Review 43, 4 (2013), 3–14._
[18] Keon Jang, Justine Sherry, Hitesh Ballani, and Toby Moncaster. 2015.
Silo: Predictable message latency in the cloud. In Proceedings of the
_2015 ACM Conference on Special Interest Group on Data Communication._
435–448.
[19] Vimalkumar Jeyakumar, Mohammad Alizadeh, David Mazières, Balaji
Prabhakar, Albert Greenberg, and Changhoon Kim. 2013. EyeQ:
Practical Network Performance Isolation at the Edge. In 10th USENIX
_Symposium on Networked Systems Design and Implementation (NSDI_
_13). 297–311._
[20] Alok Kumar, Sushant Jain, Uday Naik, Anand Raghuraman, Nikhil
Kasinadhuni, Enrique Cauich Zermeno, C Stephen Gunn, Jing Ai,
Björn Carlin, Mihai Amarandei-Stavila, et al. 2015. BwE: Flexible,
hierarchical bandwidth allocation for WAN distributed computing. In
_Proceedings of the 2015 ACM Conference on Special Interest Group on_
_Data Communication. 1–14._
[21] Vinh The Lam, Sivasankar Radhakrishnan, Rong Pan, Amin Vahdat,
and George Varghese. 2012. Netshare and stochastic netshare: predictable bandwidth allocation for data centers. _ACM SIGCOMM Computer Communication Review_ 42, 3 (2012), 5–11.
[22] Jeongkeun Lee, Yoshio Turner, Myungjin Lee, Lucian Popa, Sujata
Banerjee, Joon-Myung Kang, and Puneet Sharma. 2014. Application-driven bandwidth guarantees in datacenters. In Proceedings of the 2014
_ACM conference on SIGCOMM. 467–478._
[23] Kanthi Nagaraj, Dinesh Bharadia, Hongzi Mao, Sandeep Chinchali,
Mohammad Alizadeh, and Sachin Katti. 2016. Numfabric: Fast and
flexible bandwidth allocation in datacenters. In Proceedings of the 2016
_ACM SIGCOMM Conference. 188–201._
[24] nsnam. 2011. ns3 – A discrete-event network simulator for internet
[systems. https://www.nsnam.org/.](https://www.nsnam.org/)
[25] Lucian Popa, Gautam Kumar, Mosharaf Chowdhury, Arvind Krishnamurthy, Sylvia Ratnasamy, and Ion Stoica. 2012. FairCloud: Sharing the
network in cloud computing. In Proceedings of the ACM SIGCOMM 2012
_conference on Applications, technologies, architectures, and protocols for_
_computer communication. 187–198._
[26] Lucian Popa, Praveen Yalagandula, Sujata Banerjee, Jeffrey C Mogul,
Yoshio Turner, and Jose Renato Santos. 2013. Elasticswitch: Practical
work-conserving bandwidth guarantees for cloud computing. In _Proceedings of the ACM SIGCOMM 2013 conference on SIGCOMM_. 351–362.
[27] Alan Shieh, Srikanth Kandula, Albert Greenberg, and Changhoon Kim.
2010. Seawall: Performance isolation for cloud datacenter networks.
In 2nd USENIX Workshop on Hot Topics in Cloud Computing (HotCloud
_10)._
[28] Christo Wilson, Hitesh Ballani, Thomas Karagiannis, and Ant Rowtron.
2011. Better never than late: Meeting deadlines in datacenter networks.
_ACM SIGCOMM Computer Communication Review 41, 4 (2011), 50–61._
[29] Di Xie, Ning Ding, Y Charlie Hu, and Ramana Kompella. 2012. The only
constant is change: Incorporating time-varying network reservations
in data centers. In Proceedings of the ACM SIGCOMM 2012 conference
_on Applications, technologies, architectures, and protocols for computer_
_communication. 199–210._
[30] Zhuolong Yu, Chuheng Hu, Jingfeng Wu, Xiao Sun, Vladimir Braverman, Mosharaf Chowdhury, Zhenhua Liu, and Xin Jin. 2021. Programmable packet scheduling with a single queue. In Proceedings of
_the 2021 ACM SIGCOMM 2021 Conference. 179–193._
[31] Zhuolong Yu, Jingfeng Wu, Vladimir Braverman, Ion Stoica, and Xin
Jin. 2021. Twenty Years After: Hierarchical Core-Stateless Fair
Queueing. In 18th USENIX Symposium on Networked Systems Design
_and Implementation (NSDI 21). 29–45._
### A Tenants' Bandwidth Function Aggregation
In this section, we introduce the process and the algorithm for the aggregation of bandwidth functions (BFs), which are mainly inspired by the BwE paper [20].
-----
The target (aggregated) bandwidth function $B^t_f(s)$ for unit-flow $f$ of tenant $t$ must satisfy Equation (2), where $s$ denotes the fair share and $B_t(s)$ is the original (before aggregation) bandwidth function of the tenant. It ensures that bandwidth allocated to a tenant will eventually be allocated to all of its unit-flows, i.e., the sum of all unit-flows' aggregated BFs equals the original tenant's BF:

$$\forall s,\quad \sum_{f \in t} B^t_f(s) = B_t(s) \qquad (2)$$
To satisfy Equation (2), we first define an add-up bandwidth function $B^a_t$ by summing up the original bandwidth functions of all unit-flows of tenant $t$:

$$\forall s,\quad B^a_t(s) = \sum_{f \in t} B_f(s) \qquad (3)$$
Next, in order to link the original unit-flows' BFs, represented by the add-up BF $B^a_t$ in (3), to the tenant's BF $B_t$, we need a transforming function $T$ from unit-flow shares to tenant shares that satisfies Equation (4):

$$T(s) = s' \;\big|\; B^a_t(s) = B_t(s') \qquad (4)$$

The transforming function $T$ in ProNet is a mapping between the fair share $s$ of the add-up BF and the fair share $s'$ of the tenant's BF that correspond to the same bandwidth value.
At last, for each unit-flow $f \in t$, we apply $T$ to $B_f$'s fair share to get the aggregated BF $B^t_f$ for each flow:

$$B^t_f(T(s)) = B_f(s) \qquad (5)$$

Summing (5) over all $f \in t$ and applying (3) and (4) recovers Equation (2). In this way, ProNet obtains aggregated BFs for each unit-flow that satisfy both the BF of the unit-flow and that of the tenant it belongs to. This is essential for ProNet to coordinate between tenants and their network flows.
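As a concrete illustration, the three steps above can be sketched numerically for non-decreasing bandwidth functions sampled on a common fair-share grid (the grid, the capped-linear BF shape, and the function names are illustrative assumptions, not the paper's implementation):

```python
import bisect

GRID = [i * 0.1 for i in range(31)]              # fair-share samples s

def bf_linear(slope, cap):
    """A capped-linear bandwidth function sampled on GRID."""
    return [min(slope * s, cap) for s in GRID]

def aggregate(tenant_bf, flow_bfs):
    """Return each flow's aggregated BF as (s', bandwidth) samples.

    Step 1: the add-up BF is the pointwise sum of the flows' BFs.
    Step 2: s' = T(s) is the tenant share whose tenant BF value
            equals the add-up BF value at s.
    Step 3: the aggregated BF maps T(s) to the flow's original B_f(s).
    """
    add_up = [sum(bf[i] for bf in flow_bfs) for i in range(len(GRID))]
    aggregated = []
    for bf in flow_bfs:
        samples = []
        for i in range(len(GRID)):
            j = bisect.bisect_left(tenant_bf, add_up[i])  # index of s' = T(s)
            if j < len(GRID):
                samples.append((GRID[j], bf[i]))
        aggregated.append(samples)
    return aggregated

# Two identical unit-flows whose add-up BF equals the tenant BF: T is the
# identity, and each flow is assigned half of the tenant's bandwidth.
tenant = bf_linear(2.0, 4.0)
agg = aggregate(tenant, [bf_linear(1.0, 2.0), bf_linear(1.0, 2.0)])
print(agg[0][10])   # (1.0, 1.0): at tenant share 1.0 each flow gets 1.0
```

Note that at every transformed share the flows' aggregated values sum to the tenant's BF value, which is exactly the consistency property the aggregation is meant to guarantee.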
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2305.02560, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/2305.02560"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-05-04T00:00:00
|
[
{
"paperId": "3723757bb9638642ea06321f48a976a0c9f3b032",
"title": "Programmable packet scheduling with a single queue"
},
{
"paperId": "a7e2ebaa08b84052763fba0794065aa238154d29",
"title": "Intel Tofino2 – A 12.9Tbps P4-Programmable Ethernet Switch"
},
{
"paperId": "5d15260a68829fca0d374af306fb28d396a36722",
"title": "Providing Bandwidth Guarantees, Work Conservation and Low Latency Simultaneously in the Cloud"
},
{
"paperId": "0847b10090ee40c23c629a9f3a31a4a4bc09e218",
"title": "NUMFabric: Fast and Flexible Bandwidth Allocation in Datacenters"
},
{
"paperId": "215dfc71d758ca354980446634c05bee459dfbaf",
"title": "HUG: Multi-Resource Fairness for Correlated and Elastic Demands"
},
{
"paperId": "2de63b0c867b290d4f7217459c968aa98e5ad39d",
"title": "BwE: Flexible, Hierarchical Bandwidth Allocation for WAN Distributed Computing"
},
{
"paperId": "6e77f723d156e1f62baf3ad95779ed9377beea42",
"title": "Silo: Predictable Message Latency in the Cloud"
},
{
"paperId": "c24aaa6a8fa5dc298c284e9d75e899dc45e2f981",
"title": "End-to-end Performance Isolation Through Virtual Datacenters"
},
{
"paperId": "7afd148a4a74b64d695cf56b188565a435417278",
"title": "Application-driven bandwidth guarantees in datacenters"
},
{
"paperId": "463e7d2a62b1f15cc2daba34230297827e7c6757",
"title": "P4: programming protocol-independent packet processors"
},
{
"paperId": "dca45bd363820bce269a176a1ecde7e1885e2ea6",
"title": "B4: experience with a globally-deployed software defined wan"
},
{
"paperId": "e6b2d4dc40a1d50a52413c5d64af49fb314a4d28",
"title": "ElasticSwitch: practical work-conserving bandwidth guarantees for cloud computing"
},
{
"paperId": "db0774765430e3b4e70ceb67d7dee4a3c1d87c98",
"title": "Chatty Tenants and the Cloud Network Sharing Problem"
},
{
"paperId": "8c9a91b774fcc126db7ce7c67bd97d1d16143932",
"title": "EyeQ: Practical Network Performance Isolation at the Edge"
},
{
"paperId": "4c85be0340a1409787109cf77ced5bc379b00c50",
"title": "FairCloud: sharing the network in cloud computing"
},
{
"paperId": "27c04dce51362fcc7531acbe74823a7f0a4e48bf",
"title": "The only constant is change: incorporating time-varying network reservations in data centers"
},
{
"paperId": "d4245061eb00914eab306d966e5598bd77d35d4d",
"title": "Netshare and stochastic netshare: predictable bandwidth allocation for data centers"
},
{
"paperId": "6c0c61afc3e65b6a23806d8eb6f3f9e1bfe15990",
"title": "Better never than late: meeting deadlines in datacenter networks"
},
{
"paperId": "94ff2db9609acba01d47f89f41b7b2084d04784d",
"title": "Towards predictable datacenter networks"
},
{
"paperId": "26ec1a9efe80faa0c9ed0e72047c02b01f44e4d7",
"title": "SecondNet: a data center network virtualization architecture with bandwidth guarantees"
},
{
"paperId": "f7f38f4d4a0fc0e0da963b5166730e6321b6c171",
"title": "Data center TCP (DCTCP)"
},
{
"paperId": "c9122d111c75aa2816a79e865799a819788bb495",
"title": "Seawall: Performance Isolation for Cloud Datacenter Networks"
},
{
"paperId": "8460e3671bf22580ff46a9d4dbb9c50f6d2cfebc",
"title": "Analysis and simulation of a fair queueing algorithm"
},
{
"paperId": "148c1c241f4b4bd3848b6c0d6ff99e90a534dfce",
"title": "A study of non-blocking switching networks"
},
{
"paperId": "6c5fec5a188ade59d3b6a544a0c7230d05a0a2f7",
"title": "Twenty Years After: Hierarchical Core-Stateless Fair Queueing"
},
{
"paperId": null,
"title": "Arvind Krishnamurthy, Sylvia Ratnasamy, and Ion Stoica"
},
{
"paperId": null,
"title": "ns3 – A discrete-event network simulator for internet systems"
},
{
"paperId": "627be67feb084f1266cfc36e5aed3c3e7e6ce5f0",
"title": "MapReduce: simplified data processing on large clusters"
},
{
"paperId": "ccdcb6682f3286c3fefdfe96fc7eafdf43d5dd6d",
"title": "ESTIMATION OF TOKEN BUCKET PARAMETERS FOR VIDEOCONFERENCING SYSTEMS IN CORPORATE NETWORKS"
},
{
"paperId": "4f6844a36b29814e57a6358d3a4bb32b461a1d88",
"title": "This paper is included in the Proceedings of the 19th USENIX Symposium on Networked Systems Design and Implementation"
}
] | 21,025
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/031d7857740437347f5bdba486daa440fa8c5532
|
[] | 0.882326
|
Call for Papers: Web3
|
031d7857740437347f5bdba486daa440fa8c5532
|
IEEE Communications Magazine
|
[] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Commun Mag"
],
"alternate_urls": [
"https://ieeexplore.ieee.org/servlet/opac?punumber=35"
],
"id": "a1b15bc8-157e-45a9-b4c8-8211f938775d",
"issn": "0163-6804",
"name": "IEEE Communications Magazine",
"type": "journal",
"url": "http://www.comsoc.org/commag"
}
| null |
**Call for Papers**
IEEE Communications Magazine
#### Feature Topic: Web3
##### Background
Different from the "read"-based Web1 and the "read-write"-based Web2, the "read-write-own"-based Web3 is proposed as a user-centric internet that opens a new generation of the World Wide Web, one in which power is not expected to rest with a few big internet companies. Generally, Web3 is decentralized and semantic, depending on user behavior, and thus a zero-trust architecture should be created from the start. Second, to access Web3, it is essential to study how to establish an identity management system. Meanwhile, for resource description and data verification, it is necessary to set up decentralized identifiers (DIDs) and link data to identifiers in the form of DID documents. In particular, a decentralized network operating system is an indispensable underlying technology for Web3, incorporating concepts such as decentralization and a user-driven philosophy; the corresponding technologies for such an operating system, including blockchain and distributed ledger technology, should be further studied and developed. Moreover, in order to reduce the consensus cost, a large-scale incentive mechanism is also the basis of long-term sustainability, as it can attract and motivate distributed players to participate in the maintenance of Web3. Last but not least, Web3 is built on a physical infrastructure relying on communication, networking, storage, and computing, which is crucial to establishing an effective and secure Web3. This encourages us to study communication, networking, storage, and computing in Web3, as well as the specific requirements of running Web3.
In order to more thoroughly explore the potential of Web3 and promote its progress, this Feature Topic (FT) will provide a forum for the latest research, innovations, and applications of Web3, bridging the gap between theory and practice in the design of Web3. Prospective authors are invited to submit original articles on topics including, but not limited to:
- Zero-trust architecture and protocol design for Web3
- Incentive and consensus mechanisms for Web3
- Identity Management System for Web3
- Distributed storage, identifiers, and data verification in Web3
- Fundamental limits and theoretical guidance for Web3
- Machine learning, edge computing, metaverse and other emerging technologies for Web3
- Hardware and infrastructure implementation for Web3
- Semantic computing and services in Web3
- Web3 applications
- Web3 standardizations
##### Submission Guidelines
Manuscripts should conform to the standard format as indicated in the Information for Authors section of the _IEEE Communications Magazine_'s Manuscript Submission Guidelines. Please check these guidelines carefully before submitting, since submissions not complying with them will be administratively rejected without review.
All manuscripts to be considered for publication must be submitted by the deadline through Manuscript Central.
Select the “FT-2213/Web3: Blockchain in Communications” topic from the drop-down menu of Topic/Series titles.
Please observe the dates specified here below noting that there will be no extension of submission deadline.
##### Important Dates
**Manuscript Submission Deadline: 1 August 2022**
**Decision Notification: 15 January 2023**
**Final Manuscript Due: 1 February 2023**
**Publication Date: April 2023**
##### Guest Editors
**Bin Cao**
Beijing University of Posts
and Telecommunications,
China
caobin65@163.com
**Zheng Yan**
Aalto University, Finland
zhengyan.pz@gmail.com
**Mahmoud Daneshmand**
Stevens Institute of
Technology, USA
mdaneshm@stevens.edu
**Xu Xia**
China Telecom
Research Institute, China
xiaxu@chinatelecom.cn
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/mcom.2022.9777278?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/mcom.2022.9777278, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "https://ieeexplore.ieee.org/ielx7/35/9776595/09777278.pdf"
}
| 2,022
|
[] | true
| 2022-05-01T00:00:00
|
[] | 859
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/031f82362d76fd19234857e6c3bf56366a7dbe49
|
[] | 0.913831
|
A New Proposed Public Key Cryptography Based on Bio Strands
|
031f82362d76fd19234857e6c3bf56366a7dbe49
|
Technium
|
[
{
"authorId": "1406427476",
"name": "Auday H. Al-Wattar"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Technium"
],
"alternate_urls": null,
"id": "d5a68c1a-7a3a-441d-a67b-a0fa3d5fad5b",
"issn": "2668-778X",
"name": "Technium",
"type": null,
"url": "https://techniumscience.com/index.php/technium/index"
}
|
The recent rapid advancement of technology has increased the capability of attackers. The main challenge to information security is the requirement for using unconventional philosophies and alternative means and focusing on new aspects to achieve security. This article proposes a new method that uses the collected genetic information on GenBank and its characteristics. The statistics that were calculated for the data that were hidden using this method prove that they meet the security standards. This paper employs unique elements for achieving information hiding based on that information.
|
# A New Proposed Public Key Cryptography Based on Bio Strands
**Auday H. AL-Wattar***
University of Mosul
ahsa.alwattar@uomosul.edu.iq
**Abstract. The recent rapid advancement of technology has increased the capability of attackers.**
The main challenge to information security is the requirement for using unconventional
philosophies and alternative means and focusing on new aspects to achieve security. This article
proposes a new method that uses the collected genetic information on GenBank and its
characteristics. The statistics that were calculated for the data that were hidden using this method
prove that they meet the security standards. This paper employs unique elements for achieving
information hiding based on that information.
**Keywords. Steganography, hiding information, Public key, and GenBank.**
1. **Introduction**
Computer security is a broad term that refers to actions, techniques, procedures, and technologies to
preserve, safeguard, and defend computer systems' information and data by restricting unauthorized
access to systems.
A secure connection is required for every entity to exchange data reliably. The internet has served as the
foundation for all e-business and finance activities. The rise of the Internet of Things has introduced
substantial security concerns centered on identifying acceptable approaches to achieve security, mainly
because the IoT demands a unique environment, and specific requirements must be considered. Many
strategies and systems have been created within traditional steganographic methods to meet these
security criteria, specifically in the theoretical area of cryptographic protocols. However, most research on providing security for the Internet of Things is concerned with cryptography, and only rare studies have addressed the use of steganography for IoT security.
This paper proposes a novel process that uses GenBank DNA data for steganography.
Due to its indispensable nature in modern society, data security has perpetually occupied a preeminent position on the list of top priorities, and as computers have become increasingly prevalent in everyday life, so has attention to it. The term "data security" describes a wide range of activities, strategies, and resources that aim to keep intruders out of computer systems and their data and information. If two parties are to exchange information with complete confidence, they must use a secure connection; all online financial and retail transactions depend on the reliability of the internet. The advent of the Internet of Things has introduced important security issues focused on identifying acceptable approaches to accomplish security, particularly given that the IoT necessitates a distinct environment and specialized circumstances that must be considered. In the mathematical realm of data hiding and extraction, various approaches and systems have been created within traditional steganographic methods to fulfill these security needs, and defeated strategies are replaced by new steganography algorithms. Scholars have also explored biotechnology-based steganography. This article proposes GenBank DNA data as the basis of a novel public-key steganography scheme.
-----
_1.1._ _Steganography_
Steganography is the art of communicating undisclosed information inside a suitable carrier; that is to say, it is the method of embedding data (a message) inside another file [1]. Steganography has several valuable applications, such as undisclosed communication, where private data can be sent without concern of drawing the attention of possible invaders [2]. Conventionally, steganography is recognized as a technique allowing two or more parties to exchange secret messages over an insecure channel that is observable by snoopers. A significant range of security goals is achieved by means of steganographic techniques [3].
_1.2._ _GenBank_
In [4], "GenBank® is a comprehensive public database that provides publicly available nucleotide sequences to enable bibliographic and biological annotations." NCBI's GenBank is part of the International Nucleotide Sequence Database Collaboration (INSDC), which comprises GenBank at NCBI, the European Nucleotide Archive (ENA), and the DNA Data Bank of Japan (DDBJ).
The National Center for Biotechnology Information (NCBI), a division of the National Library of
Medicine (NLM), is responsible for its development and dissemination on the Bethesda, Maryland,
campus of the National Institutes of Health (NIH) [5].
Whole-genome shotgun (WGS) data and other high-throughput sequencing data from sequencing centers are the primary sources of NCBI's GenBank. Additionally, the US Patent and Trademark Office makes patent sequences available. GenBank collaborates with the EMBL-EBI European Nucleotide Archive (ENA) and the DNA Data Bank of Japan (DDBJ) as part of the International Nucleotide Sequence Database Collaboration (INSDC) [6]. The INSDC partners meet monthly to keep the global collections of sequence information consistent and comprehensive. GenBank data can be accessed for free from NCBI via the internet, FTP, and a wide range of web-based tools for analysis and retrieval [7, 8].
Steganography techniques that use segmental arithmetic or that depend on laboratory-based biological research are not fit for digital computing environments. Therefore, a new, secure steganography approach is proposed, and its performance is evaluated. This steganography makes use of GenBank data segments to conceal information.
_1.3._ _Steganography public-key principle_
Informally, public-key steganography allows two or more parties who have never met or shared a secret to communicate secret messages over a public channel without an adversary being able to discern their existence [9, 10]:
If Alice wants to interact with Bob confidentially, she can hide a message using Bob's public key. Only Bob can unembed such a communication, as only Bob has access to the corresponding private key. This is demonstrated in Figure 1, as follows:
-----
**Figure 1. Public key Steganography**
Formally:
f is a one-way function from a set X to a set Y such that f(x) is easily computed for all x ∈ X, but it is "computationally infeasible" to find an x ∈ X such that f(x) = y for "essentially all" elements y ∈ Y.
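The one-way property can be illustrated with a standard cryptographic hash function (a generic stand-in for f, not the GenBank-based construction proposed in this paper): computing f(x) is cheap, while recovering x from f(x) is computationally infeasible.

```python
import hashlib

def f(x: bytes) -> bytes:
    """A standard one-way function: easy to evaluate, hard to invert."""
    return hashlib.sha256(x).digest()

digest = f(b"segment position 1042")   # fast for any input x
# Finding some x with f(x) == digest, other than by brute force,
# is believed to be computationally infeasible.
print(digest.hex())
```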
_1.4._ _Man-in-the-middle attack (MITM)_
A MITM attack occurs when a hostile third party intercepts data travelling from a transmitter to a receiver and maliciously modifies the data before sending it on to the receiver. The consequences of this attack, which involves transmitting false information via a network, are severe [11, 12].
_1.5._ _Black hole attack_
The black hole attack is, in most cases, a denial-of-service (DoS) attack and one of the most common types of attack. The black node appears during route discovery: initially, the sender does not know the most direct route to the receiver, and a malicious node uses the routing protocol to announce that it has the shortest path to the target node, although the black hole node actually has no route to the receiver. Once routes have been established, the transmitter delivers its packets to the black hole node, which then drops them without forwarding them to the target node [13, 14].
2. **The Proposed Method**
Figure 2 illustrates the proposed technique as a protocol between two entities (Alice and Bob).
The suggested procedure is made up of the following components:
- Alice: the transmitter.
- Bob: the recipient.
- Message (m): the data to be hidden.
- Cover (c): the file in which the message is hidden.
- Stego-file (St): the file after the message has been hidden in it.
- GenBank: the DNA database.
- Pi: the segment at index i within GenBank.
- Sec_Bob: Recipient’s secret key.
- Pub_Bob: Recipient’s public key.
DNA databases can be found within GenBank. Each segment (a number of bases with a specific
length) has a specific position and value. In the proposed approach, the receiver's (Bob's)
private and public keys are derived from the positions and values of these segments. The
workflow is discussed in greater depth in the following sections.
**Figure 2. The scenario of the proposed method**
The public key assigned to the receiver (Pub_Bob) denotes the position of the selected DNA
segment, while Bob's secret key (Sec_Bob) may be one or more DNA segments (P1, P2, ..., Pn)
within GenBank; that is, the value of the selected DNA segment is a particular sequence of
DNA bases of a particular length. The possible combinations for the key pair can be summarized
as follows:
- The segment's position inside GenBank provides Bob's public key (Pub_Bob).
- The segment's value provides Bob's secret key (Sec_Bob).
- Only Bob can determine Sec_Bob from the DNA segments.
- Sec_Bob is thus a GenBank DNA segment whose value and length can be any sequence of
DNA bases.
Figure 3 shows the proposed keys (Pub_Bob) and (Sec_Bob), with
Sec_Bob = (Pi, L)
where Pi is the DNA segment P at index i in GenBank and L is its length.
Sec_Bob is determined by the value and length of the selected DNA segment, and the recipient
can obtain this key in many ways: Bob may use an exact mathematical rule to retrieve a
particular segment, or a chaotic rule to retrieve part of it, which allows random access to the
segment's bases. Whatever method the receiver uses to produce the key, Bob knows it and Alice
does not.
Alice knows the receiver's public key, which is the DNA segment's GenBank location:
Pub_Bob = HMD(Pi), where HMD(Pi) denotes Pi's location in GenBank. Message m can be
extracted using the paired public and private keys: the DNA segment's GenBank location
(Pub_Bob) and its bases (Sec_Bob). The DNA segment size may vary between the parties
(sender and receiver).
**Figure 3. Pub_Bob and Sec_Bob using the DNA segments in GenBank**
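The key derivation described above can be sketched as follows. This is an illustrative mock-up: the database contents, accession numbers, and function names are hypothetical, and a real implementation would query GenBank itself.

```python
# Mock stand-in for GenBank: accession number -> nucleotide sequence.
MOCK_GENBANK = {
    "AB000001": "ATGCGTACCTGAACGT",
    "AB000002": "GGCATTACGATCCGTA",
}

def derive_keys(accession: str, offset: int, length: int):
    """Pub_Bob is the segment's location; Sec_Bob is its value (the bases)."""
    segment = MOCK_GENBANK[accession][offset:offset + length]
    pub_bob = (accession, offset, length)  # position only: may be shared
    sec_bob = segment                      # segment Pi with length L: Bob keeps this
    return pub_bob, sec_bob

pub_bob, sec_bob = derive_keys("AB000001", 4, 6)
print(pub_bob, sec_bob)  # ('AB000001', 4, 6) GTACCT
```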
Figure 4 displays the Hiding and Extraction operations as an algorithm for both Alice and Bob.
**Figure 4. The steps of Hiding and extraction methods on both sides (Alice and Bob)**
The suggested hiding and extraction techniques leverage public keys without any heavy
mathematical calculations (modular arithmetic or elliptic curves), and GenBank's vast store of
DNA data can be used. Alice knows the segment's position as a key, while Bob knows its value
and length; only Bob uses the DNA value as a private key.
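The paper leaves the concrete embedding operation open; the sketch below assumes a simple XOR keystream derived from the segment's bases, so that Alice (who knows only the location) looks the segment up in the public database, while Bob extracts using the value he already holds. All names and data here are illustrative, not from the paper.

```python
import hashlib

# Mock stand-in for the public GenBank database.
MOCK_GENBANK = {"AB000001": "ATGCGTACCTGAACGT"}

def keystream(segment: str, n: int) -> bytes:
    """Stretch the DNA segment into n keystream bytes (one possible choice;
    the paper lets the receiver pick the derivation technique)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(segment.encode() + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def hide(message: bytes, pub_bob) -> bytes:
    """Alice: resolve the public key (a location) to the segment, then XOR."""
    acc, off, length = pub_bob
    segment = MOCK_GENBANK[acc][off:off + length]
    return bytes(m ^ k for m, k in zip(message, keystream(segment, len(message))))

def extract(stego: bytes, sec_bob: str) -> bytes:
    """Bob: use the segment value he holds directly; XOR is self-inverse."""
    return bytes(s ^ k for s, k in zip(stego, keystream(sec_bob, len(stego))))

st = hide(b"meet at dawn", ("AB000001", 4, 6))
assert extract(st, "GTACCT") == b"meet at dawn"
```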
3. Discussion
By eliminating the requirement to send the secret key over a public channel, the suggested
technique makes data embedding and extraction more secure. Depending on the terms of the
agreement, the public key may be shared with a wide variety of recipients; the sender uses this
key to embed the message m. The key is just a meaningless string of digits or letters naming a
GenBank location, and those locations collectively store billions of DNA bases. GenBank
therefore suits the suggested embedding approach: even if the adversary has the key, he cannot
exploit it, and given realistic computational resources and time it is impractical to examine
all sites in GenBank. The secret (private) key is known only to the recipient; the sender has
no access to it under any circumstances.
The recipient extracts the message from the stego-file using his public and private keys: he
uses the public key to locate the segment in GenBank, and the segment's value to obtain his
private key for extraction. This private key can take any form, depending on how it is
obtained; regardless of the strategy, it provides robust security because only the receiver
knows the DNA key.
According to [4], GenBank is a public resource containing 15.3 trillion base pairs from
2.5 billion nucleotide sequences covering 504,000 species; any of these segments can serve as
a private key known only to the receiver. Calculating the likelihood of finding the location
of the one DNA segment used as a public key among all these segments is therefore hard: the
search must attempt all feasible sites, which is prohibitively time-consuming. Even if the
attacker finds the exact position, i.e., the public key, he obtains only a fuzzy, meaningless
number.
Moreover, the number of bases stored in GenBank has typically doubled every 18 months [5].
Guessing the secret key, which may be any collection of DNA bases, is likewise challenging or
impossible: each base is two bits, so four bases compose one byte, e.g., A-00, C-01, G-10,
T-11. The private key is therefore a group of DNA bases together with a technique, chosen by
the receiver, applied to them; the attacker knows neither the number nor the locations of the
DNA bases involved.
GenBank holds 15.3 trillion bases, each of which can be A, C, G, or T, so the probability of
guessing the value of the chosen DNA segment is roughly 1 in 4^(15.3 trillion), even if the
segment length is only 4 DNA bases. The difficulty increases further if binary coding is used
for the DNA bases, since every base can be represented by two bits under several different
codings, as listed in Table 1.
**Table 1. DNA Base Coding.**

| Coding | A | C | G | T |
|--------|----|----|----|----|
| Code 1 | 00 | 01 | 10 | 11 |
| Code 2 | 01 | 00 | 11 | 10 |
| Code 3 | 10 | 11 | 00 | 01 |
| Code 4 | 11 | 10 | 01 | 00 |
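The codings in Table 1 can be implemented directly. The sketch below (illustrative code, not from the paper) converts bytes to a DNA string and back under a chosen coding:

```python
# The four base codings of Table 1.
CODINGS = {
    "Code 1": {"A": "00", "C": "01", "G": "10", "T": "11"},
    "Code 2": {"A": "01", "C": "00", "G": "11", "T": "10"},
    "Code 3": {"A": "10", "C": "11", "G": "00", "T": "01"},
    "Code 4": {"A": "11", "C": "10", "G": "01", "T": "00"},
}

def bytes_to_dna(data: bytes, coding: str) -> str:
    """Four bases encode one byte (two bits per base)."""
    inverse = {bits: base for base, bits in CODINGS[coding].items()}
    bitstr = "".join(f"{byte:08b}" for byte in data)
    return "".join(inverse[bitstr[i:i + 2]] for i in range(0, len(bitstr), 2))

def dna_to_bytes(dna: str, coding: str) -> bytes:
    table = CODINGS[coding]
    bitstr = "".join(table[base] for base in dna)
    return bytes(int(bitstr[i:i + 8], 2) for i in range(0, len(bitstr), 8))

dna = bytes_to_dna(b"Hi", "Code 1")
print(dna)  # CAGACGGC
assert dna_to_bytes(dna, "Code 1") == b"Hi"
```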
The NIH genetic sequence database is freely available online, allowing access anytime,
anywhere. To estimate the public and private keys, an attacker must search the whole of
GenBank and then pick out the DNA segment's value from among billions of possibilities, a
practically impossible task. The extraction approach is known to the recipient alone.
Even if the secret key has only 4 DNA bases, the attacker must try an astronomically large
number of possibilities to locate it. In this scenario the DNA bases are handled as one unit
of 4 bases; the number of possibilities grows considerably if these 4 bases are combined
according to a particular sequence used as a key. The hiding approach requires no arithmetic
or heavy computation, yet it can give robust security by exploiting biological material such
as GenBank DNA data and the features of DNA sequences.
_3.1._ _Steganalysis_
Communication links may be attacked in numerous ways; the man-in-the-middle attack is one of
the most prominent. In this attack, the person in the middle intercepts the data handed on by
the sender and then forwards it, possibly altered, to the receiver.
If an MITM or black hole attacker tries to access the data, he must still extract it from the
stego-file; the attacker cannot read the sender's data in transit. The growing volume of DNA
data does raise storage and privacy difficulties, but the attacker has no knowledge of the
receiver's private key or of the transmitted data. What the attacker obtains is effectively
ciphertext under a DNA coding he does not know, so he can access neither the hidden plaintext
nor the ciphertext information.
The proposed method is most effective when both the message to be hidden and the carrier file
are converted to the same DNA encoding. The attacker's task then becomes very difficult, if
not completely impossible: the public key names specific sites of DNA segments, while the
secret key is either the set of bases within these sites or a value calculated from those
bases in a chaotic manner.
4. CONCLUSIONS
This work uses public and private keys for hiding and extraction. The sender uses the
receiver's public key to embed the message, while the receiver uses both the public and
private keys to extract it; the private key is unknown to the sender and to any third party.
DNA banks and DNA segments supply the key material. The approach provides good security with
few arithmetic operations, and the method is simple. Because of its limited power and storage
demands, the recommended solution can be employed in IoT security.
**References**
[1] M. Bishop, "Introduction to computer security," 2005.
[2] B. A. Forouzan and D. Mukhopadhyay, Cryptography and network security vol. 12: Mc Graw
Hill Education (India) Private Limited New York, NY, USA:, 2015.
[3] N. Hamid, A. Yahya, R. B. Ahmad, and O. M. Al-Qershi, "Image steganography techniques:
an overview," International Journal of Computer Science and Security (IJCSS), vol. 6, pp.
168-187, 2012.
[4] D. A. Benson, M. Cavanaugh, K. Clark, I. Karsch-Mizrachi, J. Ostell, K. D. Pruitt, et al.,
"GenBank," Nucleic acids research, vol. 46, pp. D41-D47, 2018.
[5] E. W. Sayers, M. Cavanaugh, K. Clark, K. D. Pruitt, C. L. Schoch, S. T. Sherry, et al.,
"GenBank," Nucleic acids research, vol. 49, pp. D92-D96, 2021.
[6] M. Y. Galperin and X. M. Fernández-Suarez, "The 2012 nucleic acids research database issue
and the online molecular biology database collection," Nucleic acids research, vol. 40, pp.
D1-D8, 2012.
[7] E. W. Sayers, J. Beck, E. E. Bolton, D. Bourexis, J. R. Brister, K. Canese, et al., "Database
resources of the national center for biotechnology information," Nucleic acids research, vol.
49, p. D10, 2021.
[8] N. R. Coordinators, "Database resources of the national center for biotechnology information,"
_Nucleic acids research, vol. 46, p. D8, 2018._
[9] I. Hussain and N. Pandey, "Carrier data security using public key steganography in ZigBee,"
in 2016 International Conference on Innovation and Challenges in Cyber Security
(ICICCS-INBUSH), 2016, pp. 213-216.
[10] Z. K. Al-Ani, A. Zaidan, B. Zaidan, and H. Alanazi, "Overview: Main fundamentals for
steganography," arXiv preprint arXiv:1003.4086, 2010.
[11] M. Conti, N. Dragoni, and V. Lesyk, "A survey of man in the middle attacks," IEEE
_communications surveys & tutorials, vol. 18, pp. 2027-2051, 2016._
[12] V. Annapurna, S. N. Rao, and M. Giriprasad, "A survey of different video steganography
approaches against man-in-the-middle attacks," in 2021 Fifth International Conference on
I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC), 2021, pp. 1601-1607.
[13] K. J. Sarma, R. Sharma, and R. Das, "A survey of black hole attack detection in manet," in
_2014 International Conference on Issues and Challenges in Intelligent Computing Techniques_
_(ICICT), 2014, pp. 202-205._
[14] G. M. Keerthi, M. Lalli, and V. Palanisamy, "Secured Solution and Detection against Black
Hole Attack in MANET by finding the Optimum Path in AODV protocol and high secured
data transmission using Steganography."
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.47577/technium.v5i.8238?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.47577/technium.v5i.8238, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://techniumscience.com/index.php/technium/article/view/8238/3025"
}
| 2,023
|
[] | true
| 2023-01-16T00:00:00
|
[] | 4,533
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Mathematics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/03230b6d7a853f7c50d1b05e012bf4bfbedecce0
|
[
"Computer Science",
"Mathematics"
] | 0.845006
|
Composition of Zero-Knowledge Proofs with Efficient Provers
|
03230b6d7a853f7c50d1b05e012bf4bfbedecce0
|
IACR Cryptology ePrint Archive
|
[
{
"authorId": "3322386",
"name": "Eleanor Birrell"
},
{
"authorId": "1723744",
"name": "S. Vadhan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IACR Cryptol eprint Arch"
],
"alternate_urls": null,
"id": "166fd2b5-a928-4a98-a449-3b90935cc101",
"issn": null,
"name": "IACR Cryptology ePrint Archive",
"type": "journal",
"url": "http://eprint.iacr.org/"
}
| null |
# Composition of Zero-Knowledge Proofs
with Efficient Provers[⋆]
Eleanor Birrell[1] and Salil Vadhan[2]
1 Department of Computer Science, Cornell University
eleanor@cs.cornell.edu
2 School of Engineering and Applied Sciences and Center for Research on
Computation and Society, Harvard University[⋆⋆]
salil@seas.harvard.edu
**Abstract.** We revisit the composability of different forms of zero-knowledge proofs when the honest prover strategy is restricted to be polynomial time (given an appropriate auxiliary input). Our results are:
1. When restricted to efficient provers, the original Goldwasser–Micali–Rackoff (GMR) definition of zero knowledge (STOC ‘85), here called _plain zero knowledge_, is closed under a constant number of sequential compositions (on the same input). This contrasts with the case of unbounded provers, where Goldreich and Krawczyk (ICALP ‘90, SICOMP ‘96) exhibited a protocol that is zero knowledge under the GMR definition, but for which the sequential composition of 2 copies is not zero knowledge.
2. If we relax the GMR definition to only require that the simulation is indistinguishable from the verifier’s view by uniform polynomial-time distinguishers, with no auxiliary input beyond the statement being proven, then again zero knowledge is not closed under sequential composition of 2 copies.
3. We show that auxiliary-input zero knowledge with efficient provers is not closed under parallel composition of 2 copies under the assumption that there is a secure key agreement protocol (in which it is easy to recognize valid transcripts). Feige and Shamir (STOC ‘90) gave similar results under the seemingly incomparable assumptions that (a) the discrete logarithm problem is hard, or (b) UP ̸⊆ BPP and one-way functions exist.
## 1 Introduction
Composition has been one of the most active subjects of research on zero-knowledge proofs. The goal is to understand whether the zero-knowledge
property is preserved when a zero-knowledge proof is repeated many times. The
The original version of this chapter was revised: The copyright line was incorrect. This has been
[corrected. The Erratum to this chapter is available at DOI: 10.1007/978-3-642-11799-2_36](http://dx.doi.org/10.1007/978-3-642-11799-2_36)
_⋆_ These results first appeared in the first author’s undergraduate thesis [5]; the
full version of the paper is available on the Cryptology ePrint Archive [6].
_⋆⋆_ [33 Oxford Street, Cambridge, MA 02138. http://seas.harvard.edu/~salil/.](http://seas.harvard.edu/~ salil/)
Supported by NSF grant CNS-0831289.
answers vary depending on the variant of zero knowledge in consideration and
the form of composition (e.g. sequential, parallel, or concurrent). The study of
composition was first aimed at reducing the soundness error of basic constructions of zero-knowledge proofs (via sequential or parallel composition), but was
later also motivated by considering networked environments in which an adversary might be able to open several instances of a protocol (even concurrently).
Soon after Goldwasser, Micali, and Rackoff introduced the concept of zero
knowledge proofs [20], it was realized that composability is a subtle issue. In
particular, this motivated a strengthening of the GMR definition, known as
_auxiliary-input zero knowledge [21,19,9], which was shown to be closed under se-_
quential composition [19]. The need for this stronger definition was subsequently
justified by a result of Goldreich and Krawczyk [16], who showed that the original GMR definition is not closed under sequential composition. Specifically, they
exhibited a protocol that is plain zero knowledge when executed once, but fails
to be zero knowledge when executed twice sequentially.
The starting point for our work is the realization that the Goldreich–Krawczyk
protocol is not an entirely satisfactory counterexample, because the prover strategy is inefficient (i.e. super-polynomial time). Most cryptographic applications
of zero-knowledge proofs require a prover strategy that can be implemented efficiently given an appropriate auxiliary input (e.g. NP witness). Prover efficiency
can intuitively have an impact on the composability of zero-knowledge proofs, because an adversarial verifier may be able to use the extra computational power
of one prover copy to “break” the zero-knowledge property of another copy.
Indeed, known positive results on the parallel and concurrent composability of
witness-indistinguishable proofs (a weaker variant of zero-knowledge proofs) rely
on prover efficiency [9].
Thus, we revisit the sequential composability of plain zero knowledge, but restricted
to efficient provers. Our first result is positive, and shows that such proofs _are_
closed under any constant number of sequential compositions (in contrast to the
Goldreich–Krawczyk result with unbounded provers). The case of a super-constant or
polynomial number of compositions remains an interesting open question. This positive
result refers to the standard formulation of plain zero knowledge, where the simulation
and the verifier’s view are required to be indistinguishable by nonuniform
polynomial-time distinguishers (or distinguishers that are given the prover’s auxiliary
input in addition to the statement being proven).
We then consider the case where the distinguishers are uniform probabilistic
polynomial-time algorithms, whose only additional input is the statement being proven.
In this case, we obtain a negative result analogous to the one of Goldreich and
Krawczyk, showing that zero knowledge is not closed under sequential composition of
even 2 copies (assuming that NP ̸⊆ BPP). Informally, these two results say that plain
zero knowledge is closed under a constant number of sequential compositions if and only
if the distinguishers are at least as powerful as the prover.
We also examine the parallel composability of auxiliary-input zero knowledge. Here,
too, Goldreich and Krawczyk [16] gave a negative result that utilizes an inefficient
prover. Feige and Shamir [9], however, gave a negative result with an efficient prover,
under the assumption that the discrete logarithm is hard, or more generally under the
assumptions that UP ̸⊆ BPP and one-way functions exist. We are interested in whether the
complexity assumption used by Feige and Shamir can be weakened. To this end, we provide
a negative result under a seemingly incomparable assumption, namely that there exists a
key agreement protocol (in which it is easy to recognize valid transcripts).
## 2 Definitions and Preliminaries
**2.1** **Interactive Proofs**
Given two interactive Turing machines – a prover P and a verifier V – we consider two
types of interactive protocols: proofs of language membership (interactive proofs) and
proofs of knowledge. In each case, both parties receive a common input x, and P is
trying to convince V that x ∈ L for some language L. We will allow P to have an extra
“auxiliary input” or “witness” y. We use the notation (P, V) to denote an interactive
protocol and the notation ⟨P(x, y), V(x)⟩ to denote the verifier V’s view of that
protocol with inputs (x, y) and x respectively. The choices for y will be given by a
relation of the following kind:
**Definition 2.1 (Poly-balanced Relation).** _A binary relation R is poly-balanced if
there exists a polynomial p such that for all (x, y) ∈ R, |y| ≤ p(|x|). The language
generated by such a relation is denoted LR = {x : (x, y) ∈ R}._

Observe that we don’t require R to be polynomial-time verifiable, so every language L
is generated by such a relation, for example the relation R = {(x, y) : |y| = |x| and
x ∈ L}.
**Definition 2.2 (Interactive Proof).** We say that an interactive protocol (P, V) is
an interactive proof system for a language L if there exists a poly-balanced relation R
such that L = LR and the following properties hold:

**– (Verifier Efficiency):** The verifier V runs in time at most poly(|x|) on input x.

**– (Completeness):** If (x, y) ∈ R then the verifier V(x) accepts with probability 1
after interacting with the prover P(x, y) on common input x and prover auxiliary
input y.

**– (Soundness):** There exists a function s(n) ≤ 1 − 1/poly(n) (called the soundness
error) for which it holds that for all x ∉ L and for all prover strategies P∗, the
verifier V(x) accepts with probability at most s(|x|) after interacting with P∗ on
common input x and prover auxiliary input y.
**Definition 2.3 (Proof of Knowledge).** Let R be a poly-balanced relation. Given an
interactive protocol (P, V), we let p(x, y, r) be the probability that V accepts on
common input x when y is P’s auxiliary input and r is the random input generated by
P’s random coin flips. Let Px,y,r be the function such that Px,y,r(m) is the message
sent by P after receiving messages m. An interactive protocol (P(x, y), V(x)) is an
interactive proof of knowledge for the relation R if the following three properties
hold:

**– (Verifier Efficiency):** The verifier V runs in time at most poly(|x|) on input x.

**– (Completeness):** If (x, y) ∈ R, then V accepts after interacting with P on common
input x.

**– (Extraction):** There exists a function s(n) ≤ 1 − 1/poly(n) (called the soundness
error), a polynomial q, and a probabilistic oracle machine K such that for every
x, y, r ∈ {0, 1}∗, K satisfies the following condition: if p(x, y, r) > s(|x|) then on
input x and with access to oracle Px,y,r, machine K outputs w such that (x, w) ∈ R
within an expected number of steps bounded by q(|x|)/(p(x, y, r) − s(|x|)).
Observe that extraction implies soundness, so a proof of knowledge for R is also an
interactive proof for LR.

Although the above definitions require a polynomial-time verifier, neither places any
restriction on the computational power of the prover P. In keeping with the standard
model of “realistic” computation, we sometimes prefer to limit the computational
resources of both parties to polynomial time. Specifically, we add the additional
requirement that there exists a polynomial p such that the prover P(x, y) runs in time
p(|x|, |y|), where x is the common input and y is the prover’s auxiliary input. We
refer to such protocols as efficient or _efficient-prover_ proofs.
**2.2** **Zero Knowledge**
In keeping with the literature, we define zero knowledge in terms of the indistinguishability of the output distributions.
**Definition 2.4 (Uniform/Nonuniform Indistinguishability).** _Two ensembles of
probability distributions {Π1(x)}x∈S and {Π2(x)}x∈S are uniformly (resp. nonuniformly)
indistinguishable if for every uniform (resp. nonuniform) probabilistic polynomial-time
algorithm D, there exists a negligible function μ such that for every x ∈ S,_

|Pr[D(1^|x|, Π1(x)) = 1] − Pr[D(1^|x|, Π2(x)) = 1]| ≤ μ(|x|),

_where the probability is taken over the samples of Π1(x) and Π2(x) and the coin tosses
of D._
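To make the definition concrete, the advantage |Pr[D(Π1) = 1] − Pr[D(Π2) = 1]| of a distinguisher on a pair of toy distributions can be estimated empirically. This is an illustrative numerical sketch, not part of the paper; the distributions and the trivial distinguisher below are invented for the example.

```python
import random

def advantage(sample1, sample2, D, trials=20000, seed=0):
    """Estimate |Pr[D(X)=1] - Pr[D(Y)=1]| by Monte Carlo sampling."""
    rng = random.Random(seed)
    p1 = sum(D(sample1(rng)) for _ in range(trials)) / trials
    p2 = sum(D(sample2(rng)) for _ in range(trials)) / trials
    return abs(p1 - p2)

# Toy ensembles: a fair bit vs. a bit biased toward 1.  The "distinguisher"
# simply outputs its sample; by the law of large numbers its estimated
# advantage approaches the bias gap of 0.1.
fair = lambda rng: int(rng.random() < 0.5)
biased = lambda rng: int(rng.random() < 0.6)
print(advantage(fair, biased, lambda s: s))
```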
Often, definitions of computational indistinguishability give the distinguisher the
index x (not just its length). This makes no difference for nonuniform distinguishers
– since they can have x hardwired in – but it does matter for uniform distinguishers.
Indeed, we will see that zero-knowledge proofs demonstrate different properties under
composition depending on how much information the distinguisher is given about the
inputs.

Also, uniform indistinguishability is usually not defined with a universal quantifier
over x ∈ S, but instead with respect to all polynomial-time samplable distributions on
x ∈ S (e.g. [2,12]). We use the above definition for simplicity, but our results also
extend to the usual definition.
For the purposes of this paper, we consider two different definitions of zero
knowledge. The first, which has primarily been of interest for historical reasons,
is the one originally introduced by Goldwasser, Micali, and Rackoff [20]:
**Definition 2.5 (Plain Zero Knowledge). An interactive proof system (P, V )**
_for a language L = LR is plain zero knowledge (with respect to nonuniform_
_distinguishers) if for all probabilistic polynomial-time machines V_ _[∗], there ex-_
_ists a probabilistic polynomial-time algorithm MV ∗_ _that on input x produces_
_an output probability distribution {MV_ _[∗](x)} such that {MV_ _[∗](x)}(x,y)∈R and_
_{⟨P_ (x, y), V _[∗](x)⟩}(x,y)∈R are nonuniformly indistinguishable._
As is standard, the above definition refers to nonuniform distinguishers (which
can have x, y and any additional information depending on x, y hardwired in
as nonuniform advice). However, it is also natural to consider uniform distinguishers. In this setting, it is important to differentiate between the case where
the distinguisher is only given the single verifier input x and the case where the
distinguisher is given both x and the prover’s auxiliary input y.
**Definition 2.6. An interactive proof system (P, V ) for a language L = LR is**
plain zero knowledge with respect to V -uniform distinguishers if for all prob_abilistic polynomial-time machines V_ _[∗], there exists a probabilistic polynomial-_
_time algorithm MV ∗_ _that on input x produces an output probability distribution_
_{MV ∗_ (x)} such that {(x, MV ∗ (x))}(x,y)∈R and {(x, ⟨P (x, y), V _[∗](x)⟩)}(x,y)∈R are_
_uniformly indistinguishable._
**Definition 2.7. An interactive proof system (P, V ) for a language L = LR is**
plain zero knowledge with respect to P -uniform distinguishers if for all prob_abilistic polynomial-time machines V_ _[∗], there exists a probabilistic polynomial-_
_time algorithm MV ∗_ _that on input x produces an output probability distribution_
_{MV ∗_ (x)} such that {(x, y, MV ∗ (x))}(x,y)∈R and {(x, y, ⟨P (x, y), V _[∗](x)⟩)}(x,y)∈R_
_are uniformly indistinguishable._
The next definition of zero knowledge that we will consider is the more standard
definition which incorporates an auxiliary input for the verifier.
**Definition 2.8 (Auxiliary-Input Zero Knowledge). An interactive proof**
_system (P, V ) for a language L is auxiliary-input zero knowledge if for every_
_probabilistic polynomial-time machine V_ _[∗]_ _and every polynomial p there exists a_
_probabilistic polynomial-time machine MV_ _[∗]_ _such that the probability ensembles_
_{⟨P_ (x, y), V _[∗](x, z)⟩}(x,y)∈R,z∈{0,1}p(|x|) and {MV ∗_ (x, z)}(x,y)∈R,z∈{0,1}p(|x|) are
_nonuniformly indistinguishable._
Observe that although this last definition is given only in terms of nonuniform
indistinguishability, this is actually equivalent to requiring only uniform indistinguishability; any nonuniform advice used by the distinguisher can instead be
incorporated into the verifier’s auxiliary input z.
**2.3** **Composition**
In this section, we explicitly state the definitions of sequential and parallel composition that will be used throughout this paper. These definitions can be applied
to any of the definitions of zero knowledge given in the previous section.
**Definition 2.9. Given an interactive proof system (P, V ) and a polynomial**
_t(n), we consider the t(n)-fold sequential composition of this system to be the_
_interactive system consisting of t(n) copies of the proof executed in sequence._
_The i-th copy of the protocol is initialized after the (i − 1)-th copy has concluded.
All copies of the protocol are initialized with the same inputs._
We can extend our notion of zero knowledge to this setting in the natural way.
**Definition 2.10. An interactive proof (P, V ) for the language L is sequential**
zero knowledge if for all polynomials t(n), the t(n)-fold sequential composition
_of (P, V ) is a zero knowledge proof for L._
Note that although the verifiers in the different proof copies may be distinct
entities and may in fact be honest, this definition implicitly assumes the worst
case in which a single adversary controls all verifier copies. That is, it considers
a sequential adversary (verifier) to be an interactive Turing machine V _[∗]_ that is
allowed to interact with t(n) independent copies of P (all on common input x)
in sequence.
Our definition of parallel composition is analogous to the above definition:
**Definition 2.11. Given an interactive proof system (P, V ) and a polynomial**
_t(n), we consider the t(n)-fold parallel composition of this system to be the_
_interactive system consisting of t(n) copies of the proof executed in parallel. Each_
_message in the i[th]_ _round of a copy of the protocol must be sent before any message_
_from the (i + 1)[th]_ _round. All copies of the protocol are initialized with the same_
_inputs._
We can again extend our notion of zero knowledge to this setting:
**Definition 2.12. An interactive proof (P, V ) for the language L is parallel zero**
knowledge if for all polynomials t(n) the t(n)-fold parallel composition of (P, V )
_is a zero-knowledge proof for L._
Thus a parallel adversary (verifier) is an interactive Turing machine V∗ that is
allowed to interact with t(n) independent copies of P (all on common input x) in
parallel. That is, the i-th message in each copy is sent before the (i + 1)-th message
of any copy of the protocol.
## 3 Sequential Zero Knowledge
**3.1** **Previous Results**
In the area of sequential zero knowledge, there are two major results. The first
is a negative result concerning the composition of plain zero-knowledge proofs.
**Theorem 3.1 (Goldreich and Krawczyk [16]). There exists a plain zero-**
_knowledge proof (with respect to nonuniform distinguishers) whose 2-fold sequen-_
_tial composition is not plain zero-knowledge._
The second significant result to emerge from the area concerns the composition
of auxiliary-input zero-knowledge proofs. In this case it is possible to show that
the zero-knowledge property is retained under sequential composition.
**Theorem 3.2 (Goldreich and Oren [19]). If (P, V ) is auxiliary-input zero**
_knowledge, then (P, V ) is auxiliary-input sequential zero knowledge._
These two results provide a context for our new results on sequential
composition.
**3.2** **New Results**
While Theorem 3.1 demonstrates that the original definition of zero knowledge
is not closed under sequential composition, it relies on the fact that the prover
can be computationally unbounded. In this section, we address the question:
what happens when you compose efficient-prover plain zero-knowledge proofs?
We obtain two results that partially characterize this behavior.
First we show that the Goldreich and Krawczyk result (Theorem 3.1) cannot
be extended to efficient-prover plain zero-knowledge proofs. Indeed, we show
that such proofs are closed under a constant number of compositions.
**Theorem 3.3.** _If (P, V) is an efficient-prover plain zero-knowledge proof system
with respect to nonuniform (resp., P-uniform) distinguishers, then for every constant
k, the k-fold sequential composition of (P, V) is also plain zero knowledge w.r.t.
nonuniform (resp., P-uniform) distinguishers._
We leave the case of a super-constant number of compositions as an intriguing
open problem.
Next we consider the case of V -uniform distinguishers, and we show that
such protocols are not closed under 2-fold sequential composition with efficient
provers.
**Theorem 3.4.** _If NP ⊈ BPP then there exists an efficient-prover plain zero-knowledge proof with respect to V-uniform distinguishers whose 2-fold composition is not plain zero knowledge with respect to V-uniform distinguishers._
Informally, Theorems 3.3 and 3.4 say that plain zero knowledge is closed under
a constant number of sequential compositions if and only if the distinguishers
are at least as powerful as P .
**Proof of Theorem 3.3.** We now prove that efficient-prover plain zero knowledge
is closed under O(1)-fold sequential composition.
_Proof. Let (Pk, Vk) denote the sequential composition of k copies of (P, V ). We_
prove by induction on k that (Pk, Vk) is plain zero knowledge with respect to
nonuniform (resp., P -uniform) distinguishers.
(P1, V1) is zero knowledge by assumption.
Assume for induction that (Pk−1, Vk−1) is zero knowledge, and consider the
interactive protocol (Pk, Vk). Let V∗k be some sequential verifier strategy for
interacting with Pk, and let V∗k−1 denote the sequential verifier that emulates
V∗k's interactions with the first k − 1 copies of the proof system (P, V) and
then halts. Since (Pk−1, Vk−1) is zero knowledge, there exists a simulator Mk−1
that successfully simulates V∗k−1.

Define H∗k to be the "hybrid" verifier strategy (for interaction with P) that
consists of running the simulator Mk−1 to obtain a simulated view v of the
first k − 1 interactions, and then emulating V∗k (starting from the simulated view
v) in the kth interaction. Since (P, V) is plain zero knowledge, there exists a
polynomial-time simulator Mk for this verifier strategy.

We now show that Mk is also a valid simulator for (Pk, V∗k). Since by
induction (Pk−1, Vk−1) is plain zero knowledge versus nonuniform (resp., P-uniform) distinguishers, the ensembles Π1(x, y) = (x, y, ⟨Pk−1(x, y), V∗k−1(x)⟩)
and Π2(x, y) = (x, y, Mk−1(x)) are nonuniformly (resp., uniformly) indistinguishable when (x, y) ∈ R. Consider the function f(x, y, v) = (x, y, v′) that
emulates V∗k starting from view v in one more interaction with P(y) to obtain
view v′. Since f is polynomial-time computable, we have that f(Π1(x, y)) and
f(Π2(x, y)) are also nonuniformly (resp., uniformly) indistinguishable. Observe
that f(Π1(x, y)) = (x, y, ⟨Pk(x, y), V∗k(x)⟩) and f(Π2(x, y)) = (x, y, Mk(x));
therefore Mk is a valid simulator for (Pk, V∗k) and hence (Pk, Vk) is plain zero
knowledge with respect to nonuniform (resp., P-uniform) distinguishers.
_⊓⊔_
In this proof, we implicitly rely on the fact that the number of copies k is a
constant. It is possible that the running time of the simulation is Θ(n^{g(k)}) for
some growing function g, and hence super-polynomial for nonconstant k.
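To see why the running time can blow up for nonconstant k, here is a rough accounting (our gloss, not from the paper; the constant c is illustrative):

```latex
% Suppose simulating one session against a verifier running in time T costs at
% most (nT)^c for some constant c. The hybrid verifier H_k^* runs the simulator
% M_{k-1}, so the simulation time T_k satisfies
T_k \;\le\; (n \cdot T_{k-1})^{c}, \qquad T_1 \le n^{c},
\quad\Longrightarrow\quad T_k \;=\; n^{\Theta(c^{k})}.
% Polynomial for constant k, but super-polynomial once k grows with n.
```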
Note that this result does not conflict with either Theorem 3.1 (in which the
prover was allowed to use exponential time and was therefore able to distinguish
between a simulated interaction and a real interaction) or Theorem 3.4 (in which
the prover is polynomial time but the distributions are only indistinguishable to
a V -uniform distinguisher, so the prover was still able to distinguish between
a simulated interaction and a real interaction). Instead, it demonstrates that
when neither party has more computational resources than the distinguisher, it
is possible to prove a sequential closure result for plain zero knowledge, albeit
restricted to a constant number of compositions.
**Proof of Theorem 3.4.** We now prove Theorem 3.4, showing that plain
zero knowledge with respect to V-uniform distinguishers is not closed under
sequential composition. Our proof of Theorem 3.4 is a variant of the Goldreich-Krawczyk [16] proof of Theorem 3.1, so we begin by reviewing their
construction.
_Overview of the Goldreich-Krawczyk Construction [16]._ In the proof of Theorem 3.1, the key to constructing a zero-knowledge protocol that breaks under
sequential composition lies in taking advantage of the difference in computational power between the unbounded prover and the polynomial-time verifier.
The proof requires the notion of an evasive pseudorandom ensemble. This is
simply a collection of sets Si ⊆ {0, 1}^{p(i)} such that each set is pseudorandom
and no polynomial-time algorithm can generate an element of Si with nonnegligible probability. The existence of such ensembles was proven by Goldreich
and Krawczyk in [17]. Using this, Goldreich and Krawczyk [16] construct a protocol such that in the first sequential copy, the verifier learns some element s ∈ S_{|x|}.
In the second iteration, the verifier uses this s (whose membership in S_{|x|} can
be confirmed by the prover) to extract information from P. A polynomial-time
prover would be unable to generate or verify s ∈ S_{|x|}; therefore the result inherently relies on the super-polynomial time allotted to the prover.
_Overview of our Construction._ As in the Goldreich-Krawczyk construction, we
take advantage of the difference in computational power between the two parties.
However, since both are required to be polynomial-time machines, the only advantage that the prover has over the verifier is in the amount of nonuniform input
each machine receives. The prover is allowed poly(|x|) bits of auxiliary input y,
whereas the verifier receives only the |x| bits from the common input x. In order
to take advantage of this difference, we define efficient bounded-nonuniform evasive pseudorandom ensembles. Using the newly defined ensembles, we construct
an analogous protocol; in the first iteration, the verifier learns some element
of an efficient bounded-nonuniform evasive pseudorandom ensemble, and in the
second it uses this information to extract otherwise unobtainable information
from P.
**Definition 3.5.** _Let q be a polynomial and let S = {S1, S2, . . . } be a sequence
of (non-empty) sets such that each Sn ⊆ {0, 1}^n. We say that S is an efficient
q(n)-nonuniform evasive pseudorandom ensemble if the following three properties
hold:_
_(1) For all probabilistic polynomial-time machines A with at most q(n) bits
of nonuniformity, Sn is indistinguishable from the uniform distribution on
strings of length n. That is, there exists a negligible function ϵ such that for
all sufficiently large n,_

| Pr_{x∈Sn}[A(x) = 1] − Pr_{x∈Un}[A(x) = 1] | ≤ ϵ(n).

_(2) For all probabilistic polynomial-time machines B with at most q(n) bits of
nonuniformity, it is infeasible for B to generate any element of Sn except
with negligible probability. That is, there exists a negligible function ϵ such
that for all sufficiently large n,_

Pr_{r∈{0,1}^{q(n)}}[B(x, r) ∈ Sn] ≤ ϵ(n).
_(3) There exists a polynomial p(n) and a sequence of strings {πn}n∈N of length
|πn| = p(n) such that:_

_(a) There exists a probabilistic polynomial-time machine D such that for all
n ∈ N and x ∈ {0, 1}^n, D(πn, x) = 1 if x ∈ Sn and D(πn, x) = 0 otherwise._

_(b) There exists an expected probabilistic polynomial-time machine E such
that for all n, E(πn) is a uniformly random element of Sn._

_That is, there exist efficient algorithms with polynomial-length advice for
checking membership in the ensemble and for choosing an element uniformly
at random._
This definition is similar in spirit to the notion of an evasive pseudorandom ensemble used by Goldreich and Krawczyk in the proof of Theorem 3.1. However,
we add the additional requirement that a polynomial-time machine with an appropriate advice string πn can identify and generate elements of the ensemble.
In order for this to be possible, we relax the pseudorandomness and evasiveness
requirements to only hold with respect to distinguishers with bounded nonuniformity rather than with respect to nonuniform distinguishers.
The introduction of this definition raises the question of whether such
ensembles exist. Fortunately, it turns out that they do.
**Theorem 3.6.** _There exists an efficient n/4-nonuniform evasive pseudorandom ensemble._
The proof of this theorem appears in the full version [6]. It shows that if we
select a hash function h_n : {0, 1}^n → {0, 1}^{5n/16} from an appropriate pairwise
independent family, then with high probability S_n = h_n^{−1}(0^{5n/16}) is an n/4-nonuniform evasive pseudorandom set. The pseudorandomness and evasiveness
conditions (items (1) and (2)) are obtained by using pairwise independence and
taking a union bound over all algorithms with n/4 bits of nonuniformity. The
efficiency condition (item (3)) is obtained by taking h_n to be from a standard
family (e.g., h_n(x) = the first 5n/16 bits of a · x + b) and taking π_n to be the
descriptor of h_n (e.g., (a, b)).
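To make the flavor of this construction concrete, here is a toy numerical sketch (ours, not the paper's: it uses a crude, only approximately pairwise-independent family over Z_p with tiny parameters, standing in for the actual family over {0, 1}^n; all names are our own):

```python
import secrets

# Toy sketch of the Theorem 3.6 construction (illustrative parameters only).
n = 16
out_bits = 5 * n // 16                  # "output length" 5n/16 = 5 bits
p = 65537                               # a prime slightly above 2^n

def sample_advice():
    """pi_n = the descriptor (a, b) of h(x) = truncate((a*x + b) mod p)."""
    return (secrets.randbelow(p - 1) + 1, secrets.randbelow(p))

def h(pi, x):
    a, b = pi
    # Crude truncation to out_bits bits (a stand-in for taking the first
    # 5n/16 bits in the real construction).
    return ((a * x + b) % p) % (1 << out_bits)

def D(pi, x):
    """Efficient membership test for S_n = h^{-1}(0), given advice pi."""
    return h(pi, x) == 0

def E(pi):
    """Sample a roughly uniform element of S_n by rejection sampling;
    expected number of tries is about 2^out_bits = 32."""
    while True:
        x = secrets.randbelow(1 << n)
        if D(pi, x):
            return x

pi = sample_advice()
x = E(pi)
assert D(pi, x)    # item (3): efficient checking and sampling with advice
```

Without the advice (a, b), a distinguisher with little nonuniformity has no obvious handle on which x satisfy h(pi, x) = 0, which is the intuition behind items (1) and (2).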
We use this result to demonstrate that efficient-prover plain zero-knowledge
proofs with respect to V -uniform distinguishers are not closed under sequential composition. The construction is analogous to the one by Goldreich and
Krawczyk, and can be found in the full version of the paper [6].
## 4 Parallel Zero Knowledge
**4.1** **Previous Results**
There are two classic results that provide context for our new result concerning
the parallel composition of efficient-prover zero-knowledge proof systems. In both
cases, the result applies to auxiliary-input (as well as plain) zero knowledge, and
both results are negative.
The first result establishes the existence of non-parallelizable zero-knowledge
proofs independent of any complexity assumptions.
**Theorem 4.1 (Goldreich and Krawczyk [16]).** _There exists an auxiliary-input zero-knowledge proof whose 2-fold parallel composition is not auxiliary-input zero knowledge (or even plain zero knowledge with respect to nonuniform
distinguishers)._
While this result demonstrates that zero knowledge is not closed under parallel
composition in general, the proof (like that of Theorem 3.1) inherently relies
on the unbounded computational power of the provers. Without the additional
computational resources necessary to generate a string and test membership in
an evasive pseudorandom ensemble, the prover would be unable to execute the
defined protocol.
The second such result constructs an efficient-prover non-parallelizable zero-knowledge proof based on a zero-knowledge proof of knowledge of the discrete-logarithm relation.
**Theorem 4.2 (Feige and Shamir [9]).** _If the discrete logarithm assumption
holds then there exists an efficient-prover auxiliary-input zero-knowledge proof
whose 2-fold parallel composition is not auxiliary-input zero knowledge (or even
plain zero knowledge with respect to V-uniform distinguishers)._
This proof relies on the very specific assumption that the discrete logarithm problem is intractable. However, as Feige and Shamir observed [9], the only properties
of this problem which are actually necessary are the fact that discrete logarithms
are unique and that they have a zero-knowledge proof of knowledge. It is therefore natural to consider generalizing the result to proofs of language membership
for any language L ∈ NP with exactly one witness for each element x ∈ L. The
class of such languages is known as UP. Moreover, if one-way functions exist,
then every problem in NP (and hence in UP) has a zero-knowledge proof of
knowledge [18]. Thus:
**Theorem 4.3 (Feige and Shamir [9]).** _If UP ⊈ BPP and one-way functions
exist then there exists an efficient-prover auxiliary-input zero-knowledge proof
whose 2-fold parallel composition is not auxiliary-input zero knowledge (or even
plain zero knowledge with respect to V-uniform distinguishers)._
**4.2** **New Results**
In this work, we construct _efficient-prover non-parallelizable zero-knowledge proofs_ under more general complexity
assumptions. Specifically, we show that such protocols can be constructed from any
key agreement protocol (satisfying an additional technical condition). Following
the standard notion of key agreement, we introduce the following definition.
**Definition 4.4.** _A key agreement protocol is an efficient protocol between two
parties P1, P2 with the following four properties:_

**– Input:** _Both parties have common input 1^ℓ, which is a security parameter
written in unary._

**– Output:** _The outputs of both parties are k-bit strings (for some k = poly(ℓ))._

**– Correctness:** _The parties have the same output with probability 1 (when they
follow the protocol). This common output is called the key._

**– Secrecy:** _No probabilistic polynomial-time Turing machine E given 1^ℓ and
the transcript of the protocol (messages between P1, P2) can distinguish with
non-negligible advantage the key from a uniformly distributed k-bit string.
That is, {(1^ℓ, transcript(P1, P2), output(P1, P2))}ℓ∈N is nonuniformly indistinguishable from {(1^ℓ, transcript(P1, P2), Uk)}ℓ∈N._
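As a concrete illustration of Definition 4.4, here is a toy Diffie-Hellman key agreement (our sketch, not from the paper; the fixed 127-bit group, the choice G = 5, and the SHA-256 key derivation are illustrative assumptions with no real Secrecy):

```python
import hashlib
import secrets

# Toy Diffie-Hellman instantiation of Definition 4.4 (illustration only).
P = (1 << 127) - 1      # a Mersenne prime, used as the toy group modulus
G = 5                   # assumed generator for this sketch

def key_agreement():
    """Input: the security parameter (implicit in the group size here).
    Output: a k-bit key for each party, plus the public transcript."""
    r1 = secrets.randbelow(P - 2) + 1        # P1's coin tosses
    r2 = secrets.randbelow(P - 2) + 1        # P2's coin tosses
    m1 = pow(G, r1, P)                       # message P1 -> P2
    m2 = pow(G, r2, P)                       # message P2 -> P1
    transcript = (m1, m2)                    # everything an eavesdropper E sees
    # Correctness: (G^r2)^r1 = (G^r1)^r2, so both derive the same key.
    key1 = hashlib.sha256(str(pow(m2, r1, P)).encode()).digest()
    key2 = hashlib.sha256(str(pow(m1, r2, P)).encode()).digest()
    return transcript, key1, key2

transcript, key1, key2 = key_agreement()
assert key1 == key2              # identical k-bit outputs (here k = 256)
```

The coin tosses r1, r2 never appear in the transcript, which is exactly the gap the protocol of Theorem 4.6 exploits.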
For technical reasons, we impose an additional condition.
**Definition 4.5.** _Let (P1, P2) be a key agreement protocol. We say that a pair
(i, r) ∈ {1, 2} × {0, 1}∗ is consistent with a transcript t of messages if the messages from Pi in t are what Pi would have sent had its coin tosses been r and had
it received the prior messages specified by t. We say that t is valid if there exist
r1, r2 such that t is consistent with both (1, r1) and (2, r2); that is, t occurs with
nonzero probability when the honest parties P1 and P2 interact. We say that
(P1, P2) has verifiable transcripts if there is a polynomial-time algorithm that
can decide whether a transcript t is valid given t and any pair (i, r) consistent
with t._
We note that many existing key agreement protocols have verifiable transcripts,
including the Diffie-Hellman key exchange and the protocols constructed from
any public-key encryption scheme with verifiable public keys.
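For concreteness, consistency and validity (Definition 4.5) can be sketched for a toy Diffie-Hellman transcript t = (m1, m2); the parameters and the range check standing in for a proper subgroup-membership test are our illustrative assumptions:

```python
import secrets

# Sketch of Definition 4.5 for toy Diffie-Hellman (illustration only).
P = (1 << 127) - 1      # toy modulus
G = 5                   # assumed generator

def consistent(t, i, r):
    """(i, r) is consistent with t iff coin tosses r reproduce Pi's message."""
    return t[i - 1] == pow(G, r, P)

def valid_given(t, i, r):
    """Decide validity of t given one consistent pair (i, r).  For real
    Diffie-Hellman one checks that the other party's message lies in the
    subgroup generated by G; a simple range check stands in here."""
    return consistent(t, i, r) and 1 <= t[2 - i] <= P - 1

r1 = secrets.randbelow(P - 2) + 1
r2 = secrets.randbelow(P - 2) + 1
t = (pow(G, r1, P), pow(G, r2, P))
assert consistent(t, 1, r1) and consistent(t, 2, r2)   # t is valid
assert valid_given(t, 1, r1)
```

The point of the definition is that either party's own randomness suffices as the "hint" for the polynomial-time validity check.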
Our main result on non-parallelizable zero knowledge proofs follows:
**Theorem 4.6.** _If key agreement protocols with verifiable transcripts exist then
there exists an efficient-prover auxiliary-input zero-knowledge proof whose 2-fold
parallel composition is not auxiliary-input zero knowledge (or even plain zero
knowledge with respect to V-uniform distinguishers)._
The existence of secure key agreement protocols with verifiable transcripts seems
incomparable to the assumption that UP ⊈ _BPP which was used in Theorem 4.3._
**Proof of Theorem 4.6**
_Proof. By assumption, key agreement protocols with verifiable transcripts exist._
We consider an occurrence of a key agreement protocol to consist of the coin
tosses of the two parties (r1, r2 respectively) together with the transcript t of
messages exchanged between the parties during the protocol.
Define a language L = {t : ∃(i, ri) consistent with t}. L = LR for the relation
R = {(t, (i, ri)) : (i, ri) is consistent with t}; we do not claim or require that L ∉
BPP. Observe that L ∈ NP, so there exists an efficient-prover zero-knowledge
proof of knowledge (ZKPOK) of a pair (i, ri) that is consistent with t, with
error s(n) ≤ 2^{−m} where m is the maximum length of a witness (i, ri) [18]. If
necessary, the required error can be achieved by sequential composition of any
initial ZKPOK.
We can use this proof as a subprotocol for constructing the following interactive proof for the language L. V begins by sending the message c = 0 to P. If
c = 0, then P uses the ZKPOK to demonstrate that he knows (i, ri) consistent
with the transcript t. If c ≠ 0, V demonstrates knowledge of (j, rj) using the
same ZKPOK. If the proof is successful and the transcript is valid (which can
be checked by P by our assumption of verifiable transcripts), then P shows in
zero knowledge that he too knows a witness (i, ri) and then sends the common
key k to V.
The protocol is summarized below.
Step 1: V sends a challenge c to P (the honest V sends c = 0).

Step 2: If c = 0, P gives a ZKPOK of a pair (i, ri) consistent with t. If c ≠ 0, V gives a ZKPOK of a pair (j, rj) consistent with t.

Step 3: If c ≠ 0, P gives a ZKPOK of (i, ri) consistent with t.

Step 4: If c ≠ 0, V's ZKPOK is successful, and t is valid, then P sends the key k.

**Fig. 1.** An efficient-prover non-parallelizable zero-knowledge proof for L
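The control flow of Fig. 1 can be sketched as follows (our illustration: the ZKPOK subprotocol is collapsed into a stub that merely checks the witness, and the key agreement is a toy Diffie-Hellman instance, so this captures only who proves what and when k is released):

```python
# Control-flow sketch of the Fig. 1 protocol (illustration; names are ours).
P_MOD, G = 101, 2          # toy group with no real security

def consistent(t, i, r):
    """(i, r) explains Pi's message in the transcript t = (m1, m2)."""
    return t[i - 1] == pow(G, r, P_MOD)

def valid(t, i, r):
    """Toy stand-in for transcript validity (Definition 4.5)."""
    return consistent(t, i, r) and 1 <= t[2 - i] <= P_MOD - 1

def zkpok_ok(witness, t):
    # Stub for the ZKPOK subprotocol: accepts iff the witness is genuine.
    return consistent(t, *witness) if witness else False

def run(prover_witness, t, c, verifier_witness=None, key=b"shared-key"):
    """One run between P(t, (i, ri)) and a verifier sending challenge c.
    Returns (accepted, key learned by the verifier)."""
    if c == 0:
        return zkpok_ok(prover_witness, t), None          # Steps 1-2 (honest V)
    if not zkpok_ok(verifier_witness, t):                 # Step 2: V proves
        return False, None
    if zkpok_ok(prover_witness, t) and valid(t, *prover_witness):
        return True, key                                  # Steps 3-4: k released
    return False, None

r1, r2 = 5, 9
t = (pow(G, r1, P_MOD), pow(G, r2, P_MOD))
assert run((1, r1), t, c=0) == (True, None)               # honest run: no key
assert run((1, r1), t, c=1, verifier_witness=(2, r2)) == (True, b"shared-key")
```

In the parallel attack of Section 4, a cheating verifier sends c = 0 to one copy and c ≠ 0 to the other, relaying the ZKPOK messages between them, and thereby obtains k without knowing any witness itself.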
The described protocol is a zero-knowledge proof for the language L.
**Efficient-Prover Interactive Proof.** The fact that this protocol is an interactive proof follows directly from the fact that the subprotocol is (by assumption)
a proof of knowledge. Completeness and soundness follow from the completeness
and extraction properties of the ZKPOK that P conducts in Step 2 or Step
3, respectively. Prover and verifier efficiency likewise follow from the respective
properties of the ZKPOK subprotocol.
**Zero Knowledge.** Given any verifier strategy V∗ we can construct a simulator
MV∗. MV∗ begins by randomly choosing and fixing the coin tosses of the verifier
V∗, and then runs the verifier V∗ in order to obtain its first message c. If c = 0,
MV∗ then emulates the simulator for the ZKPOK to simulate Step 2. It then
does nothing for Step 3. If c ≠ 0, then MV∗ simulates the ZKPOK in Step 2 by
following the correct "verifier" protocol and running V∗ in order to simulate the
"prover" half of the protocol. MV∗ then simulates Step 3 using the simulator
for the subprotocol. The expected time of all of these steps is polynomial; this
follows directly from the running times of the simulators provided by the various
subprotocols.
Finally, the simulator proceeds to Step 4. If c = 0 then there is no message
sent in Step 4. If c ≠ 0 and the ZKPOK in Step 2 was unsuccessful, then
there is again no message sent in Step 4. If c ≠ 0 and the proof in Step 2 was
successful, then MV∗ runs the following two extraction techniques in parallel,
halting when one succeeds: First, it attempts to extract some (j, rj) consistent
with t by employing the extractor K, using V∗'s strategy from Step 2 as an
"oracle." Second, it attempts to learn some witness (j, rj) by trying each of the
2^m possible witnesses in sequence. If MV∗ has successfully found a witness, it
uses (j, rj) together with the transcript t to determine whether t is valid and
then to determine the common key k by emulating the actions of one party and
responding to the "messages" from the other party as described in the transcript
t. This key k is then used to simulate Step 4.
The indistinguishability and expected polynomial running time of the simulation follow from those of the ZKPOK simulator, except for the simulation
of Step 4 in the case c ≠ 0. To analyze this, let p be the probability that V∗
succeeds in the ZKPOK in Step 2. If p > 2 · 2^{−m}, then there exists an
extractor K that extracts a witness (j, rj) in expected time q(|x|)/(p − s(|x|)).
Since this occurs with probability p, the expected time for this case is bounded
by (p · q(|x|))/(p − s(|x|)) ≤ (p · q(|x|))/(p − 2^{−m}) ≤ (p · q(|x|))/(p/2) ≤ 2q(|x|) =
poly(|x|). If p ≤ 2 · 2^{−m} then the brute-force technique will find a witness in
expected time p · 2^m ≤ 2 = poly(|x|). Checking t's validity takes polynomial
time by assumption, and determining k takes time Θ(|x|); therefore the entire
simulation runs in expected polynomial time.
The indistinguishability of the final step of this simulation relies on the fact
that the transcript t is valid. Therefore, by the correctness of the key agreement
protocol, the same key will be computed using the extracted witness (j, rj ) as
with the prover’s witness (i, ri) even if they are not the same, so the simulation
is polynomially indistinguishable from V∗'s view of the interactive protocol.
**Parallel Execution.** Consider now two executions, (P̃1, Ṽ) and (P̃2, Ṽ), in parallel. A cheating verifier V∗ can always extract some witness w ∈ {(1, r1), (2, r2)}
from P̃1 and P̃2 using the following strategy: in Step 1, V∗ sends c = 0 to P̃1 and
c = 1 to P̃2. Now V∗ has to execute the protocol (P, V) twice: once as a verifier
talking to the prover P̃1, and once as a prover talking to the verifier P̃2. This he
does by serving as an intermediary between P̃1 and P̃2, sending P̃1's messages
to P̃2, and P̃2's messages to P̃1. Now P̃2 willfully sends k to Ṽ (which, by the
secrecy property of the key agreement protocol, Ṽ is incapable of computing on
his own).
_⊓⊔_
## 5 Conclusions and Open Problems
We view our results as pointing out the significance of prover efficiency, as well as
the power of the distinguishers, in the composability of zero-knowledge proofs.
Indeed, we have shown that with prover efficiency, the original GMR definition
enjoys a greater level of composability than without. Nevertheless, the now-standard notion of auxiliary-input zero knowledge still seems to be the appropriate one for most purposes. In particular, we still do not know whether plain zero
knowledge is closed under a super-constant number of compositions. We also
have not considered the case that different statements are being proven in each
of the copies, much less (sequential) composition with arbitrary protocols. For
these, it seems likely that auxiliary input zero knowledge, or something similar,
is necessary.
One way in which our negative result on sequential composition (of plain
zero knowledge with respect to V -uniform distinguishers, Theorem 3.4) can be
improved is to provide an example where the prover’s auxiliary inputs are defined by a relation that can be decided in polynomial time (in contrast to our
construction, where the prover’s auxiliary input contains the advice string π4n,
which may be hard to recognize).
For the parallel composition of auxiliary-input zero knowledge with efficient
provers, it remains open to determine whether a negative result can be proven
under a more general assumption such as the existence of one-way functions. The
methods of Feige and Shamir [9] (Theorem 4.3) can be generalized to replace the
assumption UP ⊈ BPP with the assumption that there is a problem in NP
for which the witnesses have a "uniquely determined feature" [22] that is hard to
compute. That is, there is a poly-balanced, poly-time relation R, an efficiently
computable f, and a function g such that (a) if (x, w) ∈ R, then f(x, w) = g(x),
and (b) there is no probabilistic polynomial-time algorithm that computes g(x)
correctly for all x ∈ LR. (The assumption that UP ⊈ BPP corresponds to the
case that f(x, w) = w. In general, we allow the witnesses for x to have a "unique
part," namely g(x), which is still hard to compute.) Our result (Theorem 4.6)
can be viewed as constructing such an R, f, and g from a key agreement protocol.
Our construction complements that of Haitner, Rosen, and Shaltiel [22]: they
consider the parallel repetition of natural zero-knowledge proofs (such as 3-Coloring [18] or Hamiltonicity [7]), and argue that "certain black-box techniques"
cannot prove that a feature g(x) will remain hard to compute by the verifier (on
average). In contrast, we consider the parallel repetition of a contrived zero-knowledge proof and show that a cheating verifier can always learn a certain
hard-to-compute feature g(x).
## Acknowledgments
We thank the TCC 2010 reviewers for helpful comments.
## References
1. Barak, B.: How to go beyond the Black-Box Simulation Barrier. In: 42nd IEEE
Symposium on Foundations of Computer Science, pp. 106–115 (2001)
2. Barak, B., Lindell, Y., Vadhan, S.: Lower Bounds for Non-Black-Box Zero Knowledge. In: Proc. of the 44th IEEE Symposium on the Foundation of Computer Science, pp. 384–393 (2003)
3. Bellare, M., Goldreich, O.: On defining proofs of knowledge. In: Brickell, E.F. (ed.)
CRYPTO 1992. LNCS, vol. 740, pp. 390–420. Springer, Heidelberg (1993)
4. Ben-Or, M., Goldreich, O., Goldwasser, S., Hastad, J., Kilian, J., Micali, S., Rogaway, P.: Everything provable is provable in zero-knowledge. In: Goldwasser, S. (ed.) CRYPTO 1988. LNCS, vol. 403, pp. 37–56. Springer, Heidelberg (1990)
5. Birrell, E.: Composition of Zero-Knowledge Proofs. Undergraduate Thesis. Harvard
University (2009)
6. Birrell, E., Vadhan, S.: Composition of Zero Knowledge Proofs with Efficient
Provers. Cryptology eprint archive (2009)
7. Blum, M.: How to prove a theorem so no one else can claim it. In: Proceedings of
the International Congress of Mathematicians, pp. 1444–1451 (1987)
8. Diffie, W., Hellman, M.: New Directions in Cryptography. IEEE Trans. on Info.
Theory IT-22, 644–654 (1976)
9. Feige, U., Shamir, A.: Witness Indistinguishability and Witness Hiding Protocols.
In: 22nd ACM Symposium on the Theory of Computing, pp. 416–426 (1990)
10. Feige, U., Shamir, A.: Zero-Knowledge Proofs of Knowledge in Two Rounds. In:
Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 526–544. Springer, Heidelberg (1990)
11. Goldreich, O.: Foundations of Cryptography - Basic Tools. Cambridge University
Press, Cambridge (2001)
12. Goldreich, O.: A Uniform Complexity Treatment of Encryption and Zero Knowledge. Journal of Cryptology 6(1), 21–53 (1993)
13. Goldreich, O.: Zero-Knowledge twenty years after its invention. Cryptology ePrint Archive, Report 2002/186 (2002), http://eprint.iacr.org/
14. Goldreich, O., Goldwasser, S., Micali, S.: How to Construct Random Functions.
Journal of the Association for Computing Machinery 33(4), 792–807 (1986)
15. Goldreich, O., Kahan, A.: How to Construct Constant-Round Zero-Knowledge
Proof Systems for NP. Journal of Cryptology 9(2), 167–189 (1996)
16. Goldreich, O., Krawczyk, H.: On the Composition of Zero-Knowledge Proof Systems. SIAM Journal on Computing 25(1), 169–192 (1996); Preliminary version in ICALP 1990
17. Goldreich, O., Krawczyk, H.: Sparse Pseudorandom Distributions. Random Structures & Algorithms 3(2), 163–174 (1992)
18. Goldreich, O., Micali, S., Wigderson, A.: Proofs that Yield Nothing but their Validity or All Languages in NP have Zero-Knowledge Proof Systems. Journal of the ACM 38(1), 691–729 (1991)
19. Goldreich, O., Oren, Y.: Definitions and Properties of Zero-Knowledge Proof Systems. Journal of Cryptology 7(1), 1–32 (1994)
20. Goldwasser, S., Micali, S., Rackoff, C.: Knowledge Complexity of Interactive
Proofs. In: Proc. 17th STOC, pp. 291–304 (1985)
21. Goldwasser, S., Micali, S., Rackoff, C.: The Knowledge Complexity of Interactive
Proof Systems. SIAM Journal on Computing 18, 186–208 (1989)
22. Haitner, I., Rosen, A., Shaltiel, R.: On the (Im)possibility of Arthur-Merlin Witness
Hiding Protocols. In: Reingold, O. (ed.) TCC 2009. LNCS, vol. 5444, pp. 220–237.
Springer, Heidelberg (2009)
23. Vadhan, S.: Pseudorandomness. Foundations and Trends in Theoretical Computer
Science (to appear, 2010)
Scientific Programming 13 (2005) 333–354
IOS Press
# Managing data persistence in network enabled servers[1]
###### Eddy Caron^{a,∗}, Bruno DelFabbro^b, Frédéric Desprez^a, Emmanuel Jeannot^c and Jean-Marc Nicod^b
^a GRAAL Project, LIP ENS Lyon, 46 Allée d'Italie, 69364 Lyon Cedex 07, France
_E-mail: Eddy.Caron@ens-lyon.fr_
^b GRAAL Project, LIFC, Université de Franche-Comté, 16 route de Gray, 25030 Besançon Cedex, France
_E-mail: delfabbro@lifc.univ-fcomte.fr_
^c ALGORILLE Project, LORIA, INRIA-Lorraine, Nancy, France
_E-mail: Emmanuel.Jeannot@loria.fr_
**Abstract. The GridRPC model [17] is an emerging standard promoted by the Global Grid Forum (GGF) that defines how to**
perform remote client-server computations on a distributed architecture. In this model data are sent back to the client at the end
of every computation. This implies unnecessary communications when computed data are needed by another server in further
computations. Since communication time is sometimes the dominant cost of remote computations, this cost has to be lowered.
Several tools instantiate the GridRPC model such as NetSolve developed at the University of Tennessee, Knoxville, USA, and
DIET developed at LIP laboratory, ENS Lyon, France. They are usually called Network Enabled Servers (NES). In this paper, we
present a discussion of the data management solutions chosen for these two NES (NetSolve and DIET) as well as experimental
results.
**1. Introduction**
Due to the progress in networking, computing intensive problems from several areas can now be solved
using network scientific computing. In the same way
that the World Wide Web has changed the way that
we think about information, we can easily imagine the
kind of applications we might construct if we had instantaneous access to a supercomputer from our desktop. The GridRPC approach [20] is a good candidate to build Problem Solving Environments on a computational Grid. It defines an API and a model to perform remote
computation on servers. In such a paradigm, a client can submit a request for solving a problem to an agent that chooses the best server amongst a set of candidates. The choice is made from static and dynamic information about software and hardware resources. Requests can then be processed by sequential or parallel servers. This paradigm is close to the RPC (Remote Procedure Call) model; the GridRPC API is the Grid form of the classical Unix RPC approach. Such systems are commonly called Network Enabled Server (NES) environments [16].

1This work was supported in part by the ACI GRID (ASP) and the RNTL (GASP) from the French ministry of research.
_∗Corresponding author._
Several tools exist that provide this functionality, such as NetSolve [7], Ninf [13], DIET [4], NEOS [18], and RCS [1]. However, none of them implements a general approach for data persistence and data redistribution between servers. This means that once a server has finished its computation, output data are immediately sent back to the client and input data are destroyed. Hence, if one of these data items is needed for another computation, the client has to bring it back to the server again. This problem has been partially tackled in NetSolve with the request sequencing feature [2]. However, the current request sequencing implementation does not handle multiple servers.
In this paper, we present how data persistence can be
handled in NES environments. We take two existing
environments (NetSolve and DIET) and describe how
ISSN 1058-9244/05/$17.00 © 2005 – IOS Press and the authors. All rights reserved
334 _E. Caron et al. / Managing data persistence in network enabled servers_
we implemented data management in their kernels. For NetSolve, this requires changing the internal protocol, the client API, and the request scheduling algorithm. For DIET, we introduce a new service, called the Data Tree Manager (DTM), that identifies and manages data within this middleware. We evaluate the gain that can be obtained from these features on a grid. Since we show that data management can greatly improve application performance, we also discuss a standardization proposal.
The remainder of this paper is organized as follows. In Section 2, we give an overview of the Network Enabled Server (NES) architecture, focusing on NetSolve and DIET, and show why it is important to add data persistence and redistribution to NES. We describe how we implemented data management in NetSolve and DIET in Sections 3 and 4, respectively. Experimental results are presented in Section 5. In Section 6 we discuss the standardization of data management in NES. Finally, Section 7 concludes the paper.
**2. Background**
_2.1. Network enabled server architectures_
_2.1.1. General architecture_
The NES model defines an architecture for executing
computation on remote servers. This architecture is
composed of three components:
**– the agent** is the manager of the architecture. It knows the state of the system. Its main role is to find servers able to solve client requests as efficiently as possible,
**– servers** are computational resources. Each server registers with an agent and then waits for client requests. The computational capabilities of a server are known as problems (matrix multiplication, sort, linear system solving, etc.). A server can be sequential (executing sequential routines) or parallel (executing operations in parallel on several nodes),
**– a client** is a program that requests computational resources. It asks the agent to find a set of servers able to solve its problem. Data transmitted between a client and a server are called objects. Thus, an input object is a parameter of a problem and an output object is a result of a problem.
The NES architecture works as follows. First, an agent is launched. Then, servers register with the agent by sending information about the problems they are able to solve, as well as about the machine on which they are running and the network speed (latency and bandwidth) between the server and the agent. A client asks the agent to solve a problem. The agent scheduler selects a set of servers able to solve this problem and sends the list back to the client. The client sends the input objects to one of the servers. The server performs the computation and returns the output objects to the client. Finally, local server objects are destroyed.
The client API for such an approach has been standardized within the Global Grid Forum. The GridRPC working group [12] proposed an API that is instantiated by several middleware systems such as DIET, Ninf, NetSolve, and XtremWeb.
_2.1.2. NetSolve_
NetSolve [7] (Fig. 1) is a tool built at the University of Tennessee that instantiates the GridRPC model described above. It is beyond the scope of this paper to describe NetSolve in detail; in this section we focus only on data management.
_2.1.2.1. Request sequencing_
In order to tackle the problem of sending too much data over the network, the request sequencing feature has been available since NetSolve 1.3 [2]. Request sequencing consists in scheduling a sequence of NetSolve calls on one server. This is a high-level functionality, since only two new sequence delimiters, netsl_sequence_begin and netsl_sequence_start, are added to the client API. The calls between those delimiters are evaluated at the same time, and the data movements due to dependencies are optimized.

However, request sequencing has the following deficiencies. First, it does not handle multiple servers, because no redistribution is possible between servers. Moreover, an overhead is added to schedule NetSolve requests: the whole Directed Acyclic Graph of all the NetSolve calls within the sequence is built before being sent to the chosen computational server. Second, for loops are forbidden within sequences. Finally, the execution graph must be static and cannot depend on results computed within the sequence.
Data redistribution is not implemented in NetSolve's request sequencing feature. This can lead to sub-optimal utilization of the computational resources when, within a sequence, two or more problems could be solved in parallel on two different servers. This is the case, for instance, if the request is composed of the problems foo1, foo2 and foo3 given in Fig. 4: performance can be increased if foo1 and foo2 are executed in parallel on two different servers.
Fig. 1. NetSolve architecture: client applications use the NS client library to contact the NS agent (resource discovery, load balancing, resource allocation, fault tolerance), which dispatches requests to a pool of NS servers.
Fig. 2. DIET architecture: clients contact a Master Agent (MA), which heads a hierarchy of Leader Agents (LA) and Server Daemons (SeD).
_2.1.2.2. Distributed storage infrastructure_
To make data persistent and to take advantage of its placement in the infrastructure, NetSolve proposes the Distributed Storage Infrastructure (DSI). The DSI helps the user control the placement of data that will be accessed by a server (see Fig. 3). Instead of multiple transmissions of the same data, DSI allows the data to be transferred once from the client to a storage server. Since these storage servers are closer to the computational servers than the client is, the cost of transferring data is lower. NetSolve is able to manage several DSIs. Currently, NetSolve provides this storage service using IBP (Internet Backplane Protocol).[2] Files
or items managed by a DSI are called DSI objects. To create a DSI object, the client has to know the server on which it wants to store its data. Note that the data location is not a criterion for the choice of a computational server. NetSolve maintains its own File Allocation Table (FAT) to manage DSI objects. Typically, when a request is submitted to a NetSolve server, the server looks for the input data and checks its existence in its FAT. If the data is referenced (the client passed a DSI object), the data
2http://loci.cs.utk.edu/.
Fig. 3. Distributed storage infrastructure: (1) the client sends data to the IBP storage space; (2) the client sends the problem to the NetSolve computational space; (3) data and results move between the computational space and the storage space; (4) results are returned to the client.
is fetched from the storage server; otherwise, the server gets it from the client.
DSI improves data transfer but does not prevent data from going back and forth between computational servers and storage servers. Indeed, this feature does not fully implement data persistence and may therefore lead to over-utilization of the network.
_2.1.3. DIET architecture_
The NetSolve and Ninf projects are built on the same approach. Unfortunately, in these environments, it is possible to launch only one agent responsible for the scheduling of a given group of computational servers.[3] The drawback of this mono-agent approach is that the agent can become a bottleneck if a large number of requests have to be processed at the same time. Hence, NetSolve or Ninf cannot be deployed for large groups of servers or clients.

In order to solve this problem, DIET distributes the load of the agent: it is replaced by several agents whose organization follows two approaches, a peer-to-peer multi-agent approach that helps system robustness [6] and a hierarchical approach that helps scheduling efficiency [9]. This repartition offers two main advantages. First, we obtain better load balancing between the agents and higher system stability (if one of the agents dies, the others can be reorganized to replace it). Second, it is easier to manage each group of servers and agents by delegation, which is useful for scalability. DIET is built upon several components:
3In Ninf, a multi-agent platform exists (Metaserver), but each agent has global knowledge of the entire platform.
**– a client** is an application that uses DIET to solve problems. Several client types must be able to connect to DIET: a problem can be submitted from a Web page, from a problem solving environment such as Scilab [3] or Matlab, or from a compiled program.
**– a Master Agent (MA)** is directly linked to the clients. It is the entry point of our environment and thus receives computation requests from clients attached to it. These requests refer to DIET problems that can be solved by registered servers. The MA collects computation abilities from the servers and chooses the best one. An MA has the same information as an LA, but it has a global, high-level view of all the problems that can be solved and of all the data distributed in all its subtrees.
**– a Leader Agent (LA)** forms a hierarchical level in DIET. It may be the link between a Master Agent and a SeD, between two Leader Agents, or between a Leader Agent and a SeD. It transmits requests and information between agents and servers. It maintains a list of current requests, the number of servers that can solve a given problem, and information about the data distributed in its subtrees.
**– a Server Daemon (SeD)** is the entry point of a computational resource. The information stored on a SeD is a list of the data available on its server (with their distribution and the way to access them), the list of problems that can be solved on it, and all information concerning its load (memory available, number of resources available, etc.). A SeD declares the problems it can solve to its parent. For instance, a SeD can be located on the entry point of a parallel computer.
Fig. 4. Sample example where data persistence and redistribution are better than retrieving data to the client.

(a) Sample C code: `a = foo1(b,c)`; `d = foo2(e,f)`; `g = foo3(a,d)`.

(b) Execution time:

|Function|Server 1|Server 2|
|---|---|---|
|foo1|6s|9s|
|foo2|2s|3s|
|foo3|6s|11s|

(c) Execution schedule: without data persistence and redistribution, every object transits through the client and the session takes 26s; with data persistence and redistribution, S2 sends d directly to S1 and the session takes 21s.
A new DIET client contacts a Master Agent (the closest one, for instance) and posts its request. The Master Agent transmits the request to its subtrees[4] to find data already present in the platform and servers that are able to solve the problem. The LAs that receive the request forward it down to each of their subtrees containing a server that might be involved in the computation, and wait for the responses. The requests traverse the entire hierarchy down to the SeDs. When a SeD receives a request, it sends a response structure to its parent. It fills the fields for the variables it owns, leaving a null value for the others. If it can solve the problem, it also enters its evaluated computation time, acquired from our performance forecasting tool FAST [19]. Each LA gathers the responses coming from its children and aggregates them into a structure.

4An extension is possible for the multi-agent approach: broadcast the request to the other MAs, considering them as Leader Agents.
The scheduling operations are realized at each level
of the tree when the response is sent back to the Master
Agent. Note that a time-out is set: when an agent has not received a response within a given time, that response is ignored. However, this time-out alone is not enough information to conclude that an agent has failed. When the
responses come back to the MA, it is able to make a scheduling decision. The evaluated computation and communication times are used to find the server with the lowest response time to perform the computation. The MA then sends the chosen server reference to the client (it is also possible to send a bounded list of the best servers). Next, the Master Agent orders the data transfer. Here we can distinguish two cases: either the data reside on the client and are transferred from the client to the chosen server, or the data are already inside the platform and are transferred from the servers that hold them to the chosen server. Note that these two operations can be processed in parallel. Once the data are received by the server, the computation can be done and the results may be sent to the client. For performance reasons, data are left on the last computational server when possible.
_2.2. On the importance of data management in NES_
A GridRPC environment such as NetSolve or DIET is based on the client-server programming paradigm. This paradigm is different from others such as parallel/distributed programming. In a parallel program (written in PVM or MPI, for instance), data persistence is performed implicitly: once a node has received some data, the data is supposed to be available on that node as long as the application is running (unless explicitly deleted). Therefore, in a parallel program, data can be used for several steps of the parallel algorithm.

However, in a GridRPC architecture, no data management is performed. As in the standard RPC model, request parameters are sent back and forth between the client and the server. A piece of data is not supposed to remain available on a server for another step of the algorithm (a new RPC) once a step is finished (a previous RPC has returned). This drawback can lead to very high execution times, since the execution and the communications can be performed over the Internet.
_2.2.1. Motivating example_
We now give an example where the use of data persistence and redistribution improves the execution of a GridRPC session. Assume that a client asks to execute the three functions/problems shown in the sample code given in Fig. 4(a).

Let us consider that the underlying network between the client and the servers has a bandwidth of 100 Mbit/s (12.5 Mbytes per second). Figure 4(b) gives the execution time of each function on each server. Finally, let us suppose that each object has a size of 25 Mbytes. The GridRPC architecture will execute foo1 and foo3 on server S1 and foo2 on S2, and the client sends the objects in the following order: b, c, e, f (Fig. 4). Due to the bandwidth limitation, foo1 will start 4 seconds after the request and foo2 after 8 seconds. Without data persistence and redistribution, a will be available on S1 16 seconds after the beginning of the session and d 18 seconds after the beginning (S2 has to wait until the client has completely received a before starting to send d). Therefore, after the execution of foo3, g will be available on the client 26 seconds after the beginning. With data persistence and redistribution, S2 sends d directly to S1, where it is available 13 seconds after the beginning of the request. Hence, g will be available on the client 21 seconds after the beginning of the request, which leads to a 19% improvement.
_2.2.2. Goal of the work_
In this paper, we show how to add data management to NES environments. We added data persistence and data redistribution to NetSolve and DIET, and therefore modified the client API.

Data persistence consists in allowing servers to keep objects in place so that they can be used again in a new call without being sent back and forth to and from the client. Data redistribution enables inter-server communications to avoid moving objects through the client.

Our modifications to NetSolve are backward compatible. Data persistence and data redistribution require the client API to be modified, but standard client programs continue to execute normally. Moreover, our modifications are stand-alone: we do not use any other software to implement our optimizations. Hence, NetSolve users do not have to download and compile new tools. Finally, our implementation is very flexible, without the restrictions imposed by NetSolve's request sequencing feature.
We also propose a model of distributed data management in DIET. The DIET data management model is based on two key elements: data identifiers and the Data Tree Manager (DTM) [10,11]. To avoid multiple transmissions of the same data from a client to a server, the DTM allows data to be left inside the platform after computation, while data identifiers are used later by the client to reference its data.
**3. New data management in NetSolve**
In this section we describe how we have implemented data redistribution and persistence within NetSolve. This required changing the three components of the software: server, client, and agent.
_3.1. Server modifications_
NetSolve communications are implemented using sockets. In this section, we give details about the low-level protocols that enable data persistence and data redistribution between servers.
_3.1.1. Data persistence_
When a server has finished its computation, it keeps all the objects locally, listens on a socket, and waits for new orders from the client. So far, the server can receive five different orders:

1. Exit. When this order is received, the server terminates the transaction with the client and exits, and the data are therefore lost. Saying that the server exits is not completely accurate: when a problem is solved by a server, a process is forked, and the computation is performed by the forked process. Data persistence is also handled by the forked process. In the following, when we say that the server is terminated, we mean that the forked process exits. The NetSolve server itself is still running and can solve new problems.
2. Send one input object. The server must send an input object to the client or to another server. Once this order is executed, the data are not lost and the server waits for new orders.
3. Send one output object. This order works the same way as the previous one, but a result is sent.
4. Send all input objects. The same as "send one input object", but all the input objects are sent.
5. Send all output objects. The same as "send one output object", but all the results are sent.
_3.1.2. Data redistribution_
When a server has to solve a new problem, it first has to receive a set of input objects. These objects can be received from the client or from another server. Before an input object is received, the client tells the server whether the object will come from a server or from the client. If the object comes from the client, the server just has to receive it. However, if the object comes from another server, a new protocol is needed. Let us call S1 the server that has to send the data, S2 the server that is waiting for the data, and C the client.

1. S2 opens a socket s on an available port p.
2. S2 sends this port to C.
3. S2 waits for the object on socket s.
4. C orders S1 to send one object (input or output): it sends the object number, forwards the port number p to S1, and sends the hostname of S2.
5. S1 connects to the socket s on port p of S2.
6. S1 sends the object directly to S2 on this socket: the data do not go through the client.
_3.2. Client modifications_
_3.2.1. New Structure for the Client API_
When a client needs a piece of data to stay on a server, three pieces of information are needed to identify it: (1) Is it an input or an output object? (2) On which server can it currently be found? (3) What is the number of this object on the server? We have implemented the ObjectLocation structure to describe this information. ObjectLocation has 3 fields:

1. request_id is the request number of the non-blocking call that involves the requested data. The request_id is returned by the netslnb standard NetSolve function, which performs a non-blocking remote execution of a problem. If request_id equals −1, the data is available on the client.
2. type can have two values: INPUT_OBJECT or OUTPUT_OBJECT. It describes whether the requested object is an input object or a result.
3. object_number is the number of the object as described in the problem descriptor.
_3.2.2. Modification of the NetSolve code_
When a client asks for a problem to be solved, an array of ObjectLocation data structures is examined. If this array is not NULL, some data redistribution has to be issued. Each element of the array corresponds to an input object. For each input object of the problem, we check the request_id field. If it is smaller than 0, no redistribution is issued and everything works as in the standard version of NetSolve. If the request_id field is greater than or equal to zero, then data redistribution is issued between the server corresponding to this request (which must have the data) and the server that has to solve the new problem.
_3.2.3. Set of new functions_
In this section, we present the modifications of the client API that use the low-level server protocol modifications described above. These new features are backward compatible with the old version: an old NetSolve client will have the same behavior with this enhanced version. All the old functions have the same semantics, except that when starting a non-blocking call, data stay on the server until a command that terminates the server is issued. These functions have been implemented for both C and Fortran clients. They are very general and can handle various situations. Hence, unlike request sequencing, no restriction is imposed on the input program. In Section 3.4, a code example is given that uses a subset of these functions.
_3.2.3.1. Wait functions_
We have modified or implemented three functions: netslwt, netslwtcnt and netslwtnr. These functions block until the current computations are finished. With netslwt, the data are retrieved and the server exits. With netslwtcnt and netslwtnr, the server does not terminate and further data redistribution orders can be issued. The difference between these two functions is that, unlike netslwtcnt, netslwtnr does not retrieve the data.
_3.2.3.2. Terminating a server_
The netslterm function orders the server to exit. The server must have finished its computation. Local objects are then lost.
_3.2.3.3. Probing servers_
As in standard NetSolve, netslpr probes the server. If the server has finished its computation, the results are not retrieved, and data redistribution orders can be issued.
_3.2.3.4. Retrieving data_
A piece of data can be retrieved with the netslretrieve function. The parameters of this function are the type of the object (input or output), the request, the object number, and a pointer to where the data should be stored.
_3.2.3.5. Redistribution function_
netslnbdist is the function that performs the data redistribution. It works like the standard non-blocking call netslnb, with one more parameter: an ObjectLocation array that describes which objects are redistributed and where they can be found.
1. For all servers S that can solve the problem:
2.     D1(S) = estimated amount of time to transfer input and output data.
3.     D2(S) = estimated amount of time to solve the problem.
4. Choose the server that minimizes D1(S) + D2(S).

Fig. 5. MCT algorithm.
_3.3. Agent scheduler modifications_
The scheduling algorithm used by NetSolve is Minimum Completion Time (MCT) [15], which is described in Fig. 5. Each time a client sends a request, MCT chooses the server that minimizes the execution time of the request, assuming no major change in the system state.

We have modified the agent's scheduler to take the new data persistence features into account. The standard scheduler assumes that all data are located on the client; hence, communication costs do not depend on the fact that some data may already be distributed. We have modified the agent's scheduler and the protocol between the agent and the client in the following way: when a client asks the agent for a server, it also sends the location of the data. Hence, when the agent computes the communication cost of a request for a given server, this cost can be reduced by the fraction of the data already held by that server.
_3.4. Code example_
In Fig. 6 we show code that illustrates the features described in this paper. It executes 3 matrix multiplications, c=a*b, d=e*f, and g=d*a, using the DGEMM function of the level 3 BLAS provided by NetSolve, where a is redistributed from the first server and d is redistributed from the second one. We suppose that the matrices are correctly initialized and allocated. In order to simplify this example, we also suppose that each matrix has n rows and columns, and tests of the requests are not shown.
In the two netslnb calls, different parameters of dgemm (c = β×c + α×a×b for the first call) are passed, such as the matrix dimension (always n here), whether to transpose the input matrices (not used here), the values of α and β (respectively 1 and 0), and pointers to the input and output objects. All these objects are persistent and therefore stay on the server: they do not move back to the client.
```
ObjectLocation *redist;
netslmajor("Row");
trans="N";
alpha=1;
beta=0;
/* c=a*b */
request_c=netslnb("DGEMM()",&trans,&trans,n,n,n,&alpha,a,n,b,n,&beta,c,n);
/* after this call c is only on the server */
/* d=e*f */
request_d=netslnb("DGEMM()",&trans,&trans,n,n,n,&alpha,e,n,f,n,&beta,d,n);
/* after this call d is only on the server */
/* COMPUTING REDISTRIBUTION */
/* 7 input objects for DGEMM */
nb_objects=7;
redist=(ObjectLocation*)malloc(nb_objects*sizeof(ObjectLocation));
/* All objects are first supposed to be hosted on the client */
for(i=0;i<nb_objects;i++)
redist[i].request_id=-1;
/* We want to compute g=d*a */
/* a is the input object No 4 of DGEMM and the input object No 3 of request_c */
redist[4].request_id=request_c;
redist[4].type=INPUT_OBJECT;
redist[4].object_number=3;
/* d is the input object No 3 of DGEMM and the output object No 0 of request_d */
redist[3].request_id=request_d;
redist[3].type=OUTPUT_OBJECT;
redist[3].object_number=0;
/* g=d*a */
request_g=netslnbdist("DGEMM()",redist,&trans,&trans,n,n,n,&alpha,NULL,n,NULL,n,
&beta,g,n);
/* Wait for g to be computed and retrieve it */
netslwt(request_g);
/* retrieve c */
netslretrieve(request_c,OUTPUT_OBJECT,0,c);
/* Terminate the server that computed d */
netslterm(request_d);
```
Fig. 6. NetSolve persistence code example.
Fig. 7. Sending A twice: the client passes the data A to the server with each of the two calls (pb and pb1), so A is transferred twice.
Then the redistribution is computed: an array of ObjectLocation is built and filled for the two objects that need to be redistributed (a and d). The call to netslnbdist is similar to the previous netslnb calls, except that the redistribution parameter is passed. At the end of the computation, a wait call is performed for the computation of g, the matrix c is retrieved, and the server that computed d is terminated.

In Section 5.3, we present our experimental results on executing a set of DGEMM requests both on a LAN and on a WAN.
**4. Data management in DIET**
We have developed a data management service in the DIET platform. Our motivation was the need to decrease the global computation time; a way to achieve this goal is to decrease data transfers between clients and the platform when possible. For example, a client that submits two successive calls with the same input data needs to transfer it twice (see Fig. 7). Our goal is to provide a service that requires only one data transfer, as shown in Fig. 8. Another objective is to allow the use of data already stored inside the platform in later computations and, more generally, in later sessions or by other clients. This is why stored data need to be handled with a unique identifier. Our service also has to fit the characteristics of the DIET platform, which is why our components are built in a hierarchical way. After a short description of the principles we retained in order to build a data management service in DIET, we review the various components of our implementation, called the Data Tree Manager [10].
Fig. 9. Two successive calls.
_4.1. Principles_
In this section, we present the basic functionalities that we chose for a data management service in such an ASP environment.
_4.1.1. Data storage_
Data can be stored on disk or in memory. In NES environments, a challenge is to store data as near as possible to the computational server where it will be needed. In addition, the physical limitations of storage resources imply the definition of a data management policy. Simple algorithms such as LRU will be implemented in order to remove the oldest data; this avoids overloading the system.
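The LRU removal policy mentioned above can be sketched as follows. This is a minimal, self-contained illustration; the type and function names (lru_store_t, lru_touch, lru_contains) are ours and not part of DIET.

```c
/* Minimal sketch of an LRU removal policy for a server's local
 * data store. When the store is full, the least recently used
 * entry is evicted. All names here are illustrative. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define MAX_ITEMS 8

typedef struct {
    char id[32];    /* data identifier */
    long last_used; /* logical clock of last access */
    int  in_use;
} lru_slot_t;

typedef struct {
    lru_slot_t slot[MAX_ITEMS];
    long clock;
} lru_store_t;

/* Record an access to `id`, inserting it if needed; when the
 * store is full, evict the least recently used entry. */
void lru_touch(lru_store_t *s, const char *id) {
    int free_i = -1, oldest = -1;
    for (int i = 0; i < MAX_ITEMS; i++) {
        if (s->slot[i].in_use && strcmp(s->slot[i].id, id) == 0) {
            s->slot[i].last_used = ++s->clock;  /* refresh */
            return;
        }
        if (!s->slot[i].in_use)
            free_i = i;
        else if (oldest < 0 || s->slot[i].last_used < s->slot[oldest].last_used)
            oldest = i;
    }
    int i = (free_i >= 0) ? free_i : oldest;    /* evict LRU if full */
    strncpy(s->slot[i].id, id, sizeof s->slot[i].id - 1);
    s->slot[i].id[sizeof s->slot[i].id - 1] = '\0';
    s->slot[i].in_use = 1;
    s->slot[i].last_used = ++s->clock;
}

int lru_contains(const lru_store_t *s, const char *id) {
    for (int i = 0; i < MAX_ITEMS; i++)
        if (s->slot[i].in_use && strcmp(s->slot[i].id, id) == 0)
            return 1;
    return 0;
}
```

A real implementation would also account for data sizes against the storage capacity, but the eviction order would follow the same last-used logic.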
_4.1.2. Data location_
When a data item has been sent once from a client to the platform, the data management service has to be able to find where the data is stored, so that it can be used in other computations on other servers. Furthermore, in order
Fig. 8. Sending A only once.
_E. Caron et al. / Managing data persistence in network enabled servers_ 343
to obtain a scalable infrastructure, we need to separate the logical view of the data from its physical location. Even if the metadata solution [8] is elegant, a data management service in NES environments does not have exactly the same characteristics as other data management systems implemented in Grid computing environments. In those environments, clients need to access huge data sets for analysis; hence, those systems are built to provide reliable access to data that are geographically distributed. In ASP environments, the numerical applications to which NES platforms give access generally have data that are produced and directly accessed by the client that sends the request. ASP environments have to give reliable access to computational servers, even if data access by clients is also a constraint. This is why it is not necessary to describe data by detailed characteristics. Nevertheless, it is mandatory to fully identify the data that are stored inside the platform.
_4.1.3. Data movement_
As seen above, a data management service in ASP environments is able to store and locate data. But when data is required for more than one computation on more than one server, it is also mandatory to be able to move data between computational servers. In fact, if we consider that the time to transfer data between servers is smaller than the time to transfer data between clients and servers, we need to define a data movement mechanism. Obviously, when data is moved from one computational server to another, the information on its location has to be updated.
_4.1.4. Persistence mode_
Data can be stored inside the platform and moved between storage resources. But must all data sent by clients or produced by servers be stored inside the platform? For obvious performance reasons, it is better to limit data persistence to the data that are really useful. We think that only clients know which data have to stay inside the system. Hence, we define a persistence mode with the help of which clients can tell whether their data should be stored or not.
_4.1.5. Security_
Once data are stored inside the platform, we need to define a policy to secure operations on data. In fact, data stored inside the platform can be shared between clients. However, not all clients of the platform may perform all operations on all data. As stored data are identified inside the platform, only the client that has produced a data item has to be informed of the identifier that has been bound to its data in order to use it for later computation requests. Moreover, in collaborative projects for example, a client may want to share its stored data with other researchers without allowing them to delete it. We propose to add an access key in addition to the identifier. Thus, if a client wants to grant read/write rights on a specific data item, it has to join this key to the data identifier. Conversely, if the client that has produced the data does not want the others to have write access to it, it just has to provide the identifier. This leaves the responsibility for managing its own data to the client. Simple mechanisms such as the md5 or sha1 algorithms, or routines like urandom, will be chosen to generate such a key.
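The urandom-based key generation mentioned above can be sketched as follows, assuming a POSIX system where /dev/urandom is available; the function name gen_access_key is hypothetical.

```c
/* Sketch: generating an access key from /dev/urandom, as the
 * text suggests. The function name is illustrative, not DIET's. */
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Fill `key` with 2*nbytes hex characters plus a trailing NUL.
 * Returns 0 on success, -1 on failure (e.g. no /dev/urandom). */
int gen_access_key(char *key, size_t nbytes) {
    unsigned char buf[64];
    if (nbytes > sizeof buf)
        return -1;
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f)
        return -1;
    size_t n = fread(buf, 1, nbytes, f);
    fclose(f);
    if (n != nbytes)
        return -1;
    for (size_t i = 0; i < nbytes; i++)
        sprintf(key + 2 * i, "%02x", buf[i]); /* hex-encode each byte */
    return 0;
}
```

The client would then transmit (identifier, key) to grant read/write access, or the identifier alone for read-only sharing.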
_4.1.6. Fault tolerance_
The fault tolerance policy is directly linked to the consistency policy. In fact, our approach does not define fault recovery mechanisms, but only a mechanism that keeps the infrastructure consistent when faults occur. Thus, only a context/contents model is defined. We ensure that all operations (add, remove) made on data by clients leave the whole infrastructure consistent. We distinguish two possible cases of fault: a component that manages the logical view of data fails (a LocManager), or a component that manages the physical data fails (a DataManager). If a LocManager fails, all its subtrees are considered lost; we only ensure that the parent of the LocManager removes all references to data referenced in this branch. If a DataManager fails, we ensure that all references to data owned by it are removed from the hierarchy. No data recovery is made. We also consider that all data transfers are realized correctly, but we make sure that updates are performed only when transfers are complete. A future solution would be to replicate data.
_4.1.7. Data sources heterogeneity_
Generally, a data item is sent from the local machine of a client. However, it is also possible that a client does not own the data it wants to send to the platform but only knows its location. Hence, we propose to give the client the possibility to tell the server to pull data from a remote storage depot that is external to the platform. This model has to deal with storage-support heterogeneity. We have first developed a model that allows the use of the ftp and http protocols. These models have to be completed to interact with other protocols such as gridFTP. This approach is quite similar to the Stork approach for multi-protocol data transfers presented in [14].
_4.1.8. Replication_
One mandatory aspect of a data management service is to provide a data replication policy. In fact, data replication is particularly required for parallel tasks that share data. Thus, a data management service needs to provide an API in order to move or replicate data between computational servers. This API will be used, for example, by a task scheduler.
_4.2. The DIET Data tree manager_
The data management service we implemented is
based on the principles defined above. In this section,
we present our implementation.
_4.2.1. The persistence mode_
A client can choose whether a data item will be persistent inside the platform or not. We call this property the persistence mode of a data item. We have defined several modes of data persistence, as shown in Table 1.
_4.2.2. The data identifier_
When a data item is stored inside the platform, an identifier is assigned to it. This identifier (also known as a data handle) allows us to refer to a data item in a unique way within the architecture. Clearly, a client has to know this identifier in order to use the corresponding data. Currently, a client knows only the identifiers of the persistent data it has generated; it is responsible for propagating this information to other clients. Note that identifying data in NES environments is a relatively new issue, strongly linked to the way we consider data persistence. In NetSolve, the idea is that data is persistent for the duration of a session and deleted afterwards. In DIET, we think that a data item can survive a session and could be used by clients other than the producer, or in later sessions. Nevertheless, a client can also decide that its data are only available in a single session. Currently, as explained before, data identifiers are stored in a file in a client directory.
_4.2.3. Logical data manager and physical data manager_
In order to avoid interleaving between data messages and computation messages, the proposed architecture separates data management from computation management. The Data Tree Manager is built around two main entities.
_4.2.3.1. Logical data manager_
The Logical Data Manager is composed of a set of LocManager objects. A LocManager is set onto the agent with which it communicates locally. It manages a list of pairs (data identifier, owner) that represents the data present in its branch. Hence, the hierarchy of LocManager objects provides global knowledge of the location of each data item.
_4.2.3.2. Physical data manager_
The Physical Data Manager is composed of a set of DataManager objects. A DataManager is located on each SeD, with which it communicates locally. It owns a list of persistent data: it stores the data and is in charge of providing it to the server when needed. It provides features for data movement, and it informs its parent LocManager of updating operations performed on its data (add, move, delete). Moreover, if a data item is duplicated from one server to another, the copy is set as non-persistent and destroyed after use, with no hierarchy update.
This structure is built in a hierarchical way, as shown in Fig. 11, and is mapped onto the DIET architecture. There are several advantages to defining such a hierarchy. First, communications between agents (MA or LA) and data location objects (LocManager) are local, like those between computational servers (SeD) and data storage objects (DataManager). This ensures that it is not costly, in terms of time and network bandwidth, for agents to get information on data location and for servers to retrieve data. Secondly, considering the physical distribution of the architecture nodes (an LA as front-end of a local area network, for example), when data transfers occur between servers located in the same subtree, the consequent updates of the infrastructure are limited to this subtree. Hence, the rest of the platform is not involved in the updates.
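The way (identifier, owner) pairs climb the LocManager hierarchy can be sketched as follows. All names and the flat fixed-size lists are our simplification; the real LocManager/DataManager objects are CORBA components.

```c
/* Sketch of the LocManager hierarchy: each node records
 * (identifier, owner) pairs for the data present in its branch,
 * and additions propagate up to the root (the MA's LocManager).
 * Illustrative only; error handling is minimal. */
#include <assert.h>
#include <string.h>

#define MAX_REFS 16

typedef struct locmgr {
    struct locmgr *parent;  /* NULL at the MA's LocManager */
    struct { char id[32]; char owner[32]; } ref[MAX_REFS];
    int nrefs;
} locmgr_t;

/* A DataManager notifies its LocManager of a new data item;
 * the reference climbs the hierarchy so every ancestor knows
 * the data is somewhere in its branch. */
void loc_add(locmgr_t *n, const char *id, const char *owner) {
    for (; n && n->nrefs < MAX_REFS; n = n->parent) {
        strncpy(n->ref[n->nrefs].id, id, 31);
        n->ref[n->nrefs].id[31] = '\0';
        strncpy(n->ref[n->nrefs].owner, owner, 31);
        n->ref[n->nrefs].owner[31] = '\0';
        n->nrefs++;
    }
}

/* Look up the owner (server) of a data item within this branch. */
const char *loc_lookup(const locmgr_t *n, const char *id) {
    for (int i = 0; i < n->nrefs; i++)
        if (strcmp(n->ref[i].id, id) == 0)
            return n->ref[i].owner;
    return NULL;
}
```

A lookup at the root always succeeds for data present anywhere in the platform, while a lookup at an LA's LocManager only sees its own subtree, which is what keeps updates local.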
_4.2.4. Data mover_
The Data Mover provides mechanisms for data transfers between DataManager objects as well as between computational servers. The Data Mover also has to initiate updates of the DataManager and LocManager objects when a data transfer has finished.
_4.2.5. Client API_
A client can specify the persistence mode of its data. This is done when the problem profile is built. Moreover, after the problem has been evaluated by the platform and persistent data are sent or produced, a unique identifier is assigned to each data item. A client can execute several operations using the identifier:
Table 1
Persistence modes

| Mode | Description |
|---|---|
| DIET VOLATILE | not stored |
| DIET PERSISTENT RETURN | stored on server, movable, and copied back to client |
| DIET PERSISTENT | stored on server and movable |
| DIET STICKY | stored and non movable |
| DIET STICKY RETURN | stored, non movable, and copied back to client |
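The modes of Table 1 can be encoded as an enum with predicates over the table's columns. The enum spellings mirror the table, but the helper functions are our own illustration, not part of the DIET API.

```c
/* The persistence modes of Table 1, with helper predicates
 * encoding the table's "stored / movable / copied back" columns.
 * The helpers are illustrative, not DIET's API. */
#include <assert.h>

typedef enum {
    DIET_VOLATILE,           /* not stored */
    DIET_PERSISTENT_RETURN,  /* stored, movable, copied back to client */
    DIET_PERSISTENT,         /* stored on server and movable */
    DIET_STICKY,             /* stored and non movable */
    DIET_STICKY_RETURN       /* stored, non movable, copied back */
} diet_persistence_mode_t;

int mode_is_stored(diet_persistence_mode_t m) {
    return m != DIET_VOLATILE;
}

int mode_is_movable(diet_persistence_mode_t m) {
    return m == DIET_PERSISTENT || m == DIET_PERSISTENT_RETURN;
}

int mode_copies_back(diet_persistence_mode_t m) {
    return m == DIET_PERSISTENT_RETURN || m == DIET_STICKY_RETURN;
}
```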
Fig. 10. DTM: data tree manager (an agent hosts the Logical Data Manager; a SeD hosts the Physical Data Manager and the Data Mover).

Fig. 11. DataManager and LocManager objects (the MA hosts LocMgr1; LA1 and LA2 host LocMgr2 and LocMgr3; SeD1, SeD2, and SeD3 host DataMgr1, DataMgr2, and DataMgr3).
_4.2.5.1. Data handle storage_
The store_id() method allows the data identifier to be stored in a local client file. This is helpful for using the data in another session, for the same client or for other clients.

store_id(char *handle, char *msg);
_4.2.5.2. Utilization of the data handle_
The diet_use_data() method allows the use of a data item stored in the platform, identified by its handle. The description of the data (its characteristics) is also stored.

diet_use_data(char *handle);
_4.2.5.3. Data remove_
The diet_free_persistent_data() method frees the persistent data identified by handle from the platform.

diet_free_persistent_data(char *handle);
Fig. 12. Standard NetSolve tests (execution time vs. matrix size in MBytes; curves: without data management, Request Sequencing, DSI with one server, DSI with three servers).
Fig. 13. Matrix multiplication program task graph (c = a*b; f = d*e; g = c*f).
_4.2.5.4. Read an already stored data_
The diet_read_data() method allows reading a data item, identified by handle, that is already stored inside the platform.

diet_data_t diet_read_data(char *handle);
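A possible client-side call sequence using the API above can be illustrated with an in-memory mock. The stub bodies below stand in for the real DIET implementations, and platform_store is a hypothetical platform-side hook that registers a handle after a solve; only the call semantics (use, then free, of a persistent handle) are meant to match the text.

```c
/* Mock of the DTM client calls, backed by an in-memory handle
 * table, to show the intended call sequence. The real DIET
 * implementations are not reproduced here. */
#include <assert.h>
#include <string.h>

#define MAX_DATA 8

static struct { char handle[32]; int live; } table[MAX_DATA];

/* Mock of the platform side: register a handle after a solve. */
void platform_store(const char *handle) {
    for (int i = 0; i < MAX_DATA; i++)
        if (!table[i].live) {
            strncpy(table[i].handle, handle, 31);
            table[i].handle[31] = '\0';
            table[i].live = 1;
            return;
        }
}

/* diet_use_data: reuse a stored data item as input for the next
 * call. Returns 0 if the handle is known, -1 otherwise. */
int diet_use_data(const char *handle) {
    for (int i = 0; i < MAX_DATA; i++)
        if (table[i].live && strcmp(table[i].handle, handle) == 0)
            return 0;
    return -1;
}

/* diet_free_persistent_data: remove the data from the platform. */
int diet_free_persistent_data(const char *handle) {
    for (int i = 0; i < MAX_DATA; i++)
        if (table[i].live && strcmp(table[i].handle, handle) == 0) {
            table[i].live = 0;
            return 0;
        }
    return -1;
}
```

In the real API the client would first mark the data as persistent in the problem profile, receive the handle after the solve, optionally record it with store_id(), and reuse it across sessions until it calls diet_free_persistent_data().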
**5. Experimental results**

_5.1. Standard NetSolve data management_

In this section we test the standard version of NetSolve. We first run experiments on NetSolve without data management, and then with the two NetSolve data management approaches described in Section 2.1.2: the Distributed Storage Infrastructure, used to provide data transfer and storage, and Request Sequencing, used to decrease network traffic between clients and servers.

_5.2. Experiments_

The servers are located at a site approximately 100 kilometers away from the client. The wide area network is a 16 Mbit/s network, while the local area network is a 100 Mbit/s Ethernet network. The platform built for the NetSolve tests is composed of three servers, an agent, and an IBP depot.
The experiments consist of a sequence of calls in a session: C = A * B, then D = C + E, then A = t(A). We made three series of tests for NetSolve: first, a test using three consecutive blocking calls; then a request sequencing test; and finally a test with DSI. The last test is divided into two parts: first, a single server
Fig. 14. Matrix multiplications using NetSolve with data persistence on a LAN (execution time vs. matrix size; curves: 3 DGEMM with NetSolve, 3 DGEMM with NetSolve and 2 in parallel, 3 DGEMM with Scilab).
computes the whole sequence; then, each call is computed by a different server.
The results of this series of tests are shown in Fig. 12. We note that Request Sequencing is the best solution for such a sequence of calls. When using DSI, we also note that the best results are obtained when three servers are involved in the computation. This is a bit surprising, but it was confirmed by several other tests we made with different topologies (a server that is also an IBP depot, an IBP depot closer to one server than to the others). To confirm this observation, we tried to choose the best server (in terms of processing power and memory capacity) to compute the three calls, but the best solution was always obtained with three servers involved. We can explain this by the memory limitations of the servers involved: a server that has to process three computations does not free its memory, which overloads it for further computations.
C11 = A11 B11 ; C22 = A21 B12
C12 = A11 B12 ; C21 = A21 B11
C11 = C11 + A12 B21 ; C22 = C22 + A22 B22
C12 = C12 + A12 B22 ; C21 = C21 + A22 B21

Fig. 15. Matrix multiplication using block decomposition.
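The block decomposition of Fig. 15 can be checked with a small self-contained program: performing the eight 2x2-block DGEMM-like updates in the order given must reproduce the direct 4x4 product. The function names and the 4x4 size are ours, for illustration.

```c
/* Check that the eight block updates of Fig. 15 reproduce the
 * direct matrix product. Illustrative, self-contained code. */
#include <assert.h>

#define N  4   /* matrix order */
#define BS 2   /* block size: the matrices split into 2x2 blocks */

/* c[bi,bj] += a[bi,bk] * b[bk,bj] on BSxBS blocks
 * (one DGEMM-like update; block indices are 0-based). */
static void block_gemm(double c[N][N], double a[N][N], double b[N][N],
                       int bi, int bk, int bj) {
    for (int i = 0; i < BS; i++)
        for (int j = 0; j < BS; j++)
            for (int k = 0; k < BS; k++)
                c[bi*BS + i][bj*BS + j] +=
                    a[bi*BS + i][bk*BS + k] * b[bk*BS + k][bj*BS + j];
}

/* The eight updates of Fig. 15, grouped as in the text. */
void block_multiply(double c[N][N], double a[N][N], double b[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            c[i][j] = 0.0;
    block_gemm(c, a, b, 0, 0, 0);  /* C11 = A11 B11 */
    block_gemm(c, a, b, 1, 0, 1);  /* C22 = A21 B12 */
    block_gemm(c, a, b, 0, 0, 1);  /* C12 = A11 B12 */
    block_gemm(c, a, b, 1, 0, 0);  /* C21 = A21 B11 */
    block_gemm(c, a, b, 0, 1, 0);  /* C11 += A12 B21 */
    block_gemm(c, a, b, 1, 1, 1);  /* C22 += A22 B22 */
    block_gemm(c, a, b, 0, 1, 1);  /* C12 += A12 B22 */
    block_gemm(c, a, b, 1, 1, 0);  /* C21 += A22 B21 */
}

/* Reference: direct triple-loop product. */
void direct_multiply(double c[N][N], double a[N][N], double b[N][N]) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            c[i][j] = 0.0;
            for (int k = 0; k < N; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}
```

The first four updates are the independent products performed in parallel in the experiment; the last four, which accumulate into already-started blocks, require the redistribution of input and output blocks between servers.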
_5.3. NetSolve with data persistence and redistribution_
In this section, we show several experiments that demonstrate the advantage of using data persistence and redistribution within NetSolve, as described in Section 3. Figures 14 and 16 show our experimental results using NetSolve as an NES environment for solving matrix multiplication problems in a grid environment.
Fig. 16. NetSolve with data persistence: WAN experiments (execution time vs. matrix size; curves: matrix multiplication with and without data persistence).
_5.3.1. LAN experiments_
In Fig. 14, we ran a NetSolve client that performs 3 matrix multiplications using 2 servers. The client, agent, and servers are in the same LAN and are connected through Ethernet. The computation and task graphs are shown in Fig. 13. The first two matrix multiplications are independent and can be done in parallel on two different servers. We use Scilab[5] as the baseline for computation time. We see that the time taken by Scilab is about the same as the time taken by NetSolve when sequentializing the three matrix multiplications. When doing the first two in parallel on two servers using the redistribution feature, we gain exactly one third of the time, which is the best possible gain. These results show that NetSolve is very efficient at distributing matrices in a LAN, and that non-blocking calls to servers are helpful for exploiting coarse-grain parallelism.

5 www.scilab.org.
_5.3.2. WAN experiments_
We have performed a blocked matrix multiplication (Fig. 15). The client and agent were located at one university (Bordeaux), while the servers were running on the nodes of a cluster located in Grenoble (Grenoble and Bordeaux are two French cities separated by about 800 km). The computation decomposition done by the client is shown in Fig. 15. Each matrix is decomposed into 4 blocks; each block of matrix A is multiplied by a block of matrix B and contributes to a block of matrix C. The first two matrix multiplications were performed in parallel. Then, input data were redistributed to perform matrix multiplications 3 and 4. The last 4 matrix multiplications and additions can be executed using one call to
the level 3 BLAS routine DGEMM, and require input and output objects to be redistributed. Hence, this experiment uses all the features we have developed. We see that with data persistence (input and output data are redistributed between the servers and do not go back to the client), the computation is performed more than twice as fast as without data persistence (in which case the blocks of A, B, and C are sent back and forth to the client). This experiment demonstrates how useful the data persistence and redistribution features implemented within NetSolve are.
_5.4. DIET data management_
The first experiments consist of a sequence of calls in a session: C = A * B, D = C + E, and A = t(A). The DIET platform is composed of one MA, two LAs, and three servers. The servers are located at a site approximately 100 kilometers away from the client. The wide area network is a 16 Mbit/s network, while the local area network is a 100 Mbit/s Ethernet network. The computers (0.5 GHz up to 1.8 GHz) are heterogeneous and run the Linux operating system. We conducted three series of tests. First, a test using three synchronous calls without DTM. Then, the same sequence using DTM (i.e., using persistence): in this case, the A, B, and E matrices are defined as persistent, and the C matrix must be persistent because it is an input data item for the second problem; the D matrix can be non-persistent because it is not used anywhere else afterwards. Hence, in this case, A, B, and E are sent once, and C is not sent. For the last test, only identifiers are sent, since all data are already present in the infrastructure.
The results of this series of tests are shown in Fig. 17. If we can avoid multiple transmissions of the same data, the overall computation time is equal to the transfer time of the data into the infrastructure, plus the task computation time, plus the transfer time of the results back to the client. Unsurprisingly, the last scenario again appears to be the best one, and confirms the feasibility and low cost of our approach in the case of a sequence of calls. Using the CORBA space, we can avoid copying data by using CORBA memory management methods, which allow a value to be obtained without making a memory copy. Moreover, notice that the update of the hierarchy is performed in an asynchronous way, so its cost is very small and does not influence the overall computation time. However, for large data, this approach has the limitations of memory management.
To complete the experiments already presented in [5] and the above results, we conducted a series of tests to show the overall advantages of using persistence in DIET. The target architecture is composed of one MA, two LAs, and two SeDs located in a local network. A client is located at a remote site, 100 kilometers from DIET. The wide area network is a 16 Mbit/s network, while the local area network is a 100 Mbit/s Ethernet network. The deployed application is a linear algebra application in which computation time is relatively independent of data size.
In the first experiment, data are in input mode. As seen in Fig. 18, the execution time varies enormously from one case to another. When data is persistent and locally stored on the computational server, the global execution time is equal to the application computation time. This difference corresponds to the data transfer time saved: approximately 87% for a 400 MByte matrix. When data is moved between computational servers, the gain is of the order of 77% for a 400 MByte matrix. The difference in gain corresponds to the data transfer time.
In the second experiment, the data mode is inout. The profits are less important than in the first experiment, as shown in Fig. 19: approximately 45% for a 400 MByte matrix if the data is local to the computational server, and 40% if the data is moved.
These results confirm the feasibility of our approach and the gains in terms of execution time.
_5.5. DIET and NetSolve comparison_
We summarize here the differences between standard
NetSolve, NetSolve with data persistence and redistribution (called NetSolve-PR here), and DIET with data
management.
– In the standard NetSolve request sequencing approach, the sequence of computations has to be processed by a unique server. In this case, a client needs to know the services provided by a server in order to use this approach. When using DSI, it is useful to have a DSI depot near the computational servers in order to decrease transfer time; hence, the way the DSI architecture is deployed is very important. In NetSolve-PR and in DIET DTM, a client does not need to know which server is able to solve a given problem (considering that a submitted request can
Fig. 17. DIET tests with and without persistence (execution time vs. matrix size in MBytes; curves: without persistency, local data, data inside the platform).

Fig. 18. Sending IN data (execution time vs. matrix size in MBytes).
be processed by the platform), and we assume that
the data management architecture allows data to
be close to the computational server.
– The DIET Data Mover is directly managed by the DTM, which allows data to be moved near the computational servers. In standard NetSolve with DSI, considering for example two distant computational servers that need the same data, the data must be sent to a DSI depot close to each computational server; hence, data could be sent twice by a client. In NetSolve-PR, data always stays on a server and does not use a depot; data can be sent directly from a client to a server.
– In the NetSolve approach, a client does not need to specify the way its data will be managed: when using request sequencing or DSI, data are considered persistent. In DIET DTM, users need to specify the persistence mode of all their data, even the non-persistent ones. NetSolve-PR is backward compatible: when persistence is not needed, nothing has to be specified. However, when using persistence, the client has to specify it
Fig. 19. Sending INOUT data (execution time vs. matrix size in MBytes).
in the request.
– In DIET, we think that persistent data must "survive" a client session and so must be fully identified. Data are kept as long as a client needs them (for later use in other sessions, and for other clients in the case of collaborative projects, for example). In NetSolve (with or without data persistence and redistribution), data are persistent within a session, for a set of computations: data are lost when the client terminates.
– In NetSolve, the system cannot be overloaded by data, since data are removed from the depot after a computation, or after a set of computations (request sequencing). In DIET and NetSolve-PR, the way data is managed may lead to a memory overload, since data is cached on servers when it is not explicitly sent as files.
**6. Standardizing data management**
As we have seen, data management in ASP environments leads to several approaches. However, a common API for ASP environments is essential. Indeed, NetSolve, Ninf, and DIET are members of the GridRPC Working Group of the GGF, whose goal is to standardize and implement a remote procedure call mechanism for Grid computing. This work has already led to a programming model [20].

Within this GridRPC working group, ongoing work supervised by Craig Lee aims at standardizing data management for this model. So far, the proposal is based on two points: a data item must be fully identified, and a programmer can choose whether a data item will be persistent inside the platform or not. This proposal must take into account the different approaches in ASP environments, in order to obtain a common layer on which each policy can be integrated.
In order for each data item to be fully identified, we define the data handle (DH), which is the reference to a data item that may reside anywhere. This enables the virtualization of data, since it can be read or written without knowing or caring where it is coming from or going to. The creation of a data handle is realized by the create(data_handle_t *dh); function. Once the data reference is created, it is also possible to bind it to a data item. If the data is bound, it must be on the client or on a storage server; otherwise, the data is already stored inside the platform. The bind operation is also used to specify whether the data must be kept or not. This operation is realized by the bind(data_handle_t dh, data_loc_t loc, data_site_t site); function.
– data_loc_t loc (data location): client side or storage server.
– data_site_t site: the location of the machine where the data will be stored. If (site == NULL), the data will be stored on the last computational server (transparent to the client). If (site == loc), the data is forwarded to site (client or storage server). If (site <> loc), the data is moved from loc to site.
**CLIENT** **SERVICE A** **SERVICE B**
create input data
create input_DH
bind input_DH to input data
create output_DH
call(input_DH, output_DH)
Note : output_DH is unbound
read input_DH
data sent
EXECUTE SERVICE
create output data
bind out. data to output_DH
return bound output_DH
(output data still available on this server)
create output2_DH
bind output2_DH to client
call(output_DH, output2_DH)
read output_DH
data sent
EXECUTE SERVICE
write data on output2_DH
Fig. 20. Using the GridRPC API for data management.
From these two functions, we can define operations on data handles.
– data_t read(data_handle_t dh): read (copy) the data referenced by the DH from whatever machine is maintaining the data. Reading an unbound DH is an error.
– write(data_t data, data_handle_t dh): write data to the machine, referenced by the DH, that is maintaining storage for it. Writing to an unbound DH could have the default semantics of binding to the local host. This storage does not necessarily have to be pre-allocated, nor does the length have to be known in advance.
– data_arg_t inspect(data_handle_t dh): allows the user to determine whether the DH is bound, what machine is referenced, the length of the data, and possibly its structure. This could be returned as XML.
– bool free_data(data_handle_t dh): free the data (storage) referenced by the DH.
– bool free_handle(data_handle_t dh): free the DH.
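The proposed operations can be illustrated with a purely local mock, in which a handle refers to in-memory storage. This is a sketch of the intended semantics only, not the standardized API: the struct layout and the always-local binding are our simplifications.

```c
/* Local mock of the proposed GridRPC data-handle operations
 * (create/bind/read/write), backed by in-memory storage, to
 * illustrate their semantics. Not the standardized API. */
#include <assert.h>
#include <string.h>

typedef struct {
    void  *buf;   /* storage maintained "behind" the handle */
    size_t len;
    int    bound;
} data_handle_t;

int dh_create(data_handle_t *dh) {
    memset(dh, 0, sizeof *dh);  /* handle starts unbound */
    return 0;
}

/* Bind the handle to existing storage (here: always local). */
int dh_bind(data_handle_t *dh, void *buf, size_t len) {
    dh->buf = buf;
    dh->len = len;
    dh->bound = 1;
    return 0;
}

/* Read (copy) the data referenced by the handle.
 * Reading an unbound handle is an error, as in the proposal. */
int dh_read(const data_handle_t *dh, void *out, size_t len) {
    if (!dh->bound || len > dh->len)
        return -1;
    memcpy(out, dh->buf, len);
    return 0;
}

/* Write through the handle. The proposal allows writing to an
 * unbound handle to default-bind locally; this mock rejects it. */
int dh_write(data_handle_t *dh, const void *in, size_t len) {
    if (!dh->bound || len > dh->len)
        return -1;
    memcpy(dh->buf, in, len);
    return 0;
}
```

In the full proposal, the machine holding the storage may be remote, so read and write become network transfers, and inspect() exposes the binding state without moving the data.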
Figure 20 shows an example of data management within this proposed framework. In this figure, a client submits a problem to a server that is able to compute it, and a second problem to another server that has better performance. For this second computation, the client does not have to send the data another time; the data is already in the network.
**7. Conclusion and future work**

The literature proposes several approaches for executing applications on computational grids. The GridRPC standard, implemented in several NES middleware systems (DIET, NetSolve, Ninf, etc.), is one of the most popular paradigms. However, this standard does not define how data can be managed by the system: each time a request is performed on a server, input data are sent from the client to the server and output data are sent back to the client, and thus data are not persistent. This implies a large overhead that needs to be avoided. Moreover, no redistribution of persistent data between servers is available: when a data item is computed by one server and needed by another server for the next step of a computation, it always goes through the client, increasing the transfer time.
In this paper, we have proposed and implemented data management features in two NES systems (DIET and NetSolve). In NetSolve, we changed the internal protocol in order to allow data to stay on a server and to move data from one server to another. We modified the API to allow clients to use data persistence and redistribution, and we enhanced the request scheduling algorithm to take data location into account. Concerning DIET, we developed a data management service called the Data Tree Manager (DTM). This service is based on three key points: a data item must be fully identified inside the platform, and it must be possible to locate it and to move it between computational servers. This way of designing the service was a relatively new concept in the NES community. Indeed, our service is able to keep information on stored data as long as the client does not want to remove it.
In our experimental results, we tested our implementations and the standard NetSolve one (which features request sequencing). We showed that data management improves the performance of applications (for both systems) when requests have dependencies, because it reduces the amount of data circulating on the network. Since we have shown that the implementation of data management is feasible and that it provides an increase in performance, we discussed, in the last section of this article, a standardization proposal (joint work with C. Lee within the GGF) for such a feature. It is based on two points: data is fully and globally identified, and the programmer can explicitly choose whether a data item is persistent or not.
In our future work, we want to study and propose
new scheduling algorithms that efficiently take data management into
account. For instance, we believe
that a better scheduling algorithm than the proposed
enhancement of MCT can be designed in this context.
In the context of DIET, the overview of the NetSolve DSI
policy leads us to think about the possibility of keeping
data on storage servers. The definition of an efficient
storage policy would allow us to avoid server overload. Our
idea is to keep data on a server as long as doing so does
not decrease server performance. The data would then be
stored in available storage service systems (like IBP).
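As a rough sketch of the scheduling direction outlined above, the following extends an MCT-style server choice with data-transfer costs; all timings and names are invented for illustration:

```python
# MCT-style selection enhanced with data location: the estimated
# completion time of a server includes the cost of fetching inputs
# that are not already resident there. Illustrative numbers only.

def pick_server(servers, compute_time, inputs, location, transfer_time):
    """Return the server minimizing compute time plus input-fetch time."""
    def completion(server):
        fetch = sum(transfer_time for d in inputs if location[d] != server)
        return compute_time[server] + fetch
    return min(servers, key=completion)

location = {"d1": "B", "d2": "B"}
# Server A computes faster, but both inputs already live on server B:
best = pick_server(["A", "B"], {"A": 3.0, "B": 4.0},
                   ["d1", "d2"], location, transfer_time=2.0)
assert best == "B"   # 4.0 beats 3.0 + 2 * 2.0
```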
-----
354 _E. Caron et al. / Managing data persistence in network enabled servers_
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2005/151604?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2005/151604, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "https://downloads.hindawi.com/journals/sp/2005/151604.pdf"
}
| 2,005
|
[
"JournalArticle"
] | true
| 2005-10-01T00:00:00
|
[
{
"paperId": "ae1e174ffc7164c7d1efb22ca9bc2fb64515a489",
"title": "Diet: A Scalable Toolbox to Build Network Enabled Servers on the Grid"
},
{
"paperId": "6d083b9bc60f50a3bd18196e06a6a674e6028e66",
"title": "Data management in grid applications providers"
},
{
"paperId": "ff12266817e2151fbe818ccba61493942112fad7",
"title": "Resource Localization Using Peer-To-Peer Technology for Network Enabled Servers"
},
{
"paperId": "6a962420b90d1cd64ca9074aa6e527cddcf1a452",
"title": "Scalability in a Grid server discovery mechanism"
},
{
"paperId": "9890c5768c7397e3c88c9346cbbf26e9e3708759",
"title": "Gestion de données dans les NES"
},
{
"paperId": "2f8ab6e3284a132f90479f748923b9f04b191d0e",
"title": "Stork: making data placement a first class citizen in the grid"
},
{
"paperId": "0102870560c64f43d9dd2b2910ee6714e99c2e88",
"title": "Dynamic performance forecasting for network-enabled servers in a metacomputing environment"
},
{
"paperId": "4ab45fa32edc2bb408d1da247c878866a0c5a0bc",
"title": "SCILAB to SCILAB//: The OURAGAN project"
},
{
"paperId": "b816f0b756ca33689a573080d168e67f64e01c3f",
"title": "Design Issues of Network Enabled Server Systems for the Grid"
},
{
"paperId": "f949d1b9680e7de17e65b4859ffda1078e1e5dc4",
"title": "SCILAB to SCILAB"
},
{
"paperId": "10c2a6b398eb9260b334d652092e6b08b2cb6bf7",
"title": "Request Sequencing: Optimizing Communication for the Grid"
},
{
"paperId": "a46bdac76b219b97364710b176eba0b7dd736e42",
"title": "The data grid: Towards an architecture for the distributed management and analysis of large scientific datasets"
},
{
"paperId": "d20fef14037853bcb5aaa52d7f77367e4cfc75d7",
"title": "Design and implementations of Ninf: towards a global computing infrastructure"
},
{
"paperId": "a31619cbb026690b3b8c18f3d0b684afd9754a6a",
"title": "Dynamic matching and scheduling of a class of independent tasks onto heterogeneous computing systems"
},
{
"paperId": "96a9ecb263aa39f137264a33cdb1abf163296a1a",
"title": "Netsolve: a Network-Enabled Server for Solving Computational Science Problems"
},
{
"paperId": "549ccd9bfd7fbaaed948f70300f9659b91a0ded5",
"title": "The Remote Computation System"
},
{
"paperId": "af26259e0705bc7512644f4dcf5326eccb22a980",
"title": "A GridRPC Model and API for End-User Applications"
},
{
"paperId": "547a89ab919fdbd4f1508d8e23cf36e597a841c7",
"title": "A Data Persistency Approach for the DIET Metacomputing Environment"
},
{
"paperId": null,
"title": "The End-User and Middleware APIs for GridRPC"
},
{
"paperId": null,
"title": "Frédéric"
},
{
"paperId": null,
"title": "Grid Forum, Advanced Programming Models Working Group whitepaper"
},
{
"paperId": null,
"title": "LORIA, Technopôle de Nancy-Brabois -Campus scientifique 615, rue du Jardin Botanique -BP 101 -54602"
},
{
"paperId": null,
"title": "https://forge.gridforum.org/projects/gridrpc-wg"
},
{
"paperId": null,
"title": "Unité de recherche INRIA Rhône-Alpes 655, avenue de l'Europe -38334 Montbonnot"
},
{
"paperId": null,
"title": "Managing Data Persistence in Network Enabled Servers"
},
{
"paperId": null,
"title": "GridRPC Working Group"
},
{
"paperId": null,
"title": "S 1 connects to the socket s on port p of S"
},
{
"paperId": null,
"title": "Send one output object"
}
] | 18,782
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Mathematics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0327294f14eb690b841d356ca05f16f4f31dcdac
|
[
"Computer Science"
] | 0.830609
|
Automated Analysis of Cryptographic Assumptions in Generic Group Models
|
0327294f14eb690b841d356ca05f16f4f31dcdac
|
Journal of Cryptology
|
[
{
"authorId": "145501438",
"name": "G. Barthe"
},
{
"authorId": "2239486",
"name": "Edvard Fagerholm"
},
{
"authorId": "40651488",
"name": "D. Fiore"
},
{
"authorId": "2248542272",
"name": "John C. Mitchell"
},
{
"authorId": "1749608",
"name": "A. Scedrov"
},
{
"authorId": "2070252037",
"name": "Benedikt Schmidt"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"J Cryptol"
],
"alternate_urls": [
"https://www.iacr.org/jofc/jofc.html",
"http://www.springeronline.com/sgw/cda/frontpage/0,11855,5-0-70-1009426-detailsPage=journal|description|description,00.html?referer=www.springeronline.com/journal/00145/about"
],
"id": "de5467ac-3f75-47f8-8397-1c10f6f9fc09",
"issn": "0933-2790",
"name": "Journal of Cryptology",
"type": "journal",
"url": "https://link.springer.com/journal/145"
}
| null |
# Automated Analysis of Cryptographic Assumptions in Generic Group Models
Gilles Barthe[1], Edvard Fagerholm[1,2], Dario Fiore[1], John Mitchell[3], Andre Scedrov[2], and Benedikt Schmidt[1]
1 IMDEA Software Institute, Madrid, Spain
{gilles.barthe,dario.fiore,benedikt.schmidt}@imdea.org
2 University of Pennsylvania, USA
{edvardf,scedrov}@math.upenn.edu
3 Stanford University, USA
mitchell@cs.stanford.edu
**Abstract. We initiate the study of principled, automated methods for**
analyzing hardness assumptions in generic group models, following the
approach of symbolic cryptography. We start by defining a broad class of
generic and symbolic group models for different settings—symmetric or
asymmetric (leveled) k-linear groups—and by proving “computational
soundness” theorems for the symbolic models. Based on this result, we
formulate a very general master theorem that formally relates the hardness of a (possibly interactive) assumption in these models to solving
problems in polynomial algebra. Then, we systematically analyze these
problems. We identify different classes of assumptions and obtain decidability and undecidability results. Then, we develop and implement
automated procedures for verifying the conditions of master theorems,
and thus the validity of hardness assumptions in generic group models.
The concrete outcome of this work is an automated tool which takes as
input the statement of an assumption, and outputs either a proof of its
generic hardness or shows an algebraic attack against the assumption.
## 1 Introduction
Sophisticated abstractions have often been instrumental in recent breakthroughs
in the design of cryptographic schemes. Bilinear maps are perhaps the most striking instance of such an abstraction; over the last fifteen years, they have been
used for building advanced and previously unknown cryptographic schemes. Now
it is believed that multilinear maps will lead to similar breakthroughs. Compared to the “classical” algebraic settings based on the purported hardness of
the Factoring/RSA or Discrete-log/Diffie-Hellman problems, bilinear and multilinear maps indeed provide richer and more versatile algebraic structures that
are particularly suitable for new constructions. At the same time, one unsettling
consequence of using such sophisticated abstractions is a significant growth in
the number of hardness assumptions used in security proofs. Moreover, these
assumptions are not as well studied as their classical and standard counterparts.
J.A. Garay and R. Gennaro (Eds.): CRYPTO 2014, Part I, LNCS 8616, pp. 95–112, 2014.
© International Association for Cryptologic Research 2014
-----
96 G. Barthe et al.
While it is widely acknowledged that this situation is far from ideal, relying on
non-standard assumptions is sometimes the only known way to construct some
new (or some efficient) cryptographic scheme, and hence it cannot be completely
disregarded. A common view is that this dilemma should be resolved by developing principled, rigorous approaches for analyzing and comparing non-standard hardness
assumptions.
This question has been previously considered in the literature, in which we
identify at least two approaches. One approach is to devise assumptions that
are general enough to be reused and allow for simple security proofs, and at the
same time are shown to hold under more classical assumptions (e.g., [15,32]).
A second approach is to develop idealized models, such as the Generic Group
[31,33,28] and the Generic Bilinear Group [10] models, and to provide (in the
form of so-called master theorems) necessary and sufficient conditions for the
security of an assumption in these models. Proving the hardness of an assumption in these models is essentially a way to rule out the possibility of algebraic
attacks against the underlying algorithmic problem, and it can be considered
the minimal level of guarantee we need to gain confidence in an assumption.
Two prominent examples along this direction are the “Uber assumption” (aka
“Master theorem”) of Boneh, Boyen and Goh [10,14] and the Matrix Decisional
Diffie-Hellman assumption family recently proposed by Escala et al. [17].
However, although these results are quite general, they can be quite difficult
to apply. Indeed, in order to argue the hardness of an assumption using the
Uber assumption in [10,14] (resp. the Matrix-DDH assumption in [17]) one has
to show the independence (resp. irreducibility) of certain polynomials contained
in the statement of the assumption. A similar problem arises in the context of
interactive assumptions such as [27,2], in which the hardness crucially relies on
the restrictions posed on the queries performed by the adversary. In summary,
applying these general results to verify the validity of a given assumption is far
from being a trivial task, and may be error-prone, as witnessed by unfortunate
failures [35,23].
In this paper, we initiate the study of principled, automated methods for analyzing hardness assumptions in generic group models. Our main contribution
is essentially threefold. First, we reformulate master theorems in the style of the
celebrated “computational soundness” theorem of Abadi and Rogaway [1], and
formally show that the problem of analyzing assumptions in the generic group
reduces to solving problems in polynomial algebra. Second, we systematically
analyze these problems: while we show that the most general problem is undecidable, we distill a set of properties (capturing most interesting cases) for
which the problem is decidable. Finally, by applying tools from linear algebra,
we develop and implement automated procedures for verifying the conditions of
master theorems, and thus the validity of hardness assumptions in generic group
models. The concrete outcome of this work is an automated tool[1] which takes
as input an assumption and outputs either a proof of its generic hardness (along
with concrete bounds) or shows an algebraic attack against the assumption.
1 The tool is available at http://www.easycrypt.info/GGA
-----
Automated Analysis of Assumptions in Generic Models 97
**1.1** **An Overview of Our Contribution**
The key contribution of our work is the development of automated decision
procedures for testing the validity of hardness assumptions in generic group
models. Towards this goal, we first settle a rigorous framework for carrying out
this analysis. Basically, this framework consists of formalizing a class of generic
group models and then stating a general master theorem. Finally, our decision
procedures will be aimed at verifying the side conditions of our master theorem.
Generic Group Models. We formalize a broad class of generic group models
capturing many interesting cases used in cryptography: symmetric and asymmetric k-linear groups, with both leveled and non-leveled maps, and with the
possibility of modeling efficiently computable isomorphisms between the groups.
For any experiment stated in these generic models, we generalize the commonly used step of applying the Schwartz-Zippel Lemma, and obtain a generic transformation (cf. Theorem 1) for switching from the generic group model experiment,
in which variables are uniformly sampled in the underlying field, to a completely
deterministic experiment that works in a corresponding symbolic group model.
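Recall that the Schwartz-Zippel Lemma bounds the probability that a nonzero polynomial of total degree d over F_p vanishes at a uniformly random point by d/p; the following quick sketch (with arbitrary parameters of our own choosing) illustrates why the switch to the symbolic model introduces only a small error:

```python
# Empirical illustration of the Schwartz-Zippel bound: a nonzero
# polynomial of total degree d over F_p vanishes at a random point
# with probability at most d/p, so symbolic (polynomial) equality and
# equality of randomly instantiated values almost always agree.

import random

p = 2**31 - 1                       # a Mersenne prime

def f(x, y):
    # X*Y - 3*X, a nonzero polynomial of total degree d = 2
    return (x * y - 3 * x) % p

random.seed(0)
trials = 20000
zeros = sum(1 for _ in range(trials)
            if f(random.randrange(p), random.randrange(p)) == 0)
# The observed vanishing rate stays negligible, consistent with d/p = 2/p.
assert zeros / trials <= 0.001
```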
A General Master Theorem. We give a general version of the Master theorem in [10] which can be stated in any of the generic group models mentioned
above. As in [10], we formulate an assumption as a list L of polynomials in
Fp[X1, . . ., Xn] where X1, . . ., Xn is a set of random variables. In particular,
a decisional (aka left-or-right) assumption is defined by two lists of polynomials L and L′ (one for the "left" and one for the "right" distribution), and the
assumption is said to hold if the adversary cannot distinguish whether it receives polynomials from L or L′. Very informally, our Master theorem states
that, viewing L and L′ as the generating sets of two vector spaces[2], the
linear dependencies within L and within L′ must be the same. Previous master theorems [10,17] considered only decisional assumptions with the real-or-random
formulation in which the adversary is given a list of polynomials L and either a
"challenge" polynomial f or a fresh random variable Z.
Beyond obtaining a theorem that works in (leveled) k-linear groups, our general formulation allows us to capture virtually all decisional assumptions, based
on k-linear groups (for any k ≥ 1), that are used in cryptography. To mention
some examples, assumptions captured by our theorem include the Matrix-DDH
assumption [17], the k-BDH assumption [5], and recently proposed assumptions
such as (n, k)-MMDHE [22].
Automated Methods. Once we have settled the above framework, our goal is
to develop a collection of automated methods to verify the side condition of the
Master theorem for any given assumption stated in the framework. While the
statement of the above side condition already suggests how to use linear algebra
to make these checks, a crucial challenge is that in many important cases (e.g.,
ℓ-BDHI, k-Lin, etc.) the size of the lists L and L′ is a variable parameter. That
2 We are oversimplifying. More precisely, one has to consider lists C and C′ containing all polynomials computable by doing multiplications over L and L′ respectively, and then look at linear dependencies in C and C′.
-----
| Assumption Type | Algorithm | Examples |
|---|---|---|
| Non-parametric | D, C | DBDH [12], 2-lin, 3-lin, Freeman assm. 3&4 [18] |
| Parametric (real-or-random, monomial inputs): fixed #vars, par. linear degree and par. arity | U, I | (ℓ, k)-MMDHE [22] |
| Parametric: fixed #vars, par. linear degree, fixed arity | D, C | ℓ-DHI [9], ℓ-DHE [13] |
| Parametric: parametric #vars, par. arity, fixed degree | I | (k)-BDH [5], k-Lin in k-linear groups |
| Interactive bounded | I, C | LRSW [27], CDDH 1&2 [2], M-LRSW [7], IBSAS-CDH [8] |
| Interactive unbounded | I | LRSW [27], Strong-LRSW [3], s-LRSW [20] |

**Fig. 1. Summary of our automated analysis methods. U = undecidable problem, D = decision procedure, I = incomplete procedure, C = find counterexample for invalid assumptions.**
is, to check that the side condition holds, one would have to do computations
on a vector space of variable dimension: a challenging problem for automation.
We study this problem for three main categories of hardness assumptions: (1)
non-parametric, (2) parametric, and (3) interactive. Non-parametric assumptions are non-interactive assumptions in which the number of inputs is fixed, no
input is quantified over a variable and the number of levels is fixed (examples
include DDH, DBDH [12], as well as assumptions in k-linear groups for fixed k,
e.g., 3-Lin in 3-linear groups). Conversely, an assumption is parametric if one
or more of the above restrictions do not hold. Finally, interactive assumptions
are those ones where the adversary is granted access to additional oracles (in
addition to the oracles for the algebraic operations). By carefully analyzing each
of these categories, we obtain the following results summarized in Fig. 1.
For non-parametric assumptions, we show how to reduce the check on the side
condition to computing the kernels of certain matrices (of fixed dimension) that
are derived from the lists of polynomials in the assumption’s definition. Using
computer algebra tools (SAGE [34]), we implement a decision procedure that
shows a concrete hardness bound in the corresponding generic group model in
the positive case, and an algebraic attack if the assumption does not hold.
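As a toy illustration of this kernel computation (our own re-implementation, not the authors' SAGE tool), the following sketch compares the linear dependencies in the closures of the two DDH lists under a bilinear pairing, where the well-known dependency e(X, Y) = e(1, XY) makes DDH easy:

```python
# Kernel-based side-condition check on DDH stated in a bilinear group.
# Polynomials are dicts mapping exponent tuples (x, y, z) -> coefficient.
# Toy code for illustration; the real tool works in SAGE.

from itertools import combinations_with_replacement
from fractions import Fraction

ONE, X, Y, XY, Z = ({(0, 0, 0): 1}, {(1, 0, 0): 1}, {(0, 1, 0): 1},
                    {(1, 1, 0): 1}, {(0, 0, 1): 1})

def mul(f, g):
    out = {}
    for mf, cf in f.items():
        for mg, cg in g.items():
            m = tuple(a + b for a, b in zip(mf, mg))
            out[m] = out.get(m, 0) + cf * cg
    return out

def closure(L):
    """All target-group elements a pairing e: G x G -> GT can produce."""
    return [mul(f, g) for f, g in combinations_with_replacement(L, 2)]

def num_dependencies(polys):
    """Dimension of the space of linear dependencies (matrix nullity)."""
    monos = sorted({m for q in polys for m in q})
    rows = [[Fraction(q.get(m, 0)) for m in monos] for q in polys]
    rank = 0
    for c in range(len(monos)):          # Gaussian elimination
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                s = rows[i][c] / rows[rank][c]
                rows[i] = [a - s * b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return len(polys) - rank

left = closure([ONE, X, Y, XY])   # "real" DDH tuple
right = closure([ONE, X, Y, Z])   # random tuple
# e(X,Y) - e(1,XY) = 0 holds on the left but has no right counterpart,
# so the kernels differ and DDH fails generically in bilinear groups:
assert num_dependencies(left) == 1 and num_dependencies(right) == 0
```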
Our methods for non-parametric assumptions offer a complete decision procedure to verify arbitrary instances of parametric assumptions where all the
parameters have been fixed. This might be sufficient to quickly test a new assumption (and find attacks, if any), but it is often desirable to obtain stronger
guarantees that hold for all parameters. We show that, contrary to the non-parametric case, the side condition becomes undecidable in general. However, we
identify classes of assumptions for which we develop automated methods. Interestingly, these classes still contain most cryptographic assumptions. Considering
the class of real-or-random assumptions, we develop two different methods. The
first method focuses on the case in which the number of random variables is fixed,
and the input elements are monomials. Our method shows how to reduce the
check of the side condition to an integer programming problem. Interestingly, we
can show the following: if the degree of the monomials is not a linear polynomial,
or the arity of the map is variable, then the problem is undecidable; otherwise (if
the monomials have linear degree and the arity of the map is fixed) the problem
-----
is decidable. We implemented the translation procedure to integer programming
problems and use SMT solvers to check satisfiability. For the decidable fragment
of assumptions mentioned above, we obtain a complete decision procedure that
also shows an attack if the assumption is invalid. For the undecidable fragment,
our procedure successfully analyzes all significant examples from the literature.
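For intuition, here is a brute-force stand-in for that check (the toy instance and naming are ours; the real tool phrases the parametric case as integer programming). In a k-linear setting an adversary holding monomials can multiply at most k of them, so a challenge monomial is computable exactly when its exponent vector is a sum of at most k input exponent vectors:

```python
# Brute-force reachability check for monomial inputs under a k-linear
# map: enumerate products of at most k inputs (with repetition) and
# compare exponent vectors. Enough for small, fixed parameters.

from itertools import combinations_with_replacement

def reachable(inputs, challenge, k):
    """inputs, challenge: exponent tuples; k: arity of the map."""
    for r in range(1, k + 1):
        for combo in combinations_with_replacement(inputs, r):
            if tuple(map(sum, zip(*combo))) == challenge:
                return True
    return False

# Toy instance over one variable A: inputs A, A^2, A^4 with a bilinear
# (k = 2) map. A^3 = A * A^2 is reachable, so it would be a bad challenge:
assert reachable([(1,), (2,), (4,)], (3,), k=2)
# A^7 needs three factors, so with k = 2 it stays out of reach:
assert not reachable([(1,), (2,), (4,)], (7,), k=2)
```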
Our second method focuses on the case where the number of random variables
is parametric. As in the previous case, our method provides a way to reduce
the side condition to a system of equations. However, the same idea as before
does not work since a parametric number of variables would lead to an infinite
number of equations. Therefore, we focus on a restricted, but significant, class
of assumptions (one restriction is that inputs are expressed as monomials). Our
method is incomplete but successfully analyzes all relevant examples in this class.
Finally, we study interactive assumptions such as LRSW [27]. To analyze interactive assumptions, we first formulate an interactive version of our master
theorem. Interestingly, after applying our general "computational soundness"
theorem and switching to the symbolic model, our interactive master theorem
essentially becomes a variant of the non-interactive master theorem for parametric computational assumptions. This allows us to apply similar techniques as
for parametric assumptions. More specifically, we use SMT solvers and Gröbner
bases computations as an incomplete method to show the validity of such assumptions and to find attacks. For instance, our tool automatically proves the validity of LRSW [27] and exhibits attacks for m-LRSW [7] and CDDH [2].
Extensions and Additional Material. We extend our results to composite-order groups. Precisely, we formulate the generic group model and our master
theorem in a general way that captures also composite-order groups, and we
show how to extend our decision procedures for non-parametric assumptions to
this setting. Another extension of our results is handling assumptions in which
the adversary receives rational values in the exponent. These extensions, full
detailed proofs and some running examples appear only in the full version [4].
Limitations. While our master theorem is very general, our automated methods
require the assumptions to be specified in a concrete language, essentially to describe
the distribution of the polynomials defining the assumption. Such language cannot support the expression of very abstract properties, and thus rules out a
few examples. For instance, the definition of the Decision Multilinear No-Exact-Cover Assumption [19] is parametrized by an instance (with no solution) of the
Exact-Cover NP-complete problem. Although fixing a specific Exact-Cover instance yields lists of polynomials which can be analyzed using our methods, a
definition for any instance is too general. For a similar reason, our tool cannot handle the Matrix-DDH assumption in its full generality, unless one fixes a
specific distribution for the matrix (e.g., k-Lin).
**Discussion.** Although well-studied standard assumptions should always be preferred when designing cryptographic schemes, the use of non-standard ones is not
likely to stop. In this sense, we believe the study and development of rigorous
methods for analyzing cryptographic assumptions is relevant, and that automated analysis tools can support cryptographers in multiple directions. Mainly,
-----
they provide a rigorous, fast way to test the validity of candidate assumptions in
generic models by delegating this task to a machine. This is especially relevant
in the recent setting of leveled multilinear maps, that have a rich algebraic structure and for which even simple assumptions may become difficult to analyze. We
believe that the importance of such tools is motivated by the fact that proofs
validating the hardness of an assumption in the generic group model fall exactly
in the so-called “mundane part”[3] of cryptographic proofs mentioned by Halevi
[21], and constitute a perfect candidate of a proof to be delegated to a machine.
Our work shows the feasibility and relevance of developing automated methods
to analyze assumptions in generic group models. It can also be seen as the first
step towards analyzing cryptographic protocols directly in the generic model; we
expect that such analyses would make it possible to discover subtle flaws in protocols and
supplant existing methods based on symbolic cryptography.
**1.2** **Related Work**
The problem of analyzing and comparing hardness assumptions has been considered earlier in the literature, e.g., [30]. In particular, we identify two main
approaches in previous work. The first approach aims to define generalized assumptions that reduce to standard ones. Examples of works in this direction
include: the Square Diffie-Hellman assumption, shown to be equivalent to CDH
by Maurer and Wolf [29]; the (P, Q)-Decisional Diffie-Hellman assumption of
Bresson et al. [15] which is shown to reduce to DDH; and the decisional subspace problems of Okamoto-Takashima [32] that are reduced to DLin.
The other approach aims at directly analyzing assumptions by means of idealized models, such as the generic group model. This model was introduced by
Nechaev [31] and further refined and generalized by Shoup [33], and Maurer [28].
Our work follows closely Maurer’s model, in which the main difference compared
to previous proposals is to model the adversary’s access to group elements via
handles instead of random bitstrings as in [31,33]. These two models have been
proven equivalent in [25]. Worth mentioning in this context is the semi-generic
group model of Jager and Rupp [24]. This is a weaker version of the bilinear
generic group model, and its basic idea is to model the base groups of pairings
as generic groups, whereas the target group is given in the standard model.
Two works that address the problem of devising general assumptions in the
generic group are the Master theorem of Boneh, Boyen and Goh [10] (generalized
by Boyen [14]), and the Matrix DDH assumption of Escala et al. [17]. Roughly
speaking, the former provides a framework for arguing about the validity of several pairing-based assumptions in the generic group model, and it captures a
significant fraction of assumptions in the literature. The latter is an assumption that subsumes classical problems like DDH or DLin and also introduces
3 In [21], Halevi informally divides proofs into two categories (quoting): "Most (or all) cryptographic proofs have a creative part (e.g., describing the simulator or the reduction) and a mundane part (e.g., checking that the reduction actually goes through). It often happens that the mundane parts are much harder to write and verify, and it is with these parts that we can hope to have automated help."
-----
assumptions, such as k-Casc, that are proven hard in the generic k-linear group
model. Also worth mentioning is the work of Freeman [18] which extends the
BBG Master theorem to challenges in the source group and uses the computer
algebra system Magma to verify the side conditions required to prove two of
the assumptions. Our work is also close to the line of work on automation of
cryptographic proofs in both the computational and symbolic models, see [6] for
an overview.
**1.3** **Preliminaries**
In our work, we denote by λ the security parameter. We use Gi to denote additive
cyclic groups of prime order and Pi to denote a generator of Gi. For any element
Q = xPi, we denote with x = dlog(Q) its discrete logarithm. We use a or v
to denote vectors, a∥b for the concatenation of two vectors, and a·b to denote
their inner product. We denote the power set of S with P(S), the i-th element
of a list with L[i], the range {n, . . ., n + l} with [n, n + l], and [1, n] with [n].
A symmetric k-linear group is a pair of groups G1 and G2 together with
an admissible k-linear map e : G1^k → G2. An asymmetric k-linear group is a
sequence of groups G1, . . ., Gk, Gk+1 together with an admissible k-linear map
e : G1 × · · · × Gk → Gk+1. For a k-linear map e : G1 × · · · × Gk → Gk+1, we
call Gk+1 the target group and the other groups Gi source groups. We can further
assume the existence of isomorphisms Gi → Gj between source groups.
A symmetric leveled k-linear group is a sequence of groups G1, . . ., Gk together
with bilinear maps e : Gi × Gj → Gi+j for i, j ∈ [1, k] and i + j ≤ k. We say
that Gn is the group at level n and call Gk the target group. An asymmetric
leveled k-linear group is a collection of groups {GS} for S ∈ P([k]) together with
bilinear maps eS,T : GS × GT → GS∪T for all S ∩ T = ∅.
## 2 Generic Group Models and Symbolic Group Models
In this section, we define a class of generic group models that captures the
previously described group settings. Afterwards, we define a symbolic group
model where instead of computing with (randomly sampled) group elements,
the challenger computes with (fixed) polynomials. We prove that this model is
equivalent to the generic group model up to some usually small error.
**Generic Group Models. A generic group model for a concrete group setting**
captures all operations that an adversary with black-box access can perform.
**Definition 1.** A group setting is a tuple GS = (p, G, Φ, E) where G = {Gi}i∈I
is a set of cyclic groups of prime order p indexed by a totally ordered set I, Φ is
a set of isomorphisms φ : Gi → Gj, and E is a set of maps, where for each e ∈ E,
there is a k s.t. e : Gi1 × . . . × Gik → Gik+1 is an admissible k-linear map.
The generic model for a group setting (p, G, Φ, E) and a distribution D on indexed sets {Li}i∈I of lists of elements of Gi is defined as follows. The challenger
maintains lists L = {Li}i∈I where each list Li contains elements from Gi. The
lists are initialized by sampling from D, and the adversary can apply the group
operations, isomorphisms, and k-linear maps to list elements by providing the
indices of elements as handles. For an operation o : Gi1 × . . . × Gik → Gik+1, the
corresponding oracle takes handles h1, . . ., hk, computes a = o(a1, . . ., ak) for
aj = Lij[hj], appends a to Lik+1 and returns a's handle h = |Lik+1|. Note that
handles are not unique, but the challenger provides an equality oracle to check
if two handles refer to the same group element. A formal definition of the game
appears in the full version.
_Remark 1. As mentioned in Section 1.2, our generic group model closely follows_
Maurer’s model [28]. We provide the adversary with access to the internal state
variables of the challenger via handles, and we assume that the equality queries
are “free”, in the sense that they do not count when measuring the computational
complexity of the adversary.
Example 1. To model an asymmetric leveled k-linear map, we use the index set
I = P([k]), Φ = ∅, and E = {eT,R : GT × GR → GT∪R | T, R ∈ I ∧ T ∩ R = ∅}.
**Definition 2.** For a list of lists L = L1, . . ., Lk of polynomials over Fp[X1, . . ., Xn],
we define the distribution DL by the following procedure. Uniformly sample a point
x ∈ Fp^n and return the list of lists L′ = L′1, . . ., L′k where L′i = [f1(x)Pi, . . .,
f|Li|(x)Pi] for fj = Li[j]. A distribution D is polynomially induced if D = DL for
some L.
Most hardness assumptions in generic group models belong to the following
classes of decisional, computational, or generalized extraction problems stated
with respect to a group setting GS:

**– Decisional problem for DL and DL′:**
Return b ∈ {0, 1} to distinguish the corresponding generic group models.
**– Computational problem for DL, polynomial f, and group index i:**
Return a handle to f(x)Pi, where x is the random point sampled by DL.
**– Generalized extraction problem for DL, n, m, i1, . . ., im, H:**
Return a ∈ Fp^n and handles h1, . . ., hm such that the random point x sampled
by DL satisfies H(x, a, dlog(Li1[h1]), . . ., dlog(Lim[hm])) = 0.
The above classification generalizes the one proposed by Maurer [28]. Precisely,
in addition to decisional and computational assumptions, Maurer considered
“straight” extraction problems (such as discrete logarithm) in which the adversary has to extract the random value x of a handle. Our class of generalized
_extraction problems captures extraction problems like discrete logarithm, but_
also captures problems like the Strong Diffie-Hellman Problem [9].[4] Moreover,
note that our class of generalized extraction problems contains the class of computational problems.
**From Generic to Symbolic Group Models. The symbolic group model for**
a group setting (p, G, Φ, E) and a distribution DL provides the same adversary
4 Set n = 1, m = 0, H(X, a1) = X − a1 for DLOG, and n = m = 1, H(X, a1, Y) =
(X − a1)Y − 1 for SDH.
interface as the corresponding generic group model. The difference is that, internally, the challenger now stores lists of polynomials in Fp[X1, . . ., Xn] where
X1, . . ., Xn are the variables occurring in L. The oracles perform addition, negation, and equality checks in the polynomial ring. To define the polynomial operations corresponding to applications of isomorphisms and n-linear maps, observe
that for all isomorphisms φ there is an a ∈ Fp^× such that φ(gi) = gj^a. We therefore
define the oracle isomφ(h) such that it computes a · Li[h]. Similarly, we define
the oracle mape(h1, . . ., hk) such that it computes a · (Li1[h1] · · · Lik[hk]). We
also define a symbolic version S(E) of a generic winning condition E. For decisional problems and computational problems, the symbolic event is equal to
the generic event, i.e., S(E) = E. For generalized extraction problems, the event
E is translated to checking whether H(X1, . . ., Xn, a, Li1[h1], . . ., Lim[hm]) = 0
holds in the polynomial ring. We denote the symbolic group model for a group
setting GS and a distribution DL with Sym_GS^DL and the corresponding generic
group model with Gen_GS^DL.
**Theorem 1.** Let (p, G, Φ, E) denote a group setting, DL a distribution, A an
adversary performing at most q queries, and E the winning event of a decisional,
computational, or generalized extraction assumption. If d is an upper bound on
the degrees of the polynomials occurring in the internal state of Sym_GS^DL(A) and
S(E), s is the sum of the sizes of the lists in L, and the event S(E) contains at
most e equality tests, then

|Pr[ Gen_GS^DL(A) : E ] − Pr[ Sym_GS^DL(A) : S(E) ]| ≤ (s + q)² · d/(2p) + ed/p

where the probability is taken over the coins of Gen_GS^DL and A.
By applying this theorem, we can therefore analyze the hardness of assumptions in the simpler symbolic model. We note that existing master theorems
usually include a similar step in their proofs. Here we explicitly prove the equivalence of the Gen and Sym experiments. This stronger result is required for our
decidability results.
## 3 Master Theorem for Non-interactive Assumptions
In this section, we state our master theorem for decisional, non-interactive problems. In Section 5, we give a master theorem for interactive assumptions, which
covers generalized extraction problems (and computational ones, per Section 2).
To state our theorem, we first define the completion C(L) of a list L with
respect to the group setting (p, G, Φ, E). This notion will be instrumental to
define the side condition of our master theorem. Intuitively speaking, given a
list L, its completion C(L) is the list of all polynomials that can be computed
by the adversary by applying isomorphisms and maps to polynomials in L.
We compute the completion C(L) of L in two steps. In the first step, we compute the recipe lists {Ri}i∈I using the algorithm given in Figure 2. The elements
of the recipe lists are monomials over the variables Wi,j for (i, j) ∈ I × [|Li|].
foreach i ∈ I : S′i := ∅ ; Si := {Wi,1, . . ., Wi,|Li|}
while S ≠ S′ :
  S′ := S
  foreach e : Gj1 × . . . × Gjn → Gjn+1 ∈ E :
    Sjn+1 := Sjn+1 ∪ {f1 · · · fn | fi ∈ Sji, i ∈ [n]}
  foreach φ : Gi → Gj ∈ Φ : Sj := Sj ∪ Si
foreach i ∈ I : Ri := setToList(Si)

**Fig. 2. Computation of lists of recipes Ri for input lists Li.**
The monomials characterize which products of elements in L the adversary can
compute by applying isomorphisms and maps. The result of the first step is independent of the elements in the lists L and only depends on the lengths of the
lists. In the second step, we compute the actual polynomials from the recipes as

C(L)i = [m1(L), . . ., m|Ri|(L)] for [m1, . . ., m|Ri|] = Ri

where every mi is a monomial over the variables Wi,j and mi(L) denotes the
result of evaluating the monomial mi at the values Wi,j = Li[j].
To ensure that the computation of the recipes terminates, we restrict ourselves
to group settings without cycles. We also assume that the group setting contains
a target group. Formally, for a group setting (p, G, Φ, E), we define the weighted
directed graph G = (V, E) with V = G and E defined as follows. For each
isomorphism Gi → Gj ∈ Φ, there is an edge from Gi to Gj of weight 0. Similarly,
given any Gi1 × · · · × Gin → Gin+1 ∈ E, there are edges from Gij to Gin+1 of
weight 1 for j ∈ [n]. We assume that the graph G contains no loops of positive
weight. Furthermore, we assume there is a unique Gt ∈ V called the target
group, such that from any Gi ∈ V there is a path to Gt and Gt does not have
any outgoing edges.
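As an informal sketch (our own code, not the authors' tool), the recipe computation of Figure 2 can be implemented as a fixpoint over sets of monomials; here a monomial over the variables Wi,j is encoded as a sorted tuple of (i, j) pairs, and termination relies on the acyclicity assumption above:

```python
from itertools import product

def recipes(list_lens, maps, isos):
    """Fixpoint of Fig. 2.  list_lens: {group index: list length};
    maps: [(tuple of source indices, target index)]; isos: [(i, j)] for G_i -> G_j.
    A monomial is a sorted tuple of variables W_{i,j}, encoded as pairs (i, j)."""
    S = {i: {((i, j),) for j in range(n)} for i, n in list_lens.items()}
    changed = True
    while changed:                       # terminates for acyclic group settings
        changed = False
        for srcs, tgt in maps:           # e : G_{j1} x ... x G_{jn} -> G_{jn+1}
            for fs in product(*(sorted(S[j]) for j in srcs)):
                m = tuple(sorted(sum(fs, ())))   # multiply the monomials
                if m not in S[tgt]:
                    S[tgt].add(m)
                    changed = True
        for i, j in isos:                # phi : G_i -> G_j copies monomials over
            if not S[i] <= S[j]:
                S[j] |= S[i]
                changed = True
    return {i: sorted(s) for i, s in S.items()}

# assumed toy example: symmetric bilinear group, |L_1| = 2, e : G_1 x G_1 -> G_2
R = recipes({1: 2, 2: 0}, maps=[((1, 1), 2)], isos=[])
```

For the toy bilinear setting, `R[2]` contains the three pairwise products of the two level-1 handle variables, matching the completion described above.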
**Theorem 2.** Let GS = (p, {Gi}i∈I, Φ, E) denote a group setting, and let DL, DL′
be polynomially-induced distributions such that |Li| = |L′i| for all i ∈ I. Let t
denote the index of the target group, s = Σi∈I |Li|, r = |C(L)t|, and let d denote
an upper bound for the total degrees of the polynomials in the completions of the
lists. If

{a ∈ Fp^r | a · C(L)t = 0} = {a ∈ Fp^r | a · C(L′)t = 0},

then

|Pr[ Gen_GS^DL(A) = 1 ] − Pr[ Gen_GS^DL′(A) = 1 ]| ≤ (s + q)² · d/p

for all adversaries A that perform at most q operations.
Note that deciding the side condition is sufficient for deciding the hardness of
the corresponding decisional problem for a fixed group setting and fixed distributions. Either the side condition is satisfied, or there exists an a ∈ Fp^r that is
included in one of the sets, but not in the other one. In the first case, the distinguishing advantage is upper-bounded by the ϵ given above. In the second case,
we can construct an adversary that distinguishes the two symbolic models with
probability 1, which implies that it distinguishes the corresponding generic models with probability 1 − ϵ. Note that for real-or-random assumptions where the
adversary is given L̂ and must distinguish f from a fresh variable Z in the target
group Gt, our side condition simplifies to Σj=1..r aj · C(L̂)t[j] ≠ f for all a ∈ Fp^r.
This is similar to the independence condition in the BBG master theorem [11].
## 4 Automated Analysis of Non-interactive Assumptions
In this section, we present methods to automatically verify or falsify the hardness
of decisional assumptions. As mentioned earlier, our master theorem is stated
with respect to a fixed group setting and fixed distributions. To consider multiple
group settings or distributions at once, we define a decisional assumption A as a
possibly infinite set of triples (GS, DL, DL′). A is generically hard if the distinguishing probability is upper-bounded by the ϵ of Theorem 2 for all triples in A.
We distinguish between non-parametric assumptions and parametric assumptions. An assumption is non-parametric if only the concrete groups, isomorphisms, and n-linear maps vary, but the structure of the group setting and the
lists L and L′ defining the distributions remain fixed. This captures assumptions
such as “3-lin is hard in all groups with a symmetric 3-linear map”. Conversely,
an assumption is parametric if one or more of these restrictions do not hold.
**4.1** **Non-parametric Assumptions**
We perform the following computations over Z to decide the hardness of a decisional assumption defined by lists L and L′ for all group settings GS with a
given index set and types of isomorphisms and n-linear maps.

1. Initialize the set T of distinguishing tests and the set E of exceptional primes
to the empty set ∅.
2. Compute the completions C(L) and C(L′) and set Lt := C(L)t, L′t := C(L′)t.
3. Compute a generating set K of the Z-module {a ∈ Z^|Lt| | a · Lt = 0} as
follows:
(a) Represent all polynomials g ∈ Lt as vectors v1, . . ., vn and denote by M
the matrix whose row i is vi with respect to the basis monomials(Lt).
(b) Compute the Hermite Normal Form N of M and read off a generating
set K of the left kernel from N and the transformation matrix. Set
E := E ∪ F where F is the set of factors of the pivots of N.
Perform the same steps for L′t to obtain M′ and K′.
4. Check for every k ∈ K if kM′ = 0. If kM′ = c ≠ 0, then set T := T ∪ {k} and
E := E ∪ F where F denotes the set of common factors of c. Perform the
same steps for K′ and M.
5. Compute the distinguishing probability ϵ from the degrees in Lt and L′t.
6. If T is empty, return that the distinguishing probability is upper-bounded by
ϵ except (possibly) for primes in E. If T is nonempty, return that, using
the tests in T, an adversary can distinguish with probability 1 − ϵ except
(possibly) for primes in E.

Note that performing division-free computations over Z allows us to track the
set of exceptional primes, which we return. We have implemented this algorithm
in a tool that takes a group setting and two sequences of group elements as input
and decides if the corresponding decisional assumption is hard, returning ϵ, E,
and the distinguishing tests T (if nonempty).
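To illustrate the idea on a toy instance (a simplified sketch of our own, computing kernels over Q rather than the division-free computation over Z described above), consider DDH in a group with a symmetric pairing e : G1 × G1 → G2. The kernel of the real completion contains the pairing relation 1 · XY − X · Y, which fails on the random completion and is therefore a distinguishing test:

```python
import sympy as sp

X, Y, Z = sp.symbols('X Y Z')

def completion_target(L):
    # completion in the target group for e : G1 x G1 -> G2: all pairwise products
    return [f * g for i, f in enumerate(L) for g in L[i:]]

def kernel(polys):
    # vectors a with a . polys = 0, via the coefficient matrix (over Q, for brevity)
    ds = [sp.Poly(sp.expand(p), X, Y, Z).as_dict() for p in polys]
    monoms = sorted({m for d in ds for m in d})
    M = sp.Matrix([[d.get(m, 0) for m in monoms] for d in ds])
    return M.T.nullspace()

L_real = [sp.Integer(1), X, Y, X * Y]   # DDH "real" distribution
L_rand = [sp.Integer(1), X, Y, Z]       # DDH "random" distribution

C_real, C_rand = completion_target(L_real), completion_target(L_rand)
K = kernel(C_real)
# a relation of the real completion that fails on the random one is a
# distinguishing test -- here the pairing test e(g^x, g^y) = e(g, g^xy)
tests = [a for a in K
         if sp.expand(sum(ai * f for ai, f in zip(a, C_rand))) != 0]
```

The real completion has a one-dimensional kernel while the random one has none, so the single kernel vector is exactly the well-known pairing-based DDH distinguisher.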
**4.2** **Parametric Assumptions**
For parametric decisional assumptions, we restrict ourselves to the real-or-random
case. The approach can also be adapted to handle computational assumptions. We
distinguish parametricity in two dimensions. First, an assumption may be parameterized by range limits l1, . . ., lm (ranging over N) that determine the size of the
adversary input. We use range expressions ∀r ∈ [α, β]. hr, where α and β are polynomials over range limits, to express such assumptions. The polynomials hr can
use the range index r in the exponent or as the index of an indexed variable Xr.
We will denote range expressions with capital letters R. Second, the group setting
of an assumption may be parameterized by an arity k that captures the maximum
number of multiplications that can be performed.
Parametricity in the input size allows us to analyze assumptions such as “l-DHE is hard for all l”. Parametricity in the arity allows us to analyze assumptions such as “2-BDH is hard for all k-linear groups”. Combining both types of
parametricity allows us to analyze assumptions such as “k-lin is hard in k-linear
groups” or “(l, k)-MMDHE is hard for all l and k ≥ 3”. In the following, we will
present two methods that deal with both parametricity in the input size and
parametricity in the arity. The first method assumes a fixed number of random
variables. The second method allows for indexed random variables, but assumes
that the degree of adversary input and challenge is fixed.
**Fixed Number of Variables.** We assume a real-or-random decisional assumption in a (leveled) k-linear group where the challenge polynomial g is in the target
group, and the adversary input is expressed using range expressions R1, . . ., Rn
on the levels λ1, . . ., λn. Here λi is either of the form c or of the form k − c for
a constant c ∈ N. Furthermore, we assume that the assumption uses random
variables X and range limits l. To simplify the presentation, we will use the
notation X^f = X1^f1 · · · Xm^fm. Then the ranges are of the form

Ri = ∀ri,1 ∈ [αi,1, βi,1], . . ., ri,ti ∈ [αi,ti, βi,ti]. X^fi

where every αi,j and βi,j is a polynomial over l and every f ∈ fi is a polynomial over k, l, and ri,1, . . ., ri,ti. The challenge polynomial is of the form
g = Σi=1..w ci X^ui. Using the independence condition derived from Theorem 2, it
follows that the real distribution and the random distribution are indistinguishable
iff there is a monomial X^ui that is not an element of the completion of the Ri.
To check this condition, we proceed in two steps. In the first step, we compute
a single range expression R that denotes the completion of the Ri in the target
group. In the second step, we check for each X^ui whether X^ui ∈ R, by encoding
the required equalities of the exponent-polynomials into a set of diophantine
(in)equalities. We then show that satisfiability checking for such constraints is
undecidable in general. Nevertheless, we identify two decidable fragments and
demonstrate that SMT solvers can handle most instances derived from practical
cryptographic assumptions, even those that are not in the decidable fragments.
If R1, . . ., Rn denote the sets S1, . . ., Sn, then the completion R of R1, . . ., Rn
in the target group must denote the set

⋃_{δ ∈ N^n s.t. Σi=1..n δi·λi = k} S1^δ1 · · · Sn^δn

where S·S′ = {s·s′ | s ∈ S ∧ s′ ∈ S′} and S^δ = {s1 · · · sδ | s1 ∈ S ∧ . . . ∧ sδ ∈ S}. We
therefore define multiplication of range expressions with distinct range indices as

(∀r1 ∈ [α1, β1], . . ., rt ∈ [αt, βt]. X^f)·(∀r′1 ∈ [α′1, β′1], . . ., r′t′ ∈ [α′t′, β′t′]. X^f′)
= ∀r1 ∈ [α1, β1], . . ., rt ∈ [αt, βt], r′1 ∈ [α′1, β′1], . . ., r′t′ ∈ [α′t′, β′t′]. X^(f+f′).
To define the δ-fold product of a range expression, we restrict ourselves to exponent-polynomials that can be expressed as f̂ + f̃ such that f̂ = Σj=1..t rj φj(l, k) for polynomials φj in Z[l, k] and such that f̃ is a polynomial in Z[l, k]. The δ-fold product
is then defined as

(∀r1 ∈ [α1, β1], . . ., rt ∈ [αt, βt]. X^(f̂+f̃))^δ
= ∀r1 ∈ [δα1, δβ1], . . ., rt ∈ [δαt, δβt]. X^(f̂+δf̃).

Given range expressions R1, . . ., Rn, we can now compute R by introducing fresh
variables δ1, . . ., δn, computing the range expressions Ri^δi, and then computing
the product of these range expressions.
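The two operations above can be sketched as data-structure manipulations (a sketch of our own; we assume a range expression is represented as a triple of index symbols, bounds, and exponent-polynomial, with sympy for the symbolic arithmetic):

```python
import sympy as sp

def mult(R1, R2):
    # product of range expressions with distinct indices: concatenate the index
    # lists and bounds, and add the exponent-polynomials
    (idx1, b1, f1), (idx2, b2, f2) = R1, R2
    return (idx1 + idx2, b1 + b2, sp.expand(f1 + f2))

def power(R, delta):
    # delta-fold product: split f = f_hat + f_tilde with f_tilde index-free;
    # the bounds and f_tilde scale by delta, the index-linear part f_hat stays
    idx, bounds, f = R
    f_tilde = f.subs({ri: 0 for ri in idx})
    f_hat = sp.expand(f - f_tilde)
    return (idx,
            [(delta * a, delta * b) for a, b in bounds],
            sp.expand(f_hat + delta * f_tilde))

r, l, d = sp.symbols('r l delta')
R = ([r], [(0, l)], r + 1)   # the range expression  forall r in [0, l]. X^(r+1)
Rd = power(R, d)             # yields  forall r in [0, d*l]. X^(r+d)
```

The `power` result matches the definition above: the bound [0, l] becomes [0, δl] and the exponent r + 1 becomes r + δ.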
The remaining task is now to check if

X^u ∈ (∀r1 ∈ [α1, β1], . . ., rt ∈ [αt, βt]. X^f) = R

where u ∈ Z[l, k]^m, αi, βi ∈ Z[δ, l], f ∈ Z[l, k, r1, . . ., rt]^m, and Σi=1..n δi · λi = k.
To achieve this, we compute the following set of integer constraints that is satisfiable iff X^u ∈ R:

  αi ≤ ri ≤ βi          for i ∈ [1, t]
  0 ≤ δi                for i ∈ [1, n]
  Σi=1..n δi · λi = k
  ui = fi               for i ∈ [1, m]
If we allow for both types of parametricity, it is possible to reduce Hilbert’s
10th problem to the generic hardness of cryptographic assumptions expressed as
previously described. This yields the following theorem.
**Theorem 3. Deciding hardness of parametric assumptions with a fixed number**
_of variables in the generic group model is undecidable, even if all exponent-_
_polynomials are linear in range limits, range indices, and the arity._
However, for a restricted class of assumptions, the problem is decidable.
**Theorem 4. For all parametric assumptions with a fixed number of variables**
_such that all exponent-polynomials fi,j and range bounds αi,j and βi,j in the_
_input are linear, and either (1) the arity k is fixed or (2) the assumption does_
_not contain range limits li and the input exponent-polynomials do not use k,_
_deciding hardness in the generic group model is decidable._
_Proof (Sketch). In both cases, we transform the constraint system into a sys-_
tem of linear constraints. Note that the first type of constraint is already linear.
In the first case, the arity k is fixed and we can eliminate the variables δi by
performing a case distinction since there are only finitely many possible values.
Then, the constraints of the first and fourth type are constant and the constraints of the second and third type are linear. If there are no range limits,
then the range bounds are constants and we can eliminate the range indices
by expanding all range expressions into finite sets of monomials. Then the constraints of the second type are constant and we can linearize the constraints of
the last type since λi is either a constant c or of the form k − _c. For constraints_
of the third type, every ui is a linear polynomial in Z[k] and every fi is a linear
polynomial in Z[δ, k].
We have implemented this method in our tool and use Z3 [16] to check the
constraints. Our experiments confirm that Z3 can prove most assumptions taken
from the literature, even those outside the decidable fragment.
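As a toy illustration of the resulting integer constraints (our own example, not the paper's tool; the input exponent set {0} ∪ [1, l] ∪ [l+2, 2l] below is the one used in l-BDHE-style assumptions and is assumed here for concreteness), the side condition reduces to an (in)feasibility check. The tool hands such systems to an SMT solver; for small range limits we can simply enumerate:

```python
def admissible_exponent(r, l):
    # exponents r of the adversary's input powers g^(x^r): {0} U [1, l] U [l+2, 2l]
    return r == 0 or 1 <= r <= l or l + 2 <= r <= 2 * l

# producing the target-group challenge y * x^(l+1) would require pairing g^y with
# an input power g^(x^r) with r = l + 1, so the constraint system
#     admissible_exponent(r, l)  and  r == l + 1
# must be infeasible; we verify this by enumeration for small range limits l
infeasible = all(not admissible_exponent(l + 1, l) for l in range(1, 100))
```

An SMT solver proves the same infeasibility for all l at once from the linear constraints, which is the case covered by Theorem 4.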
**Indexed Random Variables.** For the case of indexed random variables, we
have developed an (incomplete) constraint solving procedure that deals with assumptions parametric in the arity k and a range limit l. Let M denote monomials
built from indexed variables and M′ denote monomials built from non-indexed
variables. Our procedure supports all assumptions where the challenge is of the
form Σi∈[0,l] M·M′ and the input consists of ranges ∀i ∈ [0, l]. M·M′ and non-indexed monomials M′.
## 5 Interactive Assumptions
In this section, we present our methods for the analysis of interactive assumptions
such as LRSW [27]. To simplify the presentation, we focus on assumptions where
exactly one additional oracle O is provided to the adversary and the problem
is a generalized extraction problem. In the remainder, we fix a group setting
GS = (p, {Gi}i∈I, Φ, E) and a distribution DL. We use X to denote the variables
occurring in L and x to denote the point sampled by DL.

**Generalizing Gen and Sym.** Our first step is generalizing the generic group
and symbolic group models to the interactive setting. Let q′, n, m, l denote positive integers, let i ∈ I^l, and let F denote an l-dimensional vector of polynomials
in Fp[X, Y1, . . ., Ym, A1, . . ., An]. We say O is defined by (q′, n, m, l, i, F) if O
answers at most q′ queries and answers a query for parameter a ∈ Fp^n by sampling
a point y ∈ Fp^m and returning handles to the group elements Fj(x, y, a)·Pij ∈ Gij
for j ∈ [l], where Pij is the generator of Gij. Similarly, the symbolic version of O
answers queries for a ∈ Fp^n by choosing m fresh variables Y, adding the polynomials Fj(X, Y, a) to the lists Lij for j ∈ [l], and returning their handles. To formalize winning conditions of interactive assumptions, we extend the previously
given definition of generalized extraction problem with inequalities. Concretely,
the winning condition is formalized by polynomials H1, . . ., Hd1, G1, . . ., Gd2
that capture the required equalities and inequalities for the field elements b
and the handles h returned by the adversary. These polynomials are elements of
Fp[X, (Yi)i∈[q′], (Ai)i∈[q′], B, Z]. Intuitively, X and Yi model random variables
sampled initially and by O, Ai and B model parameters chosen by the adversary, and Z models group elements referenced by the handles h. An adversary
that queries the oracle with a1, . . ., aq′ and returns b and h wins if the following
conditions are satisfied for yj sampled in the j-th oracle call:

Hj(x, y1, . . ., yq′, a1, . . ., aq′, b, dlog(Li1[h1]), . . ., dlog(Lim[hm])) = 0, j ∈ [d1]
Gj(x, y1, . . ., yq′, a1, . . ., aq′, b, dlog(Li1[h1]), . . ., dlog(Lim[hm])) ≠ 0, j ∈ [d2]

Since Theorem 1 captures generalized extraction problems (with inequalities)
in such an interactive setting, we can analyze such assumptions in the symbolic
group model. As mentioned earlier, the symbolic version of the winning event can
be obtained by plugging in the polynomials Lij[hj] for the variables Zj instead
of using the discrete logarithm.
**Interactive Master Theorem.** To define the interactive master theorem, we
introduce the notion of parametric completion. The parametric completion of L
with respect to a group setting GS and an oracle O defined by (q′, n, m, l, i, F)
is a family Li of lists of polynomials in Fp[X, Y, A]. Here, the variables Yu,v
range over u ∈ [m] and v ∈ [q′] and the variables Au,v range over u ∈ [n] and
v ∈ [q′]. They model the random values sampled by O and the parameters given
to O. The parametric completion first extends the lists Lij with

{Fj(X, Y1,v, . . ., Ym,v, A1,v, . . ., An,v) | v ∈ [q′]}

for j ∈ [l]. Then, it performs the previously defined completion with respect to
the isomorphisms and n-linear maps in GS. We denote the result with C^O(L).
To state our interactive master theorem, we exploit that in the symbolic
model, we can translate a generalized extraction problem to an equivalent generalized extraction problem where the adversary returns only elements in Fp and
no handles. Let C^O(L) = Li1, . . ., Lil denote the lists in the completion. Then,
we can translate H(X, (Yi)i∈[q′], (Ai)i∈[q′], B, Z1, . . ., Zl) to

H′(X, Y, A, B, C1, . . ., Cl) = H(X, Y, A, B, C1 · Li1, . . ., Cl · Lil).

The two problems are equivalent since the adversary can return a handle to a
polynomial f in Lij if and only if f is in the span of Lij.
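As a small worked example of this handle-elimination (our own sympy sketch, using the SDH condition H(X, a, Y) = (X − a)Y − 1 from footnote 4 with an assumed input list L = [1, X] and no oracle queries):

```python
import sympy as sp

X, a, c0, c1 = sp.symbols('X a c0 c1')

# the adversary's handle must lie in the span of the input list L = [1, X],
# so substitute Y -> c0*1 + c1*X in the SDH condition H(X, a, Y) = (X - a)*Y - 1
H_prime = sp.expand((X - a) * (c0 + c1 * X) - 1)

# symbolic hardness: no choice of (a, c0, c1) makes H' the zero polynomial in X;
# equating all coefficients of H' (as a polynomial in X) to zero is unsolvable
eqs = [sp.Eq(coeff, 0) for coeff in sp.Poly(H_prime, X).all_coeffs()]
solutions = sp.solve(eqs, [a, c0, c1])
```

An empty solution set means the winning condition cannot be satisfied symbolically, so the (toy) problem is symbolically hard in the sense of the side condition below.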
**Theorem 5.** Let GS denote a group setting and let DL denote a polynomially-induced distribution. Consider the (n̂, m̂, j, H, G)-extraction problem in the
generic and symbolic group models for GS, DL, and the oracle defined by
(q′, n, m, l, i, F). Let H′ and G′ denote the translations of H and G with respect
to this model that do not use handles. Then the problem is symbolically hard if
there exist no vectors a, b, and c over Fp such that

(∧j=1..|H′| H′j(X, Y, a, b, c) = 0) ∧ (∧j=1..|G′| G′j(X, Y, a, b, c) ≠ 0).

In this case, the winning probability for the generic version is upper-bounded by
(s + q + q′·l)² · d/(2p) + ed/p where p is the group order, s is the sum of the sizes
of the lists in L, q the number of queries to the group oracles, q′ the number
of queries to O, d an upper bound on the degrees (in X and Y) of the polynomials
stored by the corresponding symbolic model and occurring in H′ and G′, and
e = |H′| + |G′|.
In the proof of this theorem, we use Theorem 1 to switch to the symbolic model.
In the symbolic model, the winning condition is equivalent to our side condition.
**Automated Analysis.** We have developed two methods for the automated
analysis of interactive assumptions. Our first method deals with the bounded
case, i.e., where the number of oracle queries q′ is fixed. Informally, we use
Gröbner basis techniques and SMT solvers to prove that there is (1) no solution
for all primes, (2) no solution for all primes except for some bad primes, (3) a
solution over the rationals which can be converted into an attack for almost all
primes, or (4) a solution over C. Even though we only encountered cases (1)–(3) in
practice, case (4) is the reason for the incompleteness of our algorithm, since the
existence of a solution over C does not imply the existence of solutions over Fp.
In the unbounded case, we perform most steps symbolically to obtain results
that are valid for all possible values of q′. Concretely, we encode the hardness
of the assumption into a formula in the theory of non-linear arithmetic over C
with uninterpreted function symbols, which we use to encode the parameters used in
queries and returned by the adversary. We use Z3 to prove the unsatisfiability of
these formulas, exploiting its support for nonlinear arithmetic over the reals [26]
by encoding complex numbers as pairs of reals. In our experiments, Z3 can prove
the unsatisfiability of formulas obtained from most valid assumptions in seconds.
**Acknowledgements. This work is supported in part by ONR grant N00014-**
12-1-0914, Madrid regional project S2009TIC-1465 PROMETIDOS, and Spanish
projects TIN2009-14599 DESAFIOS 10 and TIN2012-39391-C04-01 Strongsoft.
Additional support for Mitchell, Scedrov, and Fagerholm is from the AFOSR
MURI “Science of Cyber Security: Modeling, Composition, and Measurement”
and from NSF Grants CNS-0831199 (Mitchell) and CNS-0830949 (Scedrov and
Fagerholm). The research of Fiore and Schmidt has received funds from the
European Commission’s Seventh Framework Programme Marie Curie Cofund
Action AMAROUT II (grant no. 291803).
## References
1. Abadi, M., Rogaway, P.: Reconciling two views of cryptography (the computational
soundness of formal encryption). Journal of Cryptology 20(3), 395 (2007)
2. Abdalla, M., Pointcheval, D.: Interactive Diffie-Hellman assumptions with applications to password-based authentication. In: Patrick, A.S., Yung, M. (eds.) FC
2005. LNCS, vol. 3570, pp. 341–356. Springer, Heidelberg (2005)
3. Ateniese, G., Camenisch, J., de Medeiros, B.: Untraceable RFID tags via insubvertible encryption. In: Atluri, V., Meadows, C., Juels, A. (eds.) ACM CCS 2005,
pp. 92–101. ACM Press (November 2005)
4. Barthe, G., Fagerholm, E., Fiore, D., Mitchell, J., Scedrov, A., Schmidt, B.: Automated analysis of cryptographic assumptions in generic group models. Cryptology
ePrint Archive 2014 (2014)
5. Benson, K., Shacham, H., Waters, B.: The k-BDH assumption family: Bilinear
map cryptography from progressively weaker assumptions. In: Dawson, E. (ed.)
CT-RSA 2013. LNCS, vol. 7779, pp. 310–325. Springer, Heidelberg (2013)
6. Blanchet, B.: Security protocol verification: Symbolic and computational models.
In: Degano, P., Guttman, J.D. (eds.) POST 2012. LNCS, vol. 7215, pp. 3–29.
Springer, Heidelberg (2012)
7. Boldyreva, A., Gentry, C., O’Neill, A., Yum, D.H.: Ordered multisignatures and
identity-based sequential aggregate signatures, with applications to secure routing.
In: Ning, P., di Vimercati, S.D.C., Syverson, P.F. (eds.) ACM CCS 2007, pp. 276–
285. ACM Press (October 2007)
8. Boldyreva, A., Gentry, C., O’Neill, A., Yum, D.H.: Ordered multisignatures and
identity-based sequential aggregate signatures, with applications to secure routing.
Cryptology ePrint Archive, Report 2007/438 (2007) (revised February 21, 2010)
9. Boneh, D., Boyen, X.: Short signatures without random oracles. In: Cachin, C.,
Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 56–73. Springer,
Heidelberg (2004)
10. Boneh, D., Boyen, X., Goh, E.-J.: Hierarchical identity based encryption with constant size ciphertext. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494,
pp. 440–456. Springer, Heidelberg (2005)
11. Boneh, D., Boyen, X., Goh, E.-J.: Hierarchical identity based encryption with constant size ciphertext. Cryptology ePrint Archive, Report 2005/015 (2005)
12. Boneh, D., Franklin, M.: Identity-based encryption from the weil pairing. In: Kilian,
J. (ed.) CRYPTO 2001. LNCS, vol. 2139, pp. 213–229. Springer, Heidelberg (2001)
13. Boneh, D., Gentry, C., Waters, B.: Collusion resistant broadcast encryption with
short ciphertexts and private keys. In: Shoup, V. (ed.) CRYPTO 2005. LNCS,
vol. 3621, pp. 258–275. Springer, Heidelberg (2005)
14. Boyen, X.: The uber-assumption family. In: Galbraith, S.D., Paterson, K.G. (eds.)
Pairing 2008. LNCS, vol. 5209, pp. 39–56. Springer, Heidelberg (2008)
15. Bresson, E., Lakhnech, Y., Mazar´e, L., Warinschi, B.: A generalization of DDH
with applications to protocol analysis and computational soundness. In: Menezes,
A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 482–499. Springer, Heidelberg (2007)
16. de Moura, L., Bjørner, N.: Z3: An efficient SMT solver. In: Ramakrishnan, C.R.,
Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg
(2008)
17. Escala, A., Herold, G., Kiltz, E., R`afols, C., Villar, J.: An algebraic framework
for Diffie-Hellman assumptions. In: Canetti, R., Garay, J.A. (eds.) CRYPTO 2013,
Part II. LNCS, vol. 8043, pp. 129–147. Springer, Heidelberg (2013)
-----
112 G. Barthe et al.
18. Freeman, D.M.: Converting pairing-based cryptosystems from composite-order
groups to prime-order groups. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS,
vol. 6110, pp. 44–61. Springer, Heidelberg (2010)
19. Garg, S., Gentry, C., Sahai, A., Waters, B.: Witness encryption and its applications.
In: Boneh, D., Roughgarden, T., Feigenbaum, J. (eds.) 45th ACM STOC, pp. 467–
476. ACM Press (ACM Press)
20. Gjøsteen, K., Thuen, Ø.: Password-based signatures. In: Petkova-Nikova, S., Pashalidis, A., Pernul, G. (eds.) EuroPKI 2011. LNCS, vol. 7163, pp. 17–33. Springer,
Heidelberg (2012)
21. Halevi, S.: A plausible approach to computer-aided cryptographic proofs. Cryptology ePrint Archive, Report 2005/181 (2005)
22. Hohenberger, S., Sahai, A., Waters, B.: Full domain hash from (Leveled) multilinear
maps and identity-based aggregate signatures. In: Canetti, R., Garay, J.A. (eds.)
CRYPTO 2013, Part I. LNCS, vol. 8042, pp. 494–512. Springer, Heidelberg (2013)
23. Hwang, J.Y., Lee, D.H., Yung, M.: Universal forgery of the identity-based sequential aggregate signature scheme. In: Li, W., Susilo, W., Tupakula, U.K., SafaviNaini, R., Varadharajan, V. (eds.) ASIACCS 2009, Mar. 2009, pp. 157–160. ACM
Press (March 2009)
24. Jager, T., Rupp, A.: The semi-generic group model and applications to pairingbased cryptography. In: Abe, M. (ed.) ASIACRYPT 2010. LNCS, vol. 6477, pp.
539–556. Springer, Heidelberg (2010)
25. Jager, T., Schwenk, J.: On the equivalence of generic group models. In: Baek, J.,
Bao, F., Chen, K., Lai, X. (eds.) ProvSec 2008. LNCS, vol. 5324, pp. 200–209.
Springer, Heidelberg (2008)
26. Jovanovi´c, D., de Moura, L.: Solving non-linear arithmetic. In: Gramlich, B., Miller,
D., Sattler, U. (eds.) IJCAR 2012. LNCS, vol. 7364, pp. 339–354. Springer, Heidelberg (2012)
27. Lysyanskaya, A., Rivest, R.L., Sahai, A., Wolf, S.: Pseudonym systems (Extended
abstract). In: Heys, H.M., Adams, C.M. (eds.) SAC 1999. LNCS, vol. 1758, pp.
184–199. Springer, Heidelberg (2000)
28. Maurer, U.M.: Abstract models of computation in cryptography. In: Smart, N.P.
(ed.) Cryptography and Coding 2005. LNCS, vol. 3796, pp. 1–12. Springer, Heidelberg (2005)
29. Maurer, U.M., Wolf, S.: Diffie-Hellman oracles. In: Koblitz, N. (ed.) CRYPTO
1996. LNCS, vol. 1109, pp. 268–282. Springer, Heidelberg (1996)
30. Naor, M.: On cryptographic assumptions and challenges. In: Boneh, D. (ed.)
CRYPTO 2003. LNCS, vol. 2729, pp. 96–109. Springer, Heidelberg (2003)
31. Nechaev, V.I.: Complexity of a determinate algorithm for the discrete logarithm.
Mathematical Notes 55(2), 165–172 (1994)
32. Okamoto, T., Takashima, K.: Fully secure functional encryption with general relations from the decisional linear assumption. In: Rabin, T. (ed.) CRYPTO 2010.
LNCS, vol. 6223, pp. 191–208. Springer, Heidelberg (2010)
33. Shoup, V.: Lower bounds for discrete logarithms and related problems. In: Fumy,
W. (ed.) EUROCRYPT 1997. LNCS, vol. 1233, pp. 256–266. Springer, Heidelberg
(1997)
34. Stein, W., et al.: Sage Mathematics Software (Version 5.12). The Sage Development
[Team (2013), http://www.sagemath.org](http://www.sagemath.org)
35. Szydlo, M.: A note on chosen-basis decisional diffie-hellman assumptions. In: Di
Crescenzo, G., Rubin, A. (eds.) FC 2006. LNCS, vol. 4107, pp. 166–170. Springer,
Heidelberg (2006)
-----
## Maximizing the Electricity Cost-Savings for Local Distribution System Using a New Peak-Shaving Approach Based on Mixed Integer Linear Programming
**Hossam Mosbah *** **, Eduardo Castillo Guerra and Julian L. Cardenas Barrera**
Department of Electrical & Computer Engineering, University of New Brunswick (UNB),
Fredericton, NB E3B 5A3, Canada
*** Correspondence: hmosbah@unb.ca**
**Citation:** Mosbah, H.; Guerra, E.C.; Barrera, J.L.C. Maximizing the Electricity Cost-Savings for Local Distribution System Using a New Peak-Shaving Approach Based on Mixed Integer Linear Programming. _Electronics_ **2022**, _11_, 3610. [https://doi.org/10.3390/electronics11213610](https://doi.org/10.3390/electronics11213610)

Academic Editor: Jahangir Hossain

Received: 21 September 2022; Accepted: 2 November 2022; Published: 4 November 2022
**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license](https://creativecommons.org/licenses/by/4.0/).
**Abstract: The objective of this study is to perform peak load shaving at a virtual power plant (VPP)**
to maximize the electricity cost-saving for local distribution companies (LDCs) while satisfying
the necessary operational constraints. It can be achieved by implementing an efficient algorithm
to control the conservation voltage reduction technique (CVR) with embedded energy resources
(EERs) to optimize electricity costs during peak hours. EERs consist of distributed energy resources
(DERs) such as solar and diesel generators and energy storage systems (ESSs) such as utility-scale and
residential batteries. An objective function of mixed integer linear programming is formulated as the
electricity cost function. Different operational constraints of EERs are formulated to solve the peak
shaving optimization problem. The proposed algorithm is tested using data from a real Australian
power distribution network. This paper discusses four cases to demonstrate the performance and
economic benefits of the control algorithm. Each of these cases illustrates how EERs contribute
differently each year, month, and day. Results showed that the proposed algorithm offers significant
cost savings and can shave up to three daily peaks.
**Keywords: energy storage system; distributed energy resources; virtual power plant; optimization; peak load shaving; peak load leveling; demand response; solar power**
**1. Introduction**
The goal of peak load shaving is to flatten the load curve by reducing the amount of
load and shifting it to lower load periods. Peak shaving benefits both customers and utilities.
Utility companies can obtain a significant cost saving by reducing consumption when
electricity charge rates are relatively high, which results in lower electricity bills for their
customers [1]. Chao Lu in [2] proposed both charging and discharging control models of
battery energy system storage (BESS). These two models were established with two different
optimization objectives. The first objective function is to reduce peak load demand and the
second is to minimize the daily load variance. The authors also considered the fluctuation
of penalty cost in the objective functions. A rolling load forecasting technique was used to
improve the optimization performance. A combination of CVR and EV2G reactive power
was investigated in [3] to reduce peak load demand while maintaining the voltage profile
within limits. A combination of CVR with EV2G reactive power is assessed in three modes:
with no CVR, CVR standalone, and CVR with EV2G reactive power. The technique is
verified using a modified IEEE 13 node. Simulated results indicate that the CVR standalone
operation reduced peak load demand by 2.60%. However, CVR standalone mode at deeper
levels of voltage reduction leads to violations of the minimum node voltage limit and
higher system losses. Thus, CVR with EV2G coordinated operation is very helpful to
maintain feeder voltage profiles within limits and reduce system losses, even at the deepest
levels of voltage reduction. Additionally, the simulation results indicate that a combination
of CVR and EV2G mode performs better than CVR standalone operation in terms of peak
power shaving, voltage profile improvement, and loss reduction. An integrated scheme
combining conservation voltage reduction (CVR) and intelligent photovoltaic inverter
control functions is proposed in [4] to reduce substation demand more effectively than
with only CVR. IEEE-123 (medium-size) and IEEE-8500 (large-size) unbalanced three-phase
distribution systems are considered for evaluating the proposed scheme in conjunction
with a voltage-dependent load model. As a result, a higher demand reduction is achieved
during the more profound voltage reduction range, which keeps the distribution network
reliable and efficient. Chowdhury in [5] studied the effect of combining PV with battery
energy storage in standalone mode. Southeastern and western U.S. peak demand curves
were reshaped using PV and battery energy storage. Initially, the PV power is used to
charge the battery, and then after the battery is fully charged, it is used to supply the grid.
Different photovoltaic array orientations are used to demonstrate substantial savings on
the size of the battery storage system necessary to reduce peak loads. The results showed
that Western U.S. utilities saved a significant amount of battery capacity compared to
southeastern utilities. These are 43% for the south-facing scheme compared to 39% for
the two-axis tracking scheme. Furthermore, southeastern utilities have somewhat smaller savings: 18% for the two-axis tracking scheme compared to 13% for the south-facing array. The authors in [6] proposed a decision tree-based technique to reduce peak load in residential networks. PV arrays, battery storage,
and coordinated electric vehicle management were utilized with the technique. Smart
meters were implemented to read the residential load in real time, allowing the algorithm
to take the necessary action. The proposed algorithm demonstrated 96% reduction in the
peak demand with 74% load factor. Bidirectional V2G services are proposed in [7–9] for
peak load shaving. These services can be achieved with active power support during peak
hours. The EV system operates by charging the EV during off-peak hours and injecting
extra EV energy into the power grid during peak hours. V2G can provide reactive power for
grid voltage regulation in addition to active power. Furthermore, two different results were
obtained. Firstly, the maximum peak shaving value at 10:00 a.m. was approximately 10% of
the general power load. Secondly, the maximum peak shaving from 2:00 p.m. to 4:00 p.m.
was around 9% of the maximum power load. An integrated battery storage system, a PV
system, a heat pump system, a thermal storage system, and an electrical storage system
was proposed by Baniasadi et al. [10]. A hybrid system is used in a residential building
to optimize the real-time energy storage systems. Particle swarm optimization (PSO) is
implemented to mitigate both daily electricity and life cycle costs of the smart building.
The Min-Max model predictive controller is then used to minimize electricity costs for end
users by managing the energy flows of storage systems. A demand response technique
is implemented to optimally control HP operation and battery charge/discharge actions.
The controller adjusts the flow of water in the storage tank to meet designated thermal
energy requirements by controlling HP operation. Additionally, the battery’s power flow is
controlled to minimize electricity costs during peak-load hours. The proposed methods
reduced annual electricity costs by 80% and life cycle costs by over 42%. The authors of [11]
presented a smart grid project that aims at reaching 15% of peak load reduction. The project
includes both residential and commercial loads, as well as 230 PV panels equipped with
large-scale utility storage which utilizes two technologies to provide smoothing capacity of
0.5 MW and storage capacity of 1 MWh. The GridLAB-D software was primarily used for
modeling the proposed system. Vanhoudt et al. [12] used a virtual heat pump equipped
with PV panels or a small wind turbine to limit the peak load on a single residential
building. A second goal was to reduce the curtailment of renewable energy installations.
This was achieved by switching on the heat pump whenever the local renewable energy
source produces energy (maximizing self-consumption of renewable energy). This reduced
the overall use of gray electricity from the grid. An active heat pump was controlled by a
market-based multi-agent system (MAS) and compared to conventional heat-driven control
of the heat pump. The comparison showed the MAS actively controlled heat pump is able
to lower the peak load from 2% to 5% on the coldest week and 17% for an average week.
The Binary Particle Swarm Optimization algorithm (BPSO) was proposed by Sepulveda et al. [13] to schedule the power consumption of a domestic electric water heater. The algorithm is used to maximize the level of customer comfort while minimizing peak load
demand. Matlab is used to simulate the data collected from 200 households by smart
meters to test the performance of the demand response. Authors of [14] proposed a peak
shaving mechanism that takes into account the interests of utility companies as well as
their customers. The energy model and the price model were both employed to optimally
schedule individual water heaters. The energy model allowed for a minimum of electricity
consumption for water heaters while maintaining user comfort.
An economic analysis was conducted by Martins et al. [15] to determine the optimal
sizing and design for BESS based on monthly and annual billing. An analysis of a case
study showed that monthly billing reduces battery aging as the number of cycles increases.
Furthermore, the results show that batteries can shorten the payback period when used for
large industrial loads in peak shaving applications. Cheng et al. [16] used mixed integer
linear programming (MILP) to optimize the scheduling problem for peak-shaving hydropower. The proposed technique is validated using six cascaded hydropower reservoirs
along the Lancang River in China. A comparison is made with traditional peak-shaving
methods that require determining peak-shaving order. A model is tested from an engineering perspective to determine its efficiency and practicality. A new short-term peak-shaving
method was introduced by Liao et al. [17] that took into account load characteristics and
water spillage to address modeling, solving, and water spillage treatment issues associated
with HSCHPs during the wet season. A fuzzy cluster analysis is used to identify the valley
periods of the daily load curve to determine when more water should be released. The best
WSRs are determined by solving a mixed-integer linear programming model linearized by
special ordered sets of type two. It is demonstrated that the proposed method can achieve
a reasonable peak-shaving effect without significantly reducing power generation or introducing an additional water spill. A vehicle-to-buildings/grid (V2B/V2G) system was
considered simultaneously by the authors in [18] for peak shaving and frequency regulation
using a combined multi-objective optimization strategy that considered battery state of
charge (SoC), EV battery degradation, and EV driving scenarios. Study objectives included
achieving superior economic benefits within controlled SOCs. The authors in [19] developed a control algorithm for a biomass-based micro combined heat and power (mCHP)
plant aimed at reducing electricity consumption. Two scenarios of the mCHP’s operation,
namely with and without the control strategy, are discussed. EnergyPRO software was
used to simulate mCHP operation in this study. When power is overproduced due to
low demand, excess power is redirected into heat generation, and vice versa. The authors
concluded that the proposed mCHP system covers the household’s total power demand
during the morning peak and reduces the evening peak by up to 71%. The authors in [20]
proposed an optimal energy management algorithm (OEMA) to reduce peak load by
scheduling EV charging and discharging with PV system, RES, and ESS. The case study
focuses on a university campus with EVs, solar panels, and an energy storage system (ESS),
in addition to an educational building that has laboratories and a smart parking lot with
100 charging stations. Simulated results indicated that immediate charging impacted the
building’s power consumption significantly. In contrast, scheduling EV charging with
the help of the PV system and ESS decreased the building’s on-peak power consumption
while minimizing EV charging costs. The work in [21] considered a model predictive control-based multi-objective optimization for a hybrid energy storage system. This model consists of a PV system, a battery, and a combined heat pump/heat
storage device. The goal was to minimize operation costs and reduce power exchanged
with the electrical grid while maintaining user comfort. As a result, a reduction of 8 to 88%
could be achieved in PV grid feed-in depending on PV capacity. Further benefits can be
achieved by MPC of multiple components such as PV/battery/heat pump systems or by
controlling air source heat pumps.
Prior studies have not addressed the capability of their techniques to shave more than one peak per day, nor have they provided an economic analysis. Therefore, this paper aims to illustrate the
capability of the control algorithm to shave more than one peak per day, followed by an
economic analysis of peak load shaving.
Multiple peaks can be attributed to two causes:
- Weather conditions could contribute to multiple peaks on the same day. Customers heavily use devices such as EWH, HP, BBH, and ETS during cold weather, so the resulting increase in consumption can produce several peaks.
- The second cause is energy storage systems. When batteries are discharged during peak times, thousands of homes draw less utility power. However, if those batteries are all recharged during the same off-peak period, the recharge itself can create additional peaks.
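Both situations show up as multiple threshold crossings in a daily load profile. As a rough, hypothetical illustration (the paper does not give code, and the profile and threshold below are invented numbers), a helper that finds the distinct peak periods in a day:

```python
def peak_periods(load, threshold):
    """Return (start, end) index pairs for contiguous runs where load > threshold."""
    periods = []
    start = None
    for i, p in enumerate(load):
        if p > threshold and start is None:
            start = i                          # entering a peak period
        elif p <= threshold and start is not None:
            periods.append((start, i - 1))     # leaving a peak period
            start = None
    if start is not None:                      # peak runs to the end of the day
        periods.append((start, len(load) - 1))
    return periods

# Hourly profile with a morning and an evening peak (invented numbers)
load = [3, 3, 4, 4, 5, 6, 8, 9, 8, 6, 5, 5,
        5, 5, 6, 7, 8, 9, 9, 7, 5, 4, 3, 3]
print(peak_periods(load, threshold=7))  # two distinct daily peak periods
```

A day with more than one returned period is exactly the multi-peak case the control algorithm is designed to handle.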
_Contribution and Paper Organization_
An effective, fast, and beneficial control algorithm for peak load shaving is presented
in this paper. Mixed integer linear programming (MILP) is formulated at a virtual power
plant level (VPP) to perform peak load shaving in order to minimize the total electricity
cost including energy and demand charge costs for local distribution companies (LDCs).
The embedded energy resources are optimized to reduce the load demand during peak
hours, resulting in increased electricity cost savings. The EERs consist of distributed energy
resources and energy storage systems. Distributed energy resources include solar and diesel
generators which provide electricity during peak hours to reduce customers’ consumption.
Energy storage systems such as utility-scale and residential batteries are discharged to
reduce the reliance on utility power during peak periods. Additionally, demand response
is applied during peak hours to ask customers to limit the use of their EWH, HP, BBH,
and ETS devices. Four different cases are discussed to illustrate the performance and the
economic benefit of peak load shaving mechanism for LDCs. The control algorithm proved
to be capable of shaving up to three daily peaks and delivering cost savings of over AUD 600,000 a year. Moreover, the control algorithm offers direct benefits to utilities such
as stability, reliability, and generation costs.
The rest of the paper is organized as follows: Section 2 presents an introduction of VPP.
Section 3 presents the calculations of the initial power threshold. Section 4 discusses the
system model of ESSs and DERs presented in this paper. Section 5 presents the mathematical
model of the proposed algorithm. Section 6 presents the results of the algorithm. Finally,
the paper presents brief conclusions in Section 7.
**2. Virtual Power Plant (VPP)**
The demand for electricity in developed countries is increasing, while the construction of
new large power plants is significantly slowing down due to high costs and environmental
concerns. Therefore, VPPs are designed to dispatch a group of decentralized energy assets
that can be remotely controlled as a group but operate independently. Local assets such
as solar and diesel generators are dispatched by VPPs for LDCs. As shown in Figure 1,
VPPs receive different types of load forecasts which represent forecasting of customers’
consumption in order to determine the highest consumption and the time when these assets
should be optimized to reduce peak consumption. Afterward, VPPs share peak information with customers so that aggregator loads can be reduced during the peak period through demand response. The VPP has the responsibility of sharing information with the system operator for assessing the entire system's security. The VPP will request the start and stop of EERs, generation capacity, and optimization of generation costs. Figure 1 shows the entire process of VPPs. There are no clear definitions of VPPs in the literature at the moment.
According to [22], VPPs consist of different types of distributed resources which may
be dispersed across a medium voltage distribution network. VPPs consist of several
technologies with diverse operating patterns and availability that can connect to different
points in the distribution system [23]. VPPs are defined in the EU’s virtual fuel cell power
plant project [24] as a network of interconnected, decentralized residential micro-CHP units that utilize fuel-cell technology, installed in multifamily homes, businesses, and other public
buildings for individual heating, cooling, and electricity production. Fenix in [25] defines
VPPs as a flexible representation of a portfolio of distributed energy resources (DER) that
is capable of making contracts in the wholesale market and providing services to system operators. VPPs are classified into two types: commercial VPPs, which combine the capacity of a variety of distributed energy resources and optimize revenue from contracting DERs
and demand portfolios. However, this does not take into consideration any aspects of
stable operation. The second type is technical VPPs which consist of portfolio inputs from
DERs that have the same geographical location to characterize the local network at the
transmission boundary. Both the cost and operational characteristics of the portfolio are
represented at the transmission boundary. The details about their characteristics and how
they were implemented in the framework of the control algorithm are discussed in [26]. We
only consider commercial VPPs in this paper. No information was shared with the system
operator at the transmission side.
**Figure 1. The process of VPPs for LDCs.**
**3. Load Demand Threshold**
The load demand threshold is calculated to determine the peak load hours in the
load profile. Time-of-use billing is widely used by utilities to charge their customers.
The electricity rate may vary depending on the time of day when it is consumed. On-peak
and off-peak times of the day will be defined by the utilities based on the amount of demand
at those times. A higher rate is charged to customers during peak hours. Local distribution
companies (LDCs) pay for electricity based on both demand charges and consumption
charges. A demand charge is determined by multiplying the peak demand rate by the peak
demand (kW), and an energy charge is calculated by multiplying the consumption (kWh)
by the energy rate. The initial load demand threshold is calculated based on historical data.
The calculation is based on previous years in the same month. The initial load demand
threshold for February 2021 would be calculated by taking the maximum of the previous
years in the same month, for example, the maximum of February 2020 and February 2019,
and then taking the average of these two previous months. The final value will be the initial
load demand threshold for February 2021. In addition, as the load demand slightly varies
from day to day, the load demand threshold changes as well. The threshold is updated daily based on the new peak observed after peak load shaving: if the new peak equals the load demand threshold, the threshold remains the same for the next 24 h. Otherwise, the peak is only partially shaved and the new peak becomes the load demand threshold for the next 24 h.
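The billing and threshold logic described above can be sketched in a few lines (a hypothetical illustration; the rates and historical peaks below are invented, not taken from the paper):

```python
def initial_threshold(same_month_prev_year_1, same_month_prev_year_2):
    """Initial load demand threshold: average of the same month's maxima
    from the two previous years (e.g., Feb 2019 and Feb 2020 for Feb 2021)."""
    return (max(same_month_prev_year_1) + max(same_month_prev_year_2)) / 2

def electricity_cost(hourly_kw, energy_rate, demand_rate):
    """LDC bill = energy charge (kWh x energy rate) + demand charge (peak kW x demand rate).
    Assumes 1-hour intervals, so the kW values double as kWh."""
    energy_charge = sum(hourly_kw) * energy_rate
    demand_charge = max(hourly_kw) * demand_rate
    return energy_charge + demand_charge

feb_2019 = [900, 950, 1000]      # invented daily peak demands (kW)
feb_2020 = [980, 1020, 1100]
print(initial_threshold(feb_2019, feb_2020))   # (1000 + 1100) / 2 = 1050.0
```

Shaving the daily peak lowers `max(hourly_kw)`, which is why the demand charge term dominates the savings discussed later.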
**4. System Model**
_4.1. Aggregators (AGGs)_
The energy cost is calculated for each aggregator and then optimized based on its least
energy cost during the peak hours. Section 5 describes the steps for calculating the energy cost in detail. Detailed information on the design and operation of the aggregators can be
found in [27].
_4.2. Conservation Voltage Reduction (CVR)_
The CVR strategy is one of the most efficient ways to reduce load demand and maintain
proper voltage. Standalone CVR is used in this paper to reduce the load demand. CVR is
selected as the first option to contribute to peak load shaving. Detailed information on the
implemented CVR can be found in [4].
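The paper defers the CVR implementation details to [4]. As a hedged illustration only, a common first-order model from the CVR literature (not given in this paper) estimates the demand reduction as a CVR factor times the percentage voltage reduction:

```python
def cvr_demand_reduction(feeder_kw, voltage_reduction_pct, cvr_factor=0.8):
    """First-order CVR estimate: % demand reduction = CVR factor x % voltage reduction.

    cvr_factor = 0.8 is a typical literature value, assumed here for illustration;
    the actual factor depends on the voltage sensitivity of the feeder's load mix.
    """
    return feeder_kw * cvr_factor * voltage_reduction_pct / 100.0

# A 3% voltage reduction on a 10 MW feeder shaves roughly 240 kW:
print(cvr_demand_reduction(10_000, 3.0))
```

This is only a back-of-the-envelope estimate; the actual CVR behavior in [4] is evaluated on full unbalanced distribution feeders.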
_4.3. PV Model_
Solar irradiation I_s(t) data is used to calculate the PV output power [28]:

P_PV(t) = I_s(t) × A_PV × N_PV × η_PV × η_t    (1)

where A_PV and N_PV indicate the area and the number of PV modules, respectively, and η_PV and η_t represent the efficiency of the PV system and the temperature coefficient:

η_t = 1 − µ(T_c − T_stc)    (2)

where T_c and T_stc represent the temperature of the PV cell and the temperature at standard test conditions, respectively. T_stc = 25 °C marks the top of the optimal temperature range for photovoltaic solar panels: at this temperature, solar photovoltaic cells are at their most efficient and their performance is at its best. µ is the temperature coefficient of the panel's maximum output power (1/°C); it ranges from 0.5% to 0.9% depending on the panel, with 0.005 (0.5%) being a commonly accepted value. When the cell temperature rises above 25 °C, the cell's efficiency begins to decline.
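Equations (1) and (2) translate directly into code. The parameter values below (module area, count, efficiency) are illustrative assumptions, not values from the paper:

```python
def pv_power(irradiance_w_m2, cell_temp_c, area_m2=1.6, n_modules=100,
             eta_pv=0.18, mu=0.005, t_stc=25.0):
    """PV output per Eqs. (1)-(2): P = Is * A_PV * N_PV * eta_PV * eta_t,
    where eta_t = 1 - mu * (Tc - Tstc). Default parameters are illustrative."""
    eta_t = 1.0 - mu * (cell_temp_c - t_stc)
    return irradiance_w_m2 * area_m2 * n_modules * eta_pv * eta_t

# At standard test conditions (Tc = 25 C) there is no temperature derating:
print(pv_power(1000.0, 25.0))   # about 28,800 W
# A 45 C cell loses 10% with mu = 0.005 (0.5% per degree above 25 C):
print(pv_power(1000.0, 45.0))   # about 25,920 W
```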
_4.4. Diesel Generator_
A diesel generator is necessary for the hybrid system to perform as a backup power
source. It operates when all embedded energy resources (EERs) are unable to fully shave
the load. Diesel generators are the last option because of their high fuel costs:
Min ∑_{t=1}^{t_max} ∑_{i=1}^{N_gen} F_i(P_i^t)    (3)

where F_i(P_i^t) denotes the diesel generator cost to be minimized, a_i, b_i, and c_i are fuel coefficients, P_i^t is the output of each diesel generator, N_gen is the number of diesel generators, and P_load^t is the remainder load:

F_i(P_i^t) = a_i + b_i × P_i(t) + c_i × P_i^2(t)    (4)

Subject to

∑_{i=1}^{N_gen} P_i^t = P_load^t,    P_i^min ≤ P_i(t) ≤ P_i^max    (5)
VPP optimizes four diesel generators to shave the remainder of the load that could not
be fully shaved by the EERs. Unit commitment approach based on dynamic programming
is used to perform the optimization. Both fuel consumption coefficients and the capacity
for each generator are provided in Table 1. Detailed information on optimization of unit
commitment using dynamic programming can be found in [29].
**Table 1. Capacities and coefficients for each diesel generator.**

| Gen No. | Capacity (kW) | a ($) | b ($/MWh) | c ($/MWh²) |
|---------|---------------|-------|-----------|------------|
| Gen 1   | 1250          | 1000  | 16.19     | 0.00048    |
| Gen 2   | 600           | 970   | 17.26     | 0.00031    |
| Gen 3   | 600           | 700   | 16.60     | 0.00200    |
| Gen 4   | 750           | 680   | 16.50     | 0.00211    |
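Using the Table 1 coefficients, the quadratic cost of Equation (4) and an exhaustive search over on/off combinations give a small stand-in for the unit commitment step. The paper uses dynamic programming [29]; the proportional dispatch rule below is a simplification for illustration, not the authors' method:

```python
from itertools import product

# (capacity kW, a $, b $/MWh, c $/MWh^2) taken from Table 1
GENS = [(1250, 1000, 16.19, 0.00048),
        (600,   970, 17.26, 0.00031),
        (600,   700, 16.60, 0.00200),
        (750,   680, 16.50, 0.00211)]

def fuel_cost(p_kw, a, b, c):
    """Quadratic fuel cost of Eq. (4): F(P) = a + b*P + c*P^2, P in MWh (1 h interval)."""
    p = p_kw / 1000.0
    return a + b * p + c * p ** 2

def cheapest_commitment(load_kw):
    """Try every on/off combination; dispatch committed units in proportion to
    capacity (a simplification of true economic dispatch). Returns (cost, mask)
    for the cheapest feasible combination, or None if the load cannot be met."""
    best = None
    for mask in product([0, 1], repeat=len(GENS)):
        cap = sum(g[0] for g, on in zip(GENS, mask) if on)
        if cap < load_kw or cap == 0:
            continue                          # infeasible combination
        cost = sum(fuel_cost(load_kw * g[0] / cap, *g[1:])
                   for g, on in zip(GENS, mask) if on)
        if best is None or cost < best[0]:
            best = (cost, mask)
    return best

cost, mask = cheapest_commitment(1000)        # 1 MW remainder load
print(mask, round(cost, 2))                   # gen 1 alone is cheapest here
```

With only four units the 2^4 combinations are trivial to enumerate; the dynamic-programming approach in [29] scales this idea to many units and time steps.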
**5. Formulation of the Proposed Method**
The work here aims at optimizing the outputs of the local decentralized assets, which
are covered in detail in Section 4, with the goal of reducing customers’ consumption
during peak periods by implementing peak load shaving. The electricity cost for LDCs is
minimized by reducing the reliance on utility power during peak periods.
The objective function minimizes electricity cost. The problem is solved using mixed-integer linear programming because binary variables are needed to control the charging/discharging of the utility battery [30]:
$$\min \; C_T \qquad (6)$$

$$C_T = C_E + C_P \qquad (7)$$

$$C_E = \zeta \cdot \sum_{i=1}^{N} P_{LShave} \qquad (8)$$

$$C_P = \alpha \cdot P_{shave} \qquad (9)$$
Equations (6)–(9) represent the total charge $C_T$, which includes the energy charge $C_E$ and the peak charge $C_P$ based on the peak load shaving. $P_{LShave}$ and $P_{shave}$ denote the demand after peak shaving and the maximum peak shaving during the billing cycle, respectively. $\zeta$ ($/kWh) and $\alpha$ ($/kW) represent the energy and peak charging rates, respectively. $N$ denotes the 24 h load forecast horizon:
$$SoC(t) = SoC(t-1) + \frac{\lambda \sum_{t-1}^{t} P_{batt}(t-1)\,dt \; - \; \mu \sum_{t-1}^{t} P_{batt}(t-1)\,dt}{\delta} \qquad (10)$$
The state of charge $SoC(t)$ in Equation (10) is calculated from the active power $P_{batt}$ of the utility battery together with the charging ($\lambda$) and discharging ($\mu$) efficiencies. The parameter $\delta$ is the kWh energy rating of the battery. The state of charge is initialized and then updated at every time step:
$$\begin{aligned}
0 &\leq CVR(t) \leq CVR^{max} \\
SoC^{min} &\leq SoC(t) \leq SoC^{max} \\
0 &\leq P_G(t) \leq P_G^{max} \\
P_{batt}^{min} &\leq P_{batt}(t) \leq P_{batt}^{max} \\
0 &\leq P_{PV}(t) \leq P_{PV}^{max} \\
0 &\leq P_{DGi}(t) \leq P_{DGi}^{max}
\end{aligned} \qquad (11)$$
Equation (11) indicates that all the local resources described in Section 4 are subject to operational constraints. The active power of the utility-scale battery, $P_{batt}(t)$, is limited to 1.25 MW, and the state of charge, $SoC(t)$, must remain between 25% and 95%. The four diesel generators $P_{DGi}$ have the varying capacity limits shown in Table 1. CVR also has a specific limit, as explained in Section 4.2, and the PV capacity is limited by the number of PV panels, as described in Section 4.3:
$$\sum_{i=1}^{n} P_i(t) + P_G(t) = P_L(t) \qquad (12)$$
$$\sum_{i=1}^{n} P_i(t) = P_{PV}(t) \mp P_{batt}(t) + P_{Rbatt}(t) + P_{EWH}(t) + P_{HP}(t) + P_{BBH}(t) + P_{ETS}(t) + P_{DGi}(t) \qquad (13)$$
Equation (12) represents the power balance between the main generation $P_G(t)$ (power plant), the local assets $P_i(t)$ (explained in detail in Section 4), and the load forecast $P_L(t)$ (forecast of customers' consumption over the next 24 h). A balance must be maintained between power generation and demand. There are two types of power generation: local resources (DERs and ESSs) and main generation (power plants). Our algorithm optimizes the capacity received from the decentralized local resources, so that, when the local resources are maximized, the peak shaving level is maximized toward the threshold level and the main generation is reduced accordingly. Equation (13) shows the sum of the local assets. A minimum energy cost is used for optimizing and coordinating these local assets, as represented in Equations (16)–(21): the priority asset is selected based on the lowest energy cost at each point in the time series:
$$\begin{aligned}
\gamma_c(t)\, M &\geq P_{batt} \\
\left(1 - \gamma_c(t)\right) M &\geq P_{batt} \\
\gamma_c(t) &\in \{0, 1\}
\end{aligned} \qquad (14)$$
$\gamma_c(t)$ is the binary variable of the utility battery. If $\gamma_c(t) = 0$, charging is off and the battery discharges; if $\gamma_c(t) = 1$, discharging is off and the battery charges. The purpose of $\gamma_c(t)$ is to prevent the battery from charging and discharging simultaneously. $M$ is the desired capacity of the battery to be charged or discharged. The power for charging is supplied by the main generation (power plant); during the charging period, this power is metered and paid for by the utility:
$$\begin{aligned}
P_{shave} &\leq Peak \\
P_{shave} &\geq P_{threshold} \\
P_G(t) &\leq P_{shave}
\end{aligned} \qquad (15)$$
$P_{shave}$ is optimally set between the peak of the hourly load forecast and the desired level ($P_{threshold}$) because standalone aggregators often cannot fully shave the load; the main generation can then cover any additional load above the desired level. The main generation $P_G(t)$ is constrained to be less than or equal to $P_{shave}$ because it is reduced to the new peak-shaving level once the EERs are engaged.
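The interplay of Equations (10), (11), and (14) can be sketched as a single simulation step for the utility battery. This is a sketch only: the 95% round-trip efficiencies are assumptions, and a signed power request stands in for the explicit binary $\gamma_c$ of the MILP formulation.

```python
def step_battery(soc: float, request_kw: float, dt_h: float,
                 delta_kwh: float = 2500.0, p_max_kw: float = 1250.0,
                 lam: float = 0.95, mu: float = 0.95) -> float:
    """Advance the utility battery state of charge by one time step.

    request_kw > 0 charges, < 0 discharges; the sign plays the role of
    the binary gamma_c in Eq (14), so the battery never charges and
    discharges at the same time.  Efficiencies lam/mu and the kWh
    rating delta follow Eq (10); the result is clamped to the 25-95%
    window required by Eq (11).
    """
    p = max(-p_max_kw, min(p_max_kw, request_kw))  # P_batt power limit
    if p >= 0:
        soc += lam * p * dt_h / delta_kwh   # charging term of Eq (10)
    else:
        soc += mu * p * dt_h / delta_kwh    # discharging term of Eq (10)
    return min(max(soc, 0.25), 0.95)        # SoC bounds of Eq (11)
```

With ideal efficiencies, one hour of charging at the 1.25 MW limit raises the SoC of the 2.5 MWh battery by 50 percentage points, matching the two-hour full-charge behavior described in the results.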
The energy cost is calculated for each aggregator and resource based on the following equations:

$$Cost_T = Cost_{TInvest} + Cost_{EPC} \qquad (16)$$

where $Cost_T$ and $Cost_{TInvest}$ denote the total cost of each aggregator and its investment cost, respectively, and $Cost_{EPC}$ is the energy cost:

$$Cost_{TInvest} = Cost_U + Cost_{Mis} \qquad (17)$$

where $Cost_U$ and $Cost_{Mis}$ denote the unit cost and the miscellaneous cost of each aggregator, respectively,

$$Cost_U = Cost_{UEWH} + Cost_{UHP} + Cost_{UBBH} + Cost_{UETS} \qquad (18)$$

where $Cost_{UEWH}$, $Cost_{UHP}$, $Cost_{UBBH}$, and $Cost_{UETS}$ denote the unit cost of the electric water heater, heat pump, baseboard heater, and electric thermal storage, respectively,

$$Cost_{Mis} = Cost_{MisIns} + Cost_{MisCont} + Cost_{MisMain} \qquad (19)$$
where $Cost_{MisIns}$, $Cost_{MisCont}$, and $Cost_{MisMain}$ represent the installation, controller, and maintenance costs of each aggregator, respectively:

$$Cost_{EPC} = \beta \times P_c \qquad (20)$$

where $\beta$ and $P_c$ represent the on/off-peak price rate and the corresponding on/off-peak power consumption of each aggregator, respectively,

$$Cost_{Thour} = \frac{N_U \times Cost_{TInvest}}{10 \times 365 \times 24} + (\beta \times P_c) \qquad (21)$$
where $Cost_{Thour}$, $N_U$, and $Cost_{TInvest}$ are the hourly total energy cost, the number of units, and the total investment cost of each aggregator, respectively. The denominator amortises the investment over a ten-year maintenance cycle expressed in hours. The total cost of each aggregator thus consists of two components: a static component, the total investment cost, which is determined by the unit and miscellaneous costs and is essentially constant; and a dynamic component, computed from the change in power consumption over the 24 h horizon. Power consumption occurs when the units of an aggregator are charging. The charging rates used in this section are published in [31]. Figure 2 shows the flowchart of the entire control algorithm.
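Equations (16)–(21) can be sketched directly from the Table 2 data. This is a sketch only: the single $\beta \cdot P_c$ term per hour and the use of this hourly cost to rank aggregators are assumptions about how the paper applies the formulas.

```python
# Table 2 data: per-unit costs ($) and unit counts for each aggregator type
AGGREGATORS = {
    "EWh": {"unit": 400,  "controller": 200, "maintenance": 150, "installation": 1000, "n_units": 1000},
    "HP":  {"unit": 1400, "controller": 150, "maintenance": 400, "installation": 4000, "n_units": 200},
    "BBH": {"unit": 150,  "controller": 200, "maintenance": 150, "installation": 400,  "n_units": 1200},
    "ETS": {"unit": 1500, "controller": 200, "maintenance": 400, "installation": 2500, "n_units": 500},
}

def hourly_cost(name: str, beta: float, p_c_kw: float) -> float:
    """Eq (21): static investment amortised over ten years of hours,
    plus the dynamic energy charge beta * P_c for the current hour."""
    a = AGGREGATORS[name]
    invest = a["unit"] + a["controller"] + a["maintenance"] + a["installation"]  # Eqs (17)-(19)
    return a["n_units"] * invest / (10 * 365 * 24) + beta * p_c_kw               # Eqs (20), (21)

def cheapest(beta: float, p_c_kw: float) -> str:
    """Priority aggregator: the one with the lowest hourly total cost."""
    return min(AGGREGATORS, key=lambda n: hourly_cost(n, beta, p_c_kw))
```

Under equal consumption across types, the baseboard heaters come out cheapest per hour in this sketch and would therefore be dispatched first among the aggregators.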
[Flowchart content of Figure 2: read the 30-day and 7-day load forecasts; track the highest peak during the billing cycle; set the objective function (Equations (6)–(9)) and the assets' constraints (Equation (11)); apply CVR, then discharge the utility battery; calculate PV power (Equations (1) and (2)); switch ON the DGs only at the highest peak, if it exceeds the other peaks by more than 50%; calculate the energy cost of each aggregator (Equations (16)–(21)); determine the initial threshold (Section 4); optimize the committed capacities received from the aggregators by least energy cost; run the 24 h load forecast; optimize the DGs only when all aggregators fail to fully shave the peak (Equations (3)–(5)); if the load forecast falls below the threshold, charge the utility battery and the aggregators, otherwise apply demand response by reducing the use of EWh, HP, BBH, and ETS devices (Section 4.1); update the threshold based on the new reduced peak; and repeat for each day of the 30-day billing cycle.]
**Figure 2. Flowchart of the proposed algorithm.**
**6. Case Study**
A single-line diagram of the real Australian power distribution network used as the VPP is shown in Figure 3. Additional resources are being considered for this actual network. The VPP contains two solar farms, a utility-scale battery with a capacity of 2.50 MWh, and four residential batteries totaling 5 MW. There are four load substations totaling 80 MW, including 20.5 MW of curtailable load (curtailment is fully controlled by the VPP operator at $400/MWh), and a 25 MW/90 MWh BESS. The four diesel generators, with four different capacities, are shown in Table 1. Furthermore, decentralized energy storage systems are distributed around the network; their numbers and capacities are given in Table 2.
**Table 2. Costs and number of units for each aggregator.**

| Cost ($) | EWh | HP | BBH | ETS |
|---|---|---|---|---|
| Unit | 400 | 1400 | 150 | 1500 |
| Controller | 200 | 150 | 200 | 200 |
| Maintenance | 150 | 400 | 150 | 400 |
| Installation | 1000 | 4000 | 400 | 2500 |
| No. Units | 1000 | 200 | 1200 | 500 |
**Figure 3. A real Australian power distribution network.**
**7. Results and Discussion**
Four case studies are presented in this section explaining how peak load shaving
works and how it can benefit LDCs economically. Each of these cases illustrates how EERs
contribute differently each year, month, and day. A detailed discussion of the economic
benefits and cost savings for LDCs is presented in this section as well.
_7.1. Case 1: Hourly EERs Contributions for 1 February 2021_
Table 1 provides the information about the four diesel generators, including their coefficient parameters. A genetic algorithm with a unit commitment approach is used to optimize the diesel generators. The operation of a diesel generator is classified as either OFF (zero capacity) or ON (full capacity). Table 2 summarizes the costs of the four aggregators. Both HP and ETS are the most expensive in terms of unit and installation costs; the other costs do not differ significantly. This table also shows the number of units used in the algorithm.
Figure 4 shows the output of the different EERs and the state of charge of the utility-scale and residential batteries, together with the dynamic behavior of solar irradiance and temperature on 1 February 2021. The optimized diesel generator output is shown in Figure 4a. Because of fuel-related costs, the four diesel generators are optimized as the last resource to fully shave the peak; they switch ON/OFF based on the amount needed to completely shave it. Generators 1, 3, and 4 are on and generator 2 is off for the specific peak shown in Figure 5. Figure 4b shows the contribution of the utility-scale battery: its capacity is 2.50 MWh and its maximum output power is 1.25 MW, so it is fully depleted in two hours. Figure 4c illustrates the contributions of the four aggregators. The yellow area shows the capacity reserved by aggregators for peak load shaving as requested by the VPP; the other parts show how the aggregators operate, such as pre-charging before and post-charging after the peak periods. Figure 4d shows how the algorithm works: it reads the peak of the 24 h load forecast and looks for contributions from EERs to reduce the peak level, so more EER contribution leads to more peak shaving. The contribution of the residential batteries is shown in Figure 4e. The number of residential batteries in this paper is 50; they contributed 0.537 MWh, as is apparent from Table 3. Figure 4f shows the dynamic behavior of solar power during the day. Due to low irradiance in the area, solar contributes a very small share compared to the other EERs. Figure 4g shows the state of charge of the 50 residential batteries distributed across the real Australian distribution network. It took four hours for the residential batteries to fully charge (6:00 a.m. to 10:00 a.m.) and discharge (11:00 a.m. to 3:00 p.m.) during the peak time. Figure 4h illustrates the solar irradiance, most of which occurred between 8:00 a.m. and 8:00 p.m. on 1 February 2021. As Figure 4i shows, the area temperature remained low during the day, consistent with the low irradiance. The state of charge of the utility-scale battery is shown in Figure 4j; it is modeled to fully charge and discharge within 2 h given its 2.50 MWh capacity and 1.25 MW maximum output power.
**Figure 4.** (a) Diesel generators; (b) utility scale battery; (c) aggregators; (d) peak shaving algorithm; (e) residential battery; (f) solar power; (g) RB state of charge; (h) solar irradiance; (i) area temperature on 1 February 2021; (j) UB state of charge.
**Table 3. Contributions of EERs to daily peak load shaving during the February 2021 billing cycle.**

| Date | Peak Type | CVR (MWh) | UBatt (MWh) | RBatt (MWh) | BBH (MWh) | HP (MWh) | Diesel (MWh) | EWh (MWh) | ETS (MWh) |
|---|---|---|---|---|---|---|---|---|---|
| 1 Feb. | Morning | 8.816 | 2.380 | 0.537 | 0.871 | 2.116 | 4.219 | 0.941 | 0.206 |
| 2 Feb. | Morning | 2.350 | 1.340 | 0 | 0 | 0 | 0 | 0 | 0 |
| 14 Feb. | Morning | 1.766 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 14 Feb. | Evening | 2.345 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 15 Feb. | Morning | 7.356 | 2.380 | 0.551 | 0.915 | 2.118 | 0 | 0.976 | 0.181 |
| 16 Feb. | Morning | 2.505 | 1.660 | 0 | 0 | 0 | 0 | 0 | 0 |
| 25 Feb. | Morning | 2.400 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 25 Feb. | Evening | 2.453 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 Mar. | Morning | 1.414 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
_7.2. Case 2: Daily Peak Load Shaving for the Month of February 2021_

The February billing cycle was chosen because it had the highest consumption in 2021. Table 3 shows the peak load shaving for February 2021. There were two peak shavings on 14 and 25 February, one in the morning and one in the evening. The other EERs, including
utility-scale batteries, were off because CVR was able to fully shave those peaks. The EERs are optimized on a least-cost basis, so CVR and the utility-scale battery are given priority due to their lower cost; second priority goes to the aggregators, and diesel generators are used only when all other EERs fail to completely shave the peak. Figure 5 shows the 24 h demand. The yellow area represents the EERs and the blue area the main generation. There was a peak between 11:00 a.m. and 3:00 p.m. on 1 February, lasting four hours. The red line represents the threshold, so any load above it should be shaved; as can be seen, the EERs completely shaved this peak. There were also three peaks on 2 March. The first occurred between 2:00 p.m. and 8:00 p.m. and lasted six hours; the EERs could not fully shave it, as the remaining blue area shows. The EERs completely shaved the second peak, from 8:30 p.m. to 10:00 p.m. on the same day. A third, consecutive peak occurred near midnight and lasted less than an hour; Figure 5d is zoomed in to show this small third peak clearly. The goal of this method is to shave any peak lasting longer than 15 min, and the algorithm is capable of tracking and shaving more than one peak per day.
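The multi-peak tracking described above can be sketched as a simple scan over the load forecast. The hourly resolution and index-based spans are assumptions for illustration; the paper's actual tracking operates on the forecast time series.

```python
def find_peaks(load_kw, threshold_kw, dt_min=60, min_duration_min=15):
    """Return (start, end) index spans where demand exceeds the threshold
    for at least the minimum duration (15 min in the text).  The end
    index is exclusive; a sentinel closes any span still open at the
    end of the series."""
    peaks, start = [], None
    for i, p in enumerate(list(load_kw) + [threshold_kw]):
        if p > threshold_kw and start is None:
            start = i                      # excursion above threshold begins
        elif p <= threshold_kw and start is not None:
            if (i - start) * dt_min >= min_duration_min:
                peaks.append((start, i))   # long enough to be worth shaving
            start = None
    return peaks
```

A day with two separate excursions above the threshold yields two spans to shave, matching the multi-peak behavior shown for 2 March.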
**Figure 5.** (a) First peak shave on 1 February; (b) first peak shave on 2 March; (c) second peak shave on 2 March; (d) third peak shave on 2 March.
The contribution of each EER on 1 February 2021 is shown in Figure 6a. CVR contributed 44%, the largest share, followed by 21% for the diesel generators. Utility-scale batteries contributed 12%, and the other, cheaper EERs contributed from 1% to 10%. Figure 6b shows that the EERs provided 100% of the peak shaving. Figure 7 shows the entire billing cycle for February 2021. Consumption was higher at the beginning and middle of the billing cycle, so most peak shavings occurred at those times; the highest peak was on 15 February. A detailed description of the peak shavings can be found in Table 3.
**Figure 6. (a) Contributions of EERs; (b) total contributions of EERs and main generation.**
**Figure 7. Billing cycle for February 2021.**
_7.3. Case 3: Monthly Peak Load Shaving for the Year of 2021_
Table 4 shows the contribution of each EER to the monthly peak load shaving. The optimization technique dispatches the EERs in order of lowest cost: CVR and the utility and residential batteries are selected first because of their lower cost, the aggregators come next, optimized by energy cost, and the diesel generators are the last choice because of their fuel-related costs. The table also includes the number of peaks in each billing cycle. June clearly has the highest number of peaks; despite that, June's cost saving is not high because of its low consumption, so the number of peaks does not directly correlate with cost savings. In September, only CVR (21.92 MWh) and the utility-scale battery (2.74 MWh) were used, since they are the first choices; the other resources were kept off, reserved for other peak shavings. December has the highest use of both CVR (81.48 MWh) and the utility-scale battery (8.96 MWh), but this did not fully shave the peak load, so other resources such as the aggregators had to engage. A high CVR capacity indicates high consumption, so the winter CVR capacity tends to be higher than in summer.
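The least-cost priority described here amounts to a merit-order dispatch, which can be sketched as follows. The tier names and the example availabilities are illustrative assumptions, not values from the paper.

```python
# Cheapest resources first, in the priority order described in the text
MERIT_ORDER = ("CVR", "UBatt", "RBatt", "Aggregators", "Diesel")

def dispatch(shave_mwh, available_mwh):
    """Fill the required shave from the cheapest tier first; whatever a
    tier cannot cover falls through to the next one.  Returns the
    per-tier plan and any energy left unshaved."""
    plan, remaining = {}, shave_mwh
    for name in MERIT_ORDER:
        take = min(remaining, available_mwh.get(name, 0.0))
        plan[name] = take
        remaining -= take
    return plan, remaining
```

With September-like availability, for instance, only CVR and the utility-scale battery would be drawn on, leaving the later tiers untouched.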
**Table 4. Contributions of EERs for monthly peak load shaving.**

| No. Peaks | Month 2021 | CVR (MWh) | UBatt (MWh) | RBatt (MWh) | BBH (MWh) | HP (MWh) | Diesel (MWh) | EWh (MWh) | ETS (MWh) |
|---|---|---|---|---|---|---|---|---|---|
| 11 | January | 58.8059 | 8.6400 | 2.1235 | 3.5592 | 9.2995 | 1.5780 | 3.8472 | 0.9684 |
| 9 | February | 31.4093 | 7.7600 | 1.0878 | 1.7859 | 4.2348 | 4.2198 | 1.9166 | 0.3865 |
| 6 | March | 31.2262 | 4.2800 | 1.1038 | 1.7414 | 5.5876 | 3.0176 | 2.0155 | 1.8916 |
| 5 | April | 36.0006 | 6.4400 | 1.6511 | 2.8061 | 8.9259 | 7.2819 | 2.9904 | 2.1631 |
| 8 | May | 41.1970 | 8.3400 | 2.0687 | 4.0062 | 11.2082 | 2.3523 | 3.1062 | 1.0817 |
| 18 | June | 43.4794 | 2.3800 | 1.2925 | 0.5972 | 1.9393 | 1.8786 | 0.9876 | 0.5483 |
| 11 | July | 42.0988 | 7.1400 | 1.7247 | 1.9152 | 3.4082 | 1.9592 | 2.1908 | 1.9275 |
| 11 | August | 32.9884 | 7.6600 | 1.1498 | 1.9299 | 7.4148 | 0 | 2.1427 | 2.0351 |
| 9 | September | 21.9147 | 2.7400 | 0 | 0 | 0 | 0 | 0 | 0 |
| 8 | October | 17.8657 | 4.7600 | 1.0829 | 1.8773 | 5.1234 | 3.1746 | 1.9168 | 0.5371 |
| 10 | November | 23.8643 | 1.8200 | 0.1259 | 1.9293 | 1.8548 | 0 | 0.9667 | 0.7447 |
| 10 | December | 81.4757 | 8.9600 | 1.8325 | 5.1198 | 11.8830 | 2.0409 | 3.9998 | 2.6960 |
Figure 8 visualizes the data of Table 4 for two months. In September, the approach used only CVR (89%) and the utility-scale battery (11%), since these alone provided full load shaving. Neither August nor November used the diesel generators: first, because of their fuel-related costs, the cheaper resources completely shaved the peak load; second, because the diesel generators are subject to time constraints, so if the highest peak is not significantly higher than the other peaks, they are not reserved for it, which would lower the billing-cycle cost savings. Among the aggregators, heat pumps have the highest percentage contribution.
**Figure 8.** (a) Contributions of EERs for peak load shaving in September 2021; (b) contributions of EERs for peak load shaving in November 2021.
_7.4. Case 4: Monthly Economic Analysis of Peak Load Shaving in 2021_

Table 5 shows the highest peak of each billing cycle in 2021. The highest peak of the year, 212.27 MW, occurred in February, followed by 201.4 MW in March and 195.86 MW in December. By contrast, the summer months have the lowest peaks, except for August, which is somewhat higher than the other summer months; June and July have the lowest peaks of the year.
**Table 5. Maximum peak and threshold for each billing cycle of peak load shaving.**

| Month 2021 | Peak Date | Peak (MW) | Threshold (MW) |
|---|---|---|---|
| January | 21 | 185 | 175 |
| February | 15 | 212 | 200 |
| March | 2 | 201 | 190 |
| April | 4 | 148 | 138 |
| May | 8 | 117 | 112 |
| June | 1 | 91 | 87 |
| July | 15 | 93 | 87 |
| August | 25 | 100 | 97 |
| September | 9 | 94 | 91 |
| October | 25 | 109 | 103 |
| November | 29 | 164 | 159 |
| December | 24 | 195 | 180 |
Figure 9 illustrates how the cost savings are calculated from two factors, since peak and off-peak load have different electricity prices: first, the maximum peak is multiplied by the kW rate; second, the total consumption during the billing cycle is multiplied by the kWh rate. These rates are provided by the Australian Energy Market Operator (AEMO). The two factors are added to obtain the total cost, and the cost saving is the difference between the total cost before and after peak shaving. As the figure shows, February has the highest cost saving, AUD 123,000, followed by December with AUD 115,200; June has the lowest, AUD 10,600. Cost savings differ only slightly between the summer months, since consumption varies less, whereas in winter they vary significantly with fluctuations in consumption. To sum up, the total cost savings were AUD 632,822 in 2021.
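The two-factor saving calculation can be sketched as follows. The rates in the usage example are illustrative assumptions; the actual AEMO kW and kWh rates are not reproduced here.

```python
def billing_cost(load_kw, dt_h, kwh_rate, kw_rate):
    """Two-part tariff: an energy charge on total consumption plus a
    demand charge on the billing-cycle maximum peak."""
    energy_charge = sum(p * dt_h for p in load_kw) * kwh_rate
    demand_charge = max(load_kw) * kw_rate
    return energy_charge + demand_charge

def cost_saving(before_kw, after_kw, dt_h, kwh_rate, kw_rate):
    """Saving = total cost before shaving minus total cost after."""
    return (billing_cost(before_kw, dt_h, kwh_rate, kw_rate)
            - billing_cost(after_kw, dt_h, kwh_rate, kw_rate))
```

For example, shaving a 200 kW peak down to 150 kW under a hypothetical $10/kW demand charge and $0.1/kWh energy rate saves mostly on the demand component, which is why reducing the single highest peak of the billing cycle matters so much.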
**Figure 9. Monthly cost saving for 2021 peak load shaving.**
**8. Conclusions**
This work presents an effective algorithm to control embedded energy resources in order to optimize the electricity cost of local distribution networks. The algorithm has proven capable of reducing peak demand in real-time scenarios, shaving up to three peaks per day. Assets are prioritized by the control algorithm: CVR is selected first because it has no charging cost, batteries are second, aggregators are third, and diesel generators are last because of their fuel-related costs; the different aggregators are themselves optimized by lowest energy cost. This paper examines four case scenarios. The first case shows the different contributions of the EERs during peak periods; the second performs daily peak load shaving over the February 2021 billing cycle; the third illustrates monthly peak shaving performance across 2021 with details on each EER's contribution; and the fourth presents an economic analysis of peak shaving for the entire year to assess the overall benefit of the algorithm. The presented algorithm demonstrated the capability of shaving up to three daily peaks and delivered cost savings of more than AUD 600,000 in 2021. In future work, this paper will be extended to include the degradation of PV power over the years and the impact of greenhouse gas (CO2) emissions produced by the diesel generators.
**Author Contributions: Conceptualization, H.M.; Formal analysis, H.M.; Methodology, H.M.; Soft-**
ware, H.M.; Supervision, E.C.G. and J.L.C.B.; Writing—original draft, H.M.; Writing—review &
editing, H.M. All authors have read and agreed to the published version of the manuscript.
**Funding: This research was funded by Emera & NB Power Research Center for Smart Grid Technolo-**
[gies at University of New Brunswick https://www.unb.ca/smartgrid (accessed on 12 October 2022).](https://www.unb.ca/smartgrid)
**[Data Availability Statement: “Australian Energy Market Operator AEMO” at https://aemo.com.](https://aemo.com.au/en)**
[au/en (accessed on 8 August 2022).](https://aemo.com.au/en)
**Conflicts of Interest: The authors declare no conflict of interest.**
**Abbreviations**
The following abbreviations are used in this manuscript:
VPP Virtual Power Plant
BESSs Battery Energy Storage Systems
DERs Distributed Energy Resources
EERs Embedded Energy Resources
ESSs Energy Storage Systems
BBH Baseboard Heater
EWh Electric Water Heater
HP Heat Pump
RB Residential Battery
LF Load Forecast
ETS Electric Thermal Energy Storage
Ubatt Utility Scale battery
Rbatt Residential battery
CVR Conservation Voltage Reduction
LoadBShaving Load before shaving
LoadAShaving Load after shaving
DG Diesel Generator
AGGs Aggregators
AEMO Australian Energy Market Operator
EV2G Electric Vehicle to Grid
**References**
1. Uddin, M.; Romlie, M.F.; Abdullah, M.F.; Abd Halim, S.; Kwang, T.C. A review on peak load shaving strategies. Renew. Sustain.
_[Energy Rev. 2018, 82, 3323–3332. [CrossRef]](http://doi.org/10.1016/j.rser.2017.10.056)_
2. Lu, C.; Xu, H.; Pan, X.; Song, J. Optimal sizing and control of battery energy storage system for peak load shaving. Energies 2014,
_[7, 8396–8410. [CrossRef]](http://dx.doi.org/10.3390/en7128396)_
3. Singh, S.; Singh, S.P. Peak load relief in MV/LV distribution networks through smart grid-enabled CVR with droop control EV2G
reactive power support. In Proceedings of the 2018 International Conference on Power, Instrumentation, Control and Computing
(PICC), Thrissur, India, 18–20 January 2018.
4. Arora, S.; Satsangi, S.; Kaur, S.; Khanna, R. Substation demand reduction by CVR enabled intelligent PV inverter control functions
[in distribution systems. Int. Trans. Electr. Energy Syst. 2021, 31, 12724. [CrossRef]](http://dx.doi.org/10.1002/2050-7038.12724)
5. Chowdhury, B.H. Central-station photovoltaic plant with energy storage for utility peak load leveling. In Proceedings of the 24th
Intersociety Energy Conversion Engineering Conference, Washington, DC, USA, 6–11 August 1989.
6. Mahmud, K.; Hossain, M.J.; Town, G.E. Peak-load reduction by coordinated response of photovoltaics, battery storage, and
[electric vehicles. IEEE Access 2018, 6, 29353–29365. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2837144)
7. Wang, Z.; Wang, S. Grid power peak shaving and valley filling using vehicle-to-grid systems. IEEE Trans. Power Deliv. 2013, 28,
[1822–1829. [CrossRef]](http://dx.doi.org/10.1109/TPWRD.2013.2264497)
8. Ghosh, D.P.; Thomas, R.J.; Wicker, S.B. A privacy-aware design for the vehicle-to-grid framework. In Proceedings of the 2013 46th
Hawaii International Conference on System Sciences,Wailea, HI, USA, 7–10 January 2013.
9. López, M.A.; De La Torre, S.; Martín, S.; Aguado, J.A. Demand-side management in smart grid operation considering electric
[vehicles load shifting and vehicle-to-grid support. Int. J. Electr. Power Energy Syst. 2015, 64, 689–698. [CrossRef]](http://dx.doi.org/10.1016/j.ijepes.2014.07.065)
10. Baniasadi, A.; Habibi, D.; Al-Saedi, W.; Masoum, M.A.; Das, C.K.; Mousavi, N. Optimal sizing design and operation of electrical
[and thermal energy storage systems in smart buildings. J. Energy Storage 2020, 28, 101186. [CrossRef]](http://dx.doi.org/10.1016/j.est.2019.101186)
11. Lavrova, O.; Cheng, F.; Abdollahy, S.; Barsun, H.; Mammoli, A.; Dreisigmayer, D.; Willard, S.; Arellano, B.; Van Zeyl, C. Analysis
of battery storage utilization for load shifting and peak smoothing on a distribution feeder in New Mexico. In Proceedings of the
2012 IEEE PES Innovative Smart Grid Technologies (ISGT), Washington, DC, USA, 16–20 January 2012; pp. 1–6.
12. Vanhoudt, D.; Geysen, D.; Claessens, B.; Leemans, F.; Jespers, L.; Van Bael, J. An actively controlled residential heat pump:
[Potential on peak shaving and maximization of self-consumption of renewable energy. Renew. Energy 2014, 63, 531–543. [CrossRef]](http://dx.doi.org/10.1016/j.renene.2013.10.021)
13. Sepulveda, A.; Paull, L.; Morsi, W.G.; Li, H.; Diduch, C.P.; Chang, L. A novel demand side management program using water
heaters and particle swarm optimization. In Proceedings of the 2010 IEEE Electrical Power & Energy Conference, Halifax, NS,
Canada, 25–27 August 2010.
14. Belov, A.; Vasenev, A.; Kartak, V.; Meratnia, N.; Havinga, P.J. Peak load reduction of multiple water heaters: Respecting consumer
comfort and money savings. In Proceedings of the 2016 IEEE Online Conference on Green Communications (OnlineGreenComm),
Piscataway, NJ, USA, 14 November–17 December 2016.
15. Martins, R.; Hesse, H.C.; Jungbauer, J.; Vorbuchner, T.; Musilek, P. Optimal Component Sizing for Peak Shaving in Battery Energy
[Storage System for Industrial Applications. Energies 2018, 11, 2048. [CrossRef]](http://dx.doi.org/10.3390/en11082048)
16. Cheng, X.; Feng, S.; Huang, Y.; Wang, J. A New Peak-Shaving Model Based on Mixed Integer Linear Programming with Variable
[Peak-Shaving Order. Energies 2021, 14, 887. [CrossRef]](http://dx.doi.org/10.3390/en14040887)
17. Liao, S.; Zhang, Y.; Liu, B.; Liu, Z.; Fang, Z.; Li, S. Short-Term Peak-Shaving Operation of Head-Sensitive Cascaded Hydropower
[Plants Based on Spillage Adjustment. Water 2020, 12, 3438. [CrossRef]](http://dx.doi.org/10.3390/w12123438)
18. Tchagang, A.; Yoo, Y. V2B/V2G on Energy Cost and Battery Degradation under Different Driving Scenarios, Peak Shaving, and
[Frequency Regulations. World Electr. Veh. J. 2020, 11, 14. [CrossRef]](http://dx.doi.org/10.3390/wevj11010014)
19. Gli´nski, M.; Bojesen, C.; Rybi´nski, W.; Byku´c, S. Modelling of the Biomass mCHP Unit for Power Peak Shaving in the Local
[Electrical Grid. Energies 2019, 12, 458. [CrossRef]](http://dx.doi.org/10.3390/en12030458)
20. Odkhuu, N.; Lee, K.-B.A.; Ahmed, M.; Kim, Y.-C. Optimal Energy Management of V2B with RES and ESS for Peak Load
[Minimization. Appl. Sci. 2018, 8, 2125. [CrossRef]](http://dx.doi.org/10.3390/app8112125)
21. Gelleschus, R.; Böttiger, M.; Bocklisch, T. Optimization-Based Control Concept with Feed-in and Demand Peak Shaving for a PV
[Battery Heat Pump Heat Storage System. Energies 2019, 12, 2098. [CrossRef]](http://dx.doi.org/10.3390/en12112098)
22. Bignucolo, F.; Caldon, R.; Prandoni, V.; Spelta, S.; Vezzola, M. The voltage control on MV distribution networks with aggregated
DG units (VPP). In Proceedings of the 41st International Universities Power Engineering Conference, Newcastle upon Tyne, UK,
6–8 September 2006.
23. Pudjianto, D.; Ramsay, C.; Strbac, G. Virtual Power Plant and System Integration of Distributed Energy Resources. IET Renew.
_[Power Gener. 2007, 1, 10–16. [CrossRef]](http://dx.doi.org/10.1049/iet-rpg:20060023)_
24. Dauensteiner, A. European Virtual Fuel Cell Power Plant; Management Summary Report; Germany, NNE5-2000-208, 2007.
25. Pudjianto, D.; Ramsay, C.; Strbac, G.; Durstewitz, M. The virtual power plant: Enabling integration of distributed generation and
demand. Fenix Bull. 2008, 2, 10–16.
26. Braun, M. Virtual power plants in real applications-pilot demonstrations in Spain and England as part of the European project
FENIX. In Proceedings of the Internationaler ETG Kongress, Düsseldorf, Germany, 27–28 October 2009.
27. Shafieisarvestani, A. Homogeneous Load Aggregation: Implementation and Control For Smart Grid Functions. Master’s Thesis,
UNB, Fredericton, NB, Canada, 2022.
28. Kharrich, M.; Kamel, S.; Hassan, M.H.; ElSayed, S.K.; Taha, I.B. An improved heap-based optimizer for optimal design of a hybrid
[microgrid considering reliability and availability constraints. Sustainability 2021, 13, 10419. [CrossRef]](http://dx.doi.org/10.3390/su131810419)
29. Chaudhari, S.; Killekar, S.; Mahadik, A.; Meerakrishna, N.; Divya, M. A review of unit commitment problem using dynamic programming. In Proceedings of the IEEE International Conference on Nascent Technologies in Engineering (ICNTE),
Navi Mumbai, India, 4–5 January 2019.
30. Danish, S.M.S.; Ahmadi, M.; Danish, M.S.S.; Mandal, P.; Yona, A.; Senjyu, T. A coherent strategy for peak load shaving using
[energy storage systems. J. Energy Storage 2020, 32, 101823. [CrossRef]](http://dx.doi.org/10.1016/j.est.2020.101823)
31. AEMO, Melbourne, Australia. National Electricity Market: Average Daily Prices. Available online: www.aemo.com.au/
Electricity/National-Electricity-Market-NEM/Data-dashboard (accessed on 8 August 2022).
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/electronics11213610?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/electronics11213610, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2079-9292/11/21/3610/pdf?version=1667880962"
}
| 2,022
|
[] | true
| 2022-11-04T00:00:00
|
[
{
"paperId": "17e51cd50e8ea43ff38fb2d7aa507c3492eaaa2d",
"title": "An Improved Heap-Based Optimizer for Optimal Design of a Hybrid Microgrid Considering Reliability and Availability Constraints"
},
{
"paperId": "c9c2940d20146ab0a13128fc6ab65f3a77731beb",
"title": "A New Peak-Shaving Model Based on Mixed Integer Linear Programming with Variable Peak-Shaving Order"
},
{
"paperId": "6b759d3cba17f305b22643aea5c2f5c4573ae88b",
"title": "Short-Term Peak-Shaving Operation of Head-Sensitive Cascaded Hydropower Plants Based on Spillage Adjustment"
},
{
"paperId": "b996fd05a326b0ccb8101f2f24e94c128543cdf8",
"title": "A coherent strategy for peak load shaving using energy storage systems"
},
{
"paperId": "d4c322451dbdaa06df418bc7aada5c6036a754d5",
"title": "Substation demand reduction by\n CVR\n enabled intelligent\n PV\n inverter control functions in distribution systems"
},
{
"paperId": "c084bb4dee978bb8f59e4faa6f42f344dabca709",
"title": "Optimal sizing design and operation of electrical and thermal energy storage systems in smart buildings"
},
{
"paperId": "32d84ca40bdee637eebbabd57613e9bfe7ca064d",
"title": "V2B/V2G on Energy Cost and Battery Degradation under Different Driving Scenarios, Peak Shaving, and Frequency Regulations"
},
{
"paperId": "2e7acf8f55539c82a212ac5ce52d0d46fa5a87e4",
"title": "Optimization-Based Control Concept with Feed-in and Demand Peak Shaving for a PV Battery Heat Pump Heat Storage System"
},
{
"paperId": "a7b877e7889ae11ddb7fb0c23dd575440fb7974e",
"title": "Modelling of the Biomass mCHP Unit for Power Peak Shaving in the Local Electrical Grid"
},
{
"paperId": "23f15dd37294bf1ff2e241f2bb22b735f4adad9f",
"title": "Optimal Energy Management of V2B with RES and ESS for Peak Load Minimization"
},
{
"paperId": "7a421d583f8bda7187db2d4a8f572f89bb8d8a74",
"title": "Optimal Component Sizing for Peak Shaving in Battery Energy Storage System for Industrial Applications"
},
{
"paperId": "8bcb6996765089c27cd19ac8fd2bfe2208730465",
"title": "Peak-Load Reduction by Coordinated Response of Photovoltaics, Battery Storage, and Electric Vehicles"
},
{
"paperId": "abd78784bba9fa3f20fe9cffa982abc1867ced27",
"title": "A review on peak load shaving strategies"
},
{
"paperId": "64aa71ce028bc1d35fc1d0a4f5d7eef50e37f037",
"title": "Optimal Sizing and Control of Battery Energy Storage System for Peak Load Shaving"
},
{
"paperId": "e49813726f678ba4a18e4b327f7f2d8559dd5d22",
"title": "An actively controlled residential heat pump: Potential on peak shaving and maximization of self-consumption of renewable energy"
},
{
"paperId": "ba42f152651e22ac5ffa2a629e7ebca1a7680008",
"title": "Grid Power Peak Shaving and Valley Filling Using Vehicle-to-Grid Systems"
},
{
"paperId": "13957fadfbf4b345dae696b4fa486858ce75042b",
"title": "Virtual power plant and system integration of distributed energy resources"
},
{
"paperId": "ee0c4119e924862350bcadf0bd720e58a0564bea",
"title": "Demand-side management in smart grid operation considering electric vehicles load shifting and vehicle-to-grid support"
}
] | 16,302
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Medicine",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0328c501ba75a2a596c18fdcec93b8f7d7e956d4
|
[
"Computer Science"
] | 0.874959
|
Security model to protect patient data in mHealth systems through a Blockchain network
|
0328c501ba75a2a596c18fdcec93b8f7d7e956d4
|
Proceedings of the LACCEI international multi-conference for engineering, education and technology
|
[
{
"authorId": "2208097690",
"name": "Angel Elí Gutiérrez Díaz"
},
{
"authorId": "1387489017",
"name": "Jimmy Armas"
},
{
"authorId": "2106148142",
"name": "J. Molina"
},
{
"authorId": "2206505832",
"name": "Cristhian Alexis Natividad Peña"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Proc LACCEI int multi-conference eng educ technol"
],
"alternate_urls": null,
"id": "3cf3e697-2894-4c70-ae49-e5e8d3ca7a83",
"issn": "2414-6390",
"name": "Proceedings of the LACCEI international multi-conference for engineering, education and technology",
"type": null,
"url": "http://www.laccei.org/index.php/publications/laccei-proceedings"
}
|
– In this research paper, we propose a security model to protect patient data in mobile health (mHealth) systems through a Blockchain network. This model is implemented on a Blockchain platform that allows collecting, sharing and integrating data in a safe way through a mobile app for mHealth devices, for medical care in Peruvian clinics. The security model consists of three stages: 1. Data collection, 2. Data processing, 3. System monitoring. It should be noted that the patient is autonomous in the management of his/her information, and that each user requires a single identifier to get access to the data. A test scenario was defined to validate the proposed model. The study was conducted with a group of users through a mobile health app, and the medical data used was provided by a hospital in Peru as anonymized research data. During the study, we validated the following topics: access control to the network, access to medical information by authorized users, data integrity of each transaction, and performance evaluation of the system under a high user transaction load. Preliminary results show the system's average response time is 4.72 seconds for 10,000 users carrying out requests simultaneously.
|
# Security model to protect patient data in mHealth systems through a Blockchain network
### Cristhian Alexis Natividad Peña[1], Angel Elí Gutiérrez Díaz[2],
Jimmy Alexander Armas Aguirre[3 ]and Juan Manuel Madrid Molina[4]
1Universidad Peruana de Ciencias Aplicadas, Perú, u201413799@upc.edu.pe
2 Universidad Peruana de Ciencias Aplicadas, Perú, u201214826@upc.edu.pe
[3 Universidad Peruana de Ciencias Aplicadas, Perú, jimmy.armas@upc.pe](mailto:jimmy.armas@upc.pe)
[4 Universidad Icesi, Colombia, jmadrid@icesi.edu.co](mailto:jmadrid@icesi.edu.co)
**_Abstract– In this research paper, we propose a security model_**
**_to protect patient data on mobile health systems (mHealth) through_**
**_a Blockchain network. This model is implemented under a_**
**_Blockchain platform that allows collecting, sharing and integrating_**
**_data in a safe way through a mobile app for mHealth devices, for_**
**_medical care in Peruvian clinics. This security model consists of_**
**_three stages: 1. Data collection, 2. Data processing, 3. System_**
**_monitoring. It should be noted that the patient is autonomous in the_**
**_management of his information, and that each user requires a single_**
**_identifier to get access to the data. A test scenario was defined to_**
**_validate the proposed model. Also, the study was conducted with a_**
**_group of users through a health mobile app, and the medical_**
**_data used was provided by a hospital in Peru as anonymized research data._**
**_During the study, we validated the following topics: access control to_**
**_the network, access to medical information of authorized users, data_**
**_integrity on each transaction and performance evaluation of the_**
**_system under a high user transaction load. Preliminary results show_**
**_the system average response time is 4.72 seconds for 10,000 users_**
**_carrying out requests simultaneously._**
**_Keywords—mHealth, Blockchain, wearable devices, security_**
I. INTRODUCTION
Mobile health (mHealth), according to the World Health Organization (WHO), is the
remote medical care service supported by mobile devices, such
as smartphones, personal digital assistants (PDA) among others
[1]. These devices allow doctors to remotely track a patient’s
clinical condition in real time, in order to take timely actions.
mHealth offers services that help patients to get access to their
data from any place through an internet connection. Likewise,
this service reduces the number of visits to hospitals and the
cost of medical attention [2].
Data security generates an impact on the patient’s care. The
inability of accessing the data could lead to delays in treatment
and decision making. In 60 analyzed mHealth apps, 137
security vulnerabilities were found, with remote monitoring
apps presenting most of the vulnerabilities (32.12% of the total)
[3]. Risk factors established by OWASP were considered [11];
64% of the vulnerabilities found corresponded to "security
decisions via untrusted inputs", meaning an attacker could
elevate access and privileges, affecting the confidentiality and
integrity of clinical data.
Digital Object Identifier (DOI):
http://dx.doi.org/10.18687/LACCEI2019.1.1.285
ISBN: 978-0-9993443-6-1 ISSN: 2414-6390
Many different solutions have been developed in order to
provide data security in mHealth systems. However,
solutions [5] and [8] only use one authentication factor for
access to their systems. Furthermore, solution [5] only provides
a government-level approach, and it does not take into
consideration the behavior of an mHealth system in a private
entity. Likewise, solutions [4], [6] and [7] are not scalable for a
high level of transactions.
This paper is structured in the following way. We start with
a literature review, then we will describe the proposed model
and its implementation based on a real scenario. Finally, we
present the conclusions, based on the obtained results in the
case study.
II. LITERATURE REVIEW
Clinical information of patients is a critical asset that needs
to be protected by secure systems in order to avoid access by
unauthorized third parties. Previous studies have developed
different solutions to the problem of security in patients’ data in
an mHealth system. Now, we discuss the main security
attributes that must be incorporated in an mHealth system.
_A._ _Security Attributes in an mHealth system_
Table I shows the main attributes found in the literature
review. The order of listing does not represent the importance
of each one.
TABLE I
SECURITY ATTRIBUTES IN AN MHEALTH SYSTEM
|Attribute|Description|Reference|
|---|---|---|
|Confidentiality|To keep clinical data private and inaccessible to unauthorized people.|[5], [6]|
|Integrity|The system must verify that stored data hasn’t been changed by third parties, and also that data has been sent by someone trustworthy.|[5], [6]|
|Availability|Clinical data must be easily accessible to authorized people, whenever they require it.|[6]|
|Authentication|mHealth infrastructure must have robust authentication mechanisms to ensure the identity of users. In addition, it is recommended to have two or more authentication factors.|[5], [6]|
|Access Control|Doctors, nurses and patients access the information previously shared by the data owner.|[5], [7], [8]|
|Data Transfer|Data must be protected during transport, to avoid interception by third parties.|[6], [13]|
|Auditability|User activity on the system must be traceable.|[5]|
_B._ _Blockchain platforms_
An evaluation of the different Blockchain platforms was
carried out in order to identify usability and capacity of each
one to secure medical information. Table II shows the main
Blockchain platforms. The Ethereum platform allows execution
of smart contracts between participants, but it is permissionless:
anyone can access the network and perform transactions, which
are visible to all participants of the network. In contrast,
Hyperledger Fabric requires permission to access network
content, and its transactions are visible only to a determined
group through the use of encryption algorithms. In addition,
Hyperledger Fabric allows reuse of components to facilitate
testing. These were the main reasons to choose this platform
to support our proposed model.
TABLE II
BLOCKCHAIN PLATFORMS
|Platform|Description|References|
|---|---|---|
|Multichain|It helps organizations to quickly develop and implement business solutions based on blockchain. The mining process is done via proof-of-work. The developer can decide either to create a private Blockchain network, controlling who can connect to the network and who can make transactions, or to create a public blockchain. Customized cryptocurrencies can also be created [16].|[20]|
|Hyperledger Fabric|Hyperledger Fabric is designed to develop apps or solutions with a modular architecture. It uses container technology to host chaincode, also known as smart contracts, i.e., the logic of the system. Hyperledger Fabric was launched by Digital Asset and IBM as part of a hackathon [14].|[8], [17], [18]|
|Ethereum|Ethereum is a decentralized platform that executes smart contracts. It is a project developed by the Ethereum Foundation, a Swiss organization with a team of developers around the world. This platform allows design and creation of cryptocurrencies. Code in the Ethereum network may be executed for a fee [14]. This platform is adaptable for public networks.|[5], [19]|
_C._ _Data security solutions in an mHealth system_
Models for data assurance applied to an mHealth
infrastructure have been proposed [8], [17]. Fig. 1 shows a
model for safe, user-centered exchange of personal data,
intended to improve interaction and collaboration in an mHealth
system [8]. This model proposes a mobile health app based on
Blockchain, using a channel scheme and a membership service
for identity management. Data is retrieved from a permanent
database in the cloud, synchronized to the Blockchain network,
in order to protect the integrity of each patient's information.
Moreover, it uses a tree-based data processing method, with
the purpose of managing huge quantities of data.
**Fig. 1 Model Personal Mobile Health Data Sharing**
Another solution, shown in Fig. 2, proposes the structure of an
mHealth system for management of cognitive behavioral
therapy in patients with insomnia, made tamper-resistant
through the use of Blockchain technology, which allows
auditability and reliable computing through a decentralized
network [17]. Electronic medical records registered in the
**17[th]** **LACCEI International Multi-Conference for Engineering, Education, and Technology: “Industry, Innovation, And**
Blockchain network through this solution were resistant to
manipulation attacks.
**Fig. 2 The structure of the mobile health system for**
cognitive behavioural therapy for insomnia
In the same solution, shown in Fig. 3, the authors propose an
architecture based on virtualization in Linux, using Docker and
Hyperledger Fabric. The system consists of four validation peers
(VP) and a membership service (MS). Each peer handles the
functionality of the whole Blockchain network, the membership
service handles authentication to the system, and each peer
holds a database replica [17].
**Fig. 3 Virtual computing environment**
III. SECURITY PROPOSAL TO PROTECT PATIENT’S
DATA IN MHEALTH SYSTEMS THROUGH A
BLOCKCHAIN NETWORK
_A._ _Model Description_
In Fig. 4, we propose a model that maintains security when
collecting and sharing patients' data through mHealth devices.
This proposal allows patients to manage access for visualization
and treatment of their data by the medical personnel of the
health entity. The proposed model comprises three stages: first,
patient data collection through mHealth devices; second, data
processing in the Blockchain network in the cloud to guarantee
privacy and security; and third, system monitoring and
performance evaluation.
**Fig. 4 Proposed security model based on Blockchain**
_B._ _Stages of the model_
_1) Data collection:_ Patients' data is collected through mHealth
devices such as wearables, or via data entry in a mobile
phone, PDA, etc. The patient shares his/her information by
giving access to the medical personnel and relatives.
_2) Data processing:_ Information flows through secure
connections to the Blockchain network in the cloud, where it
is replicated among all participating nodes once the consensus
algorithm has been executed.
_3) System monitoring:_ In this stage, system performance and
the generation of new blocks in the network are evaluated. This
allows evaluating system scalability.
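As an illustrative sketch only (not the paper's implementation), the three stages can be modeled as a minimal pipeline; the function names `collect`, `process` and `monitor` are hypothetical:

```python
import hashlib
import json

def collect(patient_id, readings):
    """Stage 1: package raw readings from an mHealth device."""
    return {"patient": patient_id, "readings": readings}

def process(record, ledger):
    """Stage 2: append the record to every node's copy of the ledger.

    A real deployment would first run a consensus algorithm across
    the participating nodes; here each node simply stores an
    identical entry carrying the record's SHA-256 digest.
    """
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    entry = {"record": record, "hash": digest}
    for node in ledger:
        node.append(entry)
    return digest

def monitor(ledger):
    """Stage 3: report the entry count per node to check replication."""
    return [len(node) for node in ledger]

ledger = [[], [], [], []]  # four peers, as in the cited architecture [17]
record = collect("patient-001", {"heart_rate": 72})
process(record, ledger)
assert monitor(ledger) == [1, 1, 1, 1]
```

The sketch only shows the data flow between the stages; consensus, networking and persistence are out of scope here.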
The proposed model takes into consideration the main data
security requirements on mHealth devices, such as availability,
which means accessibility to information by an authorized user
at any place and time; integrity, in order to guarantee that all
stored data hasn’t been modified by unauthorized third parties
and to verify that the information has been sent by a reliable
user; authenticity, to verify the identity of the participants in the
network; and confidentiality, so that each participant can only
have access to information according to his/her role in the
organization and authorized access level [2]. In Peru, security
levels established by HIPPA for mHealth systems have not been
established nor regulated. This proposed model is supported on
Peruvian regulations such as Law N° 30024, the law of
protection of personal data; and the Peruvian Technical
Regulations (Normativa Técnica Peruana) 17799, wich are
mandatory in Peru.
_C._ _Proposed Architecture_
In Fig. 5, we propose an architecture to support the data
security model and the use of Blockchain technology in an
mHealth system. The architecture shows the collection and
processing of patients' data. It considers three user profiles:
patient, doctor and relative. Each participant needs an
associated personal card that allows access to the network and
the ability to make transactions. These cards combine an
identity, a connection profile and metadata, all of which are
required to connect to the Blockchain network.
**Fig. 5 Health technology architecture in Blockchain**
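The personal card described above can be sketched as a plain data structure; the field and class names here are illustrative assumptions, not Hyperledger Composer's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkCard:
    """Combination of identity, connection profile and metadata,
    all required to connect to the Blockchain network."""
    identity: str             # e.g. an enrollment ID plus certificate
    connection_profile: dict  # endpoints of peers and orderers
    metadata: dict            # e.g. the user's role: patient, doctor, relative

card = NetworkCard(
    identity="patient-001",
    connection_profile={"peer": "grpc://peer0.example.com:7051"},
    metadata={"role": "patient"},
)
assert card.metadata["role"] in {"patient", "doctor", "relative"}
```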
Fig. 5 also shows the use of the Hyperledger Composer
platform. This set of tools and development framework allows
the creation of apps based on Blockchain technology. This
platform is compatible with the Hyperledger Fabric
infrastructure, which supports consensus protocols to ensure
that transactions are validated in relation to network policy.
Also, this architecture shows the integration and compatibility
of mHealth mobile devices with Blockchain technology with
the purpose of securing medical data.
IV. CASE STUDY
This case study shows the validations performed with the
purpose of verifying security of medical information using
Blockchain technology. Using the developed mobile app, we
validated access control to the Blockchain network from an
mHealth device, and access to a patient's
medical information by authorized users. The integrity of each
transaction made in the network was also validated and, finally,
the JMeter tool was used to measure system performance by
simulating the interaction of several simultaneous users with
the purpose of evaluating the scalability of the proposed system.
_A._ _Validation environment_
The validation made for the proposed model was
developed under a controlled environment. The information
used for this validation was provided by a specialist in
cardiology from a Peruvian hospital. It should be noted that this
information did not contain identifiable personal data (names,
surnames, phone numbers, emails, etc.) and it was used only for
the investigation. We worked with a sample of 75 records. The
study was conducted between September and November of
2018.
_B. Implementation_
For this validation, a Blockchain network using
Hyperledger Composer on Linux was implemented, and a
mobile app was developed to simulate part of the system of a
health entity, including records and queries of medical data. The
tests were focused mainly on verification of compliance of the
security attributes.
_1)_ _Authentication (Access Control):_ Two authentication
factors were established in order to access the network:
something the user knows and something the user has. For
this, each new user (patient, doctor and relative) was
enrolled by the network administrator. During this process,
credentials were created (user and password) and a single
identifier (a digital certificate) was automatically generated
for each record. This identifier was shared with each user
for interaction with the system; upon access, the app
requests the identifier stored in the mobile device (mobile
phone, tablet or laptop).
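A minimal sketch of this two-factor check, combining something the user knows (a password) with something the user has (the device-stored identifier); the `users` table, password and certificate values are hypothetical:

```python
import hashlib

# Hypothetical enrollment record created by the network administrator.
users = {
    "patient-001": {
        "password_hash": hashlib.sha256(b"s3cret").hexdigest(),
        "identifier": "cert-abc123",  # stands in for the digital certificate
    }
}

def authenticate(user, password, device_identifier):
    """Access is granted only when both factors match."""
    record = users.get(user)
    if record is None:
        return False
    knows = hashlib.sha256(password.encode()).hexdigest() == record["password_hash"]
    has = device_identifier == record["identifier"]
    return knows and has

assert authenticate("patient-001", "s3cret", "cert-abc123")
assert not authenticate("patient-001", "s3cret", "wrong-cert")
assert not authenticate("patient-001", "wrong", "cert-abc123")
```

In the real system the second factor is a certificate checked cryptographically by the membership service, not a string comparison as in this sketch.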
_2)_ _Confidentiality:_ In this case, data provided by the
cardiologist was registered manually. Three inputs were
received: heart rate, blood pressure and blood glucose
levels. Patients had the authority of managing access
permissions to their information, both for doctors and
relatives, by granting and denying permissions to the data
according to their needs.
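Patient-managed granting and denial of access permissions can be sketched as follows; `PatientRecord` and its methods are illustrative assumptions, not the system's actual chaincode:

```python
class PatientRecord:
    """Minimal sketch of patient-controlled access permissions."""

    def __init__(self, owner):
        self.owner = owner
        self.data = {}            # e.g. heart rate, blood pressure, glucose
        self.authorized = {owner}  # the patient always has access

    def grant(self, user):
        self.authorized.add(user)

    def deny(self, user):
        self.authorized.discard(user)

    def read(self, user):
        if user not in self.authorized:
            raise PermissionError(f"{user} is not authorized")
        return self.data

record = PatientRecord("patient-001")
record.data["heart_rate"] = 72
record.grant("doctor-007")
assert record.read("doctor-007") == {"heart_rate": 72}
record.deny("doctor-007")
try:
    record.read("doctor-007")
    raise AssertionError("expected PermissionError")
except PermissionError:
    pass
```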
_3)_ _Data Integrity: For this validation, we used Hyperledger_
Explorer, a web interface that allows visualizing
transactions, blocks, nodes and interactions developed
within a Blockchain network. As a user was registered,
medical records were entered, and permissions were
granted or denied, this interface showed a new transaction
posted to the network. We integrated Hyperledger Explorer
with the app, to verify that each transaction posted in the
network contained a single hash provided automatically by
the Blockchain technology.
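The per-transaction SHA-256 hash described above can be illustrated with Python's standard library; `tx_hash` is a hypothetical helper, not Hyperledger's internal hashing code:

```python
import hashlib
import json

def tx_hash(transaction):
    """Deterministic SHA-256 digest of a transaction, of the kind
    attached to every transaction posted to the network."""
    payload = json.dumps(transaction, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

tx = {"patient": "patient-001", "heart_rate": 72}
h = tx_hash(tx)
assert len(h) == 64            # SHA-256 hex digest length
assert tx_hash(tx) == h        # same content, same hash
tampered = dict(tx, heart_rate=120)
assert tx_hash(tampered) != h  # any change is detectable
```

Because the digest changes with any modification of the payload, comparing stored hashes against recomputed ones is what makes tampering detectable.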
_C. Results_
According to the obtained data, the importance of proper
information management is demonstrated through the use of
technologies that contribute to data security for the benefit of
patients and hospitals. The two implemented authentication
factors show that 100% of the users registered in the network
had to enter their credentials correctly and had to hold the
single identifier on the device to access the network. In
addition, thanks to the Access Control Language (ACL) rules
implemented by the Blockchain technology, robust permission
management was guaranteed.
The integration of Hyperledger Explorer tool with the app
demonstrated the integrity of the hosted data in the network
because each posted transaction contained a cryptographic hash
generated by the SHA256 algorithm used by the Blockchain
technology.
Another important factor was the evaluation of system
performance, related to scalability and efficiency in data
processing. Fig. 6 shows the results of validation under a high
load of user requests to the system. We simulated a range
from 10 to 10,000 requests. The system showed an average
response time of 4.27 seconds with 10,000 simultaneous
requests.
Equation (1) shows the calculation of the average response
time of the system, where **_t_** is the response time for
each request, and _n_ is the total number of requests.

$$\bar{X} = \frac{\sum t}{n} \quad (1)$$
**Fig. 6 Average response time of the Blockchain network**
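The average response time in Eq. (1) can be computed directly; this sketch with made-up sample values is illustrative only:

```python
def average_response_time(times):
    """Average of individual response times: (sum of t) / n."""
    if not times:
        raise ValueError("no requests recorded")
    return sum(times) / len(times)

# Hypothetical per-request response times in seconds.
assert average_response_time([3.0, 5.0]) == 4.0
```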
Finally, and in relation to the previous point, the performance
of the main system functions was evaluated: user
authentication, data registration, and permission grant/denial.
Fig. 7 shows the average response time for each system
function; we observe that permission grant/denial performs
well in terms of response time. The average response time for
permission grant/denial with 10,000 simultaneous users is 4.13
seconds (grant) and 2.35 seconds (denial). In this way, we show
that users can efficiently manage access to their data.
**Fig. 7 Average response time of the Blockchain network**
based on system functions
V. CONCLUSIONS
In this paper, we proposed a security model using
Blockchain technology, in order to secure data in a hospital.
The model was tested in a controlled environment, using
research data provided by a cardiology specialist from a
Peruvian hospital. We conclude that the implementation of the
proposed model guaranteed authentication, confidentiality,
integrity and availability of the data, generating enhanced
security in the hospital’s systems.
Finally, the system proved to be scalable, supporting a
high load of user requests. This allows transactions in the
system, such as granting and denying permissions to the rest of
the participants, to be performed efficiently.
REFERENCES
[1] World Health Organization. mHealth: New horizons for health through
mobile technologies-Volume 3. WHO Library Cataloguing-in-Publication
Data, Switzerland, 2011.
[2] Zubaydi, F., Saleh, A., Aloul F., Sagahyroon A.: Security of Mobile Health
(mHealth) Systems, pp. 1-5. UAE, 2015.
[3] Beltran, L. Cifuentes Y., Ramirez L.: Analysis of Security Vulnerabilities
for Mobile Health Applications. International Scholarly and Scientific
Research & Innovation, vol. 9, pp. 1067-1072, Bogotá, 2015.
[4] Yang, Y., Liu, X., Deng, R., Li, Y.: Lightweight Sharable and Traceable
Secure Mobile Health System. IEEE Transactions on Dependable and
Secure Computing, pp. 1-14, 2017.
[5] Azaria, A., Ekblaw A., Vieira T., Lippman, A.: Medrec: Using Blockchain
for Medical Data Access and Permission Management. In: 2016 2[nd]
International Conference on Open and Big Data, pp. 22-30, 2016.
[6] Alibasa, M., Santos, M., Glozier, N., Harvey, S., Calvo, R.: Designing a
Secure Architecture for m-Health Applications. In: IEEE Life Sciences
Conference (LSC), pp. 91-94, 2017.
[7] Zhang, H., Wang, Z., Scherb, C.: Sharing mHealth Data via Named Data
Networking. In: Proceedings of the 3[rd] ACM Conference on Information-Centric Networking, pp. 142-147, 2016.
[8] Liang, X., Zhao, J., Shetty, S., Liu, J., Li, D.: Integrating Blockchain for
Data Sharing and Collaboration in Mobile Healthcare Applications. IEEE
28[th] Annual International Symposium on Personal, Indoor and Mobile
Radio Communications (PIMRC), 2017.
[9] Wiss, M., Botha, A., Herselman, M., Loots, G.: Blockchain as an Enabler
for Public mHealth Solutions in South Africa. IST-Africa, pp.1-8, 2017.
[[10]Techracers, https://www.techracers.com/healthcare-mhealth-blockchain,](https://www.techracers.com/healthcare-mhealth-blockchain)
last accessed 2018/09/23.
[[11]OWASP Foundation, https://www.owasp.org/index.php/Main_Page, last](https://www.owasp.org/index.php/Main_Page)
accessed 2018/09/16.
[12]Vhaduri, S., Poellabauer, C.: Wearable Device User Authentication Using
Physiological and Behavioral Metrics. In: 2017 IEEE 28th Annual
International Symposium on Personal, Indoor, and Mobile Radio
Communications (PIMRC). IEEE, Canada, 2017.
[13]Atat, R., Liu, L., Ashdown, J., Medley, M.: A Physical Layer Security
Scheme for Mobile Health Cyber-Physical Systems. In: IEEE
GLOBECOM 2016, pp. 1-15. IEEE, Washington, 2016.
[[14]Hyperledger Fabric, https://www.hyperledger.org/projects/fabric, last](https://www.hyperledger.org/projects/fabric)
accessed 2018/10/12.
[[15]Ethereum, https://www.ethereum.org, last accessed 2018/10/12.](https://www.ethereum.org/)
[[16]Multichain, https://www.multichain.com, last accessed 2018/10/15.](https://www.multichain.com/)
[17]Ichikawa, D., Kashiyama, M., Ueno, T.: Tamper-Resistant Mobile Health
Using Blockchain Technology. pp. 1-10. JMIR Mhealth and Uhealth,
Japan, 2017.
[18]Dubovitskaya, A., Xu, Z., Ryu S., Schumacher, M., Wang, F.: Secure and
Trustable Electronic Medical Records Sharing using Blockchain. pp. 650-659. AMIA Annual Symposium Proceedings Archive, 2017.
[19]Mannaro, K., Baralla, G., Pinna, A., Ibba, S.: A Blockchain Approach
Applied to a Teledermatology Platform in the Sardinian Region (Italy). In:
e-Health Pervasive Wireless Applications and Services (e-HPWAS’17),
pp. 1-15, 2018.
[20]Dai, H., Young, P., Durant, T., Gong, G., Kang, M., Krumholz, H, Schulz,
W., Jiang, L.: TrialChain: A Blockchain-Based Platform to Validate Data
Integrity in Large, Biomedical Research Studies, pp. 1-7, ArXiv, 2018.
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.18687/laccei2019.1.1.285?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.18687/laccei2019.1.1.285, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "BRONZE",
"url": "http://www.laccei.org/LACCEI2019-MontegoBay/full_papers/FP285.pdf"
}
| 2,019
|
[
"Conference"
] | true
| 2019-08-03T00:00:00
|
[] | 5,590
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0329334cad862b79881ba458b81e206454af946a
|
[
"Computer Science"
] | 0.869832
|
Secure and Efficient Multi-Signature Schemes for Fabric: An Enterprise Blockchain Platform
|
0329334cad862b79881ba458b81e206454af946a
|
IEEE Transactions on Information Forensics and Security
|
[
{
"authorId": "2122427562",
"name": "Yue-Lei Xiao"
},
{
"authorId": "51015896",
"name": "Peng Zhang"
},
{
"authorId": "46398863",
"name": "Yuhong Liu"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"IEEE Trans Inf Forensics Secur"
],
"alternate_urls": [
"http://ieeexplore.ieee.org/servlet/opac?punumber=10206",
"http://www.signalprocessingsociety.org/publications/periodicals/forensics/"
],
"id": "d406a3f4-dc05-43be-b1f6-812f29de9c0e",
"issn": "1556-6013",
"name": "IEEE Transactions on Information Forensics and Security",
"type": "journal",
"url": "http://www.ieee.org/organizations/society/sp/tifs.html"
}
|
Digital signature is a major component of transactions on Blockchain platforms, especially in enterprise Blockchain platforms, where multiple signatures from a set of peers need to be produced to endorse a transaction. However, such process is often complex and time-consuming. Multi-signature, which can improve transaction efficiency by having a set of signers cooperate to produce a joint signature, has attracted extensive attentions. In this work, we propose two multi-signature schemes, GMS and AGMS, which are proved to be more secure and efficient than state-of-the-art multi-signature schemes. Besides, we implement the proposed schemes in a real Enterprise Blockchain platform, Fabric. Experiment results show that the proposed AGMS scheme helps achieve the goal of high transaction efficiency, low storage complexity, as well as high robustness against rogue-key attacks and $k$ -sum problem attacks.
|
## Secure and Efficient Multi-Signature Schemes for Fabric: An Enterprise Blockchain Platform
#### Yue Xiao, Peng Zhang, Yuhong Liu
**_Abstract—Digital signature is a major component of transactions on Blockchain platforms, especially in enterprise Blockchain platforms, where multiple signatures from a set of peers need to be produced to endorse a transaction. However, such a process is often complex and time-consuming. Multi-signature, which can improve transaction efficiency by having a set of signers cooperate to produce a joint signature, has attracted extensive attention. In this work, we propose two multi-signature schemes, GMS and AGMS, which are proved to be more secure and efficient than state-of-the-art multi-signature schemes. Besides, we implement the proposed schemes in a real enterprise Blockchain platform, Fabric. Experiment results show that the proposed AGMS scheme helps achieve the goal of high transaction efficiency, low storage complexity, as well as high robustness against rogue-key attacks and k-sum problem attacks._**

**_Index Terms—Multi-signature, Blockchain, Fabric, Schnorr signature, Gamma signature._**
I. INTRODUCTION
As an emerging distributed ledger technology, Blockchain
[1] has shown great potential to transform business and
finance fields. Recently, several banks, such as J.P. Morgan
and Banco Santander S.A., have started to launch Blockchain
based platforms in capital markets, which are characterized
by “huge sums of money, multiple stakeholders and lots of
coordination” [2]. As transactions in capital markets often
require approvals from multiple parties, where each party has
to identify whether information matches transaction history
and follows the rules created by the participants, the approval
process is often complex and time consuming. It is believed
that Blockchain can effectively help cut costs and smooth
transactions among multiple parties [2].
It is worth mentioning that Fabric [3], an open-source
permissioned Blockchain platform for enterprise use cases, has
enabled endorsement functions to allow a set of endorsers to
approve the execution of a transaction. Cryptographic digital
signatures have been adopted to guarantee the validity of
endorsements from all endorsers before a transaction can be
added to the Blockchain ledger.
However, the endorsement process based on cryptographic
digital signatures is often resource consuming, inefficient,
and lacks scalability. In particular, to avoid inconsistency
in transaction states, a signature needs to be collected from
Peng Zhang is the corresponding author.
This work was in part supported by the National Natural Science Foundation
of China (61702342, 61872243).
Y. Xiao and P. Zhang are with the College of Electronics and Information Engineering, Shenzhen University, Shenzhen 518060, China (e-mail:
xiaoyue2017@email.szu.edu.cn; zhangp@szu.edu.cn ).
Y. Liu is with the Department of Computer Engineering, Santa Clara
University, Santa Clara, CA 95053, USA.
each endorser according to the endorsement policy. The
verification of these signatures consumes large amounts of
computational resources. After verification, these signatures,
which can occupy a significant amount of storage space in a
transaction, will be stored in a block and broadcast over the
entire Blockchain network. Due to the large computation and
communication overhead, the overall throughput of Fabric is
about 100 to 2000 tps, which is very low and easily leads to
network transmission delay.
A promising approach to improve the throughput is multi-signature [4], which allows a group of users to sign on a single
message, and produces a joint signature that stands for all
signers’ agreement on the message. Generally, a joint signature
has the same length as a single signature, and only needs to be
verified once with the public keys of signers that participate.
Therefore, compared to digital signature, multi-signature has
many advantages such as lower bandwidth, less storage space,
and faster verification. Multi-signature has been applied in
many fields, including distributed certificate authorities [5],
directory authorities [6], and timestamping services [7].
There are three major categories of multi-signature schemes:
RSA-based, BLS-based, and Schnorr-based multi-signature
schemes. Compared to the other two types of schemes, the
Schnorr-based multi-signature schemes can well balance the
trade-off between computational complexity and required storage space, and therefore have attracted extensive research attention
recently. For example, based on Schnorr signature [8], BN
multi-signature scheme [9] is designed by adding one more
round in signing algorithm. BCJ multi-signature scheme [10]
is presented to eliminate the adding round by homomorphic
trapdoor commitments. Gregory et al. design the Musig multi-signature scheme [11] to improve the BN scheme. One of the most
popular multi-signature schemes is CoSi [7], which introduces
a spanning tree structure to make it easily scale to thousands
of signers. However, CoSi can be easily forged by rogue-key attacks and k-sum problem attacks [12]. Also, the leader
with excessive power in CoSi may replace the message m to
produce another challenge c′.
In this work, we aim to fill the research gap by proposing secure and efficient multi-signature schemes, which can decrease
the storage of each transaction, improve the transmission rate
of block, and shorten the verification and update time of each
node. Our major contributions are described as follows.
_• Based on Gamma signature [13], we propose a secure_
multi-signature scheme named GMS (Gamma Multi-Signature) using proof of possession, which is robust
against rogue-key attacks and k-sum problem attacks.
It also addresses the problem of excessive power of
the leader in CoSi. In addition, the proposed GMS has
achieved strong provable security.
_• To further improve the online performance of GMS, we_
propose the Advanced Gamma Multi-Signature (AGMS),
a more efficient multi-signature scheme. In particular, we
propose to change the running order of phases in the
signing algorithm to reduce calculation steps after message arrivals. In addition, by enabling the key aggregation
algorithm to run together with the signing algorithm, the
distributed execution of the key aggregation algorithm is
allowed, which further reduces the overall execution time.
_• Based on the proposed AGMS scheme, we improve the_
transaction process in Fabric, for which we deploy the
multi-signature scheme and aggregate multiple signatures from
endorsers into a joint signature, so as to reduce the size of
the transaction and improve the efficiency of endorsement
and ledger update. The implementation results also show
that our designed transaction process can successfully
improve the efficiency and throughput of Fabric.
The rest of this paper is organized as follows. Related
works are summarized in Section II, followed by preliminaries
in Section III. In Section IV, we discuss the two proposed
multi-signature schemes GMS and AGMS. The corresponding
security analysis and performance analysis are presented in
Section V and VI respectively. Finally, the application to
Fabric is described in Section VII and Section VIII provides
the conclusion.
II. RELATED WORK
According to the difficulty assumptions and basic signature algorithms, multi-signature schemes can be divided into
RSA-based, BLS-based, Schnorr-based, etc. The details are
described as follows.
_A. Multi-signature schemes derived from RSA signatures_
As the implementation of RSA is particularly efficient,
there are some multi-signature schemes proposed under RSA
assumption. Harn et al. [14] propose a multi-signature scheme
based on RSA for the first time, for which the time to
generate and verify multiple signatures depends on the number of signers. Bellare and Neven [15] propose an identity-based multi-signature scheme which relies on the RSA assumption in the random oracle model. The scheme has fast
multi-signature generation and verification, but it takes three
rounds of interactions. Based on [15], Bagherzandi et al. [16]
propose an improved identity-based multi-signature scheme
and aggregation signature scheme under RSA assumptions.
The number of interactive rounds of the scheme is reduced
from three to two. Tsai et al. [17] propose an identity-based
sequential aggregation signature scheme which can be seen as
a generalization of multi-signature, where each signer signs a
different message, and signatures are aggregated in sequence.
Hohenberger et al. [18] construct a synchronized aggregation
signature from RSA, which can be used in Blockchain so that
the creation of a new block can be seen as a synchronization
event. Yu et al. [19] propose the use of multi-signature and
Blockchain to ensure security and privacy of the transmitted
data in the Internet of Things (IoT) scenario. Compared to the
schemes derived from Schnorr signature, the length of signatures
in these RSA-based schemes is significantly longer for a
similar level of security.
_B. Multi-signature schemes derived from BLS signatures_
BLS signature [20] is proposed based on bilinear pairing,
where the signature length is just 224 bits compared to the
2048-bit signature in RSA. Based on efficient bilinear pairings
and elegant BLS signatures, various multi-signature schemes
[21][22][23][24] are proposed. Particularly, Ambrosin et al.
[23] propose a novel optimistic aggregation signature scheme
called OAS to design secure collective attestation for Internet
of Things. Boneh et al. [24] also propose BLS multi-signatures with public-key aggregation in order to reduce the
size of the Bitcoin Blockchain. Compared to the schemes derived
from Schnorr signature, these bilinear pairing based schemes
can further reduce the key and signature sizes. However, as
the bilinear pairing operation is one of the most complex
operations in modern cryptography [25], they also introduce
high computational overhead.
_C. Multi-signature schemes derived from Schnorr signatures_
When one uses a 2048-bit modulus, the corresponding
signature lengths for RSA, BLS, and Schnorr based schemes
are 2048 bits, 224 bits, and 448 bits, respectively. Although
the advantage of BLS signature length is obvious, the high
computational cost cannot be ignored. Considering both computation and storage, Schnorr signature [8], one of the best-known signature algorithms, is a good choice. Many multi-signature schemes are proposed based on Schnorr signature.
Bellare and Neven [9] have designed BN scheme by adding
one more round in the signing algorithm, where all signers
involved need to exchange their own commitments. It is proved
secure in the plain public-key model. Then, Bagherzandi et
al. [10] propose BCJ scheme to eliminate the adding round
by using homomorphic trapdoor commitments. Gregory et al.
[11] design Musig scheme to improve BN scheme in two
aspects: holding the same key and signature size with Schnorr
signature, and allowing key aggregation. Furthermore, Musig
scheme is also applied to Bitcoin network to support key
aggregation without revealing the individual signer’s public
key.
One of the most popular Schnorr-based multi-signature
schemes is CoSi [7], which requires each node to sign the
same message m by communicating and computing bottom-up
in a spanning tree structure. The introduction of the spanning
tree structure makes it easy for CoSi to scale up to thousands
of signers. Because of its great scalability, CoSi has served
as a basis for many multi-signature schemes proposed in later
research works [26][27][28][29]. However, Drijvers et al. [12]
point out that CoSi can be easily forged by rogue-key attacks
and k-sum problem attacks. The leader in CoSi can also
forge a joint signature on another message m′ without any
other information. Therefore, mBCJ, a new multi-signature
scheme modified from CoSi, is proposed to defend against
these attacks. Nevertheless, the computation of mBCJ is more
complicated and time-consuming. As a summary, although the
security of CoSi is challenged, it is efficient and scalable.
Although proof of possession can be introduced to improve the
security, it will potentially increase the overall computational
costs.
Therefore, considering both security and efficiency, we turn
to consider other digital signature schemes. Gamma signature
[13], proposed by Yao et al. in 2013, is modified from Schnorr
signature. Different from Schnorr signature, Gamma signature
can be implemented in two corresponding phases: the offline
phase, which pre-computes some partial values without any
information of the message m to be signed, and the online
phase, which produces the final signature after the message
_m arrives. Compared to Schnorr signature, Gamma signature_
performs better in several aspects: (1) online performance;
(2) flexible and easy deployment in interactive protocols; and
(3) great unforgeability against concurrent interactive attacks.
To the best of our knowledge, this work is the first multi-signature
scheme based on Gamma signature. Experiment results verify
that better online performance can be achieved when compared to the above mentioned Schnorr-based multi-signature
schemes.
III. PRELIMINARIES
_A. Target One-Way Hash Function_

**Definition 1 (Target One-Way Hash Function [13]):** A hash function H : {0, 1}* → ε ⊆ {0, 1}^{l0} is defined as a (tf, εf) target one-way hash function w.r.t. an e-condition Re and a set D ⊆ {0, 1}^{l0}, if for any probabilistic poly-time adversary A, there exists a relationship that

Adv_H^{tow}(A) = Pr[ Re(d, e, d′, e′) = 0 : (m, s) ← A1(H, d); m′ ← A2(H, d, m, d′, s) ] ≤ negl(l0),

where for any t-time algorithm A = {A1, A2}, we assume that e = H(m), e′ = H(m′), d, d′ ← D, and s is defined as some state information passed from A1 to A2.

_B. Rogue-Key Attack and k-Sum Problem Attack_

Rogue-key attack is a very typical attack against multi-signature schemes including CoSi and BN, allowing a corrupted signer to set his/her own public key arbitrarily, such as X1 = g1^{sk1} (∏_{i=2}^{n} Xi)^{−1}, so that he/she can independently forge a joint signature on the same message m for public keys {X1, ..., Xn}.

To protect systems against rogue-key attacks, some researchers choose to use a sophisticated key generation protocol. For example, proof of possession, proposed by Ristenpart and Yilek [30], is a direct way to defend against this attack. It is established based on the general key registered model, meaning that the signer is required to provide his/her knowledge of the secret key sk corresponding to the public key pk through a non-interactive zero-knowledge protocol. This proof is able to stop the corrupted signer forging a joint signature. It is suitable to be applied in Public Key Infrastructure (PKI), where each node has its certificate showing the information about its own public key pk.

In addition, as stated in [12], there exists a k-sum problem attack that belongs to a k-dimensional generalization of the birthday problem. It can effectively compromise several multi-signature schemes, such as CoSi [7] and Musig [11]. In particular, the k-sum problem is defined as follows.

**Definition 2 (k-Sum Problem [12]):** Given a group (Zq, +), an arbitrary l0-bit prime q, and k lists L1, · · ·, Lk with an identical size, where elements in each list are sampled uniformly and randomly from Zq, the k-sum problem aims to find out k values x1 ∈ L1, · · ·, xk ∈ Lk that satisfy the equation x1 + · · · + xk ≡ 0 mod q.

We consider that an adversary can successfully launch a k-sum problem attack if he/she can solve the k-sum problem by using k lists with length of sL, within a total running time of τ and certain probability that

Adv_{Zq}^{k-sum}(A) = Pr[ x1 + · · · + xk ≡ 0 mod q : L1, · · ·, Lk ⊆ Zq; |L1| = · · · = |Lk| = sL; x1 ∈ L1, · · ·, xk ∈ Lk ] ≥ negl(l0).

According to the above construction of the k-sum problem, the adversary working as a leader in CoSi needs to simulate the signing algorithm (k − 1) times to produce different joint signatures on the same message m, so that it can forge a joint signature on a new message m′ satisfying the k-sum problem. Therefore, an effective way to avoid this attack is to improve the construction of the signing algorithm.

However, rogue-key attack and k-sum problem attack are not handled in CoSi. Therefore, we propose to adopt proof of possession in the key generation algorithm and improve the signing algorithm, so that secure multi-signature schemes can be developed against rogue-key attacks and k-sum problem attacks.
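To make the rogue-key attack concrete, the following Python sketch (a toy illustration of ours, not from the paper, using a small Schnorr group and made-up key values; `pow(x, -1, p)` for the modular inverse requires Python 3.8+) shows how a corrupted signer can choose X1 so that the naively aggregated key collapses to a key the attacker fully controls:

```python
# Toy demonstration of a rogue-key attack on naive key aggregation.
# Group: order-1019 subgroup of Z_2039* generated by g = 4; sizes are
# illustrative only and far too small for real use.
p, g = 2039, 4

# Honest signers publish their public keys X2, X3.
sk2, sk3 = 123, 456
X2, X3 = pow(g, sk2, p), pow(g, sk3, p)

# The corrupted signer knows only sk1, but publishes
# X1 = g^sk1 * (X2 * X3)^(-1), cancelling the honest keys.
sk1 = 789
X1 = pow(g, sk1, p) * pow(X2 * X3 % p, -1, p) % p

# Naive aggregation multiplies all public keys together ...
X_agg = X1 * X2 * X3 % p

# ... so the "joint" key is just g^sk1: the attacker can now forge
# multi-signatures alone, which proof of possession is meant to prevent.
assert X_agg == pow(g, sk1, p)
```

Proof of possession blocks this because the attacker cannot exhibit the discrete logarithm of the contrived key X1.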
_C. Gamma Signature_
The improvements to resist attacks and guarantee security
will inevitably increase the total computational costs. Hence
it is very challenging to consider security and efficiency at
the same time. Nevertheless, if we are able to move part
of the computational overhead from online to offline, an
improvement on both security and online efficiency may be
achieved even if the total computational costs (i.e. including
both online and offline) are higher.
Gamma signature [13] is such an online/offline signature
scheme, which has better online performance compared to
Schnorr signature. In particular, it is implemented in two
corresponding phases: the offline phase, which pre-computes
some partial values without any information of the message
_m to be signed, and the online phase, which produces the_
final signature after the arrival of message m. The detailed
procedure of Gamma signature is explained as follows.
**Parameter generation.** We use Pg(κ) to set up a group G of order q with generator g1, where q is defined as a κ-bit prime, and finally output par = (G, g1, q).

**Key generation.** Kg(par) randomly selects sk ∈ [0, q − 1], computes pk = g1^{sk}, and finally outputs (pk, sk).

**Signing.** This algorithm defines two kinds of hash functions: H0 : {0, 1}* → Zq, which is modelled as a random oracle, and H1 : {0, 1}* → Zq, which belongs to a target one-way hash function. A signer runs Sign(par, sk, m) by first randomly selecting a value v ∈ [0, q − 1] and pre-computing V = g1^v mod q, c = H0(V, pk), and v ∗ c. When the message m comes, the signer can further compute e = H1(m) and s = v ∗ c − e ∗ sk mod q, and output σ = (c, s) as a signature on the message m.

**Verification.** To run Vf(par, pk, m, σ), the verifier firstly computes e = H1(m) and V = (g1^s ∗ pk^e)^{c^{−1}} mod q, and then checks whether it satisfies H0(V, pk) = c. If not, the signature is invalid and the verifier rejects it. Else, the verifier accepts the signature.
Due to its high online efficiency, Gamma signature is
adopted in this paper as a basis for the proposed multi-signature schemes. To the best of our knowledge, this is the first
work that proposes multi-signature schemes based on Gamma
signature.
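The offline/online split of Gamma signing can be sketched in a few lines of Python. The instantiation below is our own toy version (a small Schnorr group with p = 2q + 1 and SHA-256 standing in for H0 and H1; the message string and parameter sizes are illustrative, and `pow(c, -1, q)` needs Python 3.8+):

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1 with q prime; g1 generates the order-q subgroup.
q, p = 1019, 2039
g1 = 4  # 2^2 mod p, an element of order q

def H(tag, *args):
    """SHA-256 stand-in for the paper's hash functions H0, H1, mapped into Zq."""
    data = tag.encode() + b"|" + b"|".join(str(a).encode() for a in args)
    return int(hashlib.sha256(data).hexdigest(), 16) % q

def keygen():
    sk = secrets.randbelow(q)
    return pow(g1, sk, p), sk            # (pk, sk)

def sign_offline(pk):
    """Offline phase: everything that does not depend on the message m."""
    while True:
        v = secrets.randbelow(q)
        V = pow(g1, v, p)
        c = H("H0", V, pk)
        if c != 0:                       # c must be invertible mod q
            return c, (v * c) % q

def sign_online(sk, m, c, vc):
    """Online phase: one hash, one multiply, one subtraction mod q."""
    e = H("H1", m)
    s = (vc - e * sk) % q
    return c, s                          # signature sigma = (c, s)

def verify(pk, m, sig):
    c, s = sig
    e = H("H1", m)
    # V = (g1^s * pk^e)^(c^-1): equals g1^v for an honest signature.
    V = pow(pow(g1, s, p) * pow(pk, e, p) % p, pow(c, -1, q), p)
    return H("H0", V, pk) == c

pk, sk = keygen()
c, vc = sign_offline(pk)                 # precomputed before m arrives
sig = sign_online(sk, "endorse tx 42", c, vc)
assert verify(pk, "endorse tx 42", sig)
```

Correctness follows because g1^s · pk^e = g1^{v·c}, so raising to c^{-1} recovers V; only the cheap `sign_online` step remains once the message arrives.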
IV. PROPOSED MULTI-SIGNATURE SCHEMES
As mentioned before, CoSi is an efficient and scalable multi-signature scheme, but it is easily forged by rogue-key attacks
and k-sum problem attacks. The leader in CoSi can also forge
a joint signature by producing the final challenge c′ on another
message m′. It is of great significance to design a new multi-signature scheme with enhanced security, high scalability, and
efficiency.
_A. Gamma Multi-Signature Scheme_
With the motivation of constructing a more secure, efficient, and scalable multi-signature scheme, we propose a new
multi-signature scheme. In particular, we introduce proof of
possession to ensure security of the proposed scheme against
rogue-key attacks. To reduce the extra computational costs
introduced by proof of possession, we adopt Gamma signature
[13] as the basis to split the overall computation into online
and offline parts, so that the computational complexity of
the online part is improved when compared to the CoSi signature
scheme, making it hard to forge by k-sum problem attacks.
Furthermore, inspired by CoSi, we also adopt the spanning tree
structure to improve the scalability of the proposed scheme.
As a summary, our design goal is to ensure security against the
rogue-key attacks and k-sum problem attacks, while achieving
high online efficiency and scalability.
We firstly propose the Gamma Multi-Signature (GMS) scheme.
Assume our proposed multi-signature scheme GMS consists
of six algorithms GMS = {Pg, Kg, KAg, Sign, KVf, Vf} and
adopts four hash functions H0, H1, H2, H3 : {0, 1}* → Zq,
where H0, H1 are modelled as random oracles and H2, H3
are target one-way hash functions. It works as follows.
Fig. 1. The signing algorithm of our GMS scheme (We suppose that signer
_Si holds the key pair (pki, ski), where pki = (yi, πi), and parent Pi works_
as a leader S0. If parent Pi is not a leader, it just works as signer Si. Finally,
the leader S0 outputs (c, S) as the joint signature.)
**Parameter generation.** We use Pg(κ) to set up a group G of order q with generator g1, where q is defined as a κ-bit prime, and finally outputs par = (G, g1, q).

**Key generation.** Kg(par) randomly picks sk ∈ [0, q − 1] as a private key and sets y = g1^{sk} as the corresponding public key. Then, it constructs a proof of possession π = (a, d) of sk, which is to protect against rogue-key attacks, by choosing r ←$ Zq and computing a = H1(g1, g1^r), b = H2(y), and d = r ∗ a − b ∗ sk mod q. Finally, it sets pk = (y, π) and outputs (pk, sk). The proof of possession will be checked by the verifier each time a new key pair involved in signing is found. Proof of possession is used to defend against the rogue-key attacks existing in CoSi.
**Key Aggregation.** Given PK as the set of all public keys, KAg(PK) parses each public key pki involved in signing in PK as pki = (yi, πi), and outputs the aggregated public key as X̃ = ∏_{pki∈PK} yi.
**Signing.** We set Ci = {Cij} as the set of children of one signer Si in the spanning tree structure, and Pi as the parent of signer Si. Assume S0 to be the root of the tree, the so-called leader. The signer Si runs the signing algorithm Sign(par, ski, m, τ) in a tree τ in four phases, which is shown in Fig. 1.

_Phase 1: Announcement._ When the leader S0 receives a message m, it starts to multicast the announcement m to its children top-down in the tree structure.

_Phase 2: Commitment._ This process is run in a bottom-up way by each node Si. Specifically, given a node Si, after receiving the announcement m, Si firstly chooses a random secret value vi and computes Vi = g1^{vi}. Then, Si waits for each immediate child j's partial commitment Ṽij. When all the partial commitments are received, Si computes Ṽi = Vi ∏_{j∈Ci} Ṽij. After that, the result Ṽi is sent to its parent Pi unless Si is the leader (i.e., i = 0).

_Phase 3: Challenge._ The leader S0 waits for each immediate child's partial commitment value Ṽ0j and computes the final commitment Ṽ = Ṽ0 = V0 ∏_{j∈C0} Ṽ0j. So the collective challenge is c = H0(g1, Ṽ, X̃). The value c, as a part of the joint signature, can be sent to the verifier in advance or stored at the leader. After that, the leader sends the shared challenge value c back to its children.

_Phase 4: Response._ When Si receives c, it can compute the response si = vi ∗ c − e ∗ ski, where c = H0(g1, Ṽ, X̃) and e = H3(m), and wait for each partial response s̃ij from its immediate children j. When all the partial responses are received, it sets s̃i = si + Σ_{j∈Ci} s̃ij. After that, the result s̃i is sent to its parent Pi unless Si is the leader (i.e., i = 0). Finally, the leader S0 computes the final response S = s̃0 = s0 + Σ_{j∈C0} s̃0j and outputs the joint signature (c, S).
Compared to CoSi, we divide the challenge c into two independent values c and e, so as to avoid the excessive power of the leader to replace the message m with m′ and produce another challenge c′. Through this signing algorithm, it is hard for the leader to forge a joint signature (c′, S′) by k-sum problem attacks.
**Key Verification.** Similar to Gamma signature, given as input a public key pk as well as its corresponding proof of possession such that pk = (y, π), π = (a, d), the key verification algorithm KVf(par, pk) checks whether it satisfies a = H1(g1, V), where V = (g1^d ∗ y^b)^{a^{−1}} and b = H2(y). If not, the public key pk is invalid and must be discarded.
**Verification.** Given as input a joint signature σ = (c, S) on an announcement m as well as the aggregated public key X̃, Vf(par, X̃, m, σ) computes e = H3(m) and Ṽ = (g1^S ∗ X̃^e)^{c^{−1}}, and then checks whether the equation c = H0(g1, Ṽ, X̃) is satisfied. If not, (c, S) is an invalid signature. Otherwise, it is valid and the verifier accepts it.
_B. Advanced Gamma Multi-Signature Scheme_
From the signature construction of the proposed GMS, it
can be seen that the generation of a collective challenge c
has nothing to do with the announcement m. Therefore, if the
challenge c can be precomputed offline, we can change the
running order of the above four phases in the signing algorithm
to achieve better online performance. Meanwhile, we choose to
run the key aggregation algorithm in the Commitment and Challenge
phases, so that it can be executed in a distributed manner. Therefore, the
signing algorithm can be modified and optimized from GMS.
We call this new scheme the Advanced Gamma Multi-Signature
(AGMS).
In AGMS, we define Commitment and Challenge phases
as pre-signing phases or offline signing, where each signer in
a spanning tree structure comes to an agreement (challenge c)
before the announcement m arrives. And then, Announcement
and Response phases are defined as the formal-signing phases
or online signing, where the leader receives the announcement
_m to be signed and produces the joint signature σ = (c, S)._
The signing algorithm in AGMS is described as follows.
**Signing.** We also set Ci = {Cij} as the set of children of one signer Si in the spanning tree structure, and Pi as the parent of signer Si. Assume S0 to be the root of the tree, the so-called leader. The signer Si runs the signing algorithm Sign(par, (pki, ski), m, τ) in a tree τ in four phases, which is shown in Fig. 2.
Fig. 2. The signing algorithm of the proposed AGMS scheme (Text in red
indicates changes from Fig. 1. We suppose that signer Si holds the key pair
(pki, ski), where pki = (yi, πi), and parent Pi works as a leader S0. If
parent Pi is not a leader, it just works as signer Si. The key aggregation
algorithm also runs together with the signing algorithm. Finally, the leader
_S0 outputs (c, S) as the joint signature.)_
_Phase 1: Commitment._ This process is run in a bottom-up way by each node Si. Specifically, given a node Si, choose a random secret value vi and compute Vi = g1^{vi}. Then, Si waits for each immediate child j's partial commitment Ṽij and the partial aggregated public key X̃ij. When all the partial commitments are received, Si computes Ṽi = Vi ∏_{j∈Ci} Ṽij and X̃i = yi ∏_{j∈Ci} X̃ij. After that, the results Ṽi and X̃i are sent to its parent Pi unless Si is the leader (i.e., i = 0).

_Phase 2: Challenge._ The leader S0 waits for each immediate child's partial commitment value Ṽ0j and partial aggregated public key X̃0j, and computes the final commitment Ṽ = Ṽ0 = V0 ∏_{j∈C0} Ṽ0j, as well as the aggregated public key X̃ = X̃0 = y0 ∏_{j∈C0} X̃0j. So, the collective challenge is c = H0(g1, Ṽ, X̃). The value c, as a part of the joint signature, can be sent to the verifier in advance or stored at the leader. After that, the leader sends the shared challenge value c back to its children. All the signers Si store c and precompute their own partial value vi ∗ c.

_Phase 3: Announcement._ When the leader S0 receives a message m, it starts to multicast the announcement m to its children top-down in the tree structure.

_Phase 4: Response._ When Si receives the announcement m, it only computes e ∗ ski and combines it with the previous partial value vi ∗ c to attain the individual response si = vi ∗ c − e ∗ ski, where e = H3(m). Then, it waits for each partial response s̃ij from its immediate child j. When all the partial responses are received, it sets s̃i = si + Σ_{j∈Ci} s̃ij. After that, the result s̃i is sent to its parent Pi unless Si is the leader (i.e., i = 0). Finally, the leader S0 computes the final response S = s̃0 = s0 + Σ_{j∈C0} s̃0j and outputs the joint signature (c, S).
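The effect of the phase reordering can be shown in a short Python sketch (a toy of ours with two signers in a flat topology and SHA-256 for the hash functions): the commitment/challenge work, the key aggregation, and the partial values vi · c are all finished before the message exists, so the online step per signer is a single multiply-and-subtract mod q.

```python
import hashlib
import secrets

q, p, g1 = 1019, 2039, 4                 # toy Schnorr group (p = 2q + 1)

def H(tag, *args):
    data = tag.encode() + b"|" + b"|".join(str(a).encode() for a in args)
    h = int(hashlib.sha256(data).hexdigest(), 16) % q
    return h if h else 1                 # keep hash values invertible mod q

sks = [secrets.randbelow(q) for _ in range(2)]
ys = [pow(g1, sk, p) for sk in sks]

# --- Offline (Commitment + Challenge): no message needed ---------------
vs = [secrets.randbelow(q) for _ in sks]
V_agg, X_agg = 1, 1
for v, y in zip(vs, ys):                 # key aggregation runs alongside
    V_agg = V_agg * pow(g1, v, p) % p    # the commitment aggregation
    X_agg = X_agg * y % p
c = H("H0", g1, V_agg, X_agg)
partials = [(v * c) % q for v in vs]     # each signer stores v_i * c

# --- Online (Announcement + Response): message m arrives ---------------
m = "endorse tx"
e = H("H3", m)
S = sum((pre - e * sk) for pre, sk in zip(partials, sks)) % q

# Verification is unchanged from GMS.
V = pow(pow(g1, S, p) * pow(X_agg, e, p) % p, pow(c, -1, q), p)
assert H("H0", g1, V, X_agg) == c
```

The offline half is exactly the expensive half (exponentiations and tree round-trips), which is why AGMS improves the online latency without changing the signature format.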
In summary, we have proposed two multi-signature schemes,
GMS and AGMS, in this section. GMS focuses on the security
improvement, where the verification algorithm for the public key
is deployed to defeat rogue-key attacks, and the signing
algorithm is improved to resist k-sum problem attacks and
avoid the leader modifying the message to produce another
challenge. Meanwhile, the signing algorithm is split into
online and offline parts. Furthermore, AGMS focuses on the
efficiency improvement, where the running order of phases
in signing algorithm is adjusted, and the key aggregation
algorithm is executed distributedly, so as to obtain better online
performance.
V. SECURITY ANALYSIS
In this section, we analyze the security of the proposed AGMS
scheme in detail. In particular, the security of a multi-signature
scheme should satisfy two basic requirements.
First, a multi-signature scheme should be complete. That is,
if we build up a system by Pg(κ), generate a set of public
and private key pairs (pk, sk) by Kg(par), and produce a
joint signature σ on an announcement m, representing a set of
signers holding secret keys SK in a tree τ, by Sign(par, SK, m, τ), then we should
be able to use X̃, generated from KAg(PK), to successfully
output KVf(par, pk) = 1 and Vf(par, X̃, m, σ) = 1. As
these two verification equations are true, the proposed scheme
AGMS satisfies the completeness requirement.
Second, a multi-signature scheme should be unforgeable.
We prove that the proposed scheme AGMS can achieve
unforgeability under concurrent interactive attacks. The analysis
is described as follows.
_Lemma 1 (General forking lemma [9]): Let_ be a randomized
_C_
probabilistic algorithm. When given input (x, h1, · · ·, hq, ρ)
with access to oracle of size λ, where x is generated by the
_O_
input generator IG; ρ refers to C’s random tape; h1, · · ·, hq
are some random chosen values from Zq; then C outputs a pair
(J, y). Let π be the space of all the vectors (x, h1, · · ·, hq, ρ).
Let acc be the probability that can successfully output (J, y)
_C_
when given inputs (x, h1, · · ·, hq, ρ), where J is a non-empty
subsets of 1, _, q_ .
_{_ _· · ·_ _}_
For a given x, the forking algorithm FC(x) is
described as follows.

FC(x):
    Pick a random tape ρ for C
    h1, · · ·, hq ← O
    (J, y) ← C(x, h1, · · ·, hq, ρ)
    if J = 0 then return (0, ⊥, ⊥)
    h′1, · · ·, h′q ← O
    (J′, y′) ← C(x, h′1, · · ·, h′q, ρ)
    if J = J′ and hJ ≠ h′J′ then return (1, y, y′)
    else return (0, ⊥, ⊥)
We let frk be the probability that FC successfully outputs
(1, y, y′) as shown below:

frk = Pr[b = 1 : x ← IG; (b, y, y′) ← FC(x)] .   (1)

So that we have:

frk ≥ acc · (acc/q − 1/2^κ) .   (2)
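The forking algorithm FC can be sketched directly in code. Following the lemma's presentation above, the second run fixes the adversary's random tape ρ and resamples all q oracle answers (Bellare and Neven's original lemma [9] reuses the answers before the fork point). The toy adversary is purely illustrative:

```python
import random

def fork(C, x, q, kappa, rng):
    """Generic forking F_C(x): run the adversary C twice on the same
    random tape rho, with freshly sampled oracle answers h_1..h_q each run."""
    rho = rng.getrandbits(64)                   # C's random tape, reused
    hs = [rng.randrange(2 ** kappa) for _ in range(q)]
    J, y = C(x, hs, rho)
    if J == 0:
        return (0, None, None)
    hs2 = [rng.randrange(2 ** kappa) for _ in range(q)]
    J2, y2 = C(x, hs2, rho)
    if J == J2 and hs[J - 1] != hs2[J2 - 1]:    # same index, distinct answers
        return (1, y, y2)
    return (0, None, None)

# Toy adversary: always "succeeds" at index 1 and outputs the oracle answer.
toy_C = lambda x, hs, rho: (1, hs[0])
```

With toy_C, fork returns (1, y, y′) precisely when the two sampled answers h1 and h′1 differ, i.e., with probability 1 − 2^(−κ); the 1/2^κ term in bound (2) accounts exactly for such collisions.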
_Lemma 2:_ Consider a multi-signature scheme (Pg, Kg, KAg, Sign, KVf, Vf). We define the security of a multi-signature
scheme as universal unforgeability under a chosen-message
attack against a set of honest players. We say CoSi is
(t, qs, qf, N, ε)-secure in the random-oracle model if, given
N as the maximum number of participating signers, the
adversary runs in at most t time and forges with probability
at least ε, making at most qs signature queries
as well as qf random-oracle queries.
As CoSi is based on the Schnorr signature, we follow the
random-oracle model. In CoSi, only H0(Ṽ, m)
is modeled as a random oracle, which is only (tf, εcr)-collision
resistant, so CoSi may be proved secure in the random-oracle
model under the discrete logarithm assumption. Differently,
for AGMS, we follow the so-called general key registered
model [31], where the validity of each public key must be
checked by the signature verifier. In the proposed scheme
AGMS, we assume only H0(g1, Ṽ, X̃) : {0, 1}* → {0, 1}^κ
and H1(g1, u*) : {0, 1}* → {0, 1}^κ are modeled as random
oracles, and define the other two hash functions H2(y*) :
{0, 1}* → {0, 1}^κ and H3(m) : {0, 1}* → {0, 1}^κ as target
one-way hash functions, which are (tf, εcr)-collision resistant
and (tf, εtow)-target one-way, to mitigate the dependency of
provable security on random oracles.
_Theorem 1:_ Suppose that AGMS is (t′, qs, qf, N, ε′)-secure
under the discrete logarithm problem. Then there exists an
algorithm C that, taking as inputs uniformly random group elements
X*, two uniformly random κ-bit strings H0, H1 sampled for
a total of (qs + qf) times, and two target one-way κ-bit
strings H2, H3, can successfully output a
tuple (i0, i3, PK, S, i1, i2) satisfying X̃ = Π_{pki∈PK} pki and
H0(g1, (g1^S · X̃^(i3))^(i0⁻¹), X̃) = i0. Here, i0 ∈ (H01, · · ·, H0q),
i1 ∈ (H11, · · ·, H1q), and i2, i3 are the two target one-way
hash values involved in the corresponding set PK of signers' public
keys. Assume that N is the maximum number of signers
that participate in AGMS. Then, the running time of algorithm
C is at most t′, and algorithm C succeeds with probability
ε′ such that

ε′ ≥ acc · (acc/(qs + qf) − 1/2^κ) − εtow ,   (3)

where

acc ≥ (1 − qs(2qf + qs − 1)/2^(3κ+1)) · (ε − (N + 1)/2^κ − ((N(N − 1) + 2)/2) · εcr) .   (4)
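As a sanity check, the bounds (3) and (4) can be evaluated numerically. The parameter values below (κ, query counts, ε, εcr, εtow) are illustrative choices, not figures from the paper; the point is that for realistic parameters acc stays close to ε and ε′ remains comfortably positive:

```python
# Illustrative parameters: 256-bit hashes, 2^20 signing / 2^60 RO queries,
# 1000 signers, forger advantage 1/2, hash advantages 2^-128 (assumed values).
kappa, qs, qf, N = 256, 2 ** 20, 2 ** 60, 1000
eps, eps_cr, eps_tow = 0.5, 2.0 ** -128, 2.0 ** -128

# Bound (4): lower bound on the simulator's success probability acc.
acc = (1 - qs * (2 * qf + qs - 1) / 2 ** (3 * kappa + 1)) * (
    eps - (N + 1) / 2 ** kappa - (N * (N - 1) + 2) / 2 * eps_cr)

# Bound (3): lower bound on C's success probability eps'.
eps_prime = acc * (acc / (qs + qf) - 1 / 2 ** kappa) - eps_tow
```

Here acc ≈ ε = 0.5 (the correction terms are astronomically small) and ε′ ≈ acc²/(qs + qf) ≈ 2^(−62), i.e., the reduction loses roughly a factor of the total query count, as is typical for forking-lemma proofs.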
_Proof:_ We construct a four-stage game for an algorithm C
around a (t, qs, qf, N, ε)-forger F. Assume that the involved
signers behave honestly. Given the random public key set
PK = (pk1, · · ·, pkN), we simulate the game in the following
steps.

**Setup:** Algorithm C initializes par = (G, g1, q) ← Pg(κ),
(pk, sk) ← Kg(par), and two empty hash query sets S_H0
and S_H1, corresponding to the queries of H0 and H1 respectively, such that (dT1, · · ·, dTqs, dT(qs+1), · · ·, dT(qs+qf)) ←
({0, 1}^κ)^(qs+qf), (T = 0, 1). Then, we construct a "proof of
possession" of ski. C provides a random tape ρ to F, and
runs F as a signer with the public key pk = (y, π).
**RO queries:** For CoSi, there is only one hash value,
which consists of the final commitment Ṽ and a message m.
Differently, in the proposed AGMS, there are two independent
hash values to query. For each query set, under the i-th
query (1 ≤ i ≤ qf), denoted by QTi (T = 0, 1), from
F, C first checks whether the value QTi has been defined
before. If yes, C returns the already-defined value HT(QTi) = α.
Otherwise, C defines HT(QTi) = dT(qs+i), stores the record
(j = qs + i, QTi, HT(QTi) = dT(qs+i)) in the corresponding
set S_HT (T = 0, 1), and then sends the value dT(qs+i) to F.
**Signature queries:** With the set of public keys PK =
(pk1, · · ·, pkN) and some message m, C first simulates
each self-signed information π* = (d*, w*) by randomly
selecting two values d*, w* ←$ Zq, and then computing
u* = (g1^(w*) · y*^(b*))^(d*⁻¹), where b* = H2(y*). On input
pk* = (y*, π*) with the random tape ρ, C makes the query
H1(g1, u*) = d*. When there exists an H1(g1, u*_i) that was never
defined in previous queries, C sets H1(Q1i) = d1i and stores
(j = i, Q1i, H1(Q1i) = d1i) in the set S_H1. After receiving
X̃ = Π_{pki∈PK} pki and Ṽ = Π_{i=1..N} Vi from C, F simulates
a query c = H0(g1, Ṽ, X̃) that was never defined before,
stores (j = i, Q0i, H0(Q0i) = d0i) in the set S_H0, and
sends c to its children without knowing the message m.
C can return the partial query value c = H0(Q0i) = d0i first;
producing the partial signature value in advance is hard for
some schemes, including CoSi. Finally, after knowing the
message m, similar to a signer Si, C waits for the responses s̃ij
that come from its children j ∈ Ci, then proceeds to compute
and send s̃i = si + Σ_{j∈Ci} s̃ij mod q to its parent, where
si = vi · c − e · ski mod q. Finally, C returns (c, S) as the
joint signature.
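The bottom-up response aggregation s̃i = si + Σ_{j∈Ci} s̃ij mod q used above (and in the real Response phase) amounts to a recursive fold over the spanning tree. A minimal sketch, with an illustrative four-node tree and toy modulus q = 11:

```python
q = 11  # toy group order, for illustration only

def aggregate(tree, s, root):
    """Bottom-up Response phase: each node returns its own partial response
    plus the aggregated responses of its children, mod q; the root's value
    is the joint response S."""
    return (s[root] + sum(aggregate(tree, s, j)
                          for j in tree.get(root, []))) % q

# Leader 0 has children 1 and 2; node 1 has child 3.
tree = {0: [1, 2], 1: [3]}
s = {0: 3, 1: 5, 2: 7, 3: 9}
S = aggregate(tree, s, 0)   # (3 + (5 + 9) + 7) mod 11 = 2
```

Each node sends a single scalar upward, which is what keeps the communication per node constant regardless of the total number of signers.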
We assume that several cases may happen
and cause C to abort the execution. (1) The value Q0j ←
(g1, Ṽj, X̃j) that F successfully guesses is equal to a Q0i ←
(g1, Ṽi, X̃i) that is already defined. (2) F successfully
attains a value Q0j ← (g1, Ṽj, X̃j) that was never defined
before, by the birthday paradox. If either of the two cases
happens, we set bad ← true.
**Output:** Eventually, F outputs a forged multi-signature
(c′, S′) on the message m′ for a multiset PK′. Without loss of
generality, we assume the following conditions. (1) All hash
queries involved in the verification of the forgery and
in the proof of possession in PK′ are made and recorded in the sets
S_HT (T = 0, 1). (2) There do not exist two different values
Q2i and Q2j in PK′ such that H2(Q2i) = H2(Q2j) = α. (3)
There do not exist two different values Q3i and Q3j such
that H3(Q3i) = H3(Q3j) = α. When F's forgery is verified
to be true, algorithm C halts and returns (J, (c′, e′, S′, PK′)).
If not, algorithm C returns (0, ⊥) and fails to forge a joint
signature.
As defined above, HT : {0, 1}* → {0, 1}^κ (T = 2, 3) is a
(tf, εtow)-target one-way as well as (tf, εcr)-collision resistant
hash function. Let ts denote the running time of a signing
query and tex denote the running time of extracting SK
using the generalized forking lemma FC. Based on the above
description, we can derive that the event bad ← true happens
with probability Pr(bad ← true) ≤ qsqf/2^(3κ) + qs(qs − 1)/2^(3κ+1).
Considering that the event bad ← true does not happen,
the probability that C successfully outputs a forged signature
(c′, S′) satisfying the above requirements is acc ≥
(1 − qs(2qf + qs − 1)/2^(3κ+1)) · (ε − (N + 1)/2^κ − ((N(N − 1) + 2)/2) · εcr). Then, with F
as described above, algorithm C (t′, ε′)-breaks the
one-wayness property of the hash function, where the running time is at most
t′ = (2N + 2)tf + (2N + 2)qsts + tex + O((N + 1)qf), and
equations (3) and (4) hold.
We further prove the theorem in more detail by constructing an algorithm C′. Suppose there is an algorithm C′ that, given
as input group elements X* and access to a signature forger F as
described above, attempts to solve the discrete logarithm
problem in G. C′ obtains two related forgeries using
FC defined in Lemma 1, and proceeds as follows.

We set (1, (c, e, S, PK)) and (1, (c′, e′, S′, PK′)) as two different
outputs of FC associated with the forgery such that:

g1^S = Ṽ^c · X̃^(−e) = Ṽ^c · Π_{i=1..N} yi^(−e)  and
g1^(S′) = Ṽ′^(c′) · X̃′^(−e′) = Ṽ′^(c′) · Π_{i=1..N′} yi′^(−e′),

where PK = (pk1, · · ·, pkN) and PK′ =
(pk′1, · · ·, pk′N′) are the two sets of public keys involved in F's
forgery. By the construction of C′, we have
Ṽ′ = Ṽ, c′⁻¹e′ ≠ c⁻¹e, N′ = N, and y′i = yi (1 ≤ i ≤ N).
Therefore, we have that:

Ṽ = g1^(S·c⁻¹) · Π_{i=1..N} yi^(c⁻¹e) ,   (5)

and

Ṽ = g1^(S′·c′⁻¹) · Π_{i=1..N} yi^(c′⁻¹e′) .   (6)

Based on equations (5) and (6), this yields:

g1^(Sc⁻¹ − S′c′⁻¹) = Π_{i=1..N} yi^(c′⁻¹e′ − c⁻¹e) = Π_{pki∈PK} (yi)^(c′⁻¹e′ − c⁻¹e) .   (7)
Because

X̃ = Π_{pki∈PK} pki = g1^(Σ_{pki∈PK} ski) ,   (8)

C′ can successfully attain the discrete logarithm of pk1 as

sk1 = (Sc⁻¹ − S′c′⁻¹)/(c′⁻¹e′ − c⁻¹e) − Σ_{pki∈PK\pk1} ski mod q .
That is, if c′ ≠ c and e′ = e, the forger F can successfully
extract all of SK except its own sk1. Using Lemma 1, we can
compute the probability for the forger F to obtain two different
outputs, where c′ ≠ c or e′ ≠ e, as frk ≥ acc · (acc/(qf + qs) − 1/2^κ).
Thus, the probability that C′ succeeds is ε′ ≥
acc · (acc/(qf + qs) − 1/2^κ) − εtow, where acc satisfies equation (4). The
total running time of algorithm C′ is at most that of FC plus
O(N) operations. In other words, the proposed scheme AGMS
can achieve unforgeability under concurrent interactive attacks.
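The extraction step of the reduction is easy to check numerically: given two valid responses produced with the same nonces (hence the same Ṽ) but distinct challenge pairs, the formula above recovers Σ ski and then sk1. The toy values and modulus q = 11 below are illustrative only:

```python
q = 11  # toy group order (illustration only)

def extract_sk1(c, e, S, c2, e2, S2, other_sks):
    """Recover sk_1 from two forgeries sharing V~, per equations (5)-(8):
    sum(sk_i) = (S*c^-1 - S'*c'^-1) / (c'^-1*e' - c^-1*e) mod q."""
    ci, c2i = pow(c, -1, q), pow(c2, -1, q)
    num = (S * ci - S2 * c2i) % q
    den = (c2i * e2 - ci * e) % q        # nonzero iff c'^-1 e' != c^-1 e
    total = num * pow(den, -1, q) % q    # = sum of all secret keys mod q
    return (total - sum(other_sks)) % q

# Simulate two responses with the same nonces v_i but different (c, e) pairs.
sks, vs = [4, 7, 2], [3, 8, 5]
S_for = lambda c, e: sum(v * c - e * sk for v, sk in zip(vs, sks)) % q
c, e, c2, e2 = 3, 5, 7, 2                # chosen so c'^-1 e' != c^-1 e mod 11
sk1 = extract_sk1(c, e, S_for(c, e), c2, e2, S_for(c2, e2), sks[1:])
```

Here sk1 equals the first signer's secret key, confirming that a forger who can replay the same commitment under two challenges breaks the discrete logarithm of pk1.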
VI. PERFORMANCE ANALYSIS
_A. Theoretical Analysis_
In theory, we briefly compare the proposed schemes
GMS and AGMS with the most popular current multi-signature
schemes, including BN [9], CoSi [7], and Musig [11].
The property comparisons of these schemes are summarized
in Table I. First, based on the prototype of Schnorr signature,
these schemes are proved to be existentially unforgeable
under adaptive chosen-message attacks in the standard sense. However,
BN, CoSi, and Musig involve only one hash value, which
consists of the random value and the message, meaning that these
schemes do not support precomputing the partial signature
c and may be forged via k-sum problem attacks.
Therefore, it is uncertain whether they can be proved secure
in concurrent interactive protocols. Differently, the proposed
GMS and AGMS are based on Gamma signature, which
involve two different independent hash values, and thus can
be secure against k-sum problem attacks. As the proposed
GMS produces the challenge c after the message m comes,
only the proposed AGMS can achieve provable security in
concurrent interactive protocols. That is to say, the leader in
AGMS can work as a representative of a group of signers
in a spanning tree structure, precompute the challenge c, and
achieve two-round interactive telecommunications with other
individuals or groups in a secure way. AGMS is also the only
scheme that supports making the partial signature value c
public. Second, as mentioned before, CoSi is easily
forged by rogue-key attacks. BN and Musig add one more
protocol round to exchange their individual commitments with
other signers, which avoids rogue-key attacks
but inevitably leads to high communication
and computation overhead. The two proposed schemes, GMS
and AGMS, use proof of possession, which is an efficient
way to avoid rogue-key attacks. Third, with the spanning tree
structure, GMS, AGMS, and CoSi can reach high scalability,
which is hard-to-reach by BN or Musig.
Furthermore, we compare the efficiencies of these multi-signature schemes in Table II. In particular, Musig needs a very
time-consuming KAg algorithm to construct a more secure
joint signature without revealing individual signers' public keys.
In the Sign algorithm, owing to the advantage that the challenge
c can be precomputed offline, the proposed AGMS performs
better in online signing than all the other schemes. In the Vf
algorithm, the proposed schemes GMS and AGMS require
one more exponentiation than BN and Musig.
Because of the proof of possession, the two proposed schemes
GMS and AGMS also require the KVf algorithm to resist rogue-key
attacks. The total computation of the Sign and Vf algorithms
in these two schemes is only slightly higher than that of CoSi
and Musig, but much less than that of BN. In the signature
domain and the X̃ domain, the two proposed schemes require the
smallest space among these multi-signature schemes. Only the
pk domain needs more space than in the other schemes, due to the
proof of possession. In terms of offline storage, signers in
schemes other than AGMS can precompute
and store (vi, Vi), whereas a signer in AGMS stores (vi, c),
so in terms of offline storage AGMS only needs G^2,
which is much smaller than the G × Zq required by the other schemes.
In summary, the proposed GMS and AGMS schemes are
comparable to the others in terms of efficiency, but AGMS enjoys
the greatest efficiency in online signing and the smallest space
in offline storage, which helps avoid network congestion and
makes it suitable for real-time communications.
_B. Experimental Analysis_
In this subsection, 32 physical machines, each with
an Intel(R) Core(TM) i7-4790 processor and 8 GB of RAM,
are adopted for testing purposes. We
implement the following schemes in the Go[1] programming
language. We use the hash function SHA-512 [32] and a SHA-512 based target one-way hash function [13]. We run each
experiment 20 times and report the average results. As the
experimental results differ significantly in magnitude, the
y-axes in Figs. 3-8 and Figs. 10-13 use a logarithmic scale
so that every value is visible.
According to the difficulty assumptions and basic signature
algorithms, we test RSA based multi-signature [18], BLS
based multi-signature [24], and Schnorr based multi-signature,
including CoSi [7] and AGMS. As Gamma signature, the
basis of AGMS, is modified from the Schnorr signature and still
based on the discrete logarithm problem, AGMS is classified
as a Schnorr based multi-signature scheme. These experiments
import two Go programming libraries: crypto[2] and pbc[3]. For
the same security level, we use the NIST P-224 elliptic curve and a 2048-bit RSA modulus. Through experiments,
we have validated that the signature lengths for RSA, BLS,
and Schnorr based schemes are 2048 bits, 224 bits, and 448
bits, respectively, indicating that BLS based signatures will
take up the smallest storage space. However, as shown in
Fig. 3, BLS based signature scheme takes significantly longer
running time than the other two categories for both the signing
and verification processes, as the bilinear pairing operation is
time-consuming. On the other hand, although the time cost of
the verification algorithm of the RSA based multi-signature is low, the
total time is very close to that of the Schnorr based schemes (e.g.,
CoSi, AGMS). In addition, its signature length (2048 bits)
significantly increases the system storage overhead and is
usually unacceptable. With a reasonable signature length (448
bits), the experiment results validate that CoSi and AGMS
yield the shortest running time for the signing process and a
reasonable running time for the verification process. Hence,
the Schnorr based multi-signature schemes CoSi and AGMS
are beneficial for achieving the balance of computational
complexity and required storage space.
Next, the two proposed schemes and CoSi are evaluated
with the total number of signers ranging from 128 to 16384,
and all the signing nodes are created and connected in a
tree structure. As a random tree depth may influence the
results, we fix the tree depth to 3 and choose the branching
factor according to the number of signers so as to keep the tree
manageable. These experiments import two Go programming
libraries: cothority[4] and onet[5]. These schemes are based on
the elliptic curve Curve25519, and we ignore the computation time of
the key aggregation algorithm. From Fig. 4, we can see that the
gap among the total running times of the signing and verification
algorithms for these schemes remains small when the number of
1 http://golang.org/, January, 2015.
2 Go cryptography libraries.
3 https://github.com/Nik-U/pbc, accessed December, 2018.
4 https://github.com/dedis/cothority, accessed February, 2018.
5 https://github.com/dedis/onet, accessed February, 2018.
TABLE I
PROPERTIES OF SEVERAL MULTI-SIGNATURE SCHEMES
Multi-signature schemes Proposed GMS Proposed AGMS BN CoSi Musig
Provable security (Standard) Yes Yes Yes Yes Yes
Provable security (Concurrent interactive) Uncertain Yes Uncertain Uncertain Uncertain
Support challenge c public No Yes No No No
Against rogue-key attacks Yes Yes Yes No Yes
Against k-sum problem attacks Yes Yes No No No
Rounds 2 2 3 2 3
Spanning tree structure Yes Yes No Yes No
TABLE II
EFFICIENCIES OF SEVERAL MULTI-SIGNATURE SCHEMES

Multi-signature schemes | Proposed GMS | Proposed AGMS | BN | CoSi | Musig
KAg | - | - | - | - | 1·exp^N
Sign (online signing) | 1·exp | - | 1·exp | 1·exp | 1·exp
Sign (offline signing) | - | 1·exp | - | - | -
Vf | 1·exp^3 | 1·exp^3 | 1·exp^(N+1) | 1·exp^2 | 1·exp^2
KVf | 1·exp^3 | 1·exp^3 | - | - | -
Total (Sign + Vf) | 1·exp^(N+3) | 1·exp^(N+3) | 1·exp^(2N+1) | 1·exp^(N+2) | 1·exp^(N+2)
Signature domain | Zq^2 | Zq^2 | G × Zq | Zq^2 | G × Zq
pk domain | G × Zq^2 | G × Zq^2 | G | G | G
X̃ domain | G | G | G^N | G | G
Offline storage | G × Zq | G^2 | G × Zq | G × Zq | G × Zq

("-" denotes no exponentiation. "exp" denotes an exponentiation. "exp^k" denotes a k-multi-exponentiation in a group G. "N" denotes
the number of signers involved in a multi-signature scheme.)
signers is up to 16384. The results confirm that the proposed
schemes can easily scale up to thousands of signers as well.
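Keeping the tree depth fixed at 3 means the branching factor must grow with the number of signers. A minimal sketch of that choice, assuming the leader is the root and the remaining signers fill levels 1 through 3 (the exact layout is not spelled out in the text):

```python
def branching_factor(n_signers, depth=3):
    """Smallest branching factor b such that a tree of the given depth
    below the root can hold the remaining n_signers - 1 nodes:
    b + b^2 + ... + b^depth >= n_signers - 1."""
    b = 2
    while sum(b ** d for d in range(1, depth + 1)) < n_signers - 1:
        b += 1
    return b
```

For the experiment's extremes this gives branching_factor(128) = 5 and branching_factor(16384) = 26, i.e., even the largest configuration remains a shallow, wide tree.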
Then, we test the running time of the online and offline signing
phases in the proposed AGMS. In the
proposed AGMS, the online signing phase consists of the
Announcement and Response phases, and the offline signing
phase consists of the Commitment and Challenge phases.
All the configurations remain the same as those in the first
experiment, which showed that the total
running time of the signing algorithm of AGMS is very close
to that of CoSi. As the offline signing phase needs a large
number of elliptic-curve exponentiations, it accounts for the
vast majority of the total running time of the signing algorithm
in AGMS. Therefore, the online signing part of the proposed
AGMS scheme is very fast. When the number of signers goes
up to 16384, the online signing time of AGMS
is less than 1 second, accounting for only about 1% of the total
running time of the signing algorithm. Fig. 5 depicts the results.
Finally, as the leader has a heavier computation load in the
signing algorithm than any other signer, we further test the
computation time on a leader node of CoSi and of the proposed
AGMS in the signing algorithm. In this experiment, we also divide
the signing algorithm into two phases: the former consists of the
Announcement and Response phases, and the latter consists of the
Commitment and Challenge phases. The corresponding results
are shown in Fig. 6. We clearly see that the latter phases take
much more time than the former, since
elliptic-curve point multiplication is far more costly than
modular scalar arithmetic. Because the Commitment and Challenge
phases can be precomputed in the proposed AGMS scheme,
while CoSi needs to run all the phases sequentially,
the proposed AGMS scheme runs significantly faster than CoSi
when we focus on the computation time on a leader node
in the online signing phase. The total running times for the online
signing phase of the two schemes are compared in Fig. 7.
Memory consumption is another factor in evaluating performance. On the group of 32 physical machines with the above
configurations, we test the memory consumption of the signing
and verification algorithms of CoSi and AGMS, with the total
number of signers ranging from 128 to 16384. From Fig. 8, we
can see that on one physical machine the memory consumption
of CoSi and AGMS is very similar. Furthermore, as the vast
majority of the memory consumption occurs in the offline signing phase,
the online signing phase has rather low memory consumption,
which is very friendly to low-power devices.
VII. APPLICATION TO FABRIC
Fabric [3] is a permissioned Blockchain platform, where a
CA (Certificate Authority) is introduced to manage the members, and every node needs to request membership
from the CA before joining the network. The digital signature algorithm
ECDSA (Elliptic Curve Digital Signature Algorithm) is widely
adopted in Fabric to guarantee the validity of transactions. To
avoid inconsistency in transaction states, the client needs to
collect a sufficient number of signatures from different endorsers
to satisfy the endorsement policy in Fabric. If the endorsement
policy requires a large number of endorsers, the number of
signatures will be large, and the overhead of signature
verification will be high. In this case, the current mechanism
of Fabric leads to significant drops in transaction
efficiency.

Therefore, we introduce the proposed AGMS scheme
into Fabric to optimize the current transaction process. In this
Fig. 3. The running time of the signing and verification algorithms of multi-signature schemes based on typical difficulty assumptions (y-axis has logarithmic scale).

Fig. 4. The total CPU running time of the signing and verification algorithms for CoSi, GMS, and AGMS. (The three algorithms achieve similar CPU running time, showing that the additional security features of the proposed algorithms do not sacrifice algorithm efficiency; y-axis has logarithmic scale.)

Fig. 5. The total CPU running time in different phases of the signing algorithm in AGMS (y-axis has logarithmic scale).

Fig. 6. The CPU running time on a leader node of CoSi and AGMS in different phases of the signing algorithm (y-axis has logarithmic scale).

Fig. 7. The CPU running time on a leader node of CoSi and AGMS in the online signing phase (y-axis has logarithmic scale).

Fig. 8. Memory consumption of CoSi and AGMS (y-axis has logarithmic scale).
Fig. 9. The revised Fabric transaction process
paper, we implement the proposed AGMS on Fabric v1.0. To
avoid confusion, we refer to the original Fabric v1.0 as the
default Fabric, and to Fabric with AGMS as the revised Fabric.
Compared to the default Fabric transaction process, we adopt
our multi-signature scheme AGMS to replace ECDSA and
add one synchronization step so that the revised Fabric
transaction process runs smoothly. We denote the client by Cl, an
endorser by Eni, and the orderer by Or. We also define Ci as
the set of children of an endorser Eni, Pi as the parent of
the endorser Eni, and N as the number of endorsers required
by the endorsement policy. As shown in Fig. 9, the revised Fabric
transaction process can be described as follows.
Firstly, the CA uses Pg(κ) to output par = (G, g1, q). Then,
each node uses Kg(par) to generate its own public/private key pair (pk, sk). Before a node joins the Fabric
network, the CA additionally uses KVf(par, pk) to verify the
validity of the node's identity and its public key. If the result
is true, the CA issues a certificate to the node so that it can
successfully join the network. Otherwise, the CA rejects the node,
meaning that the node has no right to join the Fabric
network.
**Step 1: Synchronization.** All the endorsers Eni (i =
1, · · ·, N) designated by the endorsement policy work as
a sub-group in a spanning tree structure τ. They synchronize the block information and implement phase 1 of
Sign(par, (pki, ski), m, τ). The client Cl works as the leader,
implementing phase 2 of Sign(par, (pki, ski), m, τ) to produce a common challenge c, which acts as a part of the joint
signature and is sent to each endorser. The aggregated public
key X̃ is also computed in this step by KAg(PK).
**Step 2: Transaction Proposal.** When the client Cl needs
to request a transaction m, it first implements phase 3 of
Sign(par, (pki, ski), m, τ), sending the transaction proposal
of m to the designated endorsers Eni (i = 1, · · ·, N) in a
sub-group in a top-down way.
**Step 3: Endorsement.** When the endorser Eni receives a
proposal from the client Cl, it first uses KVf(par, pk) to
check the validity of the client Cl's identity, then simulates the
transaction execution and signs the transaction proposal with its own private key and the previous common
challenge c. Finally, the endorser Eni implements phase 4
of Sign(par, (pki, ski), m, τ), computing the partial response
value si.
**Step 4: Proposal Response.** Then, all the designated endorsers Eni (i = 1, · · ·, N) proceed to implement phase
4 of Sign(par, (pki, ski), m, τ), sending back the proposal
responses bottom-up. The client Cl only needs to collect
all the proposal responses from its children endorsers j,
which include the simulated transaction results and the partial response values s̃j. When all the proposal responses
are received, the client Cl checks the transaction results
and computes S = s̃Cl = sCl + Σ_{j∈CCl} s̃j. Finally, the
client Cl successfully produces a joint signature σ = (c, S)
representing the client Cl and all the designated endorsers
Eni (i = 1, · · ·, N). This joint signature can be easily verified
by all nodes, including the client Cl itself, so as to check
whether it satisfies the endorsement policy.
**Step 5: Transaction Submission.** If the joint signature is
valid, the client Cl sends the final transaction proposal and
response to an orderer Or.

**Step 6: Block Delivery.** The orderer Or orders the transactions from different clients into blocks and broadcasts them
on the network.

**Step 7: Ledger Update.** All the nodes on the network
use Vf(par, X̃, m, σ) to verify the block information and
update their ledgers synchronously.
The relevant experiments are shown in Fig. 10, Fig. 11,
Fig. 12, and Fig. 13, respectively. We mainly test the running
time of the signing algorithm in different transaction steps on
a client node for the default Fabric and the revised Fabric.
All the configurations are the same as those in Section VI.
We assume that we can set arbitrary numbers of endorsers
and that there is no communication delay.
Fig. 11 shows that, compared to the default Fabric, the revised
Fabric transaction process runs much faster when a transaction
arrives. This is because the revised transaction process runs
Step 1, shown in Fig. 10, in advance; this step does not exist
in the default Fabric, and it enables much faster online
signing. From Fig. 12, in terms of Steps 5 to 7, as the
verification algorithm of ECDSA is executed once per
endorser, the CPU running time of the default Fabric
increases linearly with the number of endorsers. In the
revised Fabric, however, the verification algorithm is executed
only once, regardless of the number of endorsers, so the CPU
running time is almost constant. We can therefore use
this saved time to implement Step 1, and the
total time of the revised Fabric transaction process is still
shorter than that of the default Fabric. The results are shown in
Fig. 13. In general, by applying the proposed multi-signature
scheme AGMS to replace ECDSA in the default Fabric, the
revised Fabric transaction process achieves faster online signing and
verification and smaller storage space, thereby
improving transaction efficiency
and reducing the transaction storage in a block.
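The constant-versus-linear behavior in Fig. 12 follows from a simple operation count: the default Fabric runs one ECDSA verification per endorser, while the revised Fabric runs a single joint-signature verification. A coarse cost model (the per-verification constants below are assumptions for illustration, not measurements):

```python
def vf_cost(n_endorsers, revised):
    """Exponentiation count for Steps 5-7 (coarse model): ECDSA verification
    takes roughly 2 scalar multiplications per endorser, while AGMS Vf is a
    single 3-multi-exponentiation regardless of N (cf. Table II)."""
    return 3 if revised else 2 * n_endorsers
```

At 100 endorsers the model gives 200 versus 3 exponentiations, consistent with the near-constant verification curve of the revised Fabric.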
Please note that although the proposed multi-signature
Fig. 10. The CPU running time of Step 1 on a client node in the revised
Fabric transaction process (y-axis has logarithmic scale.)
Fig. 12. The CPU running time from Step 5 to Step 7 on a client node
between the default Fabric transaction process and the revised Fabric
transaction process (y-axis has logarithmic scale.)
scheme, AGMS, is implemented on Fabric, a permissioned
Blockchain platform, AGMS is also useful for permissionless Blockchain platforms, as it is not based on the
assumption of a Trusted Authority (TA). Taking the public and
permissionless Blockchain Bitcoin as an example, there exists
the Multisig address [33], which is the hash of n public keys
(pk1, pk2, . . ., pkn). To spend funds associated with this address, one creates a transaction containing signatures from
these n public keys (pk1, pk2, . . ., pkn). The authors of [11] use
multi-signatures to aggregate multiple signatures into a joint
one, so as to shrink the size of the transaction data associated with
Bitcoin Multisig addresses. Compared to the permissioned
setting, without a CA verifying the nodes' identities
and permitting entrance to the Blockchain, the probability of
attacks increases. Nevertheless, such attacks can still be
identified by the key verification and signature verification algorithms, which is guaranteed by the security of the proposed
multi-signature schemes.
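For reference, the idea of a Multisig address as a hash of n public keys can be sketched as below. This is a deliberate simplification: real Bitcoin multisig (P2SH) hashes a redeem script encoding an m-of-n policy, with version bytes and a checksum, none of which is modeled here.

```python
import hashlib

def multisig_address(pubkeys):
    """Toy n-of-n 'address': hash of the n public keys, sorted so the
    resulting address is independent of the order keys are supplied in."""
    return hashlib.sha256(b"|".join(sorted(pubkeys))).hexdigest()

addr = multisig_address([b"pk_B", b"pk_A", b"pk_C"])
```

Funds bound to such an address can only be spent by presenting material matching all n keys, which is exactly the setting where aggregating the n signatures into one joint signature [11] pays off.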
Fig. 11. The CPU running time from Step 2 to Step 4 on a client node
between the default Fabric transaction process and the revised Fabric
transaction process (y-axis has logarithmic scale.)
Fig. 13. The total CPU running time on a client node between the
default Fabric transaction process and the revised Fabric transaction
process (y-axis has logarithmic scale.)
VIII. CONCLUSION
This paper proposes two multi-signature schemes based on
the Gamma signature. Compared to CoSi, the most popular multi-signature scheme based on the Schnorr signature, the proposed
schemes achieve enhanced security, higher online efficiency,
and similar scalability. We also apply the proposed AGMS to
improve the transaction process of Fabric, so that the efficiency
and throughput of Fabric are enhanced.

Undoubtedly, there are some limitations to the proposed
multi-signature schemes in real-life deployments. If a partial
signature is tampered with or forged, the joint signature
cannot pass the verification algorithm; however, the nodes
in the tree then need to verify the partial responses top-down
to find the malicious signer, which increases the
running costs. If the signers are chosen in rotation,
a malicious signer who continuously sends wrong responses
can be identified efficiently, leading to a negligible attack
probability. In addition, the revised Fabric transaction process
is only suitable for the case where the endorsement policy is
set as "AND", but not for "OR" or "NOT". These limitations
will be investigated more in-depth in our future work.
REFERENCES
[1] S. Nakamoto, “Bitcoin: A peer-to-peer electronic cash system,” Decen_tralized Business Review, p. 21260, 2008._
[2] JP Morgan Tests Blockchain's Capital Markets Potential, The Wall
Street Journal, 2018. [Online]. Available: https://blogs.wsj.com/cio/2018/05/16/jp-morgan-tests-blockchains-capital-markets-potential/
[3] E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. D.
Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich, S. Muralidharan, C. Murthy, B. Nguyen, M. Sethi, G. Singh, K. Smith,
A. Sorniotti, C. Stathakopoulou, M. Vukolic, S. W. Cocco, and J. Yellick,
“Hyperledger fabric: a distributed operating system for permissioned
blockchains,” in Proceedings of the Thirteenth EuroSys Conference,
_EuroSys 2018, Porto, Portugal, April 23-26, 2018, 2018, pp. 30:1–30:15._
[4] K. Itakura and K. Nakamura, “A public-key cryptosystem suitable for
digital multisignatures,” NEC J. Res. Dev., vol. 71, pp. 1–8, 1983.
[5] P. Szalachowski, S. Matsumoto, and A. Perrig, “Policert: Secure and
flexible TLS certificate management,” in Proceedings of the 2014
_ACM SIGSAC Conference on Computer and Communications Security,_
_Scottsdale, AZ, USA, November 3-7, 2014, 2014, pp. 406–417._
[6] E. Syta, I. Tamas, D. Visher, D. I. Wolinsky, and B. Ford, “Decentralizing authorities into scalable strongest-link cothorities,” CoRR, vol.
abs/1503.08768, 2015.
[7] E. Syta, I. Tamas, D. Visher, D. I. Wolinsky, P. Jovanovic, L. Gasser,
N. Gailly, I. Khoffi, and B. Ford, “Keeping authorities ”honest or bust”
with decentralized witness cosigning,” in IEEE Symposium on Security
_and Privacy, SP 2016, San Jose, CA, USA, May 22-26, 2016, 2016, pp._
526–545.
[8] C. Schnorr, “Efficient signature generation by smart cards,” J. Cryptol_ogy, vol. 4, no. 3, pp. 161–174, 1991._
[9] M. Bellare and G. Neven, “Multi-signatures in the plain public-key
model and a general forking lemma,” in Proceedings of the 13th ACM
_Conference on Computer and Communications Security, CCS 2006,_
_Alexandria, VA, USA, October 30 - November 3, 2006, 2006, pp. 390–_
399.
[10] A. Bagherzandi, J. H. Cheon, and S. Jarecki, “Multisignatures secure
under the discrete logarithm assumption and a generalized forking
lemma,” in Proceedings of the 2008 ACM Conference on Computer
_and Communications Security, CCS 2008, Alexandria, Virginia, USA,_
_October 27-31, 2008, 2008, pp. 449–458._
[11] G. Maxwell, A. Poelstra, Y. Seurin, and P. Wuille, “Simple schnorr
multi-signatures with applications to bitcoin,” Des. Codes Cryptogr.,
vol. 87, no. 9, pp. 2139–2164, 2019.
[12] M. Drijvers, K. Edalatnejad, B. Ford, E. Kiltz, J. Loss, G. Neven, and
I. Stepanovs, “On the security of two-round multi-signatures,” in 2019
_IEEE Symposium on Security and Privacy, SP 2019, San Francisco, CA,_
_USA, May 19-23, 2019, 2019, pp. 1084–1101._
[13] A. C. Yao and Y. Zhao, “Online/offline signatures for low-power
devices,” IEEE Trans. Information Forensics and Security, vol. 8, no. 2,
pp. 283–294, 2013.
[14] L. Harn and T. Kresler, “New scheme for digital multisignatures,”
_Electronics Letters, vol. 25, no. 15, pp. 1002–1003, 1989._
[15] M. Bellare and G. Neven, “Identity-based multi-signatures from RSA,”
in Topics in Cryptology - CT-RSA 2007, The Cryptographers’ Track at
_the RSA Conference 2007, San Francisco, CA, USA, February 5-9, 2007,_
_Proceedings, ser. Lecture Notes in Computer Science, vol. 4377, 2007,_
pp. 145–162.
[16] A. Bagherzandi and S. Jarecki, “Identity-based aggregate and multisignature schemes based on RSA,” in Public Key Cryptography - PKC
_2010, 13th International Conference on Practice and Theory in Public_
_Key Cryptography, Paris, France, May 26-28, 2010. Proceedings, ser._
Lecture Notes in Computer Science, vol. 6056, 2010, pp. 480–498.
[17] J. Tsai, N. Lo, and T. Wu, “New identity-based sequential aggregate signature scheme from RSA,” in International Symposium on Biometrics and Security Technologies, ISBAST 2013, 2-5 July 2013, Chengdu, Sichuan, China, 2013. [Online]. Available: http://ieeexplore.ieee.org/document/6597680/
[18] S. Hohenberger and B. Waters, “Synchronized aggregate signatures from the RSA assumption,” in Advances in Cryptology - EUROCRYPT 2018 - 37th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Tel Aviv, Israel, April 29 - May 3, 2018, Proceedings, Part II, ser. Lecture Notes in Computer Science, vol. 10821, 2018, pp. 197–229.
[19] M. Yu, J. Zhang, J. Wang, J. Gao, T. Xu, R. Deng, Y. Zhang, and
R. Yu, “Internet of things security and privacy-preserving method
through nodes differentiation, concrete cluster centers, multi-signature,
and blockchain,” IJDSN, vol. 14, no. 12, 2018.
[20] D. Boneh, B. Lynn, and H. Shacham, “Short signatures from the
weil pairing,” in Advances in Cryptology - ASIACRYPT 2001, 7th
_International Conference on the Theory and Application of Cryptology_
_and Information Security, Gold Coast, Australia, December 9-13, 2001,_
_Proceedings, ser. Lecture Notes in Computer Science, vol. 2248, 2001,_
pp. 514–532.
[21] A. Boldyreva, “Threshold signatures, multisignatures and blind signatures based on the gap-diffie-hellman-group signature scheme,” in Public
_Key Cryptography - PKC 2003, 6th International Workshop on Theory_
_and Practice in Public Key Cryptography, Miami, FL, USA, January_
_6-8, 2003, Proceedings, ser. Lecture Notes in Computer Science, vol._
2567, 2003, pp. 31–46.
[22] T. Ristenpart and S. Yilek, “The power of proofs-of-possession: Securing
multiparty signatures against rogue-key attacks,” in Advances in Cryp_tology - EUROCRYPT 2007, 26th Annual International Conference on_
_the Theory and Applications of Cryptographic Techniques, Barcelona,_
_Spain, May 20-24, 2007, Proceedings, ser. Lecture Notes in Computer_
Science, M. Naor, Ed., vol. 4515, 2007, pp. 228–245.
[23] M. Ambrosin, M. Conti, A. Ibrahim, G. Neven, A. Sadeghi, and
M. Schunter, “SANA: secure and scalable aggregate network attestation,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer
_and Communications Security, Vienna, Austria, October 24-28, 2016,_
E. R. Weippl, S. Katzenbeisser, C. Kruegel, A. C. Myers, and S. Halevi,
Eds. ACM, 2016, pp. 731–742.
[24] D. Boneh, M. Drijvers, and G. Neven, “Compact multi-signatures for
smaller blockchains,” in Advances in Cryptology - ASIACRYPT 2018 _24th International Conference on the Theory and Application of Cryp-_
_tology and Information Security, Brisbane, QLD, Australia, December_
_2-6, 2018, Proceedings, Part II, 2018, pp. 435–464._
[25] D. He, S. Zeadally, B. Xu, and X. Huang, “An efficient identity-based
conditional privacy-preserving authentication scheme for vehicular ad
hoc networks,” IEEE Trans. Information Forensics and Security, vol. 10,
no. 12, pp. 2681–2691, 2015.
[26] B. Alangot, M. Suresh, A. S. Raj, R. K. Pathinarupothi, and
K. Achuthan, “Reliable collective cosigning to scale blockchain with
strong consistency,” in Proceedings of the Network and Distributed
_System Security Symposium (DISS ’18), 2018._
[27] E. Kokoris-Kogias, P. Jovanovic, N. Gailly, I. Khoffi, L. Gasser, and
B. Ford, “Enhancing bitcoin security and performance with strong consistency via collective signing,” in 25th USENIX Security Symposium,
_USENIX Security 16, Austin, TX, USA, August 10-12, 2016., 2016, pp._
279–296.
[28] E. Syta, P. Jovanovic, E. Kokoris-Kogias, N. Gailly, L. Gasser, I. Khoffi,
M. J. Fischer, and B. Ford, “Scalable bias-resistant distributed randomness,” in 2017 IEEE Symposium on Security and Privacy, SP 2017, San
_Jose, CA, USA, May 22-26, 2017, 2017, pp. 444–460._
[29] X. Zhou, Q. Wu, B. Qin, X. Huang, and J. Liu, “Distributed bitcoin account management,” in 2016 IEEE Trustcom/BigDataSE/ISPA, Tianjin,
_China, August 23-26, 2016, 2016, pp. 105–112._
[30] T. Ristenpart and S. Yilek, “The power of proofs-of-possession: Securing
multiparty signatures against rogue-key attacks,” in Advances in Cryp_tology - EUROCRYPT 2007, 26th Annual International Conference on_
_the Theory and Applications of Cryptographic Techniques, Barcelona,_
_Spain, May 20-24, 2007, Proceedings, 2007, pp. 228–245._
[31] A. Bagherzandi and S. Jarecki, “Multisignatures using proofs of secret
key possession, as secure as the diffie-hellman problem,” in Security and
_Cryptography for Networks, 6th International Conference, SCN 2008,_
_Amalfi, Italy, September 10-12, 2008. Proceedings, 2008, pp. 218–235._
[32] U. D. of Commerce, Secure Hash Standard - SHS: Federal Information
_Processing Standards Publication 180-4._ CreateSpace Independent
Publishing Platform, 2012.
[33] G. Andresen, “M-of-n standard transactions,” Bitcoin Improvement Pro_posal, 2011._
-----
**Yue Xiao** is a postgraduate student in the College
of Electronics and Information Engineering, Shenzhen
University, China. He received the B.S. degree
in telecommunication engineering from Guangdong
Ocean University, China, in 2017, and the M.S.
degree in information and telecommunication engineering from Shenzhen University, China, in 2020.
His current research interests include cryptography
technology and security in the Blockchain.
**Peng Zhang** is an associate professor in the College of
Electronics and Information Engineering, Shenzhen
University, China. She received the Ph.D. degree in
signal and information processing from Shenzhen
University, China, in 2011. Her current research
interests include cryptography technology and security in the Blockchain, Cloud Computing, and IoT. She
has published more than 30 academic journal and
conference papers.
**Yuhong Liu** is an Associate Professor at the Department
of Computer Engineering, Santa Clara University.
She received her B.S. and M.S. degrees from Beijing
University of Posts and Telecommunications in 2004
and 2007, respectively, and the Ph.D. degree from the
University of Rhode Island in 2012. Her research
interests include trustworthy computing and cyber
security of emerging applications, such as online
social media, Internet-of-things, and Blockchain.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2210.10294, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "http://arxiv.org/pdf/2210.10294"
}
| 2,022
|
[
"JournalArticle"
] | true
| 2022-10-19T00:00:00
|
[
{
"paperId": "a4da14a4329d7bf28e2ecbf9a3e42bf1faba523e",
"title": "Simple Schnorr multi-signatures with applications to Bitcoin"
},
{
"paperId": "6a03ab12bdd5cc21579dda850d1b4fd0403085c1",
"title": "Compact Multi-Signatures for Smaller Blockchains"
},
{
"paperId": "0f0212f1e8adc29e25b0fdb5cab475f70455ba2b",
"title": "Internet of Things security and privacy-preserving method through nodes differentiation, concrete cluster centers, multi-signature, and blockchain"
},
{
"paperId": "ed8f9b835c9e0df12b6be0a0b51d9c88c28a27a5",
"title": "Synchronized Aggregate Signatures from the RSA Assumption"
},
{
"paperId": "1b4c39ba447e8098291e47fd5da32ca650f41181",
"title": "Hyperledger fabric: a distributed operating system for permissioned blockchains"
},
{
"paperId": "3ff00474003801e4f560417a1ccc6b539d4b9577",
"title": "Scalable Bias-Resistant Distributed Randomness"
},
{
"paperId": "adad283c6b019e206fce1fb2a812abe6264bd2bc",
"title": "SANA: Secure and Scalable Aggregate Network Attestation"
},
{
"paperId": "86710afa7007e9a85100386e3519a66134a1fb23",
"title": "Distributed Bitcoin Account Management"
},
{
"paperId": "efd99fe3b5b620d89aa03201199c45988c688670",
"title": "Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing"
},
{
"paperId": "3bda998e88f6eab9c3e6a24134359bdd5e2a1d3a",
"title": "An Efficient Identity-Based Conditional Privacy-Preserving Authentication Scheme for Vehicular Ad Hoc Networks"
},
{
"paperId": "a9efb1e481ec9ca6a42cf746c6135559cd314149",
"title": "Keeping Authorities \"Honest or Bust\" with Decentralized Witness Cosigning"
},
{
"paperId": "72a27d7a168219ad01077633ca4b8f6c87934ece",
"title": "Decentralizing Authorities into Scalable Strongest-Link Cothorities"
},
{
"paperId": "b694612693926c92e40a17135d14ab52bc08240c",
"title": "PoliCert: Secure and Flexible TLS Certificate Management"
},
{
"paperId": "bb4892d259ff772a1745fd36c7758d4974362f4f",
"title": "New Identity-Based Sequential Aggregate Signature Scheme from RSA"
},
{
"paperId": "a49fc65722ec3a4a5088e987c5dc1a021521a2b1",
"title": "Online/Offline Signatures for Low-Power Devices"
},
{
"paperId": "8a56b22eb66eba44b9192c081bf6ca2cf282d7a5",
"title": "Identity-Based Aggregate and Multi-Signature Schemes Based on RSA"
},
{
"paperId": "425871f3bb6992717e5355bee6be99d9a1f92dcd",
"title": "Multisignatures secure under the discrete logarithm assumption and a generalized forking lemma"
},
{
"paperId": "f2425f8258d92d122aea56dc911f3bbaf24b91a4",
"title": "Multisignatures Using Proofs of Secret Key Possession, as Secure as the Diffie-Hellman Problem"
},
{
"paperId": "551fb86810f70c5fcb5372a829a36055b3db0dc3",
"title": "The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks"
},
{
"paperId": "325dfc6cc728b8d7f9ebee6dc89dfbc2b6db1819",
"title": "Identity-Based Multi-signatures from RSA"
},
{
"paperId": "ff246612ad8665b8a1e65dc975b4dab0a1ed44e0",
"title": "Multi-signatures in the plain public-Key model and a general forking lemma"
},
{
"paperId": "3c0c82f42172bc1da4acc36b656d12351bf53dae",
"title": "Short Signatures from the Weil Pairing"
},
{
"paperId": "7c73395383a8d05f8cc5f5f31b54c29d14b5f834",
"title": "New scheme for digital multisignatures"
},
{
"paperId": "f09ee666ef7aa525a7b7444da596b33dd946eaf1",
"title": "On the Security of Two-Round Multi-Signatures"
},
{
"paperId": "e9de8a9d48298cb8bba693d7d058505ca7ca4c9d",
"title": "Reliable Collective Cosigning to Scale Blockchain with Strong Consistency"
},
{
"paperId": null,
"title": "JP Morgan Tests Blockchain's Capital Markets Potential"
},
{
"paperId": null,
"title": "Secure Hash Standard - SHS: Federal Information Processing Standards Publication 180-4"
},
{
"paperId": null,
"title": "M-of-n standard transactions"
},
{
"paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596",
"title": "Bitcoin: A Peer-to-Peer Electronic Cash System"
},
{
"paperId": "2642738e9977c08d4085ce1c6530d63545383d30",
"title": "Efficient signature generation by smart cards"
},
{
"paperId": "6cdb1ffa44c250eef7feddbd50fdb26ac0e9ccda",
"title": "Efficient threshold signature, multisignature and blind signature schemes based on the Gap-Diffie-Hellman-Group signature scheme"
},
{
"paperId": "ddc25416604971f49e43f49b2aa09115ea15467e",
"title": "A public-key cryptosystem suitable for digital multisignatures"
},
{
"paperId": null,
"title": "Hohenberger and B . Waters , “ Synchronized aggregate signatures from the RSA assumption , ” in Proc . 37 th Annu . Int . Conf . Appl . Cryptograph . Techn . Adv . Cryptol"
},
{
"paperId": null,
"title": "is with the Department of Computer Engineering, Santa Clara University, Santa Clara 95053, USA"
}
] | 21,191
|
en
|
[
{
"category": "Sociology",
"source": "s2-fos-model"
},
{
"category": "Political Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/032a922ca59977bb44a82a4612c000f05e3fb41f
|
[] | 0.930035
|
The importance of safety and security in urban space
|
032a922ca59977bb44a82a4612c000f05e3fb41f
|
Humanities & Social Sciences Reviews
|
[
{
"authorId": "1402353705",
"name": "Kamila Kiełek"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Purpose of the study: This article presents the main determinants of security and safety in the public space of the city. The main objective is to examine the importance of security in the public area of the city and to discuss how it can be achieved.
Methodology: The article uses a literature review focused mainly on the urban public area as one of the most critical aspects of the city. "Desk research" is the method used for the analysis.
Main findings: From the considerations, the urban public space can be a place of excellent security or an area of crime and fear. Key factors affecting safety in urban public regions are visibility and design.
Application of the study: This article refers to the behavior of citizens in urban spaces. As more and more hackers attack both companies and individuals, everyone needs to take the necessary precautions. The use of aids such as cameras and lighting can help to warn residents of possible hazards. Article's content may be helpful for residents to work together to create a safer environment.
Originality/Novelty of the study: Safety and security must never be compromised in public places in the city. Aspects of public space are an essential part of life and must be secure for safety and well-being. The reasons for applying precautions in urban areas in this article suggest that security should not be neglected, as it can lead to large-scale accidents and tragedies in the absence of adequate safety measures. This article may stimulate further research and study in the field of public security and contribute to other interesting scientific contributions on the subject.
|
eISSN: 2395-6518, Vol 10, No 6, 2022, pp 21-23
# The importance of safety and security in urban space
**Kamil Kiełek**
Ph.D. Student, Jan Kochanowski University of Kielce, Poland.
Email: kamil.kielek.96@gmail.com
**Keywords**
Urban Space, City, Inhabitants, Public
Space.
**Article History**
Received on 28th October 2022
Accepted on 19th November 2022
Published on 7th December 2022
**Cite this article**
Kiełek, K. (2022). The importance of
safety and security in urban space.
Humanities & Social Sciences Reviews,
10(6), 21-23.
https://doi.org/10.18510/hssr.2022.1063
**Abstract**
**Purpose of the study: This article presents the main determinants of security and**
safety in the public space of the city. The main objective is to examine the
importance of security in the public area of the city and to discuss how it can be
achieved.
**Methodology:** The article uses a literature review focused mainly on the urban
public area as one of the most critical aspects of the city. "Desk research" is the
method used for the analysis.
**Main findings: From the considerations, the urban public space can be a place of**
excellent security or an area of crime and fear. Key factors affecting safety in urban
public regions are visibility and design.
**Application of the study:** This article refers to the behavior of citizens in urban
spaces. As more and more hackers attack both companies and individuals, everyone
needs to take the necessary precautions. The use of aids such as cameras and
lighting can help to warn residents of possible hazards. Article's content may be
helpful for residents to work together to create a safer environment.
**Originality/Novelty of the study: Safety and security must never be compromised**
in public places in the city. Aspects of public space are an essential part of life and
must be secure for safety and well-being. The reasons for applying precautions in
urban areas in this article suggest that security should not be neglected, as it can
lead to large-scale accidents and tragedies in the absence of adequate safety
measures. This article may stimulate further research and study in the field of
public security and contribute to other interesting scientific contributions on the
subject.
**Copyright @Author**
**Publishing License**
This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License (https://creativecommons.org/licenses/by-sa/4.0/).
**INTRODUCTION**
The meaning of safety and security in the urban public space can vary depending on the context. In general, however,
safety refers to the physical well-being of individuals, while security refers to the protection of property and possessions.
In an urban setting, public spaces are typically those areas that are open and accessible to all, such as parks, sidewalks,
and plazas (Lekareva & Zaslaskaya, 2018). It can also refer to the physical safety of people and their belongings in these
spaces. The safety and security of these spaces are essential to the health and vibrancy of cities. When people feel safe in
public spaces, they are more likely to use them. There are a few ways to increase security and protection in city public
spaces. One way is to improve lighting in public areas. This can deter crime and make people feel safer. Another way is
to increase police presence in public places. This can also prevent crime and make people feel safer. It is essential to
design public spaces in a way that promotes natural surveillance. This means that there are no places for criminals to
hide and that people can see and be seen by others. Urban public space is increasingly used for recreation, socializing,
and working. However, it is also a place where people go to feel safe and secure. This has led to a growing focus on
safety and security in urban public spaces, as well as the need to create environments that are both safe and welcoming.
As the world becomes increasingly digital, more people are spending time and money in urban public spaces. But what
happens when those spaces become dangerous or unsafe?
The main purpose of this article is to explore the meaning of safety and security in urban public spaces and discuss how
they can be achieved.
**DISCUSSION**
Urban public space is one of the most important aspects of a city. It's where we exercise, socialize, and go about our
daily lives. Unfortunately, this space is also increasingly subject to threats and dangers. In this article, we explore the
meaning of safety and security in urban public spaces and discuss how municipalities can protect their citizens. We also
provide some tips on how we can keep ourselves safe when we're out and about in the city. Urban public space can be a
place of excellent safety and security, or it can be the site of crime and fear. It all depends on the context in which it is
used. Many people feel safe walking around their neighbourhoods because they know the people who live there and trust
them not to hurt them. This kind of community-oriented safety is also found in many city centres, where residents know
one another and are willing to help if something goes wrong. However, this sense of security doesn't always exist in
urban public spaces outside our homes or neighbourhoods (Kacharo, Teshome & Woltamo, 2022). Many people feel
uncomfortable walking around large cities at night for fear of being mugged or attacked. And even during the day, some
areas can still be unsafe due to high crime levels (particularly violent crimes). One key factor contributing to safety in
an urban public space is visibility: Urban planners want streets to be brightly lit at night so pedestrians will see potential
dangers ahead and avoid them, just as drivers should see obstacles on the road while driving safely.
Another important factor is design: Poorly designed streets don't allow for easy navigation by wheelchair users or those
with prosthetic devices; they're also tricky to cross without getting lost due to confusing intersections or lack of
signage/bicycle parking facilities nearby. Urban public space can be a source of both safety and security for people who
live in or visit cities. It can help to reduce crime rates, provide social opportunities, and promote healthy lifestyles
(Kacharo, Teshome & Woltamo, 2022). Safety in urban public spaces is enhanced by security cameras and lighting that
can alert residents to possible danger. These devices are often linked with other systems, such as sensors that detect
unauthorized entry or activity, or alerts sent through mobile phones when someone enters or leaves an area designated as
safe. In addition to physical safety hazards, urban public spaces may pose psychological risks due to fear of violence or
theft. By selecting certain areas as safe places, city governments can help minimize these dangers while allowing people
the freedom to move around within their community. For urban public space to serve its purpose as an essential part of
civic life, it must be well-maintained and regularly monitored so that it remains accessible and enjoyable for everyone
who uses it.
In today's society, it is increasingly vital for people to be aware of their safety and security when in public spaces. With
more and more hackers attacking businesses and individuals alike, everyone must take the necessary precautions to stay
safe. Understanding the security and safety of the city's public spaces is vital to ensuring that we remain both healthy and
safe while using them. Urban public spaces are a vital part of our cities, and they need to be as safe and secure as
possible. Unfortunately, this isn't always the case. This is because urban public spaces are often unprotected from crime
and vandalism (Beqaj, 2016).
To combat these problems, it's vital for businesses and residents to work together to create safer environments in which
everyone can enjoy their cityscape. Here are a few tips to help us to keep our heads when all around us seem unsafe:
- Always use common sense when interacting with strangers. If something doesn't feel right, don't do anything until
we've had a chance to think things through.
- Avoid walking alone at night or during busy times of the day - these are times when criminals look for
easy targets. Stick close to others whenever possible, and if we feel uncomfortable walking in an area, reach out for
help.
- Be careful what information we share online - whether it's personal photos or descriptions of our personality traits.
Always ensure that any information transmitted is private enough not to be found by unauthorized parties.
- Educate ourselves and our friends about the dangers of crime in urban public spaces so we know what to watch out
for when we're out there.
- Secure our property by installing security cameras or locks on gates/doors leading into our property. This will help
deter criminals from entering uninvited and committing crimes against us or our possessions (Beqaj, 2016).
- Report any suspicious behavior immediately to the police (or security personnel if applicable). Criminals won't feel
comfortable carrying out their deeds knowing they could be caught at any time!
- Keep an eye on social media (and other online communities) for updates regarding Crime Alerts or Security
Warnings in specific parts of town - This way, we'll always have up-to-date information about potential safety threats
near where we live or work (Svensdotter & Guarali, 2018).
As the world's population continues to grow and urbanize, safety and security issues in public spaces are becoming more
common. There are a variety of reasons for this. First, cities are becoming more crowded and denser, which makes it
easier for criminals to hide and commit crimes (Minton, 2018). Second, there's an increase in cybercrime (attacks that
use the Internet or computer networks to steal or damage property), which can have a devastating impact on businesses
and individual lives. To maintain peace and order in cities, it is essential that we must develop adequate safety and
security strategies. Here are some critical steps that we can take:
- Establish clear guidelines for acceptable behavior in public spaces. This will help everyone understand what's
allowed and what's not allowed.
- Make sure everyone who lives or visits urban public spaces knows their rights and responsibilities (including the
right to record footage of events). This will help deter criminals from committing crimes in public spaces.
- Keep an eye on social media platforms for updates about security incidents happening in cities – this will allow us to
plan and prepare for emergencies.
There are many common challenges that businesses face when it comes to safety and security. These can include issues
with data protection, malware attacks, physical theft or vandalism, social engineering scams, and more. To manage these
risks effectively, it's essential to have an urban security policy in place that sets out clear guidelines for addressing each
type of risk (Navarrete-Hernandez, Vetro & Concha, 2021). Furthermore, we need to put in place measures such as ID
scanning at entrances and using strong passwords (with a mix of upper and lowercase letters as well as numbers) to help
protect our data from unauthorized access. We should also regularly monitor our system for signs of compromise (such
as unusual activity on our computers or suspicious emails), and take appropriate action if necessary (Martinez, 2019).
Overall, taking the time to understand our specific business situation and creating tailored policies to protect both assets
and customers is the best way to ensure safety and security for ourselves and our team.
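The password guidance above (a mix of upper- and lowercase letters as well as numbers) can be expressed as a minimal check. This is an illustrative sketch only: the function name and length threshold are assumptions made for the example, and modern guidance generally favours length and breach checks over composition rules.

```python
import re

# Illustrative sketch (assumption): check a password against the simple
# composition guidance in the text - mixed case plus at least one digit.

def meets_basic_policy(password: str, min_length: int = 10) -> bool:
    """True if the password is long enough and mixes cases and digits."""
    return (
        len(password) >= min_length
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"[0-9]", password) is not None
    )

print(meets_basic_policy("Sunny2Harbor9"))  # True
print(meets_basic_policy("alllowercase"))   # False
```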
**CONCLUSION**
To sum it all up, safety and security can never be compromised in any urban public place. The importance of these
norms has also been acknowledged by the Supreme Court as it laid down guidelines for providing adequate security to
people at public spaces like malls, markets, and railway stations. It is time now that this mindset gets rooted in our
system too. We should work towards making each city a safe one where we are free from any form of harassment or
crime. Public space is a critical part of our lives and must be kept safe and secure for the safety and well-being of all.
Recent events have shown us how dangerous it can be not to take precautions when in public spaces, no matter how
familiar we feel with the area. General security is an important responsibility that we all share. As citizens, we have to
report anything out of the ordinary or suspicious to authorities so that they can do their job correctly. Public spaces are
often used by people for socializing, studying, playing, and many more. Due to their importance in the daily life of
citizens, designers have come up with new designs and installations for safety and security in these spaces. These
new designs must not only be aesthetically pleasing but also practical for the users. The only way to ensure such an
outcome is by making sure all stakeholders, from government bodies to cities' residents, work together toward creating
safe environments where everyone feels secure and happy. We can say that safety and security should not be taken
lightly. In the absence of adequate safety measures, it may lead to large-scale accidents and tragedies. People often fail
to notice the lack of sufficient security in parks and other public spaces because they have become so accustomed to
such facilities being safe. The frequent attacks against women across different cities are a testament to how vital it is for
authorities to step up their efforts to ensure better security at public places like parks, bus stops, etc.
**REFERENCES**
1. Beqaj, B. (2016). Public Space, public interest and Challenges of Urban Transformation. IFAC-PapersOnLine, 49(29), 320-324. https://doi.org/10.1016/j.ifacol.2016.11.087
2. Kacharo, D., Teshome, E., & Woltamo, T. (2022). Safety and security of women and girls in public transport. Urban, Planning and Transport Research, 10(1), 1-19. https://doi.org/10.1080/21650020.2022.2027268
3. Lee, S. (2021). The safety of public space: urban design guidelines for neighborhood park planning. Journal of Urbanism: International Research on Placemaking and Urban Sustainability, 15(2), 222-240. https://doi.org/10.1080/17549175.2021.1887323
4. Lekareva, N., & Zaslavskaya, A. (2018). New meaning of urban public spaces. Urban Construction and Architecture, 8(2), 130-134. https://doi.org/10.17673/Vestnik.2018.02.22
5. Martinez, P. (2019). Challenges for ensuring the security, safety, and sustainability of outer space activities. Journal of Space Safety Engineering, 6(2), 65-68. https://doi.org/10.1016/j.jsse.2019.05.001
6. Minton, A. (2018). The Paradox of Safety and Fear: Security in Public Space. Architectural Design, 88(3), 84-91. https://doi.org/10.1002/ad.2305
7. Navarrete-Hernandez, P., Vetro, A., & Concha, P. (2021). Building safer public spaces: Exploring gender difference in the perception of safety in public space through urban design interventions. Landscape and Urban Planning, 214, 104180. https://doi.org/10.1016/j.landurbplan.2021.104180
8. Svensdotter, A., & Guaralda, M. (2018). Dangerous Safety or Safely Dangerous. Perception of safety and self-awareness in public space. The Journal of Public Space, 3(1), 75-92. https://doi.org/10.5204/jps.v3i1.319
-----
Data & Policy
RESEARCH ARTICLE
# The Social Data Foundation model: Facilitating health and social care transformation through datatrust services
Michael Boniface[1], Laura Carmichael[1],*, Wendy Hall[1], Brian Pickering[1],
Sophie Stalla-Bourdillon[2] and Steve Taylor[1]
1Electronics & Computer Science, University of Southampton, Southampton, United Kingdom
2Law, University of Southampton, Southampton, United Kingdom
[*Corresponding author. E-mail: L.E.Carmichael@soton.ac.uk](mailto:L.E.Carmichael@soton.ac.uk)
Received: 16 June 2021; Revised: 17 December 2021; Accepted: 10 January 2022
Key words: data governance models; data institutions; data stewardship; datatrust services; healthcare and social care
Abbreviations: AI, artificial intelligence; API, application programming interface; CHIA, Care and Health Information Exchange
Analytics; DARS, Data Access Request Service; DLT, distributed ledger technology; DPIA, data protection impact assessment;
DPO, data protection officer; DSAP, data sharing and analysis project; GDPR, General Data Protection Regulation; HL7 FHIR,
Health Level 7 Fast Healthcare Interoperability Resources; HRA, Health Research Authority; ICO, Information Commissioner’s
Office; ICS, integrated care system; ISO, International Organization for Standardization; MELD, Multidisciplinary Ecosystem to
study Lifecourse Determinants of Complex Mid-life Multimorbidity using Artificial Intelligence; ML, machine learning; MLTC-M,
multiple long term conditions—multimorbidity; NHS, National Health Service (UK); NHS REC, NHS Research Ethics Committee;
NIHR, National Institute for Health Research (UK); ONS, Office for National Statistics; OWASP, Open Web Application Security
Project; PETs, privacy enhancing-technologies; PI, principal investigator; RDA, Research Data Alliance; SDF, Social Data
Foundation; SD-WANS, software-defined wide area networks; TRE, trusted research environment; UK, United Kingdom; UKDS,
UK Data Service; UKHDRA, UK Health Data Research Alliance; WSI, Web Science Institute
Abstract
Turning the wealth of health and social data into insights to promote better public health, while enabling more effective
personalized care, is critically important for society. In particular, social determinants of health have a significant
impact on individual health, well-being, and inequalities in health. However, concerns around accessing and
processing such sensitive data, and linking different datasets, involve significant challenges, not least to demonstrate
trustworthiness to all stakeholders. Emerging datatrust services provide an opportunity to address key barriers to
health and social care data linkage schemes, specifically a loss of control experienced by data providers, including the
difficulty to maintain a remote reidentification risk over time, and the challenge of establishing and maintaining a
social license. Datatrust services are a sociotechnical evolution that advances databases and data management
systems, and brings together stakeholder-sensitive data governance mechanisms with data services to create a trusted
research environment. In this article, we explore the requirements for datatrust services, a proposed implementation—
the Social Data Foundation, and an illustrative test case. Moving forward, such an approach would help incentivize,
accelerate, and join up the sharing of regulated data, and the use of generated outputs safely amongst stakeholders,
including healthcare providers, social care providers, researchers, public health authorities, and citizens.
Policy Significance Statement
Turning the wealth of health and social data into insights for better public health and personalized care is critically
important for society. Yet data access and insights are hampered by manual governance processes that can be
time-consuming, error-prone, and not easy to repeat. With increasing data volumes, complexity, and need for ever-faster
solutions, new approaches to data governance must be found that are secure, rights-respecting, and endorsed by
communities. The Social Data Foundation combines governance with datatrust services to allow citizens, service
providers, and researchers to work together to transform systems. By bridging the gap between data and trust
services, new progressive models of data governance can be established, offering high levels of data stewardship and
citizen participation.
© The Author(s), 2022. Published by Cambridge University Press. This is an Open Access article, distributed under the terms of the Creative Commons
-----
1. Introduction
Social determinants of health significantly affect individual well-being and health inequalities (Sadana
and Harper, 2011; Public Health England, 2017; Marmot et al., 2020). The World Health Organization
(n.d.) describes “social determinants of health” as “nonmedical factors that influence health outcomes”
such as “education,” “working life conditions,” “early childhood development,” and “social-inclusion
and nondiscrimination.” The global COVID-19 pandemic highlights how “disparities in social determinants of health” (Burström and Tao, 2020) give rise to poorer health outcomes for some groups in
society. For instance, disadvantaged economic groups appear to be at greater risk of exposure to COVID-19, and are more susceptible to severe disease or death (e.g., Abrams and Szefler, 2020; Burström and Tao,
2020; Triggle, 2021).
Social determinants of health can be acquired from diverse data sources—for example, wearables, digital
health platforms, social media, and environment monitoring—many beyond the conventional boundaries of
health and social care (e.g., Sharon and Lucivero, 2019). The “safe” linkage (UK Data Service, n.d.; UK
Health Data Research Alliance, 2020) of good quality data is therefore vital for the generation of insights
supporting positive health and social care transformation.[1] Specifically, newer forms of social determinants
of health data (e.g., from wearables) need to be brought together with other, more conventional data types (e.g.,
electronic healthcare records, public health statistics, and birth cohort datasets) for analysis by multidisciplinary researchers and practitioners, including the application and development of new and existing
health data science methods and tools. Such data-driven insights can be used to “improve decision-making at
the individual and community level” (Galea et al., 2020), thus promoting better public health,[2] enabling more
effective personalized care,[3] and ultimately helping address inequalities in health.
Although the need for sustainable and positive health and social care transformation is widely accepted
in principle, more needs to be done in practice to derive benefit from available data. This includes
incentivizing and accelerating sharing of regulated data and any associated outputs across relevant
stakeholders (e.g., healthcare or social care providers, researchers, public health authorities, and citizens).
Many health and social care datasets remain in silos under the control of individual groups or institutions
(Kariotis et al., 2020), giving rise to data monopolies or oligopolies. Slow, disjointed, manual governance
processes—often error-prone, time-consuming, and difficult to repeat—hamper data access and insights.[4]
This has been accentuated by the extraordinary situation of the global COVID-19 pandemic (e.g.,
Research Data Alliance (RDA) COVID-19 Working Group, 2020). Trustworthy data governance is
essential not only to ensure data providers and data users can fulfill their regulatory obligations, but also to
maintain public confidence and engagement (Geissbuhler et al., 2013; Stalla-Bourdillon et al., 2021).
1 Note that a key theme for positive health and social care transformation is the design and implementation of “integrated care
systems” (ICSs) for seamless care delivery across the health and social care pathways (NHS, 2019)—also referred to as “hospitals
without walls” (Hawkes, 2013; Spinney, 2021).
2 For example, via public interventions, targeted health, and well-being campaigns.
3 For example, through personalized medicine, increased patient and/or service user empowerment, and better operational
efficiency for health and care service providers.
4 In the UK, the NHS remains a key provider of clinical and administrative data for research and innovation (i.e., secondary use of
data for nonclinical purposes) related to health and social care systems transformation. Data users can request access to data, for
-----
Advanced data governance[5] models are therefore required that can foster a “social license” (Carter et al.,
2015; Jones and Ford, 2018; O’Hara, 2019) and which can handle increasing data volumes and
complexity safely (e.g., Sohail et al., 2018; Winter and Davidson, 2019).
To enable fast, collaborative, and trustworthy data sharing that meets these needs, we propose a Social
Data Foundation for Health and Social Care (“the SDF”) (Boniface et al., 2020), as a new form of data
institution.[6] Based on the “Five Safes Plus One,” and the concept of the “trusted research environment
(TRE)” (The UK Health Data Research Alliance, 2020), the SDF proposes datatrust services as a
sociotechnical model for good data governance, sensitive to the needs of all stakeholders, and allied
with advances in dynamic and secure federated research environments.
This article considers how health and social care transformation can be facilitated through datatrust
services—and is divided into four main parts.[7] First, in Section 2, we explore the conceptual basis for
TREs within the health and social care domain. Second, in Section 3, we demonstrate why the SDF model
is well equipped to support health and social care transformation for individual and community benefit,[8]
boost open science, and generate insights for multiple stakeholders—by providing an overview of the
SDF governance structure and an implementation of datatrust services. Third, in Section 4, we validate
our SDF model through its application to a test case centered on social determinants of health research: the
“Multidisciplinary Ecosystem to study Lifecourse Determinants of Complex Mid-life Multimorbidity
using Artificial Intelligence” (MELD) project (MELD, 2021). Finally, in Section 5, we summarize the key
points raised and outline next steps for the SDF model.
2. “TREs” in Health and Social Care: Motivation and Key Requirements
Best practice for health and social care research and innovation—specified by the UK Health Data
Research Alliance (UKHDRA) (2020)—necessitates that data sharing and linkage occurs within TREs,
providing:
a secure space for researchers to access sensitive data. Commonly referred to as ‘data safe havens,’
TREs are based on the idea that researchers should access and use data within a single secure
environment (Harrison, 2020).
This section examines the concept of a “TRE” when used for linking data held by different parties for
the purpose of health and social care transformation.
2.1. Challenges with the “data release model”
Despite the long-established notion of the “data safe haven” (Burton et al., 2015),[9] health and social data
linkage typically uses a “data release model”: data are made available to approved users in their own data
environments (UKHDRA, 2020).
5 While there is no universal definition of the term “data governance,” Janssen et al. (2020) provide a useful description of this
term in a multiorganizational context: “Organizations and their personnel defining, applying, and monitoring the patterns of rules
and authorities for directing the proper functioning of, and ensuring the accountability for, the entire life-cycle of data and algorithms
within and across organizations.” Note that Smart Dubai and Nesta (2020) describe collaborative data governance innovation as
“fairly embryonic” in practice.
6 The phrase “data institution” is used by the Open Data Institute (ODI) as an umbrella term to describe: “organizations whose
purpose involves stewarding data on behalf of others, often towards public, educational or charitable aims” (Dodds et al., 2020).
7 A glossary of key terms is provided after the main text of this article.
8 For example, alignment with the CARE principles (2018).
9 Trusted third party intermediaries continue to play a crucial role in facilitating data linkage for public health research and
innovation—such as, SAIL (2021; Jones et al., 2014) for linkage of specified anonymized datasets, and UKHDRA (n.d.) for
discoverability of particular UK health datasets held by members through its Innovation Gateway. For further discussion of this
point, the Public Health Research Data Forum (2015) provides 11 case studies of data linkage projects from across the world, and
-----
The data release model can be problematic. Firstly, health and social care data are often rich and large-scale, requiring “diverse tooling” (UKHDRA, 2020). However, data safe havens were “until recently”
only able to provide limited tools for data analysis (UKHDRA, 2020) as well as “secure remote working
solutions, real-time anonymisation, and synthetic data” (Desai et al., 2016). Further, once data are shared,
data providers often experience a loss of control over their data. They have reduced oversight over how
data are accessed, linked, and reused. Generated outputs from any data linkage activities (e.g., containers,
derived data, images, notebooks, publications, and software) are often not adequately disclosed
(UKHDRA, 2020), making it more difficult to effectively mitigate the risk of reidentification, and
increasing potential “mosaic effects” (Pozen, 2005).
In some cases, this loss of control and visibility may act as disincentives to sharing data with higher
levels of utility[10] (e.g., data providers may share only aggregated data where deidentified data at the
individual level may offer greater societal benefit), or sharing any data whatsoever. A lack of control,
transparency, and measurement of benefit may also prevent, weaken, or nullify a social license (defined
below) for specific health and social care research and innovation activities.
2.2. Upholding a social license
Fulfilling legal obligations alone is not enough to secure social legitimacy for health and social care
research and innovation (Carter et al., 2015)—TREs require a “social license” defined by Muller et al.
(2021) as follows:
A social licence in the context of data-intensive health research refers to the non-tangible societal
permission or approval that is granted to either public or private researchers and research organisations. This allows them to collect, use, and share health data for the purpose of health research by
virtue of those activities being trustworthy, by which is meant trusted to be in line with the values
and expectations of the data subject communities, stakeholders, and the public.
A social license, therefore, is dependent on perceptions by the main stakeholders that what is being
done is acceptable and beneficial (Rooney et al., 2014). Applied to the TRE, its social license is supported
by its perceived trustworthiness (which can be expressed in terms of benevolence, integrity, and ability
[Mayer et al., 1995]) toward the communities it intends to serve. For instance, aligning ethical oversight
with the CARE Principles for Indigenous Data Governance (2018)—that is, “collective benefit,”
“authority to control,” “responsibility,” and “ethics”—brings to center stage the need to ensure equanimity
across the data lifecycle. The UKHDRA (2020) describes the principal rationale for TREs as follows:
[to] protect—by design—the privacy of individuals whose health data they hold, while facilitating
large scale data analysis using High Performance Computing that increases understanding of
disease and improvements in health and care.
Along similar lines, the Research Data Alliance (RDA) outlines TRUST principles for data infrastructures—that is, “Transparency,” “Responsibility,” “User Focus,” “Sustainability,” and “Technology”
(Lin et al., 2020). However, changes in technology, especially within data science, introduce other issues.
Given the availability of ever-increasing volumes of people-centric data, the Toronto Declaration (2018)
highlights the fundamental human rights of data subjects, especially for those felt to be particularly
10 While strong deidentification of data is vital to protect the rights of (groups of) individuals, deidentification can lower the utility
of data. The definition of anonymized data is provided by GDPR (2016) Recital 26, namely “information which does not relate to an
identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no
longer identifiable.” Although strictly speaking, Recital 26 is not binding it has been used by the Court of Justice of the European
Union and other national courts to interpret the concept of anonymized data. As a matter of principle, two different processes can lead
to anonymized data: a risk-based approach to aggregation (i.e., data is aggregated, e.g., to produce counts, average sums) or a risk
-----
vulnerable. Similarly, the UK Data Ethics Framework (Central Digital and Data Office, 2020) champions
the overarching principles of transparency, accountability, and fairness. As well as compliance with
relevant law and constant review of individual rights, the framework seeks to balance community needs
against those rights. Governance must include all relevant, possibly cross-disciplinary expertise, and
ongoing training, of course. In a similar vein with artificial intelligence (AI) technologies, the European
Commission (2019) and the UK Department of Health and Social Care (2021) both emphasize respect for
individual rights within the context of potential community benefit, accountability, and transparency.
Beyond this, though, for stakeholders to agree on a social license, it must be clear that the rights and
expectations of individuals and the communities they represent should be upheld.
2.3. Bringing citizens back to center stage
To promote social license and public trust, collaborative data-sharing initiatives need to (re)connect data
citizens with data about them and its utility. This is particularly pertinent as health and social care research
and innovation becomes increasingly data-driven (Aitken et al., 2020) with national and international data
aggregators aiming to increase the power of AI through collection of ever-larger population and disease-specific datasets. In such circumstances, provenance, transparency of (re)use, and benefits suffer the risk of
opacity; citizen inclusion must be embedded in the design and operation of such processes.
Further, the secondary use of data continues to increase (Jones and Ford, 2018), yet is often less understood
by citizens (CurvedThinking, 2019). While such citizen engagement and participation is not new within the
health and social care domain, more needs to be done to empower citizens and ensure greater inclusivity in
practice (Ocloo and Matthews, 2016)—especially where healthcare data are (re)used by third parties
(Understanding Patient Data and Ada Lovelace Institute, 2020). Precedence must be given to meaningful
citizen engagement and participation (Davidson et al., 2013; Ford et al., 2019), which remain “inclusive and
accessible to broad publics” (Aitken et al., 2020). Of course, given citizens are the focus of public health
promotion, recipients of care, and data subjects, it is important they not only have access to information about
how data are being (re)used, but also have a voice in the transformation of health and social care systems.
In a world of data-driven policies and technologies, citizen voice and agency will increasingly be
determined by participation in datasets themselves. Unless minority representation in datasets is
addressed, bias and health inequalities will continue to be propagated. As such, citizen engagement,
participation, and empowerment should be viewed as core to health and social care data governance (e.g.,
Hripcsak et al., 2014; Miller et al., 2018). In particular, there needs to be inclusion of appropriately
representative citizens—along with other stakeholders—in the codesign and coevaluation of digital
health and social care solutions—to ensure that the benefits derived from “safe outputs” are “measured
and evidenced” (Centre for Data Ethics and Innovation, 2020) for communities and individuals.
2.4. Maintaining cohesion and the “diameter of trust”
Existing data-sharing relationships between stakeholder communities (e.g., a specific university, local
council, and hospital) can be replicated and strengthened through a TRE. To maintain the cohesiveness of
such a community, extensions to membership and engagement need careful consideration as they relate to
notions of community-building around TRE interactions. A “diameter of trust” (Ainsworth and Buchan,
2015; Ford et al., 2019; Northern Health Science Alliance, 2020) provides a means to:
gauge the size and characteristics of a learning, sustainable and trustworthy system
(MedConfidential, 2017).
A diameter of trust may be defined for a data institution by examining:
(i) “The level at which engagement with the citizen can be established […]”
(ii) “The extent of patient flows within the health economy, between organisations […]”
-----
(iv) “The ability to bring data together from the wider determinants of health and care relevant for that
population in near real-time […]” (MedConfidential, 2017).
As such, mechanisms need to be in place for a TRE to expand while appreciating the potential
impacts of community size. A diameter of trust cannot be predicated solely on demographics (e.g.,
geographic scope and community), and trustworthiness must be demonstrated through the operation of a
data institution and its proven outcomes, which will in turn encourage trust responses from its stakeholders (e.g., O’Hara, 2019).
2.5. Progressive governance
To remain effective and appropriate, a data governance model for a TRE must be progressive, learning
iteratively, integrating new best practices without undue delay, as well as remaining compliant with the
changing legal landscape. Best practice may be both organizational (e.g., the adoption of codes of conduct
and ethical frameworks) and technical (e.g., application of advanced security and privacy-enhancing
technologies [PETs]) in nature. To maintain trustworthiness, crucially, it must adapt to the experience and
concerns of all key stakeholders (data subjects, data providers, service providers, researchers, etc.). For
instance, Understanding Patient Data (Banner, 2020) has provided a first iteration of a high-level
“learning data governance model” that aims to meaningfully integrate citizen views within the decision-making lifecycle.
Lessons may be learned not only from the day-to-day practicalities of supporting individual research
projects, through the outputs of citizen engagement and participation activities, but also externally via
authoritative national and international guidance. As Varshney (2020) asserts:
progressive data governance encourages fluid implementation using scalable tools and programs.
Therefore, progressive data governance is essential, and contingent on greater automation of data
governance processes and tooling to accelerate trustworthy and collaborative data linkage (Sohail et al.,
2018; Moses and Desai, 2020).
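Such automation implies machine-readable governance artefacts. As a purely illustrative sketch (the field names, values, and check below are our assumptions, not a published SDF or UKHDRA schema), a data-sharing permission might be encoded so that tooling can evaluate one approval step automatically rather than manually:

```python
# Hypothetical machine-readable data-sharing permission; the schema is
# illustrative only, not taken from the SDF or any published standard.
permission = {
    "dataset": "linked-health-social-care",
    "purpose": "approved-research",
    "allowed_operations": ["aggregate", "train-model"],
    "expires": "2025-12-31",
}

def is_allowed(request_purpose: str, operation: str, today: str) -> bool:
    """An automated governance check a datatrust service could run
    before releasing data, replacing one manual approval step.
    Dates are ISO 8601 strings, so lexicographic comparison is valid."""
    return (request_purpose == permission["purpose"]
            and operation in permission["allowed_operations"]
            and today <= permission["expires"])

print(is_allowed("approved-research", "aggregate", "2024-06-01"))  # True
print(is_allowed("marketing", "aggregate", "2024-06-01"))          # False
```

Encoding permissions this way is one route to the “fluid implementation using scalable tools” that progressive data governance calls for, since each check is repeatable and auditable.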
2.6. Adhering to the “Five Safes Plus One”
Best practice for TREs is centered on the “Five Safes Framework” (UKHDRA, 2020). The framework was
devised in 2003 for the Office for National Statistics (ONS), and is used “for designing, describing, and
evaluating access systems for data” (Desai et al., 2016). An additional safe—“Safe Return”—has been added
by UKHDRA (2020), which is described below. The “Five Safes Plus One” approach identifies the key
“dimensions” (Arbuckle and Ritchie, 2019) that influence the risk and trustworthiness of health and social care
research projects—and are provided as “adjustable controls rather than binary settings” (UKHDRA, 2020).
For our purposes, based on the interpretation of the UKHDRA (2020), the six dimensions are as
follows:
- “Safe people”: only trusted and authorized individuals (e.g., vetted researchers working on ethically
approved projects in the interests of the public good) shall have access to the data within the TRE.
- “Safe projects”: only approved projects shall be carried out via the TRE that are legally and ethically
compliant and have “potential public benefit.”
- “Safe setting”: the TRE shall provide a trust-enhancing technical (“safe computing”) and organizational infrastructure to ensure all data-related activities are undertaken securely and safely.
- “Safe data”: all other “safes” are adhered to; data are deidentified appropriately before reusage via
the TRE, and remain appropriately deidentified across the life-cycle of an approved project.
- “Safe outputs”: all outputs generated from data analysis activities, undertaken via the TRE, must not
be exported without authorization.
-----
- “Safe return”: to ensure that recombination of TRE outputs with other data at “the clinical setting that
originated the data”—which may reidentify data subjects—is only undertaken if permitted and
consented by the data subjects concerned. (UKHDRA, 2020).[11]
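The six dimensions above are provided as “adjustable controls rather than binary settings” (UKHDRA, 2020), which can be made concrete with a small sketch. The 0–3 scoring, field names, and minimum level below are our illustrative assumptions, not part of the framework:

```python
from dataclasses import dataclass

# Illustrative only: because the "Five Safes Plus One" are adjustable
# controls, each dimension is scored on a 0-3 scale rather than pass/fail.
@dataclass
class SafesAssessment:
    safe_people: int    # vetting level of the researchers involved
    safe_projects: int  # ethics/legal approval and public benefit
    safe_setting: int   # security of the technical/organizational setting
    safe_data: int      # degree of deidentification applied
    safe_outputs: int   # strength of output-disclosure control
    safe_return: int    # consent checks before recombining outputs

    def approve(self, minimum: int = 2) -> bool:
        """A request proceeds only if every dimension meets the minimum
        level agreed for this project (no single binary switch)."""
        scores = (self.safe_people, self.safe_projects, self.safe_setting,
                  self.safe_data, self.safe_outputs, self.safe_return)
        return all(score >= minimum for score in scores)

request = SafesAssessment(3, 3, 2, 2, 2, 1)
print(request.approve())  # safe_return is below the minimum -> False
```

The point of the sketch is that raising or lowering any one control (e.g., stronger deidentification in exchange for a less restrictive setting) changes the overall assessment, mirroring how the dimensions are tuned per project.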
A collaborative health and social care data-sharing scheme must also fulfill essential data governance
requirements for ethics (e.g., institutional approval, [Integrated Research Application System](https://www.myresearchproject.org.uk/) (IRAS, 2021) approval), legal compliance (e.g., data protection, confidentiality, contracts, and intellectual
property), and cyber-security (e.g., UK Cyber Essentials Plus [National Cyber Security Centre, n.d.],
ISO27001 [ISO, 2013], NHS Data Security and Protection Toolkit, 2021).[12]
3. The SDF Model
Models of safe and high-quality data linkage from multiple agencies necessitate a high level of interdisciplinarity (Jacobs and Popma, 2019) wider than the conventional boundaries of medicine and social
care (Ford et al., 2019; Sharon and Lucivero, 2019). To address this, the SDF model has adopted a
sociotechnical approach[13] to governing data (e.g., Young et al., 2019) where the multidisciplinary aspects
(including, ethical, healthcare, legal, social care, social–cultural, and technical issues) of safe linkage for
health and social care transformation are considered collectively and holistically from the outset.
A key objective of the SDF is to accommodate different stakeholder communities and maintain their
approval at a level sufficient for engagement and participation. Since multistakeholder health and social
care data needs to be aggregated at various levels (e.g., locally, regionally, and nationally), the SDF offers
a localized hub for data-intensive research and innovation facilitating multiparty data sharing through a
community of vetted stakeholders—including healthcare providers, social care providers, researchers,
and public health authorities. Consequently, stakeholders can work together on projects facilitated by the
SDF to discover solutions to health and social care transformation, promote greater collaboration, address
key local priorities and rapidly respond to new and emerging health data-related challenges, while
offering national exemplars of health system solutions.
In order for the SDF to acquire and maintain a social license, any community and individual benefits arising
from the SDF must be “measured and evidenced” (Centre for Data Ethics and Innovation, 2020)—as must
potential risks and constraints—and disseminated to communities and stakeholders in a transparent manner.[14]
The SDF model therefore includes a standard process to identify, monitor, and measure project outputs for
different stakeholders. Metrics here include: the alignment between project strategy and its generated outputs;
resource allocation compared with action recommendations from generated project outputs; and demonstrated positive health and social care transformation impacts for specific stakeholder groups.
While the “Five Safes Plus One” approach provides a useful guide by which to design, describe, and
evaluate TREs, it does not specify how to implement governance and technology to enable these six safes.
To address this, our SDF model interlinks two key threads: governance and technology. We first describe
the SDF governance model, then the SDF datatrust services supporting the management of data services
through functional anonymization, risk management, ownership/rights management, and audit.
A concluding section describes how the combined governance and technical approach addresses the
requirements identified in Section 2.
11 It is worthwhile to note that pursuant to section 171(1) of the Data Protection Act (2018) (UK): “It is an offence for a person
knowingly or recklessly to reidentify information that is deidentified personal data without the consent of the controller responsible
for deidentifying the personal data.”
12 For a nonexhaustive list of data governance requirements, see Boniface et al. (2020).
13 Note that the SDF initiative brings together a multidisciplinary team of clinical and social care practitioners with data
governance, health data science, and security experts from ethics, law, technology and innovation, web science, and digital health.
3.1. SDF governance
The overall purpose of the SDF governance model is to facilitate the safe (re)usage of data through “well-defined data governance roles and processes” that build “prompt and on-going risk assessment and risk
mitigation into the whole data lifecycle” (Stalla-Bourdillon et al., 2019)—ultimately to ensure SDF
activities deliver positive health and social care transformation for stakeholders. Effective governance
therefore must enable the SDF Platform and its Facilitator (defined below) to exercise best practice and
progressive governance in support of “Data Sharing and Analysis Projects” (DSAPs) that are legally
compliant, respect ethical considerations, and maintain a social license.
Governance needs to take into account the requirements, sensitivities, and vulnerabilities of stakeholders (especially those who are not directly involved in decision-making), so SDF
governance must adopt the key fiduciary ethical virtues of loyalty and care (O’Hara, 2021).[15] However,
the relationship is not a fiduciary one in the full legal sense,[16] because the purpose of the SDF is not to
serve a narrow range of stakeholders’ interests exclusively, but to deliver positive outcomes across the full
range of stakeholders (including service providers and data controllers themselves) while behaving in a
trustworthy manner and retaining trust (O’Hara, 2021). SDF governance is not intended to constrain
decision-makers’ abilities to make the best decisions for their own organizations, but rather to include, and
be seen to include, the full range of relevant legitimate interests (O’Hara, 2019).
3.1.1. SDF governance structure
The SDF Governance model builds on the “Data Foundations Framework” (Stalla-Bourdillon et al., 2019,
2021) developed by the Web Science Institute (WSI) at the University of Southampton (UK) and Lapin
Ltd (Jersey). The Data Foundations Framework advocates and provides guidance on robust governance
mechanisms for collective-centric decision-making, citizen representation, and data stewardship, so is a
suitable basis for the SDF Governance, whose structure is shown in Figure 1.
The main bodies, roles, and stakeholders that form the “SDF Governance Structure” are as follows:
- Advisory Committee: A group of individuals external to the SDF—with a wide range of expertise
related to health and social care transformation (e.g., health and social care services, cyber-security,
data governance, health data science, ethics, and law)—that provides advice to the SDF Board on
matters related to data sharing (as necessary).
- Citizen Representatives: Experts in patient/service user voice, who are mandatory members of the
SDF Board (see below), and oversee the administration of citizen participation and engagement
activities to ensure that the SDF maintains a social license. In particular, Citizen Representatives
shall create, implement and manage a framework for citizen participation and engagement activities,
where citizens can cocreate and participate in health and social care systems transformation as well
as exercise their data-related rights.[17]
- Data Provider: An entity that is the owner or rights holder of data that is either discoverable via the
Platform, hosted by the Platform, or utilized in DSAPs. The Data Provider is typically an organizational role, represented by a senior person, who has authority to share the data. A representative of
a Data Provider could act as a member of the SDF Board.
- Data User: An entity that discovers, uses, and/or reuses shared data made accessible via the SDF, or
manages DSAPs that are facilitated by the SDF Platform. The Data User role is subdivided into:
15 The authors are grateful for discussions with Prof. Kieron O’Hara on an earlier version of this article—specifically on the
notion of fiduciary ethical virtues in relation to datatrust services.
16 For instance, in the legal sense a fiduciary is “[a] person to whom power or property is entrusted for the benefit of another”—
where “[d]uties [are] owed by a fiduciary to a beneficiary”—for example, “a duty of confidentiality,” “a duty of no conflict,” and “a
duty not to profit from his position” (Thompson Reuters: Practical law, n.d.).
[Figure: layered governance structure comprising Information Governance and Ethical Oversight, Personal Data Processing Oversight, the SDF Board, Viability and Value Proposition, Regulatory Compliance, and Security, Ethics and Privacy.]
Figure 1. Social Data Foundation Governance Structure.
o Citizen—an interested member of the public wanting to understand dataset use and measurable
outcomes;
o Project Manager—a person responsible for DSAPs and ensuring legal compliance, policy
compliance, and “safe people”; and
o Data Analyst—a person working on a DSAP analyzing datasets.
- Data Protection Officer (DPO): A standard role (whose appointment is in some instances mandatory
under the General Data Protection Regulation (GDPR, 2016)) for organizations that process personal
data, overseeing the processing to ensure that it is compliant with GDPR obligations and respects data
subjects’ rights. For the SDF, the DPO is responsible for overseeing the processing of any personal
data within the SDF and advising on compliance with the GDPR, in particular the identification and
implementation of controls to address the risk of reidentification when different Data Providers’ data
are linked in response to Data Users’ queries, thus contributing to “safe data.” The DPO’s advice
extends to the special case of “safe return” where in some cases the outputs of projects are permitted
to be returned to the Data Provider for reintegration with their source data. Here, the DPO can work
with project staff and the Data Providers themselves to determine the potential for reidentification
when project results are reintegrated with source data, whether reidentification is permissible, or
how it can be prevented. The DPO works closely with the Independent Guardian who is responsible
for overseeing the processing of all types of data.
- External Auditor: A body independent to the SDF who is responsible for auditing or certifying its
performance, conformance to standards and/or compliance to regulations.
- Independent Guardian: A team of experts in data governance, who are independent from the SDF
Board and oversee the administration of the SDF to ensure that all data-related activities within the
SDF comply with the policies and processes that govern the operation of the SDF Platform. In particular, the Independent
Guardian shall: (a) help set up a risk management framework for data sharing; (b) assess the
proposed data use cases in accordance with this risk management framework; and (c) audit and
monitor all day-to-day data-related activities, including data access, citizen participation and
engagement. These responsibilities contribute to “safe projects,” trustworthy governance, and
support SDF transparency and best practice.
- Platform Facilitator: An executing officer, usually supported by a team, who oversees the technical
day-to-day operation of the SDF Platform, including the provision of infrastructure and functional
services for Data Providers and Data Users, the implementation of governance policies, and support
services for other roles where required.
- The SDF Board: The inclusive decision-making body whose appointed members represent the
interests of the SDF’s key stakeholders: Data Providers, Data Users, and Citizens. Feedback from
Data Providers and Data Users is obtained via the Advisory Board, and citizen engagement is
provided by the presence of Citizen Representatives as board members. The principal responsibility of its members is to administer the SDF’s assets and carry out its purpose, including the
determination of objectives, scope and guiding principles as well as progressive operating
policies, processes and regulations through maintenance of the SDF Rulebook. The SDF Board
therefore consumes multidisciplinary input from other roles and bodies—and consolidates this
knowledge into the policies and processes expressed in the SDF Rulebook.
3.1.2. Examples of SDF governance processes for DSAPs
The SDF provides a “safe setting” for “safe projects”—that is, DSAPs. The following table of standard
governance processes is by no means exhaustive, but provides an illustration of the types of processes that
must be in place for all DSAPs (Table 1).
Table 1. Examples of key standardized processes for all data sharing and analysis projects (DSAPs), with their relation to the “five safes plus one”

(a) The SDF DSAP approval process (“Safe people”; “Safe projects”): DSAPs must successfully complete a SDF pre-approval process before access is granted to the SDF Platform. A DSAP must have a Project Manager who is responsible for overseeing and administering the project, and is pre-approved by the SDF via background checks. The Project Manager must apply to the SDF and provide evidence that their project has satisfied relevant legal and ethical requirements. This evidence will be checked by the SDF governance body in accordance with the SDF Rulebook, and only if satisfactory will the SDF support the project and grant access to any specified datasets.

(b) The SDF DSAP container process (“Safe setting”): DSAPs must be secure and isolated from other projects and data.

(c) The SDF DSAP default access policy (“Safe outputs”): There must be a default access policy that prevents unauthorized data export or download from the secure environment.

(d) The SDF DSAP audit trail process (“Safe setting”): DSAPs must have their activities recorded for audit purposes in a nonrepudiable way; a project audit record is shared between the Project Manager, the relevant Data Provider(s), and the SDF.

(e) The SDF DSAP functional anonymization process (“Safe data”; “Safe projects”; “Safe setting”): DSAPs must process data legally, ethically, and securely—in accordance with all applicable data-sharing licenses and/or agreements, ethics approvals, and all other necessary requirements. The SDF must practise “functional anonymization,” which is defined by Elliot et al. (2018) as “the practice of reducing the risk of re-identification through controls on the data and its environment so that it is at an acceptably low level.”
3.2. Datatrust services
Datatrust services are a sociotechnical evolution that advances databases and data management systems
toward a network of trusted stakeholders—who are connected through linked data by closely integrating
mechanisms of governance with data management and access services. Datatrust services can offer a
multisided service platform (the SDF Platform), which creates value through linked data interactions
between Data Providers and Data Users, while implementing the necessary management and governance
arrangements. We now describe the specific functionalities of our datatrust service platform, recognizing
that the features and design choices represent a specific implementation. We expect multiple implementations of datatrust services to emerge, each with particular characteristics, but designed to flexibly
support a range of governance models and values.
3.2.1. Overview: Datatrust service platform
For illustration, Figure 2 depicts a datatrust service platform embedded into the “SDF Governance
Model” (Section 3.1).
Some key features of this datatrust service platform (as depicted by Figure 2) are as follows:
[Figure 2. Datatrust service platform: DSAP specification templates and programmable governance (dataflows as code) drive an orchestration layer (Orchestration, Discovery, Release, and Control APIs); a functional anonymization environment hosts the DSAP pipeline orchestrator, containers, quality management, a data service function registry, ownership and rights management, risk management, container management, and security control services, all running on a secure virtual infrastructure with a trusted runtime data environment (distributed, hierarchical data centers, including edge). Data science is an interactive process requiring reorchestration, with quality-assured safe outputs (datasets, images, models, notebooks, publications).]
A. Datatrust services related to “ensuring value and proportionality”: Such datatrust services are
needed to provide oversight for the lifecycle of DSAPs—through stages of request, orchestration,
knowledge discovery, and artifact release—in order to ensure “value and proportionality” within the
defined remit of the SDF for stakeholder approval (i.e., maintaining a social license) and ethical
oversight.
B. Datatrust services related to “purpose specification”: Such datatrust services are required to make
sure the purposes of DSAPs are specified in templates that combine both human- and machine-readable
elements for consistency—and allow for human approval and automated deployment. Templates support
programmable governance where dataflows are defined as code, and are used to orchestrate quality-controlled data services within functional anonymization environments dynamically and repeatably.
C. Datatrust services related to “configuration of data and environment”: Such datatrust services for
“data configuration” and “environment configuration” are essential to give rise to the important property
of functional anonymization, which is concerned with addressing risk of reidentification by controlling
data and their environment:
A data environment usually consists of four key elements, and a description of a data environment
that includes these four elements is usually adequate for discussing, planning or evaluating the
functional anonymisation of the original dataset. These elements are: other data [/] data users [/]
governance processes [/] infrastructure (Elliot et al., 2018).
Interpreting these four key elements of the “data environment” for a DSAP:
a. “Other data” are further datasets within the DSAP that may be combined with the dataset in
question. Each DSAP is assessed for risk of reidentification on a case-by-case basis where the
specific combination of datasets and rights asserted in smart contracts are considered.
b. “Data users” are vetted Data Analysts (“safe people”).
c. “Governance processes” comprise the SDF governance processes—for example, for ethical
approval, stakeholder acceptance, policy enforcement through contracts, licenses, and data usage
policies associated with data service functions.
d. “Infrastructure” is provided by secure cloud resources to datatrust services that may be federated
through software-defined wide area networks (SD-WANs) allowing flexible configuration of
networking elements—including potential for distributed runtime environment and hierarchical
data centers (e.g., public cloud, private cloud, and edge). Datatrust services are deployed as a cloud
tenant, and utilize standard cloud services APIs in order to package containers and provision secure
pipelines of containers and resources dedicated to each DSAP, which are isolated from other DSAP
instances.
To enable a “safe setting” and support for “safe projects,” datatrust services comply with applicable
cyber security certification (e.g., UK Cyber Essentials Plus) and industry-specific security certification
standards (e.g., NHS Data Security and Protection Toolkit, 2021, to enable NHS health data processing). In addition, datatrust services are operated within a cybersecurity risk assessment and mitigation
process to guard against cyber threats and attacks—guided by ISO 27005 (ISO, 2018), and compliant
with ISO 27001 (ISO, 2013) risk management.
Once a DSAP is deployed, Data Users can access data services that operate on the datasets within the
DSAP to produce artifacts including publications, new datasets, models, notebooks, and images. All
outputs undergo quality assurance before release to academic, policy, or operational channels, including
measurable evidence for social license, and updated data services available for deployment in new
DSAPs.
3.2.2. Datatrust service functionality
Datatrust services govern a wide range of data service functions to collect, curate, discover, access,
and process health and social care data. The development and packaging of data service functions is
conducted outside of the datatrust service platform by developers and then packaged as images for
deployment by the platform. Such data service functions are typically quality controlled software
libraries deployed by the platform depending on the requirements of Data Providers and Data Users.
In general, Data Providers are required to select cohorts and prepare data at source for sharing and
linking through tasks not limited to: (a) data deidentification; (b) data cleaning; (c) data quality
assurance; (d) data consistency assurance (e.g., ensuring pseudonymized identifiers are consistent
across datasets); and (e) data harmonization and compatibility assurance (e.g., normalizing data
fields across heterogeneous data sets generated by different software). The use of standardized
metadata, including provenance records, is important to make it possible to interpret and link
datasets. “Health Level 7 Fast Healthcare Interoperability Resources”—known as HL7 FHIR—
(Bender and Sartipi, 2013) is the predominant standard for discovery and exchange of electronic
health care records and research databases, although routine datasets and those related to wider
social determinants of health are vastly heterogeneous, with harmonization remaining a topic of
significant research.
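As an illustration of tasks (a) and (d) above, the following sketch shows one way a Data Provider might replace direct identifiers with consistent, keyed pseudonyms before sharing. The key, field names, and records are hypothetical; real deployments would use managed keys and richer deidentification.

```python
import hmac
import hashlib

# Hypothetical provider-side secret used to derive consistent pseudonyms;
# in practice key management would follow the SDF's security controls.
LINKAGE_KEY = b"provider-secret-key"

def pseudonymize_id(real_id: str) -> str:
    """Derive a consistent, non-reversible token from a direct identifier."""
    return hmac.new(LINKAGE_KEY, real_id.encode(), hashlib.sha256).hexdigest()

def prepare_record(record: dict, direct_identifiers=("nhs_number", "name")) -> dict:
    """Replace direct identifiers with a single pseudonymized link token."""
    token = pseudonymize_id(record[direct_identifiers[0]])
    cleaned = {k: v for k, v in record.items() if k not in direct_identifiers}
    cleaned["link_token"] = token
    return cleaned

record = {"nhs_number": "9434765919", "name": "A. Patient",
          "age_band": "60-69", "diagnosis": "E11"}
shared = prepare_record(record)
# The same nhs_number always yields the same link_token, so records from
# different datasets at this provider remain linkable after deidentification.
```

Because the token is keyed, a party without the key cannot reverse it by hashing candidate identifiers, which is why keyed constructions are preferred over plain hashes for this task.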
Data Users may (re)use a single source, or multiple sources, of data. The connection of multiple data
sources is referred to as “data linking”—which is defined by the Public Health Research Data Forum
(2015) as:
bringing together two or more sources of information which relate to the same individual, event,
institution or place. By combining information, it may be possible to identify relationships between
factors which are not evident from the single sources.
Different data linking processes exist to combine datasets. For example, deterministic and probabilistic
techniques can be used to identify the same individuals in two datasets, and then processed using
cryptographic algorithms to provide tokenized link identifiers (Jones et al., 2014), while federated
learning pipelines offer the opportunity to build AI (Machine Learning) models that can learn from
multiple datasets without exchanging the data itself (Rieke et al., 2020).
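The deterministic variant of data linking described above can be sketched as a simple join on tokenized link identifiers. This is a hedged illustration only; probabilistic matching and federated learning pipelines are not shown.

```python
def link_datasets(ds_a, ds_b, key="link_token"):
    """Deterministic linkage: join records sharing the same pseudonymized token."""
    index = {rec[key]: rec for rec in ds_b}
    linked = []
    for rec in ds_a:
        match = index.get(rec[key])
        if match is not None:
            # Merge the two records, keeping a single copy of the token.
            merged = {**rec, **{k: v for k, v in match.items() if k != key}}
            linked.append(merged)
    return linked

# Hypothetical pseudonymized fragments from a health and a social care provider.
health = [{"link_token": "t1", "diagnosis": "E11"},
          {"link_token": "t2", "diagnosis": "I10"}]
social = [{"link_token": "t1", "care_package": "domiciliary"}]
assert link_datasets(health, social) == [
    {"link_token": "t1", "diagnosis": "E11", "care_package": "domiciliary"}
]
```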
The capability to flexibly specify, provision, and monitor secure dataflow pipelines within the context
of ethical oversight, social license, and risk management is a key characteristic of datatrust services. In
the following subsections, we describe four important aspects of datatrust service functionality in more
detail: functional anonymization, specification of data and dataflows, compliance decision support, and
ownership and rights management.
3.2.3. Functional anonymization
What is the “Functional Anonymization Orchestrator”? As its name suggests, the “Functional Anonymization Orchestrator” is the datatrust service for functional anonymization—it performs an automated process for the deployment of data services and security controls/permissions, and the allocation of compute,
storage, and network resources.
How does it work? The Functional Anonymization Orchestrator interfaces with a registry of preapproved, trusted data service functions and environment controls, as well as the Risk Management
component responsible for assessment of risks related to compliance, privacy, and cybersecurity. The
outcome of orchestration is an isolated and secure virtual environment for each DSAP, thus implementing “safe projects.” This combination of data configuration, environment configuration, and risk
management ensures that datatrust services offer the property of functional anonymization—and
therefore works to address its key elements, as cited by Elliot et al. (2018) (see Section 3.2.1 for
further information).
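A minimal sketch of this orchestration decision flow, under the assumption that the registry is a set of pre-approved function names and risk assessment is delegated to a callable, might look like:

```python
def orchestrate_dsap(template, function_registry, assess_risk):
    """Sketch of the orchestration flow: only pre-approved data service
    functions are deployed, and only if the risk assessment passes."""
    for fn in template["data_services"]:
        if fn not in function_registry:
            raise ValueError(f"{fn} is not a pre-approved data service function")
    risk = assess_risk(template)  # delegated to the Risk Management component
    if risk != "acceptable":
        raise PermissionError("DSAP deployment denied pending further checks")
    # Provision an isolated, secure virtual environment for this DSAP (placeholder).
    return {"dsap_id": template["name"],
            "services": list(template["data_services"]),
            "isolated": True}

env = orchestrate_dsap(
    {"name": "dsap-001", "data_services": ["cohort_query"]},
    function_registry={"cohort_query", "linkage"},
    assess_risk=lambda t: "acceptable",
)
assert env["isolated"]
```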
3.2.4. Specification of data and dataflows
What is the “DSAP template”? The “DSAP Template” is the datatrust service for the specification of
data and dataflows that are subsequently used as part of ethical approvals, data-sharing agreements, and
data protection impact assessments.
Table 2. Data sharing and analysis project template types

- Platform hosted: Data are uploaded to the Platform from a Data Provider and then subsequently imported and linked within a DSAP. Applies to situations where data are hosted by the Platform only.
- Project hosted: Data are uploaded and linked within a DSAP from one or more Data Providers. Applies to situations where data are made discoverable via the Platform, but are not hosted by the Platform.
- Federated query: Data are hosted by a Data Provider, and access is limited to analysis by predefined distributed queries executed at Data Providers and subsequent linking of results. Applies to situations where Data Providers wish to maximize control over their datasets.
- Hybrid hosted and query: Data are linked in some combination of Platform Hosted, Project Hosted, and Federated Query.
How does it work? The Functional Anonymization Orchestrator allows Data Users to express DSAP
requirements through declarative templates using cloud-native orchestration languages (e.g., Kubernetes). Such declarative languages provide ways to construct machine-readable DSAP templates that
can be tailored using properties and used to provision and configure virtual instances offering the
required data services. The templates include data service configuration specifying queries that define
cohort inclusion and exclusion criteria, and retention policies. The standardization of templates and
APIs will be essential for interoperation between datatrust services governing health and social
care data.
Templates are technical in nature, and therefore a predefined set of baseline templates is defined for
different project types, as outlined in Table 2. These templates support data distribution patterns for
hosting, caching, and accessing datasets—and offer the flexibility required for variability in risk of loss of
control associated with different types of datasets and Data Providers’ appetite for such risks. In addition,
the flexibility in data distribution models allows for replication, retention, and associated cost implications
to be considered.
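For illustration, a machine-readable DSAP template of the kind described above might be expressed as structured data; all field names here are assumptions, not the platform's actual schema.

```python
# Hypothetical machine-readable DSAP template, loosely modelled on the
# declarative style described above (field names are illustrative only).
dsap_template = {
    "name": "diabetes-social-care-study",
    "baseline": "federated_query",   # one of the four baseline types in Table 2
    "data_services": ["cohort_query", "linkage", "aggregation"],
    "cohort": {
        "include": ["diagnosis == 'E11'", "age_band >= '40-49'"],
        "exclude": ["opted_out == True"],
    },
    "retention": {"linked_results_days": 90, "raw_fragments_days": 0},
}

def validate_template(template):
    """Minimal structural check before a template enters approval workflows."""
    required = {"name", "baseline", "data_services", "cohort", "retention"}
    missing = required - template.keys()
    if missing:
        raise ValueError(f"template missing fields: {sorted(missing)}")
    return True

assert validate_template(dsap_template)
```

Because the template is declarative data rather than procedural code, it can be reviewed by humans for approval and simultaneously consumed by the orchestrator for repeatable deployment.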
3.2.5. Compliance decision support
What is the “risk management” component? The “Risk Management” component is the datatrust
service for regulatory compliance decision support for DSAP pipelines—and utilizes an asset-based risk
modeling approach following ISO 27001 (ISO, 2013); initially based on cyber security.
How does it work? Risk is explicitly defined in relation to threats upon assets. Assets are tangible and
nontangible items of value—while datasets are core assets of interest, other assets include software, data,
machinery, services, people, and reputation. Assets may be attacked by threats, which cause misbehavior
in the asset (i.e., unwanted, erroneous, or dangerous behavior). The risk to the asset is the severity of the
misbehavior combined with the likelihood of the threat. Controls may be applied to the asset to reduce that risk.

A semi-automated approach for risk identification and analysis based on a security risk analysis tool—
the “System Security Modeller”—has been developed in previous work; and, applied to trust in
communication network situations (Surridge et al., 2018) as well as health care applications and data
protection compliance (Surridge et al., 2019). This work has been further extended into the realm of
regulatory compliance requirements in Taylor et al. (2020). Threat types supported by the Risk Management approach therefore include cyber security, such as those associated with the “Open Web Application
Security Project” (OWASP) Top Ten (2021), or compliance threats due to failures in regulatory or
licensing compliance.
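The asset-based model can be illustrated with a toy likelihood-times-severity calculation; the scales and the effect attributed to each control are assumptions for illustration only.

```python
# Illustrative asset-based risk scoring following the likelihood x severity
# model described above; scales and control effects are assumptions.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, severity: str) -> int:
    return LEVELS[likelihood] * LEVELS[severity]

def residual_risk(likelihood, severity, controls):
    """Assume each applied control steps the threat likelihood down one level."""
    ordered = ["low", "medium", "high"]
    idx = max(0, ordered.index(likelihood) - len(controls))
    return risk_score(ordered[idx], severity)

# Reidentification threat on a linked dataset: high likelihood, high severity...
assert risk_score("high", "high") == 9
# ...reduced by source pseudonymization and k-anonymization controls.
assert residual_risk("high", "high", ["pseudonymization", "k-anonymization"]) == 3
```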
The Risk Management component therefore detects cyber security or regulatory compliance threats—
based on a specified DSAP template—and provides recommendations for controls (mitigating strategies)
to block a compliance threat sufficiently to satisfy a regulatory requirement. While further work is
required on the specifics of the compliance requirements themselves, the methodology for encoding
compliance requirements into a risk management approach has been proven.
Example of potential risk: Reidentification. A key risk to be mitigated is the potential for reidentification
that can arise through data sharing, usage and reusage in DSAPs. Oswald (2013) defines the risk of
reidentification as:
the likelihood of someone being able to re-identify an individual, and the harm or impact if that reidentification occurred.
Data linking, “singling out” individuals, and “inference”—that is, deducing some information about an
individual (Article 29 Data Protection Working Party, 2014)—are data vulnerabilities that may result in
potential harms to data subjects, as well as compliance threats and potential harms to Data Providers. The
Risk Management component ensures that the SDF can “mitigate the risk of identification until it is
remote” (Information Commissioner’s Office, 2012) using control strategies (e.g., source pseudonymization and k-anonymization) that are assessed according to the DSAP template risk model, and monitored
through risk assessment points on DSAP deployment and data service functions (e.g., upload, query, and
aggregation). The Risk Management component provides risk assessment to the Functional Anonymization Orchestrator—and only if an acceptable, low level of risk is found will the services provide data to
Data Users. Where an unacceptable level of risk is found, data access is denied pending further checking
and additional measures to deidentify data.
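As a concrete example of one such control, a k-anonymity check over query results might be sketched as follows (field names and the choice of quasi-identifiers are illustrative):

```python
from collections import Counter

def k_anonymous(records, quasi_identifiers, k=5):
    """Return True if every combination of quasi-identifier values is shared
    by at least k records, i.e. no individual can be 'singled out'."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

results = [
    {"age_band": "60-69", "postcode_area": "SO16", "value": 1.2},
    {"age_band": "60-69", "postcode_area": "SO16", "value": 0.8},
    {"age_band": "60-69", "postcode_area": "SO16", "value": 1.1},
]
# With k=3 this small group passes; with k=5 release would be denied
# pending further deidentification.
assert k_anonymous(results, ["age_band", "postcode_area"], k=3)
assert not k_anonymous(results, ["age_band", "postcode_area"], k=5)
```

In the SDF flow, a failed check at a risk assessment point would deny data access rather than release the result.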
Example of risk assessment points: Federated query scenario. As an example, Figure 3 shows the risk
assessment points for the “Federated Query DSAP Template” (as denoted by four numbered green
diamonds):
[Figure: within the SDF Platform Trusted Research Environment, a Project Manager submits a query that is distributed to two Data Providers holding pseudonymized data (risk assessment points 1 and 2 on the providers' result fragments); result fragments are then linked (point 3) and analyzed by a Data Analyst in a Secure Project Environment (point 4), with all activity recorded in a Shared Project Audit.]

Figure 3. Reidentification risk assessment points for distributed queries.
In the Federated Query scenario (one of four predefined baseline DSAP templates outlined in Table 2),
policy enforcement is dynamic with risk assessment points (1) and (2) placed at each Data Provider upon
receipt and processing of a query fragment; here the results of the query fragment are checked. Risk
assessment point (3) occurs after the result fragments are linked, and risk assessment point (4) occurs after
any analysis of the linked result.
Note that a key difference between the Platform Hosted and Federated Query scenarios (see Table 2) is
where reidentification risk assessment takes place. While in the Federated Query scenario some of the
reidentification risk checking is distributed to Data Providers, in the Platform Hosted scenario all such
checking is undertaken by the operator of the Platform. The ability to check for reidentification risk on a
per-query basis at Data Provider premises (in the Federated Query scenario) therefore strengthens the Data
Provider’s control over their data for circumstances where data cannot be exchanged.
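The four risk assessment points of the Federated Query scenario can be sketched as a pipeline in which every intermediate result must pass a risk check before processing continues (all callables here are placeholders):

```python
def run_federated_query(fragments, providers, link, analyse, passes_risk_check):
    """Sketch of the federated query flow with the four risk assessment
    points shown in Figure 3 (all callables are placeholders)."""
    results = []
    for fragment, provider in zip(fragments, providers):
        r = provider(fragment)
        if not passes_risk_check(r):          # points (1) and (2): per provider
            raise PermissionError("fragment result failed risk assessment")
        results.append(r)
    linked = link(results)
    if not passes_risk_check(linked):         # point (3): after linking
        raise PermissionError("linked result failed risk assessment")
    output = analyse(linked)
    if not passes_risk_check(output):         # point (4): after analysis
        raise PermissionError("analysis output failed risk assessment")
    return output

out = run_federated_query(
    fragments=["q1", "q2"],
    providers=[lambda q: {"rows": 10}, lambda q: {"rows": 12}],
    link=lambda rs: {"rows": sum(r["rows"] for r in rs)},
    analyse=lambda d: d["rows"],
    passes_risk_check=lambda x: True,
)
assert out == 22
```

Points (1) and (2) run at the Data Providers' premises, which is what strengthens provider control in this scenario.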
3.2.6. Ownership and rights management
What is the “ownership and rights management” service? SDF Governance requires that each DSAP
have its activities recorded for audit purposes in a nonrepudiable way. This datatrust service therefore
ensures that all permitted stakeholders for a specified DSAP—for example, Project Manager, Data
Provider(s)—have access to a “Shared Project Audit Distributed Ledger” where all transactions for a
DSAP are recorded.
How does it work?
Distributed ledger technology. To provide such Shared Project Audit Distributed Ledgers, the SDF
employs distributed ledger technology (based on blockchain technology):
A distributed ledger is essentially an asset database that can be shared across a network of multiple
sites, geographies or institutions (UK Government Chief Scientific Adviser, 2016).
Distributed ledger technology has appropriate properties for “DSAP audit” in that it is immutable
(i.e., records cannot be altered or deleted), and it is inherently shared and distributed (i.e., each permitted
stakeholder has their own copy of the audit record). All transactions within the DSAP (e.g., analysis
activities of data analysts) are automatically recorded onto the audit ledger. Audit logs are irreversible and
incontrovertible, thus providing a robust audit trail, as well as encouraging compliant behavior.
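The append-only, tamper-evident character of such an audit ledger can be approximated in a few lines with a hash chain. This is a deliberate simplification: a real distributed ledger would additionally replicate the chain to every permitted stakeholder and run consensus over it, and the class and field names here are our own:

```python
import hashlib, json

class AuditLedger:
    """Append-only, hash-chained audit record: tampering with any earlier
    entry invalidates every later hash. A simplification of a distributed
    ledger, which would also replicate the chain to each stakeholder."""

    def __init__(self):
        self.entries = []

    def record(self, transaction: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        payload = json.dumps(transaction, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"tx": transaction, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        prev = "genesis"
        for e in self.entries:
            payload = json.dumps(e["tx"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != expected:
                return False
            prev = e["hash"]
        return True

ledger = AuditLedger()
ledger.record({"actor": "data_analyst_1", "action": "run_query", "dsap": "MELD"})
ledger.record({"actor": "data_analyst_1", "action": "fetch_results", "dsap": "MELD"})
print(ledger.verify())                        # True
ledger.entries[0]["tx"]["action"] = "deleted" # attempted tampering
print(ledger.verify())                        # False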
Smart contracts. To ensure compliance with all specified data-sharing agreements and/or licenses
applicable to a DSAP, the Ownership and Rights Management service also employs smart contracts
technology. Smart contracts are related to distributed ledger technology—they are programs run on a
blockchain, which
define rules, like a regular contract, and automatically enforce them via the code (Ethereum, 2021).
Smart contracts have several useful properties for the purposes of “license terms enforcement” in the
SDF Platform:
- Smart contracts are programs that provide user functionality: Data browsing, analysis, access,
linking, and query functions can be written within smart contracts, and used by Data Analysts in
DSAPs.
For example, a smart contract can implement data linking using pseudonymized identifiers, or
queries on datasets at Data Providers.
- Smart contracts provide means to automate enforcement of agreement terms: Each invocation of
functionality provided by smart contract programs can be evaluated at runtime—based on the
combined data input, function, and parameters of the invocation—for compliance with the license
terms of the Data Providers whose datasets are used in a DSAP. License terms for smart contracts
implementing data functions can therefore be enforced at the point of execution by the Data Analyst. This automated enforcement prevents Data
Analysts from executing operations that are inconsistent with the license terms of Data Providers.
For example, if one Data Provider prohibits pseudonymized linking, their dataset will not be
available to a smart contract implementing pseudonymized linking; whereas for other Data
Providers who do permit linking, their datasets can be available to the “linking” smart contract.
- The transactions executed for smart contracts are recorded automatically on “Shared Project Audit
Distributed Ledgers”: Given smart contracts are implemented on blockchain (i.e., the underlying
technology shared with distributed ledger technology), a key link between the functionality
available to data-centric functions executed by Data Analysts and the Shared Project Audit
Distributed Ledger is provided.
It is important to highlight that further work is required to establish specific smart contract dataset
functions and license terms to be enforced. While it is expected that there will be highly specific
requirements for individual DSAPs, it also remains likely that there will be some common functionality
and license terms frequently used across many types of DSAPs.
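As an illustration of the runtime gating described above (outside any particular blockchain runtime; the license terms, dataset names beyond CHIA and BCS70, and function names are invented for this sketch):

```python
# Hypothetical license terms per Data Provider; a real deployment would
# encode these terms in smart contracts recorded on the DSAP's ledger.
LICENSES = {
    "CHIA": {"allow_linking": True, "allow_export": False},
    "BCS70": {"allow_linking": True, "allow_export": False},
    "ProviderX": {"allow_linking": False, "allow_export": False},
}

def eligible_datasets(function_requires: str):
    """Return only datasets whose license terms permit the requested function."""
    return [d for d, terms in LICENSES.items() if terms.get(function_requires)]

def pseudonymized_linking(requested: list):
    # Invocation is evaluated at runtime against the Data Providers' terms:
    # a provider prohibiting linking simply is not available to this function.
    allowed = set(eligible_datasets("allow_linking"))
    refused = [d for d in requested if d not in allowed]
    if refused:
        raise PermissionError(f"license terms prohibit linking: {refused}")
    return f"linked({'+'.join(sorted(requested))})"

print(pseudonymized_linking(["CHIA", "BCS70"]))
```

In a full deployment each invocation of such a function would also be written to the Shared Project Audit Distributed Ledger, giving the link between analyst activity and the audit trail described above.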
4. Validation of the SDF Model
To validate the SDF model, we now analyze a real-world project exploring the social determinants of
health: the “Multidisciplinary Ecosystem to study Lifecourse Determinants of Complex Mid-life Multimorbidity using Artificial Intelligence”—MELD project (MELD, 2021). This test case seeks to answer
the question: if the MELD project were to be supported by the SDF (as a DSAP), to what extent would the
features of the SDF model improve the safety, execution and impact of the project?
4.1. Test case overview: National Institute for Health Research — MELD project
MELD focuses on the “lifecourse causes of early onset complex” multimorbidity; “early onset” is
where a person has two or more long-term conditions before the age of 50 years old, and “complex”
where a person has four or more long-term conditions (MELD, 2021). Multimorbidity is one of
several key focus areas for health and social care transformation. A substantial number of people
(30% all ages, 54% > 65 years of age and 83% > 85 years) suffer from two or more long-term
conditions (Cassell et al., 2018), with those from more disadvantaged backgrounds more likely to
develop multimorbidity earlier. Multimorbidity affects quality of life, leads to poorer health outcomes and experiences of care, and accounts for disproportionate healthcare workload and costs.
Solutions are needed to understand disease trajectories over the life-course (start well, live well, age
well) at population levels, and to develop effective personalized interventions.
Furthermore, complex and heterogeneous longitudinal and routine linked data—including social
determinants of health from datasets beyond electronic healthcare systems—are needed to study the
clusters and trajectory of disease.
MELD is selected for validation of the SDF model as it is closely aligned with the purpose of the SDF.
Specifically, MELD is seeking to develop novel public health interventions by analyzing the social
determinants of health using complex linked social and health datasets. MELD is part of a multidisciplinary ecosystem for data linkage and analysis together with citizen participation and engagement.
As such, MELD helps unpack different data requirements required for DSAPs—and can drive the
development of DSAP templates. MELD also highlights that data linkage can take many forms, such as
transfer learning, and demonstrates the variety of generated outputs that would need to be managed—for
example, derived data, artificial intelligence/machine learning models, and tooling.
4.2. MELD 1.0: Initial project
The first phase of MELD brings together a multidisciplinary team, including researchers from medicine
and other disciplines, to explore life-course determinants of multiple long-term conditions. MELD is supported by the National
Institute for Health Research (NIHR) and considers two datasets:
- The 1970 British Cohort Study (BCS70) dataset: The BCS70 is a well-established, longitudinal birth
cohort dataset that “follows the lives of more than 17,000 people born in England, Scotland and
Wales in a single week of 1970.” (UK Data Service, 1970) This dataset is available for secondary use
via the UK Data Service. The MELD project has access to all BCS70 data collected as part of data
sweeps.
- The Care and Health Information Exchange Analytics (CHIA) dataset: The CHIA (Care and Health
Information Exchange, n.d.) is a clinical dataset provided by the NHS and includes 700,000 patients
in Hampshire and the Isle of Wight. The dataset is available for secondary use via the South, Central,
and West Commissioning Support Unit on behalf of health and social care organizations in
Hampshire, Farnham, and the Isle of Wight.
The two datasets provided must only be accessible to the research team for the purposes of the project. The
development phase has received institutional-level (the appropriate Research Ethics Committee [REC]),
and national-level ethics approval (NHS REC). As part of the ethics review process, the project team has
carried out a data protection impact assessment.
MELD will develop AI pipelines to:
(1) Curate the datasets to assess and ensure readiness;
(2) Develop clustering algorithms to identify early onset complex and burdensome multiple long-term
conditions;
(3) Explore if sentinel conditions and long-term condition accrual sequence can be identified and
characterized; and
(4) Devise AI transfer learning methods that allow extrapolation of inferences from BCS70 to CHIA
—and vice versa.
The intention is for MELD to link together more datasets, in particular those related to other birth cohorts
and larger routine datasets requiring “the necessary environment, principles, systems, methods and team
in which to use AI techniques” in order to “identify optimal timepoints for public health interventions”
(IT Innovation Centre, n.d.).
The exploratory work undertaken will be used as a proof of concept for a larger research collaboration
application:
to scale the MELD ecosystem to ‘combine’ other birth cohorts and larger routine datasets giving
much greater power to fully explore the lifecourse relationship between sequence of exposure to
wider determinants, sentinel and subsequent clinical events, and development of early other
complex MLTC-M clusters (MELD, 2020).
It is therefore vital that MELD is able to handle more complex types of data linkage activities than the
remit of its current study—for example, combinations of multiple types of diverse data from additional
data providers with different licensing arrangements, provenance, and quality. As part of these future
work plans, MELD requires a data governance model that is scalable and adaptive to its growing needs.
4.3. Hypothetical MELD 2.0: Scaling up data linking facilitated by the SDF
The SDF datatrust services will support the MELD project team in the delivery of research outcomes
while helping stakeholders manage associated risks efficiently. The stakeholders include: the NHS Health
Research Authority (HRA); the two data providers (NHS for CHIA and the UK Data Service for BCS70);
the principal investigator for MELD, who takes the role of project manager; and the data analysts.
4.3.1. Project approvals, data access and resources
The principal investigator for MELD must first establish the required research ethics approvals (e.g., at
institutional and national levels), data access rights, and resources to undertake research—all of which are
necessary to delegate rights to Data Analysts as part of the MELD project team. For instance, NHS HRA
approval “applies to all project-based research taking place in the NHS in England and Wales” (NHS
HRA, 2021). NHS HRA approval requires researchers to submit a research application form through the
IRAS—which includes detailed study information along with supporting documents (NHS HRA, 2019,
2021)—such as, “Organization Information Document,” “Schedule of Events” and “Sponsors Insurance”
provided by the principal investigator’s host organization following local approval.
While institutional and national governance processes for approval requests require similar information, there is little standardization between processes and document structures, and the described
dataflows, data scope, policies, and environments are entirely disconnected from the system
implementation. By starting with a project template configured with human- and machine-readable data
requirements, dataflows, and environment controls (e.g., the DSAP template as described in Section
3.2.4), risk management can be directly embedded into research processes—and thus greater agility in
such processes can be achieved. The project specification is then used and adapted to authority requests.
Ideally, authorities would transform governance web forms into programmable APIs and business
processes; this will require collaboration through standardization.
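A DSAP template carrying machine-readable requirements could be as simple as a structured document validated before provisioning. The field names below are illustrative assumptions, not the SDF's actual schema:

```python
# Illustrative machine-readable DSAP template; field names are assumptions.
MELD_TEMPLATE = {
    "project": "MELD 2.0",
    "scenario": "federated_query",
    "datasets": [
        {"name": "CHIA", "provider": "NHS", "pseudonymized": True},
        {"name": "BCS70", "provider": "UK Data Service", "pseudonymized": True},
    ],
    "retention_days": 365,
    "approvals": ["NHS HRA", "institutional REC"],
}

REQUIRED = {"project", "scenario", "datasets", "retention_days", "approvals"}

def validate_template(template: dict) -> list:
    """Return a list of problems; an empty list means provisioning may proceed."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - template.keys())]
    if not template.get("approvals"):
        problems.append("no ethics approvals listed")
    for ds in template.get("datasets", []):
        if not ds.get("pseudonymized"):
            problems.append(f"dataset {ds.get('name')} must be pseudonymized")
    return problems

print(validate_template(MELD_TEMPLATE))  # []
```

Because the same document is both human- and machine-readable, the checks that an ethics reviewer performs by hand can also gate automated provisioning of the project environment.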
4.3.2. Example: Setup and operation of a MELD 2.0 DSAP
We now outline the main steps to be taken by the principal investigator for MELD and the SDF in order to
set up a DSAP for MELD 2.0.
[Figure 4 depicts the SDF Platform Trusted Research Environment: a MELD Secure Project Environment containing pseudonymous CHIA (NHS) and BCS70 (UKDS) data containers, query distribution and result fragments, machine learning tools, and shared MELD audit records; numbered markers denote project agreement audit checkpoints (1–4) and reidentification risk assessment checkpoints.]
Figure 4. MELD within the datatrust service platform.
Figure 4 shows the secure project environment for the MELD 2.0 project within the SDF platform.
(1) The principal investigator for MELD (“MELD PI”) requests a DSAP—and then completes a
DSAP template with data configuration (inclusion, exclusion, and retention) along with supporting information regarding satisfaction of compliance requirements, ethical soundness and
social benefit.
(2) The SDF’s governing body performs background checks on the principal investigator for MELD
—and if approved assesses the project application.
(3) The SDF’s governing body assesses the application and if the evidence regarding compliance,
ethics, and social benefit is satisfactory, the SDF agrees to support MELD.
(4) The principal investigator for MELD makes an agreement with the SDF for a DSAP (as denoted
by the “green circle 3” in Figure 4).
(5) The SDF creates a DSAP for MELD. The MELD DSAP (represented by the “pink box” in
Figure 4) is a secure environment—isolated from DSAPs for other projects. Access to the MELD
DSAP for data analysts is specified by the principal investigator for MELD and enforced by the
platform.
(6) The principal investigator for MELD acquires agreements and/or licenses from specified data
providers, which will come with terms of use that must be respected (indicated by the “green
circles 1 and 2,” respectively in Figure 4). The MELD principal investigator names the SDF as
their TRE in agreement with these specified data providers.
(7) The datasets are acquired from the Data Providers by the SDF (as the named delegate by the
principal investigator)—and are loaded into the MELD DSAP.
(8) The principal investigator for MELD authors the “MELD Data Usage Policy” (as denoted by
“green circle 4” in Figure 4), which must be consistent with the licenses and/or agreements
between the principal investigator for MELD and the two data providers (“green circles 1 and 2”
in Figure 4).
(9) The principal investigator for MELD appoints Data Analysts, who must agree to the “MELD
Data Usage Policy.”
(10) The principal investigator for MELD grants access to the “MELD DSAP” for each approved
Data Analyst.
During operation, the following steps are performed, most likely iteratively. All MELD analyst operations
go via datatrust services, which execute data functions encoded within smart contracts; this
functionality is constrained to the agreements and policies for MELD, denoted by the “green
circles” in Figure 4.
(1) One or more specified Data Analysts for MELD explore dataset metadata limited by those defined
in the DSAP specification.
(2) One or more specified Data Analysts for MELD formulate queries, which may be on an individual
dataset or inferences between datasets. These queries must be consistent with:
a. Specified data usage terms for the DSAP; and
b. Approvals (IRAS for CHIA dataset; UK Data Service End User Agreement for BCS70
dataset).
(3) One or more specified Data Analysts for MELD run queries and use machine learning tools to
analyze the resultant data. Depending on the query from the Data Analyst(s), the results may be
from one dataset or both datasets linked by common attributes. Data Analysts are not able to
download the datasets from the DSAP.
(4) Results are returned after internal checking for consistency with the appropriate agreements. Audit
records (the “large shared green box” in Figure 4) are maintained and shared between the key
stakeholders to encourage transparency and promote trustworthiness.
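The operational loop above can be sketched end to end. This is a toy model: a plain list stands in for the shared audit ledger, and the “MELD Data Usage Policy” is reduced to a single invented row-limit term:

```python
AUDIT_LOG = []  # stands in for the Shared Project Audit Distributed Ledger

def audited(checkpoint):
    """Record every invocation of a datatrust function on the audit log."""
    def wrap(fn):
        def inner(*args, **kwargs):
            AUDIT_LOG.append({"checkpoint": checkpoint, "function": fn.__name__})
            return fn(*args, **kwargs)
        return inner
    return wrap

POLICY = {"max_rows": 100}  # hypothetical MELD Data Usage Policy term

@audited(checkpoint="agreement")
def explore_metadata():
    # Step (1): metadata exploration limited to the DSAP specification.
    return ["CHIA", "BCS70"]

@audited(checkpoint="agreement")
def run_query(limit):
    # Steps (2)-(3): queries checked against policy before execution;
    # analysts receive results but never download the datasets themselves.
    if limit > POLICY["max_rows"]:
        raise PermissionError("query exceeds MELD Data Usage Policy")
    return [{"row": i} for i in range(limit)]

explore_metadata()
results = run_query(limit=10)
print(len(results), len(AUDIT_LOG))  # 10 rows returned, 2 audit entries
```

Note that even a refused invocation is logged before the error is raised, so the audit trail records attempts as well as successes.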
4.4. Validation
The SDF model aims to improve and accelerate data flows for health and social care transformation in five
ways (Boniface et al., 2020), through: “empowerment of citizens”; “greater assurances to stakeholders”;
“faster ethical oversight and information governance”; “better discoverability of data and generated
outcomes”; and “facilitation of localized solutions with national leadership.” We now explore each
proposed benefit—and how it can be realized for the MELD project.
4.4.1. Empowerment of citizens
Given the depth of data required to understand lifestyle behaviors, socioeconomic factors, and health, the
development of AI-based interventions addressing multimorbidity over the life-course necessitates a
trusted partnership with citizens: access to such data is contingent on trust building.
The SDF model is governed through the principles and values of open science, ethics, integrity, and
fairness in full consideration of digital inclusion (i.e., literacy and innovation opportunities), social
inclusion, and gender equality. It further considers the structures required to support multidisciplinary
and multimotivational teams. Through Citizen Representatives, patient/service user voice is represented at board level. Citizen empowerment is further addressed through collaborations with local
initiatives, such as the Southampton Social Impact Lab (2021), which allows for novel ways of
codesign and coevaluation, including with hard-to-reach groups. The SDF model therefore goes beyond
representation in governance—and further facilitates participation in the design of solutions for
communities.
The SDF is positioned in Southampton (UK)—a region serving a 1.8 million population (3.7
million including specialist care) with a large network of distributed health and social care providers.
The geographic region and environmental conditions are highly diverse—including urban, maritime,
and rural economic activities as well as large permanent/transient populations presenting a diverse
population with a wide range of health and care needs. This population diversity helps to ensure that civil
and citizen engagement activities (e.g., patient/service user voice, codesign, and coevaluation)
related to the discovery and evaluation of new interventions are inclusive and connected to local
needs. The SDF is therefore well positioned to make sure that research results are publicized
appropriately, and that community and individual benefits are realized—with evidence provided of
proven potential.
4.4.2. Greater assurances for stakeholders
The design, testing, and generalization of interventions from MELD require the incremental exploration of the datasets required to develop new clustering and prediction algorithms. The methodology
requires an iterative process of data discovery, curation, and linking to assess the readiness of datasets
for the required analysis. The quality of routine health and social care data, and birth cohort data is
unknown, as is the performance of AI pipelines applied to such data. As such, data needs to be carefully
assembled, incrementally, in accordance with governance requirements for data minimization and
mitigation of risk.
The approach of the SDF to dynamic functional anonymization, risk management, and auditable
processes is ideally designed to efficiently support projects such as MELD and provide assurances for
stakeholders. Both the CHIA and BCS70 datasets are pseudonymized, and therefore present a risk of
reidentification when analyzed or linked, with newly identified datasets introducing further risks. The
SDF provides checkpoints for such risks within data pipelines from source to insight, and data analysis
functionality constrained to compliance with license terms of Data Providers. Further, because the SDF
supports Federated Query project types, data are not linked until the purpose is known (i.e., to meet the
principle of purpose limitation), and prior knowledge of the project purposes, usage context, and dataset
structures involved can inform the reidentification risk assessment. The use of transparent, shared,
nonrepudiable audit records encourages compliant behaviors. Audit checkpoints for recording access
are placed so that all applicable terms are audited at the same point—for example, when the Data Analyst receives query results, the two license
agreements plus data usage policy terms are audited. With all datasets stored within an isolated and secure
project environment, Data Analysts are not able to download them. The datasets therefore cannot be
propagated further, thus reducing the risk of unauthorized access, and potential loss of control experienced
by Data Providers.
4.4.3. Faster ethical oversight and information governance
The initial MELD project is a form of “data release” where datasets (CHIA and BCS70) are defined in
advance at the start of the project—and a single ethics approval is provided. In many ways, the datasets
and governance of the MELD 1.0 project are simple. However, this initial approach does not scale when
the complexity of data linkage increases (i.e., MELD 2.0), raising challenges both for capturing the data
requirements and for providing the information to those responsible for ethical oversight, such as the
NHS HRA and research sponsors.
The SDF model addresses the sociotechnical interface between humans responsible for ethics
decisions and the machines used by analysts to undertake the research. By establishing the concept of
DSAP templates—as a sociotechnical integration mechanism driving oversight, risk management, and
provisioning—processes can be semi-automated in ways that ensure the human-in-the-loop is retained.
The automation of processes will deliver efficiencies in approvals, risk assessment (e.g., deidentification
standards), and dataflows, and such efficiencies open the potential for iterative ways of working
and the reorchestration of DSAP projects when new requirements are discovered. Given the SDF model is
predicated on strong oversight and monitoring of approved projects through the Independent Guardian,
the SDF is able to help support and present the exploratory work undertaken during a proof of concept.
This is because the SDF can provide assurances to data providers that licensing arrangements were
complied with and best practice was followed.
4.4.4. Better discoverability of data and generated outcomes
MELD is part of a wider National Institute for Health Research (NIHR) AI programme, which itself
is part of a vibrant research community seeking ways by which AI solutions can deliver better
care. Collaboration and sharing outcomes therefore will be an essential part of MELD success and
impact.
The SDF supports an ecosystem for data-driven research and innovation in health and social care. As a
hub, the SDF provides opportunities for MELD to connect with a community of stakeholders sharing
common interests (including local social and healthcare data providers), and experts from a wide range of
disciplines, such as ethics, law, psychology, sociology, and technology. By joining the SDF community,
MELD will be enriched through increased citizen engagement and participation, and feedback from the
research and innovation community can uncover new associations between projects (including projects
that are already part of the SDF and from elsewhere), and lead to new opportunities for collaborations and
impact. More general outcomes—such as new datasets, data usage metrics, reusable
methodologies, tools, and models—are all possible benefits to the community that can increase MELD
impact. For example, as a progressive data governance model, the SDF would aim to iteratively learn and
integrate best practices from the MELD project to influence policy, benefit the SDF community, and
provide evidence for a social license.
4.4.5. Facilitation of localized solutions with national leadership
MELD aims to provide community and individual benefits to those living with multimorbidity—and must
develop interventions in ways that both connect with the local needs of citizens and can be generalized and
scaled nationally.
The SDF recognizes that disruptive research and innovation often happens between trusted local
partners working in place-based systems who address identified challenges together (NHS, n.d.).
Projects are undertaken in the context of supportive national policies—where engagement in scale-up
programs turns federated place-based transformation into national assets. This contrasts with approaches
to build single solutions nationally, which expect place-based systems to accept and adopt them. The SDF
therefore supports projects where experimentation is needed to explore unknown solutions, and retain
pluralism of research, while developing leaders that have influence on the national stage.
4.5. Limitations
Although the MELD project has provided an initial validation of the SDF platform as a real-life
implementation for a TRE, there are some limitations. First, since one of the MELD project
coinvestigators also coleads the SDF project, it is possible that data governance constructs may have
influenced each project implicitly. However, we maintain that such overlap demonstrates that the SDF
is based on experience and not just a literature review. Secondly, focusing on one test case does not
cover the breadth of challenges related to data linkage for health and social care transformation. For
instance, the management of multiple long-term conditions is only one area of the much larger field of
health and social care transformation, the data users are only from one academic institution, and there
are no transnational data-sharing activities. However, notwithstanding these limitations and for the
purposes of this article, we consider that as a “thought-exercise” the MELD test case provides a useful
contribution to the much wider and on-going effort of the SDF initiative to test and validate the SDF
model.
5. Conclusion
The SDF model provides one example of a TRE, which offers a new approach to data-driven transformation of health and social care systems that is secure, rights respecting, and endorsed by communities.
Through datatrust services—a sociotechnical evolution of databases and data management systems—
stakeholder-sensitive data governance mechanisms are combined with data services to create TREs that
adhere to the “Five Safes Plus One.” In an age of increasing data complexity and scale, such TREs can
accelerate research and innovation that depends on multistakeholder linked data (e.g., social determinants
of health research) while providing a trust-enhancing and well-regulated structure offering assurances to
data subjects and data providers. The ability of datatrust services to dynamically orchestrate secure
dataflows with properties of functional anonymization and monitor risks at runtime—allows for progressive governance models, and more iterative knowledge discovery processes. The means to iterate
creates new ways to incorporate collective ethical oversight and citizen participation (i.e., representation,
codesign, and evaluation) more naturally into phases of research.
We further outlined the “SDF Governance Model,” including the institutional structure, processes, and
roles with consideration of the full range of relevant legitimate interests and the fiduciary ethical virtues of
loyalty and care. We then described how datatrust services can support DSAPs using capabilities of
functional anonymization orchestration, risk management, and auditable data ownership and rights
management. We then validated the approach against a representative project “MELD” exploring the
social determinants of multimorbidity over the life-course—as an exemplar DSAP—in order to highlight
how MELD can benefit from the SDF model when scaling the research to more complex datasets.
In this article, we have presented our version of datatrust services within the specific context of the
SDF. However, we recognize that there is no one-size-fits-all approach, and there may be simpler and
more complex forms of datatrust services better suited to other data-sharing initiatives with different
governance arrangements to the SDF (e.g., with other data-sharing purposes, contexts, diameters of trust,
and stakeholder expectations). While we must remain cognizant of the types of values embedded in the
design of datatrust services, and the extent to which these could act as constraints if redeployed in other
multiparty sharing scenarios, elements of the SDF model could be used as primitives for datatrust services
as part of other TREs. The design and development of these datatrust services therefore must be suitably
flexible so that they can be generalized to deliver different governance arrangements and facilitate safe
data sharing within other settings and domains.
Following agreement of the three principal partners, we now move into a phase of establishing an SDF in
Southampton working with citizens to attain social license, and other stakeholders to provision infrastructure and datatrust services. A set of transformation projects have been identified beyond the initial
MELD project that aim to deliver a wide range of benefits to citizens, healthcare providers, and social care
providers, but are also being used to drive forward approaches to governance. This interplay between
“progressive digitalisation” and “progressive governance” is at the heart of the SDF model, which aims to
ensure that governance reflects the values and priorities of the community, in order to accelerate projects
so that outcomes benefit citizens as soon as possible.
Glossary
For the purposes of this article, we define the following terms:
Data governance mechanisms: Well-defined roles and processes for ensuring the safe and secure sharing, usage and reusage of health and social care data as part of a TRE, such as in relation to collective-centric decision-making, citizen representation, and data stewardship.

Data sharing and analysis project (“DSAP”): A health and social care research project that is approved by the SDF Governance Board for facilitation via the SDF Platform.

Datatrust services: A sociotechnical evolution that advances databases and data management systems, and brings together stakeholder-sensitive data governance mechanisms with data services to create a TRE.

Fiduciary ethical virtues of loyalty and care: Behavior seen to be trustworthy, that retains trust and, in so doing, delivers positive outcomes across the full range of stakeholders in relation to a data institution (such as the SDF).

Functional anonymization: The practice of mitigating the risk of reidentification to a remote level by implementing “controls on data and its environment” (Elliot et al., 2018).

Health and social care transformation: The progressive digitalization of health and social care services in response to societal demands and advances in clinical practice, medicine, and technology.

Multimorbidity: The cooccurrence of two or more long-term health conditions.

Social Data Foundation for Health and Social Care (“the SDF”): A new data institution for multiparty data sharing to enable positive health and social care transformation via a TRE, which is based on a specific implementation of datatrust services.

Social determinants of health: Nonmedical factors that significantly affect individual well-being and health inequalities—for example, education and employment.

Social license: A high degree of social legitimacy; stakeholder approvals for health and social care research, innovation and transformation given to data institutions (which are under constant reevaluation)—on the basis that the main stakeholders perceive that what is being done is acceptable, trustworthy, and beneficial toward the communities it intends to serve.

Trusted research environment: A safe and secure data platform for approved DSAPs that can be accessed (remotely) by authorized persons (e.g., data analysts), and which abides by the “Five Safes Plus One” approach: “safe people,” “safe projects,” “safe data,” “safe setting,” “safe outputs,” and (where necessary) “safe return” (The UK Health Data Research Alliance, 2020).
Acknowledgments. This article expands and extends the concepts in our Web Science Institute (WSI) white paper (Boniface et al., 2020). We therefore give special thanks to all those who supported and contributed to this white paper. This includes Rachel Bailey (University Hospital Southampton NHS Foundation Trust), Tom Barnett (Web Science Institute, University of Southampton), Prof. Sally Brailsford (CORMSIS, University of Southampton), Guy Cohen and Marcus Grazette (Privitar), Paul Copping (Sightline Innovation), Christine Currie (CORMSIS, University of Southampton), Jo Dixon (Research and Innovation Services, University of Southampton), Dan King (Southampton City Council), Alison Knight (Research and Innovation Services, University of Southampton), Prof. Kieron O'Hara (Electronics and Computer Science, University of Southampton), Alistair Sackley (Web Science Institute, University of Southampton), Prof. Mike Surridge (IT Innovation Centre, University of Southampton), Neil Tape (University Hospital Southampton NHS Foundation Trust), Gary Todd (Famiio Ltd), and Wally Trenholm (Sightline Innovation).
We are also grateful for the support of Pinsent Mason lawyers in the development of legal arrangements. We again thank Prof. Kieron O'Hara for the discussions
and valuable input on the notion of fiduciary ethical virtues in relation to datatrust services. Finally, and by no means least, we extend our special thanks to the NIHR MELD lead investigators, Dr. Simon Fraser and Dr. Nisreen Alwan at the University of Southampton, for their contribution to the validation case. Please note that all views and opinions expressed in this article are those of the authors and do not necessarily represent those of the individuals named above. A pre-print version of this article is available via EPrints Soton—Boniface et al.
(2021). Prof. Dame Wendy Hall also delivered a keynote speech, which is available via the Sixth International Data for
Policy Conference playlist (Hall, 2021).
Funding Statement. The Social Data Foundation (SDF) project is partly funded and supported by the University of Southampton’s
Web Science Institute (WSI) and Southampton Connect. This research article has also been supported in part by the “Multidisciplinary Ecosystem to Study Lifecourse Determinants of Complex Mid-life Multimorbidity using Artificial Intelligence” (MELD)
project funded by the National Institute for Health Research (Award ID: NIHR202644).
Competing Interests. The authors declare no competing interests exist.
Author Contributions. Conceptualization: M.B., L.C., B.P., S.S.-B, S.T.; Methodology: M.B., S.S.-B.; Writing—original draft:
M.B., L.C., B.P., S.S.-B, S.T.; Writing—review and editing: M.B., L.C., W.H., B.P., S.S.-B., S.T. These author contributions are
based on the CRediT Taxonomy, available at https://casrai.org/credit/. All authors approved the final submitted draft.
Data Availability Statement. As part of the MELD test case, the two following datasets are discussed: the 1970 British Cohort Study (BCS70) dataset, available from the UK Data Service (beta.ukdataservice.ac.uk/datacatalogue/series/series?id=200001); and the Care and Health Information Exchange Analytics (CHIA) dataset, available from the South, Central and West Commissioning Support Unit on behalf of health and social care organizations in Hampshire, Farnham, and the Isle of Wight (https://careandhealthinformationexchange.org.uk/). Restrictions apply to the availability of these data.
References
Abrams EM and Szefler SJ (2020) COVID-19 and the impact of social determinants of health. The Lancet Respiratory Medicine 8(7), 659–661. https://doi.org/10.1016/S2213-2600(20)30234-4
Ada Lovelace Institute and the AI Council (2021) Exploring legal mechanisms for data stewardship. Available at https://www.adalovelaceinstitute.org/report/legal-mechanisms-data-stewardship/ (accessed 20 May 2021).
Ainsworth J and Buchan I (2015) Combining health data uses to ignite health system learning. Methods of Information in Medicine 54(6), 479–487. Available at https://www.thieme-connect.de/products/ejournals/pdf/10.3414/ME15-01-0064.pdf (accessed 20 May 2021).
Aitken M, Tully MP, Porteous C, Denegri S, Cunningham-Burley S, Banner N, Black C, Burgess M, Cross L, van Delden J, Ford E, Fox S, Fitzpatrick N, Gallacher K, Goddard C, Hassan L, Jamieson R, Jones KH, Kaarakainen M, Lugg-Widger F, McGrail K, McKenzie A, Moran R, Murtagh MJ, Oswald M, Paprica A, Perrin N, Richards EV, Rouse J, Webb J and Willison DJ (2020) Consensus statement on public involvement and engagement with data-intensive health research. International Journal of Population Data Science 4(1), 586. https://doi.org/10.23889/ijpds.v4i1.586
Arbuckle L and Ritchie F (2019) The five safes of risk-based anonymization. IEEE Security & Privacy 17(5), 84–89. https://doi.org/10.1109/MSEC.2019.2929282
Article 29 Data Protection Working Party (2014) Opinion 05/2014 on anonymisation techniques. WP216, adopted on 10 April 2014. Available at https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf (accessed 21 May 2020).
Banner N (2020) A new approach to decisions about data. Understanding Patient Data. Available at https://understandingpatientdata.org.uk/news/new-approach-decisions-about-data (accessed 20 May 2021).
Bender D and Sartipi K (2013) HL7 FHIR: An agile and RESTful approach to healthcare information exchange. In Pereira Rodrigues P, Pechenizkiy M, Gama J, Cruz Correia R, Liu J, Traina A, Lucas P and Soda P (eds), Proceedings of the 26th IEEE International Symposium on Computer-Based Medical Systems, University of Porto, Portugal. Institute of Electrical and Electronics Engineers, Inc. (IEEE), pp. 326–331. https://doi.org/10.1109/CBMS.2013.6627810
Boniface M, Carmichael L, Hall W, Pickering B, Stalla-Bourdillon S and Taylor S (2020) A blueprint for a social data foundation: Accelerating trustworthy and collaborative data sharing for health and social care transformation. Web Science Institute (WSI) White Paper #4. Available at www.socialdatafoundation.org/ (accessed 20 May 2021).
Boniface M, Carmichael L, Hall W, Pickering B, Stalla-Bourdillon S and Taylor S (2021) The social data foundation model: Facilitating health and social care transformation through datatrust services [Preprint]. Local EPrints ID: 449699. Available at http://eprints.soton.ac.uk/id/eprint/449699 (accessed 15 June 2021).
Burström B and Tao W (2020) Social determinants of health and inequalities in COVID-19. European Journal of Public Health 30(4), 617–618. https://doi.org/10.1093/eurpub/ckaa095
Burton PR, Murtagh MJ, Boyd A, Williams JB, Dove ES, Wallace SE, Tassé A-M, Little J, Chisholm RL, Gaye A, Hveem K,
Care and Health Information Exchange (CHIE) (n.d.) Available at https://careandhealthinformationexchange.org.uk/ (accessed 20 May 2021).
CARE Principles for Indigenous Data Governance (2018) International Data Week and Research Data Alliance Plenary co-hosted event: Indigenous Data Sovereignty Principles for the Governance of Indigenous Data Workshop, Gaborone, Botswana. Available at https://www.gida-global.org/care (accessed 20 May 2021).
Carter P, Laurie GT and Dixon-Woods M (2015) The social licence for research: Why care.data ran into trouble. Journal of Medical Ethics 41(5), 404–409. Available at https://jme.bmj.com/content/41/5/404 (accessed 20 May 2021).
Cassell A, Edwards D, Harshfield A, Rhodes K, Brimicombe J, Payne R and Griffin S (2018) The epidemiology of multimorbidity in primary care: A retrospective cohort study. British Journal of General Practice 68(669), e245–51. https://doi.org/10.3399/bjgp18X695465
Central Digital and Data Office (2020) Data ethics framework. UK Government Digital Services. Available at https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020 (accessed 20 May 2021).
Centre for Data Ethics and Innovation (2020) Addressing public trust in public sector data use. Available at https://www.gov.uk/government/publications/cdei-publishes-its-first-report-on-public-sector-data-sharing/addressing-trust-in-public-sector-data-use (accessed 20 May 2021).
CurvedThinking (2019) Understanding public expectations of the use of health and care data. Developed in consultation with Understanding Patient Data; commissioned by One London. Available at https://understandingpatientdata.org.uk/sites/default/files/2019-07/Understanding%20public%20expectations%20of%20the%20use%20of%20health%20and%20care%20data.pdf (accessed 20 May 2021).
Data Linkage Western Australia (2021) Available at https://www.datalinkage-wa.org.au/ (accessed 19 March 2021).
Data Protection Act (2018) (UK). Available at https://www.legislation.gov.uk/ukpga/2018/12/contents/enacted (accessed 24 May 2021).
Davidson S, McLean C, Treanor S, Aitken M, Cunningham-Burley S, Laurie G, Pagliari C and Sethi N (2013) Public acceptability of data sharing between the public, private and third sectors for research purposes. Scottish Government Social Research, Ipsos MORI Scotland and University of Edinburgh; research commissioned by the Scottish Government. Available at https://www.webarchive.org.uk/wayback/archive/3000/ and https://www.gov.scot/resource/0043/00435458.pdf (accessed 14 December 2021).
Desai T, Ritchie F and Welpton R (2016) Five Safes: Designing data access for research. UWE, Economics Working Paper Series 1601. Available at https://www2.uwe.ac.uk/faculties/bbs/Documents/1601.pdf (accessed 21 May 2021).
Dodds L, Szász D, Keller JR, Snaith B and Duarte S (2020) Designing sustainable data institutions. Open Data Institute (ODI) report, with contributions from Hardinges J and Tennison J. Available at https://theodi.org/article/designing-sustainable-data-institutions-paper/ (accessed 20 May 2021).
Elliot M, O'Hara K, Raab C, O'Keefe CM, Mackey E, Dibben C, Gowans H, Purdam K and McCullagh K (2018) Functional anonymisation: Personal data and the data environment. Computer Law & Security Review 34(2), 204–221. https://doi.org/10.1016/j.clsr.2018.02.001
Ethereum (2021) Introduction to smart contracts. Available at https://ethereum.org/en/developers/docs/smart-contracts/ (accessed 21 May 2021).
European Commission (2019) Building trust in human-centric artificial intelligence. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, COM(2019) 168 final. Available at https://digital-strategy.ec.europa.eu/en/library/communication-building-trust-human-centric-artificial-intelligence (accessed 21 May 2021).
Ford E, Boyd A, Bowles JKF, Havard A, Aldridge RW, Curcin V, Greiver M, Harron K, Katikireddi V, Rodgers SE and Sperrin M (2019) Our data, our society, our health: A vision for inclusive and transparent health data science in the United Kingdom and beyond. Learning Health Systems 3(3), e10191. https://doi.org/10.1002/lrh2.10191
Galea S, Abdalla SM and Sturchio JL (2020) Social determinants of health, data science, and decision-making: Forging a transdisciplinary synthesis. PLoS Medicine 17(6), e1003174. https://doi.org/10.1371/journal.pmed.1003174
Geissbuhler A, Safran C, Buchan I, Bellazzi R, Labkoff S, Eilenberg K, Leese A, Richardson C, Mantas J, Murray P and De Moor G (2013) Trustworthy reuse of health data: A transnational perspective. International Journal of Medical Informatics 82(1), 1–9. https://doi.org/10.1016/j.ijmedinf.2012.11.003
General Data Protection Regulation (GDPR) (2016) Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). European Commission. Available at https://eur-lex.europa.eu/eli/reg/2016/679/oj (accessed 21 May 2021).
Hall W (2021) A blueprint for a data foundation. Keynote speech at the Sixth International Data for Policy Conference 2021. Recording available at https://dataforpolicy.org/global-discussion-on-the-future-of-policy-data-interactions-at-data-for-policys-sixth-edition/ (accessed 17 December 2021).
Harrison T (2020) Putting the trust in trusted research environments. Understanding Patient Data. Available at https://understandingpatientdata.org.uk/news/putting-trust-trusted-research-environments (accessed 21 May 2021).
Hawkes N (2013) Hospitals without walls. BMJ 347, f5479. https://doi.org/10.1136/bmj.f5479
Health data use, stewardship, and governance: Ongoing gaps and challenges: A report from AMIA's 2012 Health Policy Meeting. Journal of the American Medical Informatics Association 21(2), 204–211. https://doi.org/10.1136/amiajnl-2013-002117
ICES (2021) Available at https://www.ices.on.ca/ (accessed 20 May 2021).
Information Commissioner's Office (ICO) (2012) Anonymisation: Managing data protection risk code of practice. Available at https://ico.org.uk/media/1061/anonymisation-code.pdf (accessed 20 May 2021).
Integrated Research Application System (IRAS) (2021) Available at https://www.myresearchproject.org.uk/ (accessed 20 May 2021).
ISO (2013) ISO/IEC 27001:2013 Information technology—Security techniques—Information security management systems—Requirements. International Organization for Standardization. Available at https://www.iso.org/ (accessed 20 May 2021).
ISO (2018) ISO/IEC 27005 Information technology—Security techniques—Information security risk management. Available at https://www.iso.org/ (accessed 20 May 2021).
IT Innovation Centre (n.d.) MELD. University of Southampton. Available at http://www.it-innovation.soton.ac.uk/projects/ai-meld (accessed 4 June 2021).
Jacobs B and Popma J (2019) Medical research, big data and the need for privacy by design. Big Data & Society 6, 1–5. https://doi.org/10.1177/2053951718824352
Janssen M, Brous P, Estevez E, Barbosa LS and Janowski T (2020) Data governance: Organizing data for trustworthy artificial intelligence. Government Information Quarterly 37(3), 101493. https://doi.org/10.1016/j.giq.2020.101493
Jones KH and Ford DV (2018) Population data science: Advancing the safe use of population data for public benefit. Epidemiology and Health 40, e2018061. https://doi.org/10.4178/epih.e2018061
Jones KH, Ford DV, Jones C, Dsilva R, Thompson S, Brooks CJ, Heaven ML, Thayer DS, McNerney C and Lyons RA (2014) A case study of the Secure Anonymised Information Linkage (SAIL) Gateway: A privacy-protecting remote access system for health-related research and evaluation. Journal of Biomedical Informatics 50, 196–204. https://doi.org/10.1016/j.jbi.2014.01.003
Kariotis T, Ball M, Greshake Tzovaras B, Dennis S, Sahama T, Johnston C, Almond H and Borda A (2020) Emerging health data platforms: From individual control to collective data governance. Data & Policy 2, e13. https://doi.org/10.1017/dap.2020.14
Lin D, Crabtree J, Dillo I, Downs RR, Edmunds R, Giaretta D, De Giusti M, L'Hours H, Hugo W, Jenkyns R, Khodiyar V, Martone ME, Mokrane M, Navale V, Petters J, Sierman B, Sokolova DV, Stockhause M and Westbrook J (2020) The TRUST Principles for digital repositories. Scientific Data 7, 144. https://doi.org/10.1038/s41597-020-0486-7
Marmot M, Allen J, Boyce T, Goldblatt P and Morrison J (2020) Health Equity in England: The Marmot Review 10 Years On. London: Institute of Health Equity. Available at http://www.instituteofhealthequity.org/resources-reports/marmot-review-10-years-on/the-marmot-review-10-years-on-full-report.pdf (accessed 21 May 2021).
Mayer RC, Davis JH and Schoorman FD (1995) An integrative model of organizational trust. The Academy of Management Review 20(3), 709–734. Available at https://www.jstor.org/stable/258792 (accessed 21 May 2021).
MedConfidential (2017) Enabling evidence based continuous improvement: The target architecture – connected care settings and improving patient experience. Available at https://medconfidential.org/wp-content/uploads/2017/09/2017-07-13-Target-Architecture.pdf (accessed 21 May 2021).
MELD, University of Southampton (2021) Research project: Developing a multidisciplinary ecosystem to study lifecourse determinants of complex mid-life multimorbidity using artificial intelligence (MELD). Faculty of Medicine. Available at https://www.southampton.ac.uk/medicine/academic_units/projects/meld.page (accessed 21 May 2021).
Miller FA, Patton SJ, Dobrow M and Berta W (2018) Public involvement in health research systems: A governance framework. Health Research Policy and Systems 16, 79. https://doi.org/10.1186/s12961-018-0352-7
Moses B and Desai K (2020) Data governance is broken. InformationWeek. Available at https://informationweek.com/big-data/data-governance-is-broken-/a/d-id/1339635 (accessed 21 May 2021).
Muller SHA, Kalkman S, van Thiel GJMW, Mostert M and van Delden JJM (2021) The social licence for data-intensive health research: Towards co-creation, public value and trust. BMC Medical Ethics 22, 110. https://doi.org/10.1186/s12910-021-00677-5
Multidisciplinary Ecosystem to study Lifecourse Determinants of Complex Mid-life Multimorbidity using Artificial Intelligence (MELD) (2020) Project proposal. University of Southampton. Internal document.
National Cyber Security Centre (n.d.) UK Cyber Essentials Plus. Available at https://www.ncsc.gov.uk/cyberessentials/overview (accessed 20 May 2021).
NHS (2019) The NHS long term plan, v1.2. Available at https://www.longtermplan.nhs.uk/wp-content/uploads/2019/08/nhs-long-term-plan-version-1.2.pdf (accessed 21 May 2021).
NHS (n.d.) Place-based approaches to reducing health inequalities. Available at https://www.england.nhs.uk/ltphimenu/placed-based-approaches-to-reducing-health-inequalities/ (accessed 21 May 2021).
NHS Data Security and Protection Toolkit (2021) Available at https://www.dsptoolkit.nhs.uk/ (accessed 21 May 2021).
NHS Digital (n.d.) Data Access Request Service (DARS). Available at https://digital.nhs.uk/services/data-access-request-service-dars (accessed 20 May 2021).
NHS Health Research Authority (HRA) (2019) Prepare study documentation. Last updated 17 July 2019. Available at https://www.hra.nhs.uk/planning-and-improving-research/research-planning/prepare-study-documentation/
NHS Health Research Authority (HRA) (2021) HRA approval. Last updated 22 November 2021. Available at https://www.hra.nhs.uk/approvals-amendments/what-approvals-do-i-need/hra-approval/ (accessed 14 December 2021).
Northern Health Science Alliance (NHSA) (2020) Connected health cities: Impact report 2016–2020. Available at https://www.thenhsa.co.uk/app/uploads/2020/10/CHC-full-impact-report.pdf (accessed 21 May 2021).
O'Hara K (2019) Data trusts: Ethics, architecture and governance for trustworthy data stewardship. Web Science Institute (WSI) White Paper #1. Available at https://www.southampton.ac.uk/wsi/enterprise-and-impact/white-papers.page (accessed 20 May 2021).
O'Hara K (2021) From internal discussions with the authors on the notion of fiduciary ethical virtues and datatrust services.
Ocloo J and Matthews R (2016) From tokenism to empowerment: Progressing patient and public involvement in healthcare improvement. BMJ Quality & Safety 25(8), 626–632. https://doi.org/10.1136/bmjqs-2015-004839
Oswald M (2013) Something bad might happen: Lawyers, anonymization and risk. XRDS 20(1), 22–26. https://doi.org/10.1145/2508970
OWASP (2021) OWASP top ten. Available at https://owasp.org/www-project-top-ten/ (accessed 20 May 2021).
Pozen DE (2005) The mosaic theory, national security, and the Freedom of Information Act. Yale Law Journal 115, 628–679. Available at SSRN: https://ssrn.com/abstract=820326 (accessed 20 May 2021).
Public Health England (2017) Reducing health inequalities: System, scale and sustainability. Available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/731682/Reducing_health_inequalities_system_scale_and_sustainability.pdf (accessed 21 May 2021).
Public Health Research Data Forum (2015) Enabling data linkage to maximise the value of public health research data: Full report. Available at https://cms.wellcome.org/sites/default/files/enabling-data-linkage-to-maximise-value-of-public-health-research-data-phrdf-mar15.pdf (accessed 21 May 2021).
Research Data Alliance (RDA) COVID-19 Working Group (2020) RDA COVID-19 recommendations and guidelines on data sharing, final release 30 June 2020. https://doi.org/10.15497/rda00052
Rieke N, Hancox J, Li W, Milletari F, Roth HR, Albarqouni S, Bakas S, Galtier MN, Landman BA, Maier-Hein K, Ourselin S, Sheller M, Summers RM, Trask A, Xu D, Baust M and Cardoso MJ (2020) The future of digital health with federated learning. NPJ Digital Medicine 3, 119. https://doi.org/10.1038/s41746-020-00323-1
Rooney D, Leach J and Ashworth P (2014) Doing the social in social licence. Social Epistemology 28(3–4), 209–218. https://doi.org/10.1080/02691728.2014.922644
Sadana R and Harper S (2011) Data systems linking social determinants of health with health outcomes: Advancing public goods to support research and evidence-based policy and programs. Public Health Reports 126(3), 6–13. https://doi.org/10.1177/00333549111260S302
SAIL (2021) Databank. Available at https://saildatabank.com/ (accessed 21 May 2021).
Scott K (2018) Data for public benefit: Balancing the risks and benefits of data sharing. Report co-authored by Understanding Patient Data, Involve and Carnegie UK Trust; contributors: Burall S, Perrin N, Shelton P, White D, Irvine G and Grant A. Available at https://www.involve.org.uk/sites/default/files/field/attachemnt/Data%20for%20Public%20Benefit%20Report_0.pdf (accessed 24 May 2021).
Sharon T and Lucivero F (2019) Introduction to the special theme: The expansion of the health data ecosystem—Rethinking data ethics and governance. Big Data & Society 6, 1–5. https://doi.org/10.1177/2053951719852969
Smart Dubai and Nesta (2020) Data sharing toolkit: Approaches, guidance and resources to unlock the value of data. Available at https://www.nesta.org.uk/toolkit/data-sharing-toolkit/ (accessed 4 June 2021).
Sohail O, Sharma P and Ciric B (2018) Data governance for next-generation platforms. Deloitte. Available at https://www2.deloitte.com/us/en/pages/technology/articles/data-governance-next-gen-platforms.html (accessed 20 May 2021).
Spinney L (2021) Hospitals without walls: The future of healthcare. The Guardian. Available at https://www.theguardian.com/society/2021/jan/02/hospitals-without-walls-the-future-of-digital-healthcare (accessed 21 May 2021).
Stalla-Bourdillon S, Carmichael L and Wintour A (2021) Fostering trustworthy data sharing: Establishing data foundations in practice. Data & Policy 3, e4. https://doi.org/10.1017/dap.2020.24
Stalla-Bourdillon S, Wintour A and Carmichael L (2019) Building trust through data foundations: A call for a data governance model to support trustworthy data sharing. Web Science Institute (WSI) White Paper #2. Available at https://www.southampton.ac.uk/wsi/enterprise-and-impact/white-papers.page (accessed 21 May 2021).
Surridge M, Correndo G, Meacham K, Papay J, Phillips SC, Wiegand S and Wilkinson T (2018) Trust modelling in 5G mobile networks. In Proceedings of the 2018 Workshop on Security in Softwarized Networks: Prospects and Challenges (SecSoN '18), workshop co-chairs Benson T, Bisson P, Pries R and Zinner T. New York, NY: ACM, pp. 14–19. https://doi.org/10.1145/3229616.3229621
Surridge M, Meacham K, Papay J, Phillips SC, Pickering JB, Shafiee A and Wilkinson T (2019) Modelling compliance threats and security analysis of cross border health data exchange. In Attiogbé C, Ferrarotti F and Maabout S (eds), New Trends in Model and Data Engineering. MEDI 2019. Communications in Computer and Information Science, Vol. 1085. Cham: Springer. https://doi.org/10.1007/978-3-030-32213-7_14
The Toronto Declaration (2018) Protecting the right to equality and non-discrimination in machine learning systems. Amnesty International and Access Now (eds). Available at https://www.torontodeclaration.org/wp-content/uploads/2019/12/Toronto_Declaration_English.pdf (accessed 21 May 2021).
Thomson Reuters: Practical Law (n.d.) Fiduciary duties and fiduciary. Glossary. Available at https://uk.practicallaw.thomsonreuters.com/1-107-5744?transitionType=Default&contextData=(sc.Default)&firstPage=true (accessed 30 November 2021).
Triggle N (2021) Is COVID at risk of becoming a disease of the poor? BBC News, February 2021. Available at https://www.bbc.co.uk/news/health-56162075 (accessed 21 May 2021).
UK Data Service (1970) 1970 British Cohort Study (BCS70). Available at beta.ukdataservice.ac.uk/datacatalogue/series/series?id=200001 (accessed 20 May 2021).
UK Data Service (n.d.) Regulating access to data: 5 Safes. Available at https://www.ukdataservice.ac.uk/manage-data/legal-ethical/access-control/five-safes (accessed 20 May 2021).
UK Department of Health and Social Care (2021) A guide to good practice for digital and data-driven health technologies. Available at https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology
[code-of-conduct-for-data-driven-health-and-care-technology (accessed 21 May 2021).](https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology)
UK Government Chief Scientific Adviser (2016) Distributed ledger technology: Beyond block chain. Government Office for
Science. Available at [https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/492972/gs-16-1-distributed-ledger-technology.pdf)
[492972/gs-16-1-distributed-ledger-technology.pdf (accessed 21 May 2021).](https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/492972/gs-16-1-distributed-ledger-technology.pdf)
UK Health Data Research Alliance (UKHDRA) (2020) Trusted Research Environments (TRE): A strategy to build public trust
[and meet changing health data science needs. Green Paper v2.0 dated 21 July 2020. Available at https://ukhealthdata.org/wp-](https://ukhealthdata.org/wp-content/uploads/2020/07/200723-Alliance-Board_Paper-E_TRE-Green-Paper.pdf)
[content/uploads/2020/07/200723-Alliance-Board_Paper-E_TRE-Green-Paper.pdf (accessed 21 May 2021).](https://ukhealthdata.org/wp-content/uploads/2020/07/200723-Alliance-Board_Paper-E_TRE-Green-Paper.pdf)
[UK Health Data Research Alliance (UKHDRA) (n.d.) Innovation Gateway. Available at https://www.healthdatagateway.org/](https://www.healthdatagateway.org/)
(accessed 21 May 2021).
Understanding Patient Data and Ada Lovelace Institute (2020) Foundations of fairness: Where next for NHS health data
[partnerships. Available at https://understandingpatientdata.org.uk/sites/default/files/2020-03/Foundations%20of%20Fairness%](https://understandingpatientdata.org.uk/sites/default/files/2020-03/Foundations%20of%20Fairness%20-%20Summary%20and%20Analysis.pdf)
[20-%20Summary%20and%20Analysis.pdf (accessed 21 May 2021).](https://understandingpatientdata.org.uk/sites/default/files/2020-03/Foundations%20of%20Fairness%20-%20Summary%20and%20Analysis.pdf)
[University of Southampton (2021) Social impact lab. Available at https://www.southampton.ac.uk/silab/index.page (accessed](https://www.southampton.ac.uk/silab/index.page)
20 May 2021).
[Varshney S (2020). A progressive approach to data governance. Forbes. Available at https://www.forbes.com/sites/forbestech](https://www.forbes.com/sites/forbestechcouncil/2020/11/03/a-progressive-approach-to-data-governance/)
[council/2020/11/03/a-progressive-approach-to-data-governance/ (accessed 20 May 2021).](https://www.forbes.com/sites/forbestechcouncil/2020/11/03/a-progressive-approach-to-data-governance/)
[Wessex Care Records (2021) Available at https://www.wessexcarerecords.org.uk/ (accessed 20 May 2021).](https://www.wessexcarerecords.org.uk/)
Winter JS and Davidson E (2019) Big data governance of personal health information and challenges to contextual integrity. The
[Information Society 35(1), 36–51. https://doi.org/10.1080/01972243.2018.1542648](https://doi.org/10.1080/01972243.2018.1542648)
[World Health Organization (n.d.) social determinants of health. Available at https://www.who.int/health-topics/social-determin](https://www.who.int/health-topics/social-determinants-of-health)
[ants-of-health (accessed 20 May 2021).](https://www.who.int/health-topics/social-determinants-of-health)
Young M, Rodriguez L, Keller E, Sun F, Sa B, Whittington J and Howe B (2019) Beyond open vs. closed: Balancing individual
privacy and public accountability in data sharing. In Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ’19). General Co-Chairs: Boyd, D and Morgenstern J. Program Co-Chairs: Chouldechova A and Diaz F.
[New York: Association for Computing Machinery (ACM), pp. 191–200. https://doi.org/10.1145/3287560.3287577](https://doi.org/10.1145/3287560.3287577)
Cite this article: Boniface M, Carmichael L, Hall W, Pickering B, Stalla-Bourdillon S and Taylor S (2022). The Social Data Foundation model: Facilitating health and social care transformation through datatrust services. Data & Policy. https://doi.org/10.1017/dap.2022.1
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1017/dap.2022.1?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1017/dap.2022.1, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBYNCND",
"status": "GOLD",
"url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/CD882977DA412B4020945C3FFE8725A0/S2632324922000013a.pdf/div-class-title-the-social-data-foundation-model-facilitating-health-and-social-care-transformation-through-span-class-italic-datatrust-services-span-div.pdf"
}
| 2,022
|
[] | true
| 2022-02-10T00:00:00
|
[
{
"paperId": "cfb5ab8ab26bcd08e73ef68db5ee5f2012272933",
"title": "The social licence for data-intensive health research: towards co-creation, public value and trust"
},
{
"paperId": "1eafc5a844364eed5c06931d4d6b538a0b138922",
"title": "Fostering trustworthy data sharing: Establishing data foundations in practice"
},
{
"paperId": "3eb9bcef41e59c42151dbe145d9706b462547e5d",
"title": "A blueprint for a social data foundation: Accelerating trustworthy and collaborative data sharing for health and social care transformation"
},
{
"paperId": "6b5b1b8b033672fd9d232ef7abbbaaefeacbb066",
"title": "Regulatory Compliance Modelling Using Risk Management Techniques"
},
{
"paperId": "da562924e544a446c281ab05d95adfdb6ab24e5b",
"title": "Emerging health data platforms: From individual control to collective data governance"
},
{
"paperId": "5c9321331026f44ef500266261e4bc6a5de75513",
"title": "Social determinants of health and inequalities in COVID-19"
},
{
"paperId": "7bf035d37c55e40941221e00069e32ea0889c157",
"title": "Data governance: Organizing data for trustworthy Artificial Intelligence"
},
{
"paperId": "d46887e04f0b5b44654f54866bcf341101131daa",
"title": "Social determinants of health, data science, and decision-making: Forging a transdisciplinary synthesis"
},
{
"paperId": "b305fae8f86bc6f4d0e574f8bdc3ecb5f5701ba8",
"title": "COVID-19 and the impact of social determinants of health"
},
{
"paperId": "3271fe5da2a9998d9ee9c1bb117d8f845c798aff",
"title": "The TRUST Principles for digital repositories"
},
{
"paperId": "b245c5d36702eaae0ff374fad92848024bc99534",
"title": "The future of digital health with federated learning"
},
{
"paperId": "9e9c6d3aedd8af0dc09bfdfbe7e9a59a48f3d3ca",
"title": "Health equity in England: the Marmot review 10 years on"
},
{
"paperId": "0134631aa650d939795208e46199ab2586df1a97",
"title": "Modelling Compliance Threats and Security Analysis of Cross Border Health Data Exchange"
},
{
"paperId": "869e1ef4994108a71f7076f7b4e289487a35edfb",
"title": "The Five Safes of Risk-Based Anonymization"
},
{
"paperId": "04aee8ebdab8cf5f606b61dfd5ad3e6d40bd3cdf",
"title": "Introduction to the Special Theme: The expansion of the health data ecosystem – Rethinking data ethics and governance"
},
{
"paperId": "d6801d0ffd08ba56bccfa01884bb6a126f99de2e",
"title": "Our data, our society, our health: A vision for inclusive and transparent health data science in the United Kingdom and beyond"
},
{
"paperId": "f3ba555fc5dbaeec50ee1b07f7197a6d2414b9a9",
"title": "Data Trusts: Ethics, Architecture and Governance for Trustworthy Data Stewardship"
},
{
"paperId": "070cd335c36dfee145ba4dafa77c949232e5fffe",
"title": "Beyond Open vs. Closed: Balancing Individual Privacy and Public Accountability in Data Sharing"
},
{
"paperId": "c050e65ace30fd109364675da2da31b0b523d5ff",
"title": "Medical research, Big Data and the need for privacy by design"
},
{
"paperId": "736c848038351852eb7037691eaca2e2a2a4c2a2",
"title": "Big data governance of personal health information and challenges to contextual integrity"
},
{
"paperId": "8551a2f546da94a10f58c5be6b99445a239f13b3",
"title": "Population data science: advancing the safe use of population data for public benefit"
},
{
"paperId": "9ed358c79bcd96c565eb9ce5dbb6003984b34692",
"title": "Learning health systems."
},
{
"paperId": "7cc8a7aa7de4bccbe601f19afc965ccc08e1f5b2",
"title": "Consensus Statement on Public Involvement and Engagement with Data Intensive Health Research"
},
{
"paperId": "91a66d791da247b4a08bdffc97097e78afa43689",
"title": "Trust Modelling in 5G mobile networks"
},
{
"paperId": "230e1bbe4e552512f03943685f07ca7697ece3b6",
"title": "Public involvement in health research systems: a governance framework"
},
{
"paperId": "0ad9103536ea14ced81a31efac9efd9f5661edc8",
"title": "Functional anonymisation: Personal data and the data environment"
},
{
"paperId": "15dc2abee1d39f2c3b2643b5c444067c9e6ac77e",
"title": "The epidemiology of multimorbidity in primary care: a retrospective cohort study"
},
{
"paperId": "a52c309314c9e0addd9a1704d0fc9a89d27d2209",
"title": "Centre"
},
{
"paperId": "10617ea04ab9f785130fcf26e1e56f9c655bf48a",
"title": "From tokenism to empowerment: progressing patient and public involvement in healthcare improvement"
},
{
"paperId": "5ade6c684631947979f54645b6e77f7529ccd4cd",
"title": "Five Safes: designing data access for research"
},
{
"paperId": "6a93cd5d643fd1c0d58959bce10e1de74c208382",
"title": "Combining Health Data Uses to Ignite Health System Learning"
},
{
"paperId": "d5625cacdf64319afa39814d21d3d5d8bcd60730",
"title": "What We Do"
},
{
"paperId": "4fcb72c984996b927fb0e60f17b075b0c7a32d8a",
"title": "Data Safe Havens in health research and healthcare"
},
{
"paperId": "1387854d5e93007b45685b5f22fa6eacf2971937",
"title": "Enabling data linkage to maximise the value of public health research data: Summary report"
},
{
"paperId": "a093cfc413fb134589786aa9c5358333eec37854",
"title": "The social licence for research: why care.data ran into trouble"
},
{
"paperId": "335a6ae016b9cb865f62e0125792415674aa1df1",
"title": "A case study of the Secure Anonymous Information Linkage (SAIL) Gateway: A privacy-protecting remote access system for health-related research and evaluation☆"
},
{
"paperId": "ad8a4cef377a710fe5969398da7fdf0f9b99e8cf",
"title": "Doing the Social in Social License"
},
{
"paperId": "ccb74343d6e2d79f95eb1441ef8390d2894ba960",
"title": "Health data use, stewardship, and governance: ongoing gaps and challenges: a report from AMIA's 2012 Health Policy Meeting"
},
{
"paperId": "1b64ebe08601b53bc440ffe61484223612e4e08e",
"title": "Hospitals without walls"
},
{
"paperId": "14c936611d6ecc88623739079733a9f3d0b1b425",
"title": "Something Bad Might Happen: Lawyers, anonymization and risk"
},
{
"paperId": "1303cba4966df8aefb696b288ba06e1438f3d109",
"title": "HL7 FHIR: An Agile and RESTful approach to healthcare information exchange"
},
{
"paperId": "7f0034fe9a417cb375adf243a6b8fe4d219bfeb6",
"title": "Data Systems Linking Social Determinants of Health with Health Outcomes: Advancing Public Goods to Support Research and Evidence-Based Policy and Programs"
},
{
"paperId": "024d9300e118952348a929afba762e4aacc0c27c",
"title": "Data Protection Act"
},
{
"paperId": "b49ed01d5a7722382955a7bd54b5ce3f1a4d29f0",
"title": "The Mosaic Theory, National Security, and the Freedom of Information Act"
},
{
"paperId": "87424ab135b0ee1b8899fc157fcc8f5e6e9ab67f",
"title": "Social Determinants of Health"
},
{
"paperId": "b8775b8ead06b2f683a7ed21384d50d5da34d3a8",
"title": "An Integrative Model Of Organizational Trust"
},
{
"paperId": "66037547447f0faf0256dba5d11311d1fd5f2bdb",
"title": "of Epidemiology"
},
{
"paperId": null,
"title": "The Social Data Foundation model: Facilitating health and social care transformation through datatrust services.Data"
},
{
"paperId": null,
"title": "Data Linkage Western Australia"
},
{
"paperId": null,
"title": "Fiduciary duties and fiduciary"
},
{
"paperId": null,
"title": "HRA approval"
},
{
"paperId": null,
"title": "Integrated Research Application System (IRAS)"
},
{
"paperId": null,
"title": "NHS Data Security and Protection Toolkit"
},
{
"paperId": null,
"title": "A Blueprint for a Data Foundation"
},
{
"paperId": null,
"title": "From internal discussions with authors on the notion of fiduciary ethical virtues and datatrust services"
},
{
"paperId": null,
"title": "Designing sustainable data institutions. Open Data Institute (ODI) report"
},
{
"paperId": null,
"title": "2020, March). Data Sharing Toolkit: Approaches, guidance and resources to unlock the value of data"
},
{
"paperId": null,
"title": "Regulating access to data: 5 Safes"
},
{
"paperId": null,
"title": "Is COVID at risk of becoming a disease of the poor? BBC News, February 2021"
},
{
"paperId": null,
"title": "Innovation Gateway"
},
{
"paperId": null,
"title": "OWASP"
},
{
"paperId": null,
"title": "Exploring legal mechanisms for data stewardship"
},
{
"paperId": null,
"title": "Putting the trust in trusted research environments"
},
{
"paperId": null,
"title": "A progressive approach to data governance"
},
{
"paperId": null,
"title": "Data ethics framework"
},
{
"paperId": null,
"title": "Addressing public trust in public sector data use"
},
{
"paperId": null,
"title": "A new approach to decisions about data"
},
{
"paperId": null,
"title": "Building Trust through Data Foundations: A Call for a Data Governance Model to Support Trustworthy Data Sharing. Web Science Institute (WSI) White Paper #2"
},
{
"paperId": null,
"title": "TheNHSlongtermplan"
},
{
"paperId": null,
"title": "Building Trust in Human-Centric Artificial Intelligence"
},
{
"paperId": null,
"title": "Trust modelling in 5Gmobile networks. In Proceedings of the 2018 Workshop on Security in Softwarized Networks: Prospects and Challenges (SecSoN ’18)"
},
{
"paperId": null,
"title": "Data for Public Benefit: Balancing the risks and benefits of data sharing"
},
{
"paperId": null,
"title": "Data governance for next-generation platforms. Deloitte"
},
{
"paperId": null,
"title": "ISO27005InformationTechnology — Securitytechniques — Informationsecurityriskmanagement"
},
{
"paperId": null,
"title": "Reducinghealthinequalities:System,scaleandsustainability"
},
{
"paperId": null,
"title": "Enabling Evidence Based Continuous Improvement: The Target Architecture – Connected Care Settings and Improving Patient Experience"
},
{
"paperId": null,
"title": "GeneralDataProtectionRegulation(GDPR)"
},
{
"paperId": null,
"title": "Distributed ledger technology: Beyond block chain"
},
{
"paperId": "02d49a97fac90c25207019729910a0d637d0239f",
"title": "ISO/IEC 27001:2013概述与改版分析"
},
{
"paperId": "e059abcac85cc4884714bfad880f233eeacfe01d",
"title": "Public Acceptability of Data Sharing Between the Public, Private and Third Sectors for Research Purposes"
},
{
"paperId": "cb4f014239971fed608a12c5ffd37454f8c17aa2",
"title": "Trustworthy reuse of health data: A transnational perspective"
},
{
"paperId": "c54e0f4356173b0808badde59dbfa1664980b5ed",
"title": "ARTICLE 29 DATA PROTECTION WORKING PARTY"
},
{
"paperId": "3650135f5d32722fee78bd8526f5696914e9de9a",
"title": "Available From"
},
{
"paperId": null,
"title": "1970 British Cohort Study (BCS70). Available at beta"
},
{
"paperId": null,
"title": "Developed in consultation with: Understanding Patient Data, Commissioned by One London"
},
{
"paperId": null,
"title": "transfer learning methods that allow extrapolation of inferences from BCS70 to CHIA — and vice versa"
},
{
"paperId": null,
"title": "Central Digital and Data Office (2020) Data ethics framework. UK Government Digital Services"
},
{
"paperId": null,
"title": "Information technology-Security Techniques-Information security management systems-Requirements, International Organization for Standardization"
},
{
"paperId": null,
"title": "data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology"
},
{
"paperId": null,
"title": "Results are returned after internal checking for consistency with the appropriate agreements"
},
{
"paperId": null,
"title": "linkage-to-maximise-value-of-public-health-research-data-phrdf-mar15.pdf; last accessed 21"
},
{
"paperId": null,
"title": "Keynote Speech at Sixth International Data for Policy Conference 2021"
},
{
"paperId": null,
"title": "Develop clustering algorithms to identify early onset complex and burdensome multiple long term conditions"
},
{
"paperId": null,
"title": "Data are hosted by a Data Provider and access is limited to analysis by predefined distributed queries executed at Data Providers and subsequent linking of results"
},
{
"paperId": null,
"title": "The SDF creates a DSAP for MELD"
},
{
"paperId": null,
"title": "The principal investigator for MELD makes an agreement with the SDF for a DSAP (as denoted by the “ green circle 3 ” in Figure 4)"
},
{
"paperId": null,
"title": "The datasets are acquired from the Data Providers by the SDF (as the named delegate by the principal investigator) — and are loaded into the MELD DSAP"
},
{
"paperId": null,
"title": "Data Access Request Service (DARS)"
},
{
"paperId": null,
"title": "The SDF ’ s governing body performs background checks on the principal investigator for MELD — and if approved assesses the project application"
},
{
"paperId": null,
"title": "The SDF ’ s governing body assesses the application and if the evidence regarding compliance, ethics, and social benefit is satisfactory, the SDF agrees to support MELD"
},
{
"paperId": null,
"title": "NHS Digital"
},
{
"paperId": null,
"title": "Research project: Developing a Multidisciplinary Ecosystem to study Lifecourse Determinants of Complex Mid-life Multimorbidity using Artificial Intelligence (MELD). Faculty of Medicine"
},
{
"paperId": null,
"title": "Understanding public expectations of the use of health and care data"
},
{
"paperId": null,
"title": "Care and Health Information Exchange (CHIE) (n.d"
},
{
"paperId": null,
"title": "The principal investigator for MELD authors the “ MELD Data Usage Policy ”"
},
{
"paperId": null,
"title": "Report Coauthored by Understanding Patient Data, Involve and Carnegie UK Trust"
},
{
"paperId": null,
"title": "Introduction to Smart Contracts"
},
{
"paperId": null,
"title": "Prepare study documentation. Last updated: 17"
},
{
"paperId": null,
"title": "Placed based approaches to reducing health inequalities"
},
{
"paperId": null,
"title": "UK Department of Health and Social Care (2021) A guide to good practice for digital and data-driven health technologies"
},
{
"paperId": null,
"title": "Anonymisation: managing data protection risk code of practice"
},
{
"paperId": null,
"title": "Research Data Alliance (RDA) COVID-19 Working Group (2020) RDA COVID-19; Recommendations and guidelines on data sharing, final release 30"
},
{
"paperId": null,
"title": "Understanding Patient Data and Ada Lovelace Institute (2020) Foundations of fairness: Where next for NHS health data partnerships"
},
{
"paperId": null,
"title": "Social impact lab"
},
{
"paperId": null,
"title": "International Data Week and Research Data Alliance Plenary co-hosted event \"Indigenous Data Sovereignty Principles for the Governance of Indigenous Data Workshop"
},
{
"paperId": null,
"title": "Trusted Research Environments (TRE): A strategy to build public trust and meet changing health data science needs. Green Paper v2.0 dated 21"
},
{
"paperId": null,
"title": "2020) Connected health cities: Impact report 2016-2020"
},
{
"paperId": null,
"title": "Data governance is broken. Information Week"
},
{
"paperId": null,
"title": "The Toronto Declaration: Protecting the right to equality and non-discrimination in machine learning systems"
}
] | 32,069
|
en
|
[
{
"category": "Environmental Science",
"source": "s2-fos-model"
},
{
"category": "Business",
"source": "s2-fos-model"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/032bf884dcfa77e82d3b8af8d993452114441001
|
[] | 0.858238
|
Green Behavior Strategies in the Green Credit Market: Analysis of the Impacts of Enterprises’ Greenwashing and Blockchain Technology
|
032bf884dcfa77e82d3b8af8d993452114441001
|
Sustainability
|
[
{
"authorId": "2305225903",
"name": "Xianwei Ling"
},
{
"authorId": "2305748926",
"name": "Hong Wang"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": [
"http://mdpi.com/journal/sustainability",
"http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127"
],
"id": "8775599f-4f9a-45f0-900e-7f4de68e6843",
"issn": "2071-1050",
"name": "Sustainability",
"type": "journal",
"url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127"
}
|
With the degradation of the environment due to increasing ecological destruction and pollution, sustainable development has become the paramount objective of social progress. As a result, the concept of green development has garnered considerable attention, which is an important starting point for China to achieve stable economic development and sustainable ecological development. To achieve high-quality economic progress while advancing environmentally friendly practices, it is imperative to formulate and uphold a sound green credit system. However, the phenomenon of greenwashing by enterprises still exists, which compromises the efficacy of green credit and hinders the long-term sustainable and well-organized progress of green finance. Building on the background of green credit, considering the existence of blockchain and government subsidies and adopting the method of tripartite evolutionary game, this paper examines the strategic decisions made by the government, financial institutions, and small and medium-sized enterprises in the context of greenwashing. An emphasis is placed on the impact of blockchain technology on the three parties involved in the green credit market. The findings demonstrate that blockchain technology can diminish the likelihood of greenwashing by businesses and enhance the impact of government subsidies. However, it cannot replace the regulatory authority of the government in sustainable development. Moreover, excessive subsidies can stimulate more greenwashing practices, but eliminating subsidies does not eradicate the root of greenwashing. To encourage sustainable economic development and minimize corporate defaults, the government ought to reinforce supervision and establish a robust social surveillance and publicity mechanism. This paper broadens the research perspective on the effectiveness of green credit and provides some empirical and theoretical references for further promoting the green transformation of SMEs and the sustainable development of the ecological environment.
|
## sustainability
_Article_
# Green Behavior Strategies in the Green Credit Market: Analysis of the Impacts of Enterprises’ Greenwashing and Blockchain Technology
**Xianwei Ling and Hong Wang ***
School of Economics and Management, Nanjing Forestry University, Nanjing 210037, China;
lingxianwei@njfu.edu.cn
*** Correspondence: wanghong@njfu.edu.cn**
**Citation:** Ling, X.; Wang, H. Green Behavior Strategies in the Green Credit Market: Analysis of the Impacts of Enterprises’ Greenwashing and Blockchain Technology. _Sustainability 2024, 16, 4858._ [https://doi.org/10.3390/su16114858](https://doi.org/10.3390/su16114858)

Academic Editor: Ştefan Cristian Gherghina

Received: 21 April 2024; Revised: 1 June 2024; Accepted: 3 June 2024; Published: 6 June 2024

**Copyright:** © 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license](https://creativecommons.org/licenses/by/4.0/).
**Abstract:** With the degradation of the environment due to increasing ecological destruction and pollution, sustainable development has become the paramount objective of social progress. As a result,
the concept of green development has garnered considerable attention, which is an important starting
point for China to achieve stable economic development and sustainable ecological development.
To achieve high-quality economic progress while advancing environmentally friendly practices, it is
imperative to formulate and uphold a sound green credit system. However, the phenomenon of
greenwashing by enterprises still exists, which compromises the efficacy of green credit and hinders
the long-term sustainable and well-organized progress of green finance. Building on the background
of green credit, considering the existence of blockchain and government subsidies and adopting the
method of tripartite evolutionary game, this paper examines the strategic decisions made by the
government, financial institutions, and small and medium-sized enterprises in the context of greenwashing. An emphasis is placed on the impact of blockchain technology on the three parties involved
in the green credit market. The findings demonstrate that blockchain technology can diminish the
likelihood of greenwashing by businesses and enhance the impact of government subsidies. However,
it cannot replace the regulatory authority of the government in sustainable development. Moreover,
excessive subsidies can stimulate more greenwashing practices, but eliminating subsidies does not
eradicate the root of greenwashing. To encourage sustainable economic development and minimize
corporate defaults, the government ought to reinforce supervision and establish a robust social
surveillance and publicity mechanism. This paper broadens the research perspective on the effectiveness of green credit and provides some empirical and theoretical references for further promoting the
green transformation of SMEs and the sustainable development of the ecological environment.
**Keywords:** green credit; evolutionary game; blockchain technology; government subsidy;
greenwashing; SMEs; green behavior strategies; green transformation
**1. Introduction**
Environmental issues have gradually become one of the major global challenges [1,2].
In response to this critical challenge, commercial banks worldwide promote green financial
development [3]. In China, the government proposed the concept of green credit in 2007,
issued the “Green Credit Guidelines” in 2012, and established a thorough, ecologically oriented financial policy to guide the flow of funds [4–6]. However, owing to information asymmetry, it is difficult for financial institutions to verify how green credit funds are actually used.
Profit maximization leads some enterprises to engage in non-green production to bolster
profitability [7], which is known as greenwashing behavior [8]. Greenwashing is common
in the international market. In 2022, Bank of New York Mellon was accused of making false statements and omissions about the ESG factor considerations of some of its mutual funds. In 2023,
China’s Southern Weekend magazine published a greenwashing list, which included many
of the world’s top 500 companies, such as Tesla. Numerous factors induce greenwashing.
_Sustainability 2024, 16, 4858_ 2 of 21
Future investment and financing requirements, particularly for companies with higher debt
levels [9], may encourage greenwashing behavior. Furthermore, the style of leadership
(adherence to authoritative and moral leadership) and incentives [10], as well as gender [11],
can also play a role in this behavior. At a macro level, the existence of greenwashing hinders
the progress of the real economy and is not conducive to long-term sustainable finance.
At an individual level, greenwashing has a negative impact on the work performance of
employees and on investor willingness [12]. Individual investors believe that companies
with greenwashing behavior are hypocritical [13] and are reluctant to invest in companies
that engage in falsification and deceptive manipulation [14]. Additionally, greenwashing
impacts consumers’ trust in products [15] and results in decreased brand equity and
purchase intention [16]. This is why greenwashing needs to be paid attention to and solved
in time.
To address information asymmetry and enhance the efficacy of green credit, financial
technology has been utilized, such as blockchain [17]. Moreover, the pioneering attributes
of blockchain can augment credit value and optimize corporate default cost [18]. The use
of highly transparent, traceable data and smart contract technology facilitates supervision by the government and banks. While preserving the confidentiality and security of private data, it reduces the costs that financial institutions incur to evaluate and credit-check enterprises and improves financing efficiency [19]. However, the majority of existing
articles concentrate on the features of blockchain, with limited scholarly attention paid to
the application and outcomes of blockchain technology in the domain of green credit and
regulation. The purpose of this paper is to address this gap and to provide an insight into
the use cases and benefits of blockchain in the aforementioned context.
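The supervision benefits described above rest on the ledger's tamper evidence: each record is hash-linked to its predecessor, so any retroactive edit to an earlier entry is detectable. The following minimal sketch illustrates that property; it is illustrative only, and the record fields and function names are hypothetical rather than taken from the paper.

```python
import hashlib
import json

def block_hash(record: dict, prev_hash: str) -> str:
    """Hash a ledger record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def append_block(chain: list, record: dict) -> None:
    """Append a record, linking it to the current chain tip."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev_hash,
                  "hash": block_hash(record, prev_hash)})

def chain_is_valid(chain: list) -> bool:
    """Recomputing every hash exposes any retroactive edit."""
    prev_hash = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev_hash:
            return False
        if block_hash(block["record"], prev_hash) != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

chain = []
append_block(chain, {"firm": "SME-1", "loan_use": "solar retrofit"})
append_block(chain, {"firm": "SME-1", "loan_use": "emission filters"})
assert chain_is_valid(chain)

# Tampering with an earlier record breaks the chain of hashes.
chain[0]["record"]["loan_use"] = "coal boiler"
assert not chain_is_valid(chain)
```

In a real deployment this hash-linking is combined with distributed consensus and smart contracts, which is what lets regulators and banks audit the declared use of green credit funds without trusting any single party's records.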
This paper investigates the green supply chain in the financial market against the
background of green credit. Based on an evolutionary game, it examines the possibility of greenwashing by SMEs. This paper analyzes the stability strategies of the government,
financial institutions, and SMEs, reveals the decision-making mechanisms of participants in
the green credit market under blockchain technology and conducts a sensitivity analysis of
the pertinent factors. The aim is to address the following inquiries. (1) Can the adoption of
blockchain technology reduce the possibility of greenwashing by SMEs? (2) In the context
of the green credit policy, what is the regulatory position and role of blockchain? (3) Is there
a link between government subsidies and corporate greenwashing? If so, would eliminating subsidies leave no room for greenwashing?
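For readers unfamiliar with the method, a tripartite evolutionary game of this kind is typically simulated with replicator dynamics: the share of each population playing its "green" strategy grows whenever that strategy's expected payoff exceeds the alternative's. The toy sketch below shows the mechanics only; the payoff coefficients are hypothetical and are not the paper's calibrated parameters.

```python
import numpy as np

# x, y, z: population shares of governments that regulate, banks that
# grant green credit, and SMEs that act green (rather than greenwash).
# All payoff-difference coefficients below are illustrative only.

def d_gov(y, z):
    # Payoff advantage of regulating; shrinks as more SMEs act green.
    return 2.0 - 1.5 * z

def d_bank(x, z):
    # Green lending pays off when SMEs are green and regulation is active.
    return 1.0 * z + 0.5 * x - 0.6

def d_sme(x, y):
    # Acting green beats greenwashing under regulation and credit access.
    return 1.2 * x + 0.8 * y - 1.0

def step(state, dt=0.01):
    x, y, z = state
    dx = x * (1 - x) * d_gov(y, z)   # replicator equation, one per population
    dy = y * (1 - y) * d_bank(x, z)
    dz = z * (1 - z) * d_sme(x, y)
    return np.clip([x + dt * dx, y + dt * dy, z + dt * dz], 0.0, 1.0)

state = np.array([0.5, 0.5, 0.2])    # initial shares of the green strategies
for _ in range(20_000):
    state = step(state)
# Under these payoffs the shares converge toward (1, 1, 1): all three
# parties eventually adopt the green strategy.
```

A sensitivity analysis of the kind reported later in the paper amounts to sweeping such coefficients (e.g., subsidy levels or blockchain-induced penalty costs) and observing which equilibrium the population shares converge to.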
The innovations and marginal contributions of this paper are as follows. First of all,
the previous research on greenwashing mostly focused on its motivations and remedies, largely within accounting and management, ignoring the impact of this behavior on the financing system. This paper incorporates greenwashing into the analysis of the credit system, explores its impact on the decision-making of financial institutions and enterprises, and identifies the behavioral relationships among the government, financial institutions and enterprises, thereby filling the gap concerning how the three interact when enterprise greenwashing is taken into account. Secondly, previous research on blockchain technology
in financing focused on changes in financing methods and financing efficiency between
banks and enterprises, and few scholars observed its regulatory role in the financial system
from a macro perspective. This paper applies blockchain technology to the green credit
market and discusses its impact on government regulation and subsidy policies. Thirdly,
most of the environment-related literature adopts the perspective of management and
empirical research. This paper provides a dynamic perspective of bounded rationality with
the method of an evolutionary game, and it studies the relationship between environment
and sustainable economic growth. The results provide inspiration for further research in
innovation theory and offer a theoretical reference and scientific basis for improving the
effectiveness of the green credit policy.
This paper comprises multiple sections, with the second being a literature review.
Section 3 outlines the model hypothesis and parameter setting. Subsequently, we present
_Sustainability 2024, 16, 4858_ 3 of 21
the results of the simulation analysis in Section 4. Finally, we present the conclusions,
discussion and policy implications in Section 5.
**2. Literature Review**
_2.1. Governance of Greenwashing_
In general, scholars tended to solve the problem of greenwashing from two directions,
one was digital technology represented by blockchain and the other was to reduce the
occurrence of greenwashing through various aspects of supervision.
Blockchain technology had the potential to meet the demands of both supply chain
flexibility and sustainable circular economy. Dong [20] found that manufacturers in the
logistics industry might engage in “greenwashing” practices. The extent to which greenwashing behavior benefited logistics companies was contingent on the likelihood of being
caught and receiving punishment. According to Nygaard [15], blockchain had the ability
to provide greater protection to consumers against the dangers of greenwashing than
authentication systems.
Some scholars included supervision in their research on corporate greenwashing.
Hu [21] argued that unifying environmental rating standards, strengthening internal supervision, and extending external supervision were vital in curbing the phenomenon of
greenwashing. Supervision was categorized as either internal or external. Internal supervision involved enhancing the responsible management of supply chain businesses [22].
External scrutiny consisted of government and media oversight. Xu [23] believed in the
power of regulatory measures and efforts by the government. Sun [24] revealed the impact
of the government’s punishment mechanism and tax subsidy mechanism on the greenwashing behavior among heterogeneous enterprises. Considering the problem of greenwashing
in the green certification mechanism of enterprises, Chen [25] examined the effect of subsidy policies and other policy tools from the perspective of a multi-agent evolutionary
game. Sun [26] proved that the influence of different government supervision intensities
and different heterogeneous proportion coefficients of enterprises on the greenwashing
behavior of superior and inferior enterprises was different. Government regulations and
media coverage could reduce information asymmetry [27]. The cooperation between the
two could also have a synergistic governance effect [28]. Yu [29] believed that the greenwashing behavior in the environment, social and governance (ESG) dimension could be
prevented by some measures. It was most effective to have more institutional investors
and independent directors.
_2.2. The Impact of Blockchain on SCF_
The current blockchain research focused on mechanism design and application, and
on the comparison of financing methods. This paper concentrates on the influence of
blockchain’s introduction on the supply chain and upstream and downstream participants,
such as financial institutions and SMEs.
Ahluwalia [30] analyzed the transactional nature of blockchain technology in view
of transactional economics, showing how the technology overcame the inherent problems
of entrepreneurial finance. Blockchain technology had a great impact on SME financing.
Cao [31] believed that the emergence of platforms based on blockchain technology and
Internet of Things technology could effectively solve the problems of high risks, financing difficulties, and lack of credit in traditional agricultural supply chains. Blockchain
could promote trust between organizations in terms of the system and reputation [32],
thereby promoting cooperation between upstream and downstream nodes in supply chain
finance [33,34] and helping SMEs to comply with contracts [35]. It also brought about
changes in financing methods and boosted financing efficiency. Yu [36] suggested that
blockchain technology’s credibility and transparency had enabled SMEs to obtain loans
from financial institutions through self-guaranteeing them. Song [37] studied the ways in
which blockchain can improve financing performance, especially in accounts receivable
financing and inventory financing scenarios [38,39].
Blockchain could also help financial institutions such as banks to solve practical
business problems and foster financial services to better serve the economy. Cucari [40]
discussed a blockchain case study. It was assumed that blockchain technology provided
greater data transparency and visibility, improving the transmission and networking efficiency of information and ledger accounting. The decentralized structure of blockchain
also greatly enhanced the security of banking business. Wang [41] argued that blockchain
changed the credit business and mechanism of traditional banks. It upgraded the centralized banking system, which could reduce the cost of centralized databases, credit risks and
potential money laundering risks, and it could develop new financial products [42].
_2.3. Research on Green Supply Chain Financing System Based on an Evolutionary Game_
Green financing has always been a hot issue in research. The research field centered
on influencing factors and financing decisions. This paper mainly revolves around the
evolutionary game strategies in the green financial market. Hu [43] considered the existence
of joint fraud in the financing problems of banks and enterprises. Li [44] started a game
model of the green construction industry. Sun [45] studied the impact of government
subsidy mechanisms on corporate green investment. Wang [46] considered the pollution
control strategy.
Many scholars regarded government regulation as a party in the evolutionary game to
study the longer-term and more macro stability of the credit market. Cui [47] constructed
an evolutionary game model composed of four participants—government, financial institutions, enterprises and consumers—to prove the importance of strengthening government
supervision and stressed that the construction of a sustainable economy needs the interaction of all parties. The main factors affecting the green behavior of supply chain enterprises
included government subsidies, corporate investment income and green consumption
costs [48,49]. In terms of government participation factors, many scholars subdivided them.
Long [50] considered the government’s green sensitivity. Wei [51] focused on the issue of
governance intensity and the punishment coefficient. From the perspective of differentiated
pricing, Ye [52] analyzed the strategic choices of the government, financial institutions and
enterprises in the process of green credit transactions and promoted the stable strategy of
the tripartite game to the ideal range of low interest rates, green production and efficient
supervision. With regard to greenwashing, Yang [53] found that the size of the interest gap
between greenwashing and ecological innovation (positive and negative) fundamentally
determined the direction and outcome of the evolution of new enterprise behavior strategies. Xu [54] analyzed the relevant conditions for enterprises to produce green products by
constructing an evolutionary game model of green credit financing.
From the above literature review, it can be seen that there are some gaps in the
practical research. (1) The problem of greenwashing is more centered on the discussion of
motivation and solutions, while most of the research on ways to curb greenwashing focuses
on technical solutions and strengthening supervision, without considering the combination
of the two to enhance effectiveness. (2) Given the massive articles on regulation and
blockchain technology, the specific application of the green credit market is not taken into
account. Few literature studies pay attention to the impact of the existence of greenwashing
on the lending of specific financially constrained SMEs, and most ignore the impact of
greenwashing on the decision-making of financial institutions. (3) In the research on
introducing a supply chain into blockchain technology, few scholars pay attention to the
behavior choice of the government as a regulator with the participation of blockchain
technology. This paper focuses on the three-party game between the government, financial
institutions and enterprises against the background of the green credit market, considering
the introduction of blockchain technology and the existence of greenwashing behavior on
the part of enterprises.
**3. Methodology**
Green credit is an important market for China’s green financial development. In reality,
the market players involved in green credit are the government, financial institutions and
SMEs. Funds are transacted directly between financial institutions and enterprises, both of which want to maximize their profits, but there are often
conflicts between economic benefits and environmental protection. In order to cater to
green indicators and maximize their own interests, enterprises may carry out false environmental protection behaviors, obtain recognition and preference of the investment market
through greenwashing, and engage in non-green production activities with high risk and
high return [55]. Due to information asymmetry, for financial institutions, bearing the
risk is too high to recover the loss of all the loan amount. The blockchain itself has the
characteristics of openness, decentralization, and traceability, and it can be well applied
to the financing scenario. Compared with the traditional mode, the financial mode of
access to blockchain technology can reduce the cost of credit investigation and the risk
of loss while bearing certain service costs. Increasing the default cost of the defaulting
party may be a good solution to the problem of greenwashing [20]. Government subsidies
can guide the upgrading of green industries, promote the development of green finance,
better help green SMEs with positive externalities, solve the imbalance of economic and
social development, and obtain economic benefits and social welfare [56]. At the same
time, the input of government subsidies is a burden for national finance, and it is necessary
to balance and manage fiscal revenue and expenditure. How to achieve this balance is
a problem that the government needs to think about. On the basis of Xu [54], this paper
combines the blockchain and greenwashing issues with China’s green credit policy and
expands the government’s choice of behavior. The government, as a regulator, quantifies
and measures the benefits of environmental protection, supervision and subsidy costs, and
it also serves as one of the main players in the tripartite game. This paper focuses on a
macro dynamic financing market.
_3.1. Evolutionary Model Construction_
In order for the research on the evolutionary game to be carried out effectively,
some necessary assumptions are made for the model. The assumptions are as follows.
(1) Economic man hypothesis. The purpose of the participants is to maximize their
own interests. As an advocate of environmental protection, the government maximizes
environmental benefits and social welfare; financial institutions, as special profit-making
enterprises, and SMEs pursue profit maximization as their primary goal.
(2) Bounded rationality hypothesis. This paper abandons the traditional game and
chooses the evolutionary game, mainly because the bounded rationality of the participants
is more in line with the actual situation. We also need to observe a long-term, dynamic
selection and adjustment process of the green credit market.
(3) Strategy. The government's strategic choices are to implement green credit
"subsidies" or "non-subsidies". The probability of "subsidies" is x (x ∈ [0, 1]),
and the probability of "non-subsidies" is 1 − x. Correspondingly, the strategic choices
of financial institutions include "blockchain" and "traditional", i.e., whether or not to
adopt blockchain technology in financing; the probability of "blockchain"
is y (y ∈ [0, 1]), while the probability of "traditional" is 1 − y. The strategic choices of SMEs
include "green" and "greenwashing". The probability of "green" is z (z ∈ [0, 1]), and the
probability of "greenwashing" is 1 − z.
_3.2. Model Parameters_
As shown in Table 1, this paper sets the environmental benefits and social welfare
brought about by the green production of SMEs in the green credit market as W2. The
government will provide financing subsidies H for SMEs with limited funds to engage in
green production risks. In addition to the cost of capital, the government needs to invest a
certain amount of administrative resources in the design of relevant policies. It is necessary
to bear the cost of supervision and law enforcement while establishing monitoring and
evaluation mechanisms. The existence of incentives also reduces tax revenue. The total
cost is recorded as C2. The government has a regulatory function, especially for the trend
of funds paid by the government. The regulatory intensity of providing subsidies and
not providing subsidies is different, and the cost of different regulatory intensity is also
different. When the government pays the funds, we set its supervision as α. If the subsidy
policy is not adopted, there is no need to follow up the flow of subsidy funds and the
supervision β is weaker (α > β) [51]. Enterprises may also engage in greenwashing production. The
emergence of this situation has allowed green funds to flow to non-green areas and failed
to achieve the expected environmental effects. Moreover, this fund can be used for the
normal production of other green SMEs, and the government will vigorously punish this
behavior, recorded as P. As an emerging technology, blockchain can improve the level
of information construction, enhance the effectiveness of public services, promote digital
economic growth and industrial innovation, and set the relevant income as W1. As a
platform builder, the government’s construction cost is C1.
**Table 1. Parameters and descriptions.**
**Descriptions** **Parameters**
Relevant income of financial institutions under the blockchain financial model _W1_
Environmental benefits of green production in SMEs _W2_
Blockchain platform construction cost _C1_
Government subsidies for green production of SMEs _H_
The total cost paid under government subsidies _C2_
The government’s punishment for greenwashing production of SMEs _P_
The supervision of SMEs under government subsidies _α_
The government’s supervision of small and medium-sized enterprises without subsidies _β_
Deposit rate _i1_
Small and medium-sized enterprise credit line _L_
Prime lending rate _i2_
The business cost of financing for green SMEs _C3_
Losses caused by greenwashing behavior of SMEs _S_
Blockchain platform service cost _C4_
Green production costs of SMEs _C5_
Small and medium-sized enterprise green production yield _r1_
Small and medium-sized enterprise greenwashing production yield _r2_
Blockchain trustworthiness incentives for green production in SMEs _G_
Blockchain system service platform tracking mechanism punishment _M_
The invisible value loss of greenwashing behavior under blockchain system _V_
Financial institutions are mainly based on deposit and loan interest spreads, and the
deposit interest rate is set as i1. The preferential loan interest rate under the green credit
policy is i2 (i2 > i1); that is, the financing cost of green production for SMEs. The green credit
line is L. The business cost of financing for SMEs, such as credit audits and formalities,
is set as C3. After adopting the blockchain financial model, the service cost of accessing
the blockchain technology platform is C4. If the SMEs that receive financing engage in greenwashing
production, the financial institutions may suffer various kinds of damage. If the
enterprise cannot repay the loan on time, the financial institutions are exposed to the risk of
debt default, and the funds may not be recovered. Financial institutions may face bad
debts and a loss of asset value, and they bear reputational damage and possible
legal compliance risk. Under the principles of green finance, financial institutions associated with
highly polluting enterprises may be subject to public condemnation, affecting
their reputation and business development; in serious cases, they will bear
legal liability. These losses are recorded as S.
The green production yield of small and medium-sized enterprises is r1, while greenwashing production has high risk and high return, and the yield is r2 (r2 > r1). If green
production is adopted, a certain cost is required, including updating or modifying existing
equipment to meet the needs of environmental protection and energy efficiency. In the
procurement of raw materials and supplier cooperation, it is necessary to consider that
raw materials must meet environmental protection standards and assess the sustainability
of the supply chain. The costs of certification and compliance review, as well as training
and governance costs, are recorded as C5. If SMEs access blockchain technology, they must,
in accordance with the rules of the agreement, carry out green production and repay
loans on time. Then, according to the decentralized algorithms and smart contracts of
the blockchain, digital currency or other forms of trustworthiness rewards, G, can be
provided for participants' honest performance [31]. In this way, participants are encouraged to actively follow the rules, enhancing the operation of the system. Through
the consensus mechanism in the blockchain, participants participate in the verification
transaction according to the rules, and greenwashing production that violates the rules will
face punishment, M, which is also the compensation obtained by financial institutions through
the reward and punishment mechanism of the blockchain service platform. In addition,
there is a credit evaluation mechanism in the blockchain, and honest and trustworthy
participants receive higher credit scores and reputations. The occurrence of malicious
behavior will lead to a series of consequences, such as corporate trust, corporate reputation and image damage [32]. We record these losses as invisible value losses, V, under
greenwashing behavior.
The revenue matrix of the three parties is shown in Table 2, which can be derived from
the above hypothesis.
**Table 2. Revenue matrix of the three parties.** Payoffs in each cell are listed in the order (government, financial institutions, SMEs).

Gov: Subsidy (x); FIs: Blockchain (y); SMEs: Green (z):
(W1 + W2 + αP − H − C1 − C2, L(i2 − i1) − C4, L(r1 − i2) + G + H − C5)

Gov: Subsidy (x); FIs: Blockchain (y); SMEs: Greenwashing (1 − z):
(W1 + αP − H − C1 − C2, L(i2 − i1) + M − S − C4, L(r2 − i2) + H − M − V − αP)

Gov: Subsidy (x); FIs: Traditional (1 − y); SMEs: Green (z):
(W2 + αP − H − C2, L(i2 − i1) − C3, L(r1 − i2) + H − C5)

Gov: Subsidy (x); FIs: Traditional (1 − y); SMEs: Greenwashing (1 − z):
(αP − H − C2, L(i2 − i1) − C3 − S, L(r2 − i2) + H − αP)

Gov: Non-subsidy (1 − x); FIs: Blockchain (y); SMEs: Green (z):
(W1 + W2 + βP − C1, L(i2 − i1) − C4, L(r1 − i2) + G − C5)

Gov: Non-subsidy (1 − x); FIs: Blockchain (y); SMEs: Greenwashing (1 − z):
(W1 + βP − C1, L(i2 − i1) + M − S − C4, L(r2 − i2) − M − V − βP)

Gov: Non-subsidy (1 − x); FIs: Traditional (1 − y); SMEs: Green (z):
(W2 + βP, L(i2 − i1) − C3, L(r1 − i2) − C5)

Gov: Non-subsidy (1 − x); FIs: Traditional (1 − y); SMEs: Greenwashing (1 − z):
(βP, L(i2 − i1) − S − C3, L(r2 − i2) − βP)
_3.3. Replicator Dynamics Equation and Stability analysis of Evolutionary Game_
According to the above model assumptions and variable settings, the government’s
expectation of adopting a green credit subsidy strategy is Ex. The expectation of not
adopting green credit subsidies is E1−x. Therefore, the government’s evolutionary game
replication dynamic equation is:
_Ex = yz(W1 + W2 + αP −_ _H −_ _C1 −_ _C2) + y(1 −_ _z)(W1 + αP −_ _H −_ _C1 −_ _C2) + z(1 −_ _y)(W2 + αP −_ _H −_ _C2) + (1 −_ _y)(1 −_ _z)(αP −_ _H −_ _C2)_
_E1−x = yz(W1 + W2 + βP −_ _C1) + y(1 −_ _z)(W1 + βP −_ _C1) + z(1 −_ _y)(W2 + βP) + (1 −_ _y)(1 −_ _z)βP_
_F(x) = dx/dt = x(x −_ 1)(C2 + H − _αP + βP)_
Similarly, the expectations for financial institutions to adopt the blockchain financial
strategy and to maintain the traditional financial strategy are Ey and E1−y. Thus, the
evolutionary game replication dynamic equation of financial institutions is:

Ey = xz[L(i2 − i1) − C4] + x(1 − z)[L(i2 − i1) + M − S − C4] + z(1 − x)[L(i2 − i1) − C4] + (1 − z)(1 − x)[L(i2 − i1) + M − S − C4]

E1−y = xz[L(i2 − i1) − C3] + x(1 − z)[L(i2 − i1) − S − C3] + z(1 − x)[L(i2 − i1) − C3] + (1 − z)(1 − x)[L(i2 − i1) − S − C3]

F(y) = dy/dt = −y(y − 1)(C3 − C4 + M − zM)

The expectations for SMEs to adopt the green production strategy and the greenwashing
production strategy are Ez and E1−z:

Ez = xy[L(r1 − i2) + G + H − C5] + x(1 − y)[L(r1 − i2) + H − C5] + y(1 − x)[L(r1 − i2) + G − C5] + (1 − y)(1 − x)[L(r1 − i2) − C5]

E1−z = xy[L(r2 − i2) + H − M − V − αP] + x(1 − y)[L(r2 − i2) + H − αP] + y(1 − x)[L(r2 − i2) − M − V − βP] + (1 − y)(1 − x)[L(r2 − i2) − βP]

F(z) = dz/dt = −z(z − 1)(βP − C5 + Lr1 − Lr2 + yG + yM + yV + xαP − xβP)

When F(x) = 0 and F′(x) < 0, let F(x) = 0; then x = 0 or x = 1.

When C2 + H − αP + βP > 0, F′(0) < 0 and F′(1) > 0, so x = 0 is an evolution-stable point.
The government will not choose a subsidy.

When C2 + H − αP + βP < 0, F′(0) > 0 and F′(1) < 0, so x = 1 is an evolution-stable point.
The government will choose a subsidy.

When F(y) = 0 and F′(y) < 0, let F(y) = 0; then y = 0, y = 1, and z0 = (C3 − C4)/M + 1.

When z = z0, F(y) = 0, and any value of y is an evolution-stable state.

When z ≠ z0, 0 < z < z0 and F′(0) > 0, F′(1) < 0, then y = 1 is an evolution-stable
point. When the probability of SMEs choosing green production is less than z0, financial
institutions will adopt the blockchain financial model.

When z ≠ z0, z0 < z < 1 and F′(0) < 0, F′(1) > 0, then y = 0 is an evolution-stable
point. When the probability of SMEs choosing green production is more than z0, financial
institutions will adopt the traditional financial model.

Based on the above analysis, the conclusions are expressed in a three-dimensional
coordinate system, which leads to the dynamic evolutionary trend of financial institutions'
behavior, as shown in Figure 1.

**Figure 1. Replication dynamic phase diagram of FIs.**

When F(z) = 0 and F′(z) < 0, let F(z) = 0; then z = 0, z = 1, and x0 = (βP − C5 + Lr1
− Lr2 + yG + yM + yV)/(βP − αP).

When x = x0, F(z) = 0, and any value of z is an evolution-stable state.

When x ≠ x0, 0 < x < x0 and F′(0) < 0, F′(1) > 0, then z = 0 is an evolution-stable
point. When the probability of the government choosing a green credit subsidy is less than
x0, SMEs will adopt greenwashing production.

When x ≠ x0, x0 < x < 1 and F′(0) > 0, F′(1) < 0, then z = 1 is an evolution-stable
point. When the probability of the government choosing a green credit subsidy is more than
x0, SMEs will adopt green production.
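The stability conditions above can also be explored numerically. The sketch below integrates the three replicator equations F(x), F(y) and F(z) with simple Euler steps; all parameter values are illustrative placeholders, not calibrated to any data or to the cases discussed later:

```python
def simulate(x, y, z, steps=20000, dt=0.01,
             C2=3.0, C3=2.0, C4=0.5, C5=4.0, H=2.0, P=6.0,
             alpha=0.8, beta=0.3, L=10.0, r1=0.5, r2=0.8,
             G=1.0, M=2.0, V=2.0):
    """Euler integration of the three-party replicator dynamics."""
    for _ in range(steps):
        dx = x*(x - 1)*(C2 + H - alpha*P + beta*P)
        dy = -y*(y - 1)*(C3 - C4 + M - z*M)
        dz = -z*(z - 1)*(beta*P - C5 + L*r1 - L*r2
                         + y*(G + M + V) + x*(alpha - beta)*P)
        # Clamp to the unit cube to guard against numerical drift.
        x = min(max(x + dt*dx, 0.0), 1.0)
        y = min(max(y + dt*dy, 0.0), 1.0)
        z = min(max(z + dt*dz, 0.0), 1.0)
    return x, y, z

# With these placeholder values, C2 + H - alpha*P + beta*P = 2 > 0, so x
# drifts to 0; the trajectory settles at the corner E3(0, 1, 0), i.e.
# no subsidy, blockchain finance, greenwashing production.
x, y, z = simulate(0.5, 0.5, 0.5)
```

Varying the parameters (for example, raising the punishment P or the blockchain penalty M) moves the system between the equilibrium cases analyzed below.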
Based on the above analysis, the conclusions are expressed in a three-dimensional
coordinate system, which leads to the dynamic evolutionary trend of SMEs'
behavior, as shown in Figure 2.

**Figure 2. Replication dynamic phase diagram of SMEs.**
By analyzing the local stability of the matrix of the corresponding replication dynamic
system, the evolutionary stability strategy of the evolutionary game is obtained. According
to the replication dynamic equations of the three parties, we can obtain the Jacobian matrix
of the system, whose entries Dij are the partial derivatives of F(x), F(y) and F(z) with
respect to x, y and z:

D11 = ∂F(x)/∂x = (2x − 1)(C2 + H − αP + βP), D12 = ∂F(x)/∂y = 0, D13 = ∂F(x)/∂z = 0.

D21 = ∂F(y)/∂x = 0, D22 = ∂F(y)/∂y = −(2y − 1)(C3 − C4 + M − zM), D23 = ∂F(y)/∂z = yM(y − 1).

D31 = ∂F(z)/∂x = −z(z − 1)(αP − βP), D32 = ∂F(z)/∂y = −z(z − 1)(G + M + V), D33 = ∂F(z)/∂z = −(2z − 1)[βP − C5 + Lr1 − Lr2 + (G + M + V)y + αPx − βPx].

At each pure-strategy equilibrium point, y(y − 1) = 0 and z(z − 1) = 0, so all off-diagonal
entries vanish and the Jacobian reduces to a diagonal matrix whose entries are its
eigenvalues λ1, λ2 and λ3.
When all the eigenvalues of the matrix are negative, the equilibrium point is an
evolutionary stable point (ESS); when the signs of the eigenvalues are determined and
at least one is positive, the equilibrium point is unstable. However, if the equilibrium of
an asymmetric game is asymptotically stable, it must be consistent with the strict Nash
equilibrium and be a pure-strategy equilibrium. Therefore, in order to discuss the asymptotic
stability of the equilibrium points of the replicator dynamics equations, only the
pure-strategy equilibrium points need to be discussed. This paper considers only pure
strategies, not mixed strategies, so only the signs of the eigenvalues of the eight
pure-strategy equilibrium points are analyzed in Tables 3 and 4.

**Table 3. Equilibrium points and eigenvalues of the system.**
**Equilibrium** **_λ1_** **_λ2_** **_λ3_**
_E1(0,0,0)_ _C3 −_ _C4 + M_ _βP −_ _C5 + Lr1 −_ _Lr2_ _αP −_ _H −_ _C2 −_ _βP_
_E2(1,0,0)_ _C3 −_ _C4 + M_ _αP −_ _C5 + Lr1 −_ _Lr2_ _C2 + H −_ _αP + βP_
_E3(0,1,0)_ _C4 −_ _C3 −_ _M_ _G −_ _C5 + M + V + βP + Lr1 −_ _Lr2_ _αP −_ _H −_ _C2 −_ _βP_
_E4(0,0,1)_ _C3 −_ _C4_ _C5 −_ _βP + Lr2 −_ _Lr1_ _αP −_ _H −_ _C2 −_ _βP_
_E5(1,1,0)_ _C4 −_ _C3 −_ _M_ _G −_ _C5 + M + V + αP + Lr1 −_ _Lr2_ _C2 + H −_ _αP + βP_
_E6(1,0,1)_ _C3 −_ _C4_ _C5 −_ _αP + Lr2 −_ _Lr1_ _C2 + H −_ _αP + βP_
_E7(0,1,1)_ _C4 −_ _C3_ _C5 −_ _G −_ _M −_ _V −_ _βP −_ _Lr1 + Lr2_ _αP −_ _H −_ _C2 −_ _βP_
_E8(1,1,1)_ _C4 −_ _C3_ _C5 −_ _G −_ _M −_ _V −_ _αP −_ _Lr1 + Lr2_ _C2 + H −_ _αP + βP_
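The eigenvalues in Table 3 can be cross-checked symbolically. The sketch below (using sympy; α and β are written `alpha` and `beta`) builds the Jacobian from the three replicator equations and evaluates it at E1(0,0,0); note the eigenvalues come out in the order (F(x), F(y), F(z)), which differs from the λ1, λ2, λ3 column order used in the table:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
C2, C3, C4, C5, H, P, a, b, L, r1, r2, G, M, V = sp.symbols(
    'C2 C3 C4 C5 H P alpha beta L r1 r2 G M V')

# The three replicator equations F(x), F(y), F(z) from the text.
Fx = x*(x - 1)*(C2 + H - a*P + b*P)
Fy = -y*(y - 1)*(C3 - C4 + M - z*M)
Fz = -z*(z - 1)*(b*P - C5 + L*r1 - L*r2 + y*(G + M + V) + x*(a - b)*P)

J = sp.Matrix([Fx, Fy, Fz]).jacobian([x, y, z])

# At a pure-strategy corner the off-diagonal terms vanish, so the
# eigenvalues are simply the diagonal entries.
Jc = J.subs({x: 0, y: 0, z: 0})          # equilibrium E1(0,0,0)
assert Jc.is_diagonal()
eigs = [sp.simplify(Jc[i, i]) for i in range(3)]
# eigs reproduces the E1 row of Table 3:
# [alpha*P - H - C2 - beta*P, C3 - C4 + M, beta*P - C5 + L*r1 - L*r2]
```

Substituting the other seven corners into `J.subs(...)` reproduces the remaining rows of Table 3 in the same way.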
_Sustainability 2024, 16, 4858_ 10 of 21
**Table 4. ESS judgment.**

| Equilibrium | Case 1: λ1, λ2, λ3 | Judgment | Case 2: λ1, λ2, λ3 | Judgment | Case 3: λ1, λ2, λ3 | Judgment | Case 4: λ1, λ2, λ3 | Judgment |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| E1(0,0,0) | +, −, − | × | +, −, + | × | +, ±, − | × | +, ±, + | × |
| E2(1,0,0) | +, ±, + | × | +, −, − | × | +, ±, + | × | +, ±, − | × |
| E3(0,1,0) | −, −, − | ESS | −, −, + | × | −, +, − | × | −, ±, + | × |
| E4(0,0,1) | +, +, + | × | +, +, + | × | +, ±, − | × | +, ±, + | × |
| E5(1,1,0) | −, ±, + | × | −, −, − | ESS | −, +, + | × | −, +, − | × |
| E6(1,0,1) | +, ±, + | × | +, +, − | × | +, ±, + | × | +, ±, − | × |
| E7(0,1,1) | −, +, − | × | −, +, + | × | −, −, − | ESS | −, ±, + | × |
| E8(1,1,1) | −, ±, + | × | −, +, − | × | −, −, + | × | −, −, − | ESS |
In practice, the initial parameters should satisfy C4 − C3 < 0. The reason for this is that the use of the blockchain platform reduces the investment cost of financial institutions in financing SMEs; that is, the service cost of the blockchain platform is less than the financing cost of green small and medium-sized enterprises. At E1(0,0,0), E2(1,0,0), E4(0,0,1) and E6(1,0,1), the eigenvalues do not meet the sign requirements of the Lyapunov criterion for evolutionary stable points. Whether the eigenvalues at E3(0,1,0), E5(1,1,0), E7(0,1,1) and E8(1,1,1) satisfy the Lyapunov criterion needs further discussion. Since C4 − C3 < 0, we also have C4 − C3 − M < 0. The stability of these four equilibrium points is discussed as follows:
Case I: G − C5 + M + V + βP + Lr1 − Lr2 < 0, αP − H − C2 − βP < 0. In the green credit financing system, the total benefit from the loss compensation for financial institutions and the incentive for enterprises to be trustworthy brought by blockchain finance is less than the benefit greenwashing enterprises retain after bearing the weak regulatory punishment and the invisible value loss (G + M < Lr2 − Lr1 + C5 − V − βP). For the government, the difference between the benefits of strong regulation and the benefits of weak regulation is less than the cost of government subsidies (αP − βP < C2 + H). The eigenvalues of the equilibrium point (0,1,0) are all negative, so {non-subsidy, blockchain, greenwashing} is a stable strategy.
Case II: G − C5 + M + V + αP + Lr1 − Lr2 < 0, C2 + H − αP + βP < 0. In the green credit financing system, the total benefit from the loss compensation for financial institutions and the incentive for enterprises to be trustworthy brought about by blockchain finance is less than the benefit greenwashing enterprises retain after bearing the strict regulatory punishment and the invisible value loss (G + M < Lr2 − Lr1 + C5 − V − αP). For the government, the difference between the benefits of strong supervision and the benefits of weak supervision is greater than the cost of government subsidies, and the benefit of strong supervision under the subsidy model is higher (C2 + H < αP − βP). The eigenvalues of the equilibrium point (1,1,0) are all negative, so {subsidy, blockchain, greenwashing} is a stable strategy.
Case III: C5 − G − M − V − βP − Lr1 + Lr2 < 0, αP − H − C2 − βP < 0. In the green credit financing system, the total benefit from the loss compensation of financial institutions and the incentive for enterprises to keep their promises brought by blockchain finance is greater than the benefit greenwashing enterprises retain after bearing the weak regulatory penalty and the invisible value loss (G + M > Lr2 − Lr1 + C5 − V − βP). For the government, the difference between the benefits of strong regulation and the benefits of weak regulation is less than the cost of government subsidies (αP − βP < C2 + H). The eigenvalues of the equilibrium point (0,1,1) are all negative, so {non-subsidy, blockchain, green} is a stable strategy.
Case IV: C5 − G − M − V − αP − Lr1 + Lr2 < 0, C2 + H − αP + βP < 0. In the green credit financing system, the total benefit from the loss compensation of financial institutions and the incentive for enterprises to keep their promises brought by blockchain finance is greater than the benefit greenwashing enterprises retain after bearing the strict regulatory penalty and the invisible value loss (G + M > Lr2 − Lr1 + C5 − V − αP). For the government, the difference between the benefits of strong regulation and the benefits of weak regulation is greater than the cost of government subsidies (C2 + H < αP − βP). The eigenvalues of the equilibrium point (1,1,1) are all negative, so {subsidy, blockchain, green} is a stable strategy.
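The Lyapunov sign check behind Cases I–IV can be reproduced mechanically. The following sketch (in Python rather than the paper's MATLAB; function and variable names are my own) encodes the closed-form eigenvalues of Table 3 and reports the pure-strategy equilibria at which all three eigenvalues are negative, here for the parameter values of Set 1 in Table 5:

```python
# Sketch: Lyapunov (eigenvalue-sign) check for the eight pure-strategy equilibria.
# Eigenvalue expressions are copied from Table 3; parameter values are Set 1 (Table 5).

def eigenvalues(p):
    """Return {equilibrium: (λ1, λ2, λ3)} using the closed forms of Table 3."""
    C2, C3, C4, C5 = p["C2"], p["C3"], p["C4"], p["C5"]
    P, H, M, G, V = p["P"], p["H"], p["M"], p["G"], p["V"]
    a, b = p["alpha"], p["beta"]
    Lr1, Lr2 = p["L"] * p["r1"], p["L"] * p["r2"]
    return {
        (0, 0, 0): (C3 - C4 + M, b*P - C5 + Lr1 - Lr2, a*P - H - C2 - b*P),
        (1, 0, 0): (C3 - C4 + M, a*P - C5 + Lr1 - Lr2, C2 + H - a*P + b*P),
        (0, 1, 0): (C4 - C3 - M, G - C5 + M + V + b*P + Lr1 - Lr2, a*P - H - C2 - b*P),
        (0, 0, 1): (C3 - C4, C5 - b*P + Lr2 - Lr1, a*P - H - C2 - b*P),
        (1, 1, 0): (C4 - C3 - M, G - C5 + M + V + a*P + Lr1 - Lr2, C2 + H - a*P + b*P),
        (1, 0, 1): (C3 - C4, C5 - a*P + Lr2 - Lr1, C2 + H - a*P + b*P),
        (0, 1, 1): (C4 - C3, C5 - G - M - V - b*P - Lr1 + Lr2, a*P - H - C2 - b*P),
        (1, 1, 1): (C4 - C3, C5 - G - M - V - a*P - Lr1 + Lr2, C2 + H - a*P + b*P),
    }

def ess_points(p):
    """An equilibrium is an ESS when all three eigenvalues are negative."""
    return [e for e, lams in eigenvalues(p).items() if all(l < 0 for l in lams)]

set1 = dict(C2=10, C3=10, C4=5, C5=15, P=20, H=15, M=10, G=5, V=5,
            L=100, r1=0.3, r2=0.45, alpha=1.0, beta=0.3)
print(ess_points(set1))  # -> [(0, 1, 0)], the Case I ESS
```

With these values only E3(0,1,0) has three negative eigenvalues, matching the Case I column of Table 4.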
**4. Numerical Simulation Analysis**
_4.1. Case Study_
In reality, the Chinese government and financial institutions have gradually realized
the importance of blockchain in green credit, encouraged technological innovation and
application, and tapped the potential of blockchain for improving credit efficiency and
reducing credit fraud.
In 2022, the regulatory tools for Shenzhen Fintech innovation included blockchain.
The “green credit service based on artificial intelligence technology” declared by the Shenzhen Branch of Industrial Bank built a “circle of friends” network with green enterprises at its core. It constructed a green enterprise identification model so that financial institutions could identify the risk of greenwashing from multiple dimensions, improve the efficiency and accuracy of risk control, and strengthen their ability to judge green financial enterprises. At the same time, it provided more accurate and efficient green credit services for eligible enterprises and raised financing efficiency. Adopting financial technology, including blockchain, to solve the problem of greenwashing is the future trend.
_4.2. The Evolution Trajectory of the Stable Point_
The one-year loan prime rate published by the People’s Bank of China was 3.55% and the one-year fixed deposit rate of the Industrial and Commercial Bank of China in 2023 was 1.65%. According
to the green credit discount of Xiamen green financing enterprises and green financing
project library enterprises, each enterprise discounts no more than CNY 300,000 per year, so
we set H to 5 and 15. Regarding the government’s penalty, there are two cases for reference.
In January 2022, Shandong Xinhua Wanbo Chemical Co., Ltd. (Zibo, China) was fined
CNY 87,500 by the Zibo City Ecological Environment Bureau due to the inconsistency
between the pollutant discharge mode and the discharge destination and the pollutant discharge license. In December 2022, the Taizhou Ecological Environment Bureau announced
that Tiantai Huatong Animal Husbandry Co., Ltd. (Huzhou, China) was suspected of
discharging water pollutants by evading supervision and fined CNY 370,000. It can be
seen that the size of the company and the degree of pollution will lead to different amounts
of fines. This paper is set for SMEs, so P fluctuates between 10 and 40. The remaining
parameters are substituted into the three-party game model according to the operation of the blockchain service platform, the basic situation of the green credit business and the replicator dynamic equations of the three parties. Then, using MATLAB R2021a, the model based on the behavior strategies of the participants in the green credit market is simulated and analyzed, and the impacts of government subsidies, punishments, supervision, service costs, business costs, trustworthy incentives, the platform tracking mechanism and invisible value losses on the three parties’ behavior strategies are discussed.
Based on the model assumptions and stability conditions, this paper assigns the
parameters and numerically simulates the equilibrium point of the tripartite evolutionary
game. Combined with the data mentioned above, according to the different preconditions
in the previous section, the parameter values in the four cases are set in Tables 5–8:
_G −_ _C5 + M + V + βP + Lr1 −_ _Lr2 < 0, αP −_ _H −_ _C2 −_ _βP < 0;_
_G −_ _C5 + M + V + αP + Lr1 −_ _Lr2 < 0, C2 + H −_ _αP + βP < 0;_
_C5 −_ _G −_ _M −_ _V −_ _βP −_ _Lr1 + Lr2 < 0, αP −_ _H −_ _C2 −_ _βP < 0;_
_C5 −_ _G −_ _M −_ _V −_ _αP −_ _Lr1 + Lr2 < 0, C2 + H −_ _αP + βP < 0._
**Table 5. Set 1 of the parameter values.**

| Parameter | W1 | W2 | C1 | C2 | C3 | C4 | C5 | P | H | L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Value | 20 | 20 | 20 | 10 | 10 | 5 | 15 | 20 | 15 | 100 |
| Parameter | M | S | G | V | r1 | r2 | i1 | i2 | α | β |
| Value | 10 | 20 | 5 | 5 | 0.3 | 0.45 | 0.0355 | 0.0165 | 1 | 0.3 |

**Table 6. Set 2 of the parameter values.**

| Parameter | W1 | W2 | C1 | C2 | C3 | C4 | C5 | P | H | L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Value | 15 | 15 | 10 | 5 | 3 | 1 | 7 | 15 | 5 | 80 |
| Parameter | M | S | G | V | r1 | r2 | i1 | i2 | α | β |
| Value | 5 | 10 | 3 | 2 | 0.25 | 0.5 | 0.0355 | 0.0165 | 1 | 0.1 |

**Table 7. Set 3 of the parameter values.**

| Parameter | W1 | W2 | C1 | C2 | C3 | C4 | C5 | P | H | L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Value | 10 | 10 | 7 | 7 | 2 | 0.5 | 6 | 12 | 5 | 40 |
| Parameter | M | S | G | V | r1 | r2 | i1 | i2 | α | β |
| Value | 5 | 12 | 3 | 2 | 0.22 | 0.3 | 0.0355 | 0.0165 | 1 | 0.25 |

**Table 8. Set 4 of the parameter values.**

| Parameter | W1 | W2 | C1 | C2 | C3 | C4 | C5 | P | H | L |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Value | 25 | 25 | 23 | 15 | 15 | 8 | 20 | 40 | 15 | 150 |
| Parameter | M | S | G | V | r1 | r2 | i1 | i2 | α | β |
| Value | 15 | 40 | 10 | 10 | 0.15 | 0.3 | 0.0355 | 0.0165 | 1 | 0.1 |
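As a sanity check, each parameter set can be verified against its corresponding precondition. A small Python sketch (names are illustrative; the paper's own simulations use MATLAB R2021a):

```python
# Sketch: confirm that parameter Sets 1-4 (Tables 5-8) satisfy the stability
# preconditions of Cases I-IV listed above. Symbols follow the paper.

def case_conditions(p):
    C2, C5, P, H = p["C2"], p["C5"], p["P"], p["H"]
    M, G, V = p["M"], p["G"], p["V"]
    a, b = p["alpha"], p["beta"]
    Lr1, Lr2 = p["L"] * p["r1"], p["L"] * p["r2"]
    return {
        "I":   (G - C5 + M + V + b*P + Lr1 - Lr2 < 0, a*P - H - C2 - b*P < 0),
        "II":  (G - C5 + M + V + a*P + Lr1 - Lr2 < 0, C2 + H - a*P + b*P < 0),
        "III": (C5 - G - M - V - b*P - Lr1 + Lr2 < 0, a*P - H - C2 - b*P < 0),
        "IV":  (C5 - G - M - V - a*P - Lr1 + Lr2 < 0, C2 + H - a*P + b*P < 0),
    }

sets = {
    "I":   dict(C2=10, C3=10, C4=5,   C5=15, P=20, H=15, M=10, G=5,  V=5,  L=100, r1=0.3,  r2=0.45, alpha=1, beta=0.3),
    "II":  dict(C2=5,  C3=3,  C4=1,   C5=7,  P=15, H=5,  M=5,  G=3,  V=2,  L=80,  r1=0.25, r2=0.5,  alpha=1, beta=0.1),
    "III": dict(C2=7,  C3=2,  C4=0.5, C5=6,  P=12, H=5,  M=5,  G=3,  V=2,  L=40,  r1=0.22, r2=0.3,  alpha=1, beta=0.25),
    "IV":  dict(C2=15, C3=15, C4=8,   C5=20, P=40, H=15, M=15, G=10, V=10, L=150, r1=0.15, r2=0.3,  alpha=1, beta=0.1),
}

for case, p in sets.items():
    assert all(case_conditions(p)[case]), case
print("all four parameter sets satisfy their case conditions")
```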
These values represent general proportions and are mainly used to verify the tripartite evolutionary game model of the green credit market. The four groups of values are shown in the tables above. Starting from the initial state x = 0.2, y = 0.2, z = 0.2, the system evolves for 50 periods and finally reaches the respective stable points; the evolutionary trajectories are shown in the figures below. The numerical simulation results provide consistent and effective support for the strategic stability analysis of all the parties, and they provide practical guidance for the tripartite strategies of the government, financial institutions and SMEs.
Under the condition that the initial willingness is 0.2 and the stability of the first case is satisfied, the government, financial institutions and small and medium-sized enterprises finally reach E3(0,1,0) after 50 periods of evolution in the model. This point has only one combination, {non-subsidy, blockchain, greenwashing}, and the evolutionary trajectories are shown in Figure 3. Government subsidies require a large investment of funds and the cost is too high, so the willingness to subsidize is low. Financial institutions can obtain default compensation and reduce financing costs by adopting the blockchain model. Under the green credit policy, the government's and the blockchain's default punishments on SMEs are not strong enough; given that greenwashing production yields more, the enthusiasm for green production is very low.
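This trajectory can be reproduced without MATLAB. The forward-Euler sketch below uses replicator "drivers" reconstructed from the Table 3 eigenvalues and the Jacobian entries D31–D33 above; this reconstruction, and all function names, are assumptions rather than the paper's code.

```python
# Sketch: forward-Euler integration of the three replicator equations.
# The payoff drivers fx, fy, fz are reconstructed from the Table 3 eigenvalues
# and the Jacobian row D31..D33, so treat them as an assumption.

def simulate(p, x=0.2, y=0.2, z=0.2, t_end=50.0, dt=0.01):
    a, b = p["alpha"], p["beta"]
    Lr1, Lr2 = p["L"] * p["r1"], p["L"] * p["r2"]
    for _ in range(int(t_end / dt)):
        fx = a*p["P"] - b*p["P"] - p["C2"] - p["H"]                   # government: subsidy vs. not
        fy = p["C3"] - p["C4"] + p["M"] * (1 - z)                     # institutions: blockchain vs. not
        fz = (b*p["P"] + (a - b)*p["P"]*x + (p["G"] + p["M"] + p["V"])*y
              - p["C5"] + Lr1 - Lr2)                                  # SMEs: green vs. greenwashing
        x += dt * x * (1 - x) * fx
        y += dt * y * (1 - y) * fy
        z += dt * z * (1 - z) * fz
    return x, y, z

set1 = dict(C2=10, C3=10, C4=5, C5=15, P=20, H=15, M=10, G=5, V=5,
            L=100, r1=0.3, r2=0.45, alpha=1, beta=0.3)
x, y, z = simulate(set1)
print(round(x, 3), round(y, 3), round(z, 3))  # converges to the ESS (0, 1, 0)
```

With the Set 1 (Case I) parameters, the state settles at {non-subsidy, blockchain, greenwashing}, matching Figure 3.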
**Figure 3. Evolutionary trajectory of E3(0,1,0); (a) 3D perspective; (b) plane perspective.**
Under the condition that the initial willingness is 0.2 and the stability of the second case is satisfied, the government, financial institutions and SMEs evolve 50 times in the model and finally reach E5(1,1,0). The evolutionary trajectories are shown in Figure 4. This point has only one combination, {subsidy, blockchain, greenwashing}. The government's subsidy income is higher than the cost paid, so the subsidy enthusiasm is high. The benefit of financial institutions adopting the blockchain model is higher, but the addition of government subsidies and the lack of two-dimensional punishment still allow SMEs to retain the motivation for greenwashing production.

**Figure 4. Evolutionary trajectory of E5(1,1,0); (a) 3D perspective; (b) plane perspective.**
Under the condition that the initial willingness is 0.2 and the stability of the third case is satisfied, as shown in Figure 5, the government does not invest subsidy funds in support, SMEs enjoy compliance incentives under the blockchain mode, and the cost of illegal production is high. Therefore, the government, financial institutions and SMEs in the model finally reach E7(0,1,1) after 50 periods of evolution. This point has only one combination, {non-subsidy, blockchain, green}.

**Figure 5. Evolutionary trajectory of E7(0,1,1); (a) 3D perspective; (b) plane perspective.**
Under the condition that the initial willingness is 0.2 and the stability of the fourth case is satisfied, as shown in Figure 6, the government invests subsidies to guide green production, financial institutions use blockchain to maximize revenue, and SMEs adopt green production, taking into account economic and social benefits, so the three parties achieve a win–win outcome. The government, financial institutions and SMEs in the model finally reach E8(1,1,1) after 50 periods of evolution. This point has only one combination, {subsidy, blockchain, green}.

**Figure 6. Evolutionary trajectory of E8(1,1,1); (a) 3D perspective; (b) plane perspective.**
However, the green economy is supported by green technology and ecological economic ethics. In the immature stage of the green economy, enterprises that pursue maximum profit and neglect environmental protection have undermined the effectiveness of green credit policies. Financial institutions have assumed credit risks, and the government's environmental governance costs are becoming higher and higher, which runs counter to the concept of environmental protection. From an ideal perspective, the government and SMEs should adjust their strategies to achieve a win–win situation. The government's subsidy income is higher than its cost, which can continuously guide SMEs to carry out green production and take into account economic and environmental benefits. SMEs should also respond to the government's call to engage in green production, obtaining no less income while taking social responsibility into account, so that enterprises have the enthusiasm to participate in green construction. Therefore, for the sustainable development of society, the government funds and supports green small and medium-sized enterprises, guides the green upgrading of the industry, and strengthens supervision. Financial institutions introduce blockchain technology, while enterprises operate legally and greenly, which is the ideal state of the green credit market. Therefore, in the results analysis that follows, we pay more attention to the influence of the key parameters in this ideal state.
_4.3. Sensitivity Analysis_
To evaluate the influence of some key parameters on the evolution results and trajectories of the three agents, numerical simulations are also carried out. The selected parameters
include H, P, α, β, G, V, M, C3, and C4. When x = 0.2, y = 0.2, z = 0.2, the initial parameters
are set to satisfy the Case IV: W1 = 50, W2 = 50, C1 = 25, C2 = 15, C3 = 10, C4 = 8,
_C5 = 15, P = 60, H = 5, L = 150, M = 15, S = 40, G = 10, V = 10, r1 = 0.15, r2 = 0.35,_
_i1 = 0.0355, i2 = 0.0165, α = 1, β = 0.1._
In different time periods, the government's subsidies may be different. As can be seen in Figure 7, when H = 5, 20, 35, the results of simulating the replicator dynamic system for 20 periods are shown in the figure. With the increase in subsidy intensity, the government's financial pressure also grows, and convergence to the evolutionary stable point becomes slower and slower. When H = 35, the evolution of the government's subsidy measures slows down markedly. In view of the high cost of green R&D and equipment upgrading for enterprises, and given that only a low proportion of consumers' green environmental awareness converts into effective market demand, resulting in an imbalance between the supply side and the demand side of green products, it is necessary to intervene in the production of green products by means of government subsidies, alleviate the financial pressure on enterprises, reduce the risk of technological R&D, and improve enterprises' enthusiasm for green production. However, with the gradual increase in government subsidies, this positive effect also produces negative effects, and the marginal benefit of the positive effect becomes lower and lower. It increases the possibility of greenwashing production by small and medium-sized enterprises. When the subsidy intensity becomes larger, the evolution of enterprises toward green production also slows down. The possibility of corporate greenwashing becomes higher, which means that financial institutions need to bear higher capital risks and legal risks. Therefore, financial institutions will accelerate the adoption of financial services under the blockchain model to minimize the risk of default lending.
**Figure 7. Impact of H on the evolutionary results and trajectories; (a) 3D perspective; (b) plane perspective.**
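The slowdown can be illustrated with replicator drivers reconstructed from the eigenvalue expressions of Table 3 (an assumption, not the paper's MATLAB code), measuring how long the government strategy x takes to approach "subsidy" as H grows under the Case IV base parameters of this section:

```python
# Sketch: sensitivity of the subsidy strategy x to the discount subsidy H,
# using the Case IV base parameters of Section 4.3 and replicator drivers
# reconstructed from Table 3 (an assumption).

def time_to_subsidy(H, x=0.2, y=0.2, z=0.2, dt=0.01, t_end=20.0):
    """Return the first time at which x (probability of 'subsidy') exceeds 0.9."""
    P, C2, C3, C4, C5 = 60, 15, 10, 8, 15
    M, G, V, a, b = 15, 10, 10, 1.0, 0.1
    Lr1, Lr2 = 150 * 0.15, 150 * 0.35
    t = 0.0
    while t < t_end:
        fx = a*P - b*P - C2 - H
        fy = C3 - C4 + M * (1 - z)
        fz = b*P + (a - b)*P*x + (G + M + V)*y - C5 + Lr1 - Lr2
        x += dt * x * (1 - x) * fx
        y += dt * y * (1 - y) * fy
        z += dt * z * (1 - z) * fz
        t += dt
        if x > 0.9:
            return t
    return t_end

times = [time_to_subsidy(H) for H in (5, 20, 35)]
print(times)  # convergence slows monotonically as H grows
assert times[0] < times[1] < times[2]
```

The driver of x is αP − βP − C2 − H, so each increase in H directly shrinks the government's net gain from subsidizing and stretches the convergence time, consistent with Figure 7.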
Strict government supervision has a good effect on preventing the occurrence of greenwashing behavior. When P = 40, 25, 15, α = 0.6, 0.85, 1, β = 0.05, 0.15, 0.2, it can be seen in Figures 8 and 9 that after 20 periods of system evolution, SMEs facing penalties of 15 and 25 finally choose greenwashing behavior, since the benefit of default is found to be much higher than the penalty cost. At the same time, when enterprises choose to default, government subsidy funds lose their significance in guiding green production, and the government chooses the “no subsidy” strategy to reduce fiscal expenditure. When P = 40, with the increase in government punishment, the probability of SMEs choosing green production becomes higher, and the government adopts subsidies. It can be seen that subsidies are very important for green development. With increasing penalty values, the evolutionary convergence of enterprises choosing green production and the government adopting subsidy measures is accelerated.
The reward and punishment mechanism of the blockchain has different effects on the
decision-making of all the parties. In reality, different blockchain platforms and financing
methods will have different information services, and the compensation for financial
institutions is not the same. With reference to the model mentioned in Jiao’s article,
Tencent Financial Technology’s micro-enterprise chain platform and cross-border factoring
financing with linkage advantages can provide different information services. Different
transaction information contracts and punishment mechanisms will have different effects on
the invisible value loss of defaulting companies. The compensation of cross-border factoring
financing and aerospace information invoice credit financing for financial institutions
is also different. Therefore, we set the assignment of the platform tracing mechanism
compensation, trustworthy incentive, and invisible value loss as (5,5,5), (15,15,10), (25,20,15).
From Figure 10, it can be seen that with the increase in positive incentives and penalties
brought about by blockchain, the production behavior of SMEs also has positive feedback,
which reduces the possibility of greenwashing by SMEs and evolves faster to the stable
point of green production. The increase in penalties also means that financial institutions can make up for more losses from SME default lending, and the enthusiasm for lending and the motivation to adopt blockchain also increase.

**Figure 8. Impact of P on the evolutionary results and trajectories; (a) 3D perspective; (b) plane perspective.**

**Figure 9. Impact of α, β on the evolutionary results and trajectories; (a) 3D perspective; (b) plane perspective.**
**Figure 10. Impact of M, G, V on the evolutionary results and trajectories; (a) 3D perspective; (b) plane perspective.**
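The same effect can be checked numerically. The drivers below are reconstructed from the Table 3 eigenvalues, and mapping the assignments (5,5,5), (15,15,10), (25,20,15) onto the symbols (M, G, V) is my reading of the text; both are assumptions:

```python
# Sketch: effect of the blockchain reward/punishment triple on SME behavior,
# under the Case IV base parameters. Drivers reconstructed from Table 3
# (assumption); the triple is interpreted as (M, G, V) (assumption).

def final_state(M, G, V, x=0.2, y=0.2, z=0.2, dt=0.01, t_end=20.0):
    P, C2, C3, C4, C5, H = 60, 15, 10, 8, 15, 5
    a, b = 1.0, 0.1
    Lr1, Lr2 = 150 * 0.15, 150 * 0.35
    for _ in range(int(t_end / dt)):
        fx = a*P - b*P - C2 - H
        fy = C3 - C4 + M * (1 - z)
        fz = b*P + (a - b)*P*x + (G + M + V)*y - C5 + Lr1 - Lr2
        x += dt * x * (1 - x) * fx
        y += dt * y * (1 - y) * fy
        z += dt * z * (1 - z) * fz
    return x, y, z

for triple in [(5, 5, 5), (15, 15, 10), (25, 20, 15)]:
    x, y, z = final_state(*triple)
    assert z > 0.99, triple   # SMEs converge to green production in every setting
print("all three reward/punishment settings drive z toward 1")
```

Since G + M + V enters the SME driver additively, larger triples make the green-production driver positive sooner, which is why the trajectories in Figure 10 reach the stable point faster.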
In the real financing situation, the service cost of the blockchain platform differs from the business cost of financial institutions in the dynamic financial environment. In order to comprehensively analyze the impact of the blockchain platform service fees and the business costs of financial institutions on the strategies of the participating entities, we set C3 = 8, 15, 20 and C4 = 1, 4, 5. The simulation results are shown in Figure 11. It can be seen from the figure that the change in the control cost and the cost gap has no effect on government decision-making. An increase in the gap between the two accelerates the convergence of financial institutions and SMEs to the stable point of {blockchain, green}, but there is no
obvious change in the decision-making of the three. The greater the cost gap, the higher
the necessity and enthusiasm of financial institutions to adopt the blockchain model. the necessity and enthusiasm of financial institutions to adopt the blockchain model. Small
Small and medium-sized enterprises have clearer rewards and punishments for trustwor-and medium-sized enterprises have clearer rewards and punishments for trustworthiness
thiness and default, which will further promote the evolution to green production. and default, which will further promote the evolution to green production.
(a) (b)
**Figure 11. Figure 11.Impact of Impact ofC3, CC34 on the evolutionary results and trajectories; (, C4 on the evolutionary results and trajectories; (a) 3D perspective; (a) 3D perspective; (b)** **b) plane**
plane perspective.perspective.
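The parameter sweep behind Figure 11 can be illustrated with a toy three-population replicator-dynamics simulation. The fitness terms below are placeholders chosen only to mirror the qualitative finding (C3 and C4 leave the government's choice unchanged, while a larger cost advantage speeds convergence to {blockchain, green}); they are not the paper's actual payoff functions, which are defined in the model section of the article.

```python
# Illustrative three-population replicator dynamics for the Figure 11 sweep.
# x, y, z are the population shares of governments subsidizing, financial
# institutions adopting blockchain, and SMEs producing green.
# All fitness terms are PLACEHOLDERS, not the paper's payoff functions.

def replicator_step(x, y, z, C3, C4, dt=0.01):
    f_x = 2.0 - 1.5 * z        # subsidy advantage (independent of C3, C4)
    f_y = C4 - C3 / 10.0 + z   # blockchain-adoption advantage (illustrative)
    f_z = 0.5 + 2.0 * y        # green-production advantage (illustrative)
    return (x + dt * x * (1 - x) * f_x,
            y + dt * y * (1 - y) * f_y,
            z + dt * z * (1 - z) * f_z)

def simulate(C3, C4, steps=5000):
    """Euler-integrate the replicator dynamics from an even initial state."""
    x, y, z = 0.5, 0.5, 0.5
    for _ in range(steps):
        x, y, z = replicator_step(x, y, z, C3, C4)
    return x, y, z

for C3, C4 in [(8, 1), (15, 4), (20, 5)]:
    x, y, z = simulate(C3, C4)
    print(f"C3={C3:2d}, C4={C4}: x={x:.3f}, y={y:.3f}, z={z:.3f}")
```

Under these placeholder payoffs every parameter pair converges toward the same stable point, with the larger cost advantage converging faster, consistent with the qualitative description of Figure 11.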
**5. Conclusions and Discussion**

_5.1. Conclusions_

(1) In the green credit financing system, the government's income difference under
different regulatory intensities directly affects the government's subsidy decision. If the
difference in income is greater than the subsidy cost, the subsidy strategy is adopted; if
the difference in income is less than the subsidy cost, the government will not adopt the
subsidy method. The positive incentives and default penalties brought by the blockchain
affect the direction of enterprise production. If the benefits of blockchain finance for the
enterprise's trustworthy incentives and the compensation for the loss of financial
institutions are greater than the regulatory penalties and invisible value losses borne by
greenwashing enterprises, then the enterprise will choose green production. Otherwise,
the enterprise will still choose default greenwashing.
(2) Excessive government subsidies will increase the possibility of corporate greenwashing in the short term, and the subsidy policy cannot be sustained over the long term.
Cancelling subsidies will still leave the problem of greenwashing. The introduction of
blockchain can reduce the occurrence of greenwashing, but it cannot replace the regulatory
role of the government.
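Conclusion (1) reduces to two threshold comparisons, which can be stated compactly. A minimal sketch, with variable names that are illustrative rather than the paper's notation:

```python
# Threshold form of conclusion (1); names are illustrative, not the
# paper's notation.

def government_subsidizes(income_difference: float, subsidy_cost: float) -> bool:
    """Subsidize only when the income difference under regulation
    exceeds the cost of the subsidy."""
    return income_difference > subsidy_cost

def sme_chooses_green(trust_incentive: float, loss_compensation: float,
                      regulatory_penalty: float, invisible_value_loss: float) -> bool:
    """Green production wins when blockchain-backed trust incentives plus
    compensation for financial institutions outweigh the regulatory
    penalties and invisible value losses borne by greenwashing firms."""
    return trust_incentive + loss_compensation > regulatory_penalty + invisible_value_loss

print(government_subsidizes(10.0, 6.0))       # True: subsidy strategy adopted
print(sme_chooses_green(5.0, 3.0, 4.0, 2.0))  # True: green production chosen
```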
_5.2. Discussion_
(1) Government subsidies are intended to ease the financial pressure of green transformation and green R&D for SMEs, reduce the cost of green enterprise certification, and guide
the upgrading of green industries. The policy can indeed alleviate the problem of financial
constraints. The subsidy program is also an effective force for promoting the ecological
design and recycling of products, but it does not follow that higher government subsidies
are always more beneficial to environmental performance [57]. Excessive government subsidies
cannot enhance the guiding effect of environmental ethics on the green production of
enterprises, and they even create fertile ground for greenwashing by SMEs. However,
if the government removes subsidies, greenwashing production by SMEs will still occur.
The government needs to weigh the environmental benefits and financial costs and the
losses of corporate greenwashing.
(2) The adoption of blockchain technology allows financial institutions to reduce the cost
of financing operations, as well as to moderately compensate for the loss of defaulted loans
to SMEs, reducing the credit risk and increasing the incentives of financial institutions for
green credit. The cost of blockchain platform services does not affect financial institutions’
decision-making, but the larger the gap with business costs, the more incentive financial
institutions have to adopt blockchain technology. The cost of blockchain in this paper
is relatively simple. In fact, there are cost problems in the construction, application and
operation of blockchain [52]. The high cost of blockchain will make it difficult for the game
system to reach the ideal state [3], and banks are extremely sensitive to the change in the
blockchain cost [54]. Reducing the blockchain cost is the key direction to be developed.
(3) Unlike Chen’s cautious attitude toward environmental information disclosure [25],
the information transparency brought about by blockchain has a positive effect on the entire
credit financing system, and there is no need to worry that the government’s disclosure
of negative information about the environmental behavior of enterprises will damage its
disclosure of good environmental information represented by green certification, while
positive information about enterprises’ fulfilment of green commitments will also be made
known to the public. As trustworthy enterprises are incentivized, the default cost of
greenwashed enterprises will be higher, including both the explicit and implicit loss of
value, thus accelerating the evolution of SMEs to green production and sharing the burden
of government regulation. However, the introduction of blockchain technology does not
eliminate the possibility of SMEs drifting away from green production, and to reduce the
occurrence of this behavior, the position of government regulation must not be missing
and regulation must not be relaxed at a reasonable cost. Compared with advantaged
enterprises that occupy a higher market share, disadvantaged enterprises make decisions
more slowly and less effectively. In reality, enterprise
heterogeneity will affect the speed of decision-making and even the effectiveness of the
policy. Advantageous enterprises can obtain more green innovation returns and reputation
from their market share, and government punishment can effectively warn them. They
can have more choice space to access the blockchain. Disadvantaged companies are more
likely to engage in greenwashing for lower penalties [24]. It can be considered to enhance
the construction of corporate social responsibility to increase the cost of greenwashing
psychological loss [35]. But for disadvantaged companies, accepting blockchain means
smaller greenwashing possibilities and higher technical costs, and the enthusiasm for access
will not be high, which is a hidden danger to the effectiveness of green credit. In a market
with universal access to blockchain, rejecting it may mean financing difficulties and brands
that consumers do not trust and that are thus difficult to operate. In the future, it may be necessary
to consider subsidizing blockchain by the government.
(4) This paper considers the tripartite game of greenwashing behavior among the
government, financial institutions and SMEs in the green credit market based on blockchain
technology. The blockchain technology platform is built by the government and used
by financial institutions. In reality, many financial institutions build their own platforms.
Whether the construction cost can be used as a parameter affecting the decision-making of
financial institutions can be studied. In addition, beyond government participation, the
green credit market will also involve non-financial institutions. Future work can therefore
consider a four-party evolutionary game or a Stackelberg game, take a refined blockchain
cost as the focus of the research, study each party's sensitivity to the blockchain cost, and
determine which blockchain platform minimizes cost.
_5.3. Policy Implications_
(1) We can reinforce the social publicity mechanism, increase social publicity and media
disclosure to enhance the reputation damage of greenwashing, and improve the reputation
and popularity of green production enterprises. We can strengthen the construction of
corporate social responsibility to increase the illegal cost and greenwashing psychological
loss, so that enterprises have stronger motivation to engage in green production.
(2) Government subsidies can consider a variety of subsidy methods rather than a
one-size-fits-all cost subsidy and can be adjusted according to the subsidy object. For
example, the subsidy method can consider “inclusive subsidy” and “competitive subsidy”,
consider “consumption subsidy” and “innovation subsidy” with different incentive effects,
and also consider “technology subsidy” and “price subsidy” to improve the efficiency of
government funds.
(3) The introduction of blockchain reduces the financing cost of financial institutions,
improves the enthusiasm for green credit, and also benefits more small and medium-sized
enterprises in terms of the green transformation. When the government invests the same
degree of subsidy, the enthusiasm for green production is higher and the subsidy effect
is strengthened when blockchain is adopted. Therefore, the government can also consider
subsidizing the investment and construction of the blockchain, encouraging more institutions to participate in the use of new technologies and applying new technologies to all
aspects of green production. From the two dimensions of technology and government,
the two-pronged approach reduces the possibility of corporate greenwashing.
(4) The government should improve the supervisory and management mechanism
and strengthen the supervision and penalties for opportunistic corporate behavior so
that the green credit market can develop more healthily. It can also match a variety of
policy combinations and make rational use of tax and other policy tools to encourage more
enterprises to engage in green production.
**Author Contributions: Conceptualization, X.L. and H.W.; methodology, X.L.; software, X.L.; val-**
idation, X.L. and H.W.; formal analysis, X.L.; writing—original draft preparation, X.L. and H.W.;
writing—review and editing, X.L. and H.W.; visualization, X.L.; supervision, X.L. and H.W.; funding
acquisition, H.W. All authors have read and agreed to the published version of the manuscript.
**Funding: This work was supported by the Humanities and Social Science Fund of Ministry of**
Education of China (grant numbers: 20YJC630142).
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: The original report data were obtained from the People’s Bank of China**
and Industrial and Commercial Bank of China.
**Acknowledgments: We are very grateful to the editors and anonymous reviewers. Funding is**
acknowledged as well.
**Conflicts of Interest: The authors declare no conflicts of interest.**
**References**
1. Chien, F.; Chau, K.Y.; Ady, S.U. Does the combining effects of energy and consideration of financial development lead to
[environmental burden: Social perspective of energy finance? Environ. Sci. Pollut. Res. 2021, 28, 40957–40970. [CrossRef] [PubMed]](https://doi.org/10.1007/s11356-021-13423-6)
2. Zhang, D.; Zhang, Z.; Managi, S. A bibliometric analysis on green finance: Current status, development, and future directions.
_[Financ. Res. Lett. 2019, 29, 425–430. [CrossRef]](https://doi.org/10.1016/j.frl.2019.02.003)_
3. Chai, S.L.; Zhang, K.; Wei, W.; Ma, W.Y.; Abedin, M.Z. The impact of green credit policy on enterprises’ financing behavior:
[Evidence from Chinese heavily-polluting listed companies. J. Clean. Prod. 2022, 363, 12. [CrossRef]](https://doi.org/10.1016/j.jclepro.2022.132458)
4. Cowan, E. Topical Issues in Environmental Finance; Research Paper was Commissioned by the Asia Branch of the Canadian
International Development Agency (CIDA); International Development Research Centre: Ottawa, ON, Canada, 1999; Volume 1,
pp. 1–20.
5. Liu, Y.; Wang, J.; Xu, C. Green credit policy and labor investment efficiency: Evidence from China. Environ. Sci. Pollut. Res. 2023,
_[30, 110461–110480. [CrossRef] [PubMed]](https://doi.org/10.1007/s11356-023-30058-x)_
6. Lv, C.; Bian, B.; Lee, C.C.; He, Z. Regional gap and the trend of green finance development in China. Energy Econ. 2021, 102,
[105476. [CrossRef]](https://doi.org/10.1016/j.eneco.2021.105476)
7. He, L.; Gan, S.; Zhong, T. The impact of green credit policy on firms’ green strategy choices: Green innovation or green-washing?
_[Environ. Sci. Pollut. Res. 2022, 29, 73307–73325. [CrossRef] [PubMed]](https://doi.org/10.1007/s11356-022-20973-w)_
8. [Lyon, T.P.; Montgomery, A.W. The means and end of greenwash. Organ. Environ. 2015, 28, 223–249. [CrossRef]](https://doi.org/10.1177/1086026615575332)
9. Xia, F.; Chen, J.; Yang, X.; Li, X.; Zhang, B. Financial constraints and corporate greenwashing strategies in China. Corp. Soc.
_[Responsib. Environ. Manag. 2023, 30, 1770–1781. [CrossRef]](https://doi.org/10.1002/csr.2453)_
10. Blome, C.; Foerstl, K.; Schleper, M.C. Antecedents of green supplier championing and greenwashing: An empirical study on
[leadership and ethical incentives. J. Clean. Prod. 2017, 152, 339–350. [CrossRef]](https://doi.org/10.1016/j.jclepro.2017.03.052)
11. Zhang, D. Can environmental monitoring power transition curb corporate greenwashing behavior? J. Econ. Behav. Organ. 2023,
_[212, 199–218. [CrossRef]](https://doi.org/10.1016/j.jebo.2023.05.034)_
12. Li, W.; Seppänen, V. How and when does perceived greenwashing affect employees’ job performance? Evidence from China.
_[Corp. Soc. Responsib. Environ. Manag. 2022, 29, 1722–1735. [CrossRef]](https://doi.org/10.1002/csr.2321)_
13. Ioannou, I.; Kassinis, G.; Papagiannakis, G. The impact of perceived greenwashing on customer satisfaction and the contingent
[role of capability reputation. J. Bus. Ethics 2023, 185, 333–347. [CrossRef]](https://doi.org/10.1007/s10551-022-05151-9)
14. [Gatti, L.; Pizzetti, M.; Seele, P. Green lies and their effect on intention to invest. J. Bus. Res. 2021, 127, 228–240. [CrossRef]](https://doi.org/10.1016/j.jbusres.2021.01.028)
15. Nygaard, A.; Silkoset, R. Sustainable development and greenwashing: How blockchain technology information can empower
[green consumers. Bus. Strategy Environ. 2023, 32, 3801–3813. [CrossRef]](https://doi.org/10.1002/bse.3338)
16. Akturan, U. How does greenwashing affect green branding equity and purchase intention? An empirical research. Mark. Intell.
_[Plan. 2018, 36, 809–824. [CrossRef]](https://doi.org/10.1108/MIP-12-2017-0339)_
17. Wu, Q.; Kang, J.; Zhao, Y. Application of science and technology in the field of green finance. China Financ. 2023, 14, 71–72.
18. Jiao, Y.; Yan, X.; Du, J. Tripartite Evolutionary Game Research of Factoring Financing under the Integration of Blockchain. Chin. J.
_Manag. 2023, 20, 598–609._
19. Guerar, M.; Merlo, A.; Migliardi, M. A fraud-resilient blockchain-based solution for invoice financing. IEEE Trans. Eng. Manag.
**[2020, 67, 1086–1098. [CrossRef]](https://doi.org/10.1109/TEM.2020.2971865)**
20. Dong, C.; Huang, Q.; Pan, Y.; Ng, C.T.; Liu, R. Logistics outsourcing: Effects of greenwashing and blockchain technology. Transp.
_[Res. Part E Logist. Transp. Rev. 2023, 170, 103015. [CrossRef]](https://doi.org/10.1016/j.tre.2023.103015)_
21. Hu, X.; Hua, R.; Liu, Q.; Wang, C. The green fog: Environmental rating disagreement and corporate greenwashing. Pac.-Basin
_[Financ. J. 2023, 78, 101952. [CrossRef]](https://doi.org/10.1016/j.pacfin.2023.101952)_
22. Pizzetti, M.; Gatti, L.; Seele, P. Firms talk, suppliers walk: Analyzing the locus of greenwashing in the blame game and introducing
[‘vicarious greenwashing’. J. Bus. Ethics 2021, 170, 21–38. [CrossRef]](https://doi.org/10.1007/s10551-019-04406-2)
23. Xu, W.; Li, M.; Xu, S. Unveiling the “Veil” of information disclosure: Sustainability reporting “greenwashing” and “shared value”.
_[PLoS ONE 2023, 18, e0279904. [CrossRef]](https://doi.org/10.1371/journal.pone.0279904)_
24. Sun, Z.; Zhang, W. Do government regulations prevent greenwashing? An evolutionary game analysis of heterogeneous
[enterprises. J. Clean. Prod. 2019, 231, 1489–1502. [CrossRef]](https://doi.org/10.1016/j.jclepro.2019.05.335)
25. Chen, Y.; Liu, J. Dose subsidy make green certification more effective? Based on the perspective of green market evolution. J. Ind.
_Eng. Eng. Manag. 2022, 36, 274–282._
26. Sun, R.; He, D.; Su, H. Application of blockchain technology to preventing supply chain finance based on evolutionary game.
_Chin. J. Manag. Sci. 2024, 32, 1–18._
27. Li, W.; Li, W.; Seppänen, V.; Koivumäki, T. Effects of greenwashing on financial performance: Moderation through local
[environmental regulation and media coverage. Bus. Strategy Environ. 2023, 32, 820–841. [CrossRef]](https://doi.org/10.1002/bse.3177)
28. Wang, W.; Sun, Z.; Zhu, W. How does multi-agent govern corporate greenwashing? A stakeholder engagement perspective from
[“common” to “collaborative” governance. Corp. Soc. Responsib. Environ. Manag. 2023, 30, 291–307. [CrossRef]](https://doi.org/10.1002/csr.2355)
29. Yu, E.P.; Van Luu, B.; Chen, C.H. Greenwashing in environmental, social and governance disclosures. Res. Int. Bus. Financ. 2020,
_[52, 101192. [CrossRef]](https://doi.org/10.1016/j.ribaf.2020.101192)_
30. Ahluwalia, S.; Mahto, R.V.; Guerrero, M. Blockchain technology and startup financing: A transaction cost economics perspective.
_[Technol. Forecast. Soc. Chang. 2020, 151, 119854. [CrossRef]](https://doi.org/10.1016/j.techfore.2019.119854)_
31. Cao, Y.; Yi, C.; Wan, G.; Hu, H.; Li, Q.; Wang, S. An analysis on the role of blockchain-based platforms in agricultural supply
[chains. Transp. Res. Part E Logist. Transp. Rev. 2022, 163, 102731. [CrossRef]](https://doi.org/10.1016/j.tre.2022.102731)
32. Wu, J.; Yang, Y.; Zou, L.; Ke, D. The Information Sharing Model of Agricultural Supply Chain Finance for Small Farmers Based on
Blockchain. Inf. Sci. 2023, 41, 97–106.
33. Song, X.; Mao, J. Research on the Process of Blockchain-based Interorganizational Trust Building Take the Digital Supply Chain
Finance Model for Example. China Ind. Econ. 2022, 11, 174–192.
34. Su, L.; Cao, Y.; Li, H.; Tan, J. Blockchain-Driven Optimal Strategies for Supply Chain Finance Based on a Tripartite Game Model.
_[J. Theor. Appl. Electron. Commer. Res. 2022, 17, 1320–1335. [CrossRef]](https://doi.org/10.3390/jtaer17040067)_
35. Sun, Z.; Ge, C.; Zhang, W. Evolutionary game on enterprise greenwashing governance from the perspective of heterogeneity. Syst.
_Eng. 2024, 42, 1–14._
36. Yu, Y.; Huang, G.; Guo, X. Financing strategy analysis for a multi-sided platform with blockchain technology. Int. J. Prod. Res.
**[2021, 59, 4513–4532. [CrossRef]](https://doi.org/10.1080/00207543.2020.1766718)**
37. Song, H.; Han, S.; Yu, K. Blockchain-enabled supply chain operations and financing: The perspective of expectancy theory. Int. J.
_[Oper. Prod. Manag. 2023, 43, 1943–1975. [CrossRef]](https://doi.org/10.1108/IJOPM-07-2022-0467)_
38. Chen, J.; Chen, S.; Liu, Q. Applying blockchain technology to reshape the service models of supply chain finance for SMEs in
[China. Singap. Econ. Rev. 2021, 66. [CrossRef]](https://doi.org/10.1142/S0217590821480015)
39. Du, M.; Chen, Q.; Xiao, J.; Yang, H.; Ma, X. Supply chain finance innovation using blockchain. IEEE Trans. Eng. Manag. 2020, 67,
[1045–1058. [CrossRef]](https://doi.org/10.1109/TEM.2020.2971858)
40. Cucari, N.; Lagasio, V.; Lia, G.; Torriero, C. The impact of blockchain in banking processes: The Interbank Spunta case study.
_[Technol. Anal. Strateg. Manag. 2022, 34, 138–150. [CrossRef]](https://doi.org/10.1080/09537325.2021.1891217)_
41. [Wang, R. Blockchain and Bank Lending Behavior: A Theoretical Analysis. SAGE Open 2023, 13. [CrossRef]](https://doi.org/10.1177/21582440231164597)
42. Chang, V.; Baudier, P.; Zhang, H.; Xu, Q.; Zhang, J.; Arami, M. How Blockchain can impact financial services—The overview,
[challenges and recommendations from expert interviewees. Technol. Forecast. Soc. Chang. 2022, 158, 120166. [CrossRef] [PubMed]](https://doi.org/10.1016/j.techfore.2020.120166)
43. Hu, H.; Li, Y.; Tian, M. Evolutionary game of small and medium-sized enterprises’ accounts-receivable pledge financing in the
[supply chain. Systems 2022, 10, 21. [CrossRef]](https://doi.org/10.3390/systems10010021)
44. Li, S.; Zheng, X.; Zeng, Q. Can Green Finance Drive the Development of the Green Building Industry?—Based on the Evolutionary
[Game Theory. Sustainability 2023, 15, 13134. [CrossRef]](https://doi.org/10.3390/su151713134)
45. Sun, H.; Wan, Y.; Zhang, L.; Zhou, Z. Evolutionary game of the green investment in a two-echelon supply chain under a
[government subsidy mechanism. J. Clean. Prod. 2019, 235, 1315–1326. [CrossRef]](https://doi.org/10.1016/j.jclepro.2019.06.329)
46. Wang, Z.; Jian, Z.; Ren, X. Pollution prevention strategies of SMEs in a green supply chain finance under external government
[intervention. Environ. Sci. Pollut. Res. 2023, 30, 45195–45208. [CrossRef] [PubMed]](https://doi.org/10.1007/s11356-023-25444-4)
47. Cui, H.; Wang, R.; Wang, H. An evolutionary analysis of green finance sustainability based on multi-agent game. J. Clean. Prod.
**[2020, 269, 121799. [CrossRef]](https://doi.org/10.1016/j.jclepro.2020.121799)**
48. Gong, M.; Dai, A. Multiparty evolutionary game strategy for green technology innovation under market orientation and
[pandemics. Front. Public Health 2022, 9, 821172. [CrossRef] [PubMed]](https://doi.org/10.3389/fpubh.2021.821172)
49. Zhang, H.; Su, X. The applications and complexity analysis based on supply chain enterprises’ green behaviors under evolutionary
[game framework. Sustainability 2021, 13, 10987. [CrossRef]](https://doi.org/10.3390/su131910987)
50. Long, Q.; Tao, X.; Shi, Y.; Zhang, S. Evolutionary game analysis among three green-sensitive parties in green supply chains. IEEE
_[Trans. Evol. Comput. 2021, 25, 508–523. [CrossRef]](https://doi.org/10.1109/TEVC.2021.3052173)_
51. Wei’an, L.; Yin, M. A tripartite evolutionary game study on green governance in China’s coating industry. Environ. Sci. Pollut. Res.
**[2022, 29, 61161–61177. [CrossRef]](https://doi.org/10.1007/s11356-022-20220-2)**
52. Ye, L.; Fang, Y. Optimizing Green Credit Trading Mechanism from the Perspective of Differential Pricing. J. Hebei Univ. Econ. Bus.
**2022, 43, 97–109.**
53. Yang, X.; Liao, S.; Li, R. The evolution of new ventures’ behavioral strategies and the role played by governments in the green
[entrepreneurship context: An evolutionary game theory perspective. Environ. Sci. Pollut. Res. 2021, 28, 31479–31496. [CrossRef]](https://doi.org/10.1007/s11356-021-12748-6)
[[PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/33606162)
54. Xu, L.; Tian, T. Blockchain-enabled enterprise bleaching green regulation banking evolution game analysis. Environ. Dev. Sustain.
**[2023, 15. [CrossRef]](https://doi.org/10.1007/s10668-023-03768-y)**
55. Li, S.; Chen, R.; Li, Z.; Chen, X. Can blockchain help curb “greenwashing” in green finance?-Based on tripartite evolutionary
[game theory. J. Clean. Prod. 2024, 435, 140447. [CrossRef]](https://doi.org/10.1016/j.jclepro.2023.140447)
56. Song, L.; Luo, Y.; Chang, Z.; Jin, C.; Nicolas, M. Blockchain adoption in agricultural supply chain for better sustainability: A game
[theory perspective. Sustainability 2022, 14, 1470. [CrossRef]](https://doi.org/10.3390/su14031470)
57. Yu, H.; Chang, X.; Liu, W. Cost-based subsidy and performance-based subsidy in a manufacturing-recycling system considering
[product eco-design. J. Clean. Prod. 2021, 327, 129391. [CrossRef]](https://doi.org/10.1016/j.jclepro.2021.129391)
**Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual**
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su16114858?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su16114858, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2071-1050/16/11/4858/pdf?version=1717675786"
}
| 2,024
|
[
"JournalArticle"
] | true
| 2024-06-06T00:00:00
|
[
{
"paperId": "fa5f2f5827c0f83748c6feba6fab7fbb3d4fe65e",
"title": "Can blockchain help curb “greenwashing” in green finance? - Based on tripartite evolutionary game theory"
},
{
"paperId": "e6536858183c4f881f4ba924b75eed143cd89cd4",
"title": "Green credit policy and labor investment efficiency: evidence from China"
},
{
"paperId": "0f1649587747624f4835880b1aa7cc15b0b7d83d",
"title": "Can Green Finance Drive the Development of the Green Building Industry?—Based on the Evolutionary Game Theory"
},
{
"paperId": "b6f89de469a0f72fec6ef77ea1978eec02db2e2b",
"title": "Blockchain-enabled enterprise bleaching green regulation banking evolution game analysis"
},
{
"paperId": "8fe5a3dbb2fe9ac6bb8b1ed23a1ee17f25df7fe7",
"title": "Can environmental monitoring power transition curb corporate greenwashing behavior?"
},
{
"paperId": "e16fd86d2311d54df630c91f61d2c32d6e82ffd1",
"title": "Blockchain-enabled supply chain operations and financing: the perspective of expectancy theory"
},
{
"paperId": "2baed05310d38eb59b0f9d23c1af23b24b156fb6",
"title": "Logistics outsourcing: Effects of greenwashing and blockchain technology"
},
{
"paperId": "95bd7d02855c17e571a67f0bc95957280499d2e6",
"title": "Pollution prevention strategies of SMEs in a green supply chain finance under external government intervention"
},
{
"paperId": "4ffc27724ebc83d577314e0e197f7f1cb80a710d",
"title": "Financial constraints and corporate greenwashing strategies in China"
},
{
"paperId": "f0569ab12d6d2a8b7f35100c34720a1a57718c3c",
"title": "Unveiling the “Veil” of information disclosure: Sustainability reporting “greenwashing” and “shared value”"
},
{
"paperId": "173a1287031d72b0b8826ee30f4120295733f7e4",
"title": "The green fog: Environmental rating disagreement and corporate greenwashing"
},
{
"paperId": "81c0e84292132bc99edde43ed29749fa08b49f7e",
"title": "Blockchain and Bank Lending Behavior: A Theoretical Analysis"
},
{
"paperId": "c48b1b4bb18b5d4b4172844bef2ea28f317c6c15",
"title": "Sustainable development and greenwashing: How blockchain technology information can empower green consumers"
},
{
"paperId": "2a5e3cd25d4a5d10c75537a9b69cb01e3912547a",
"title": "Blockchain-Driven Optimal Strategies for Supply Chain Finance Based on a Tripartite Game Model"
},
{
"paperId": "a44e42be296bd5708859f1410af4fb3a875bcc0c",
"title": "How does multi‐agent govern corporate greenwashing? A stakeholder engagement perspective from “common” to “collaborative” governance"
},
{
"paperId": "4d42192e596e9c6f4d2b7be4f30be4537fcaf116",
"title": "An analysis on the role of blockchain-based platforms in agricultural supply chains"
},
{
"paperId": "7f7e76a100cb763d88ec0f61293d6482de1acba1",
"title": "Effects of greenwashing on financial performance: Moderation through local environmental regulation and media coverage"
},
{
"paperId": "d12626c6e25aef0be64f92a41f3946a00f48d3c9",
"title": "How and when does perceived greenwashing affect employees' job performance? Evidence from China"
},
] | 26,560
# energies
_Article_
## A Distributed Two-Level Control Strategy for DC Microgrid Considering Safety of Charging Equipment
**Xiang Li 1, Zhenya Ji 1,*, Fengkun Yang 2, Zhenlan Dou 3, Chunyan Zhang 3 and Liangliang Chen 2**
1 NARI School of Electrical and Automation Engineering, Nanjing Normal University, Nanjing 210046, China
2 NARI Technology Co., Ltd., Nanjing 211106, China
3 State Grid Shanghai Municipal Electric Power Company, Shanghai 200122, China
***** Correspondence: jizhenya@njnu.edu.cn
**Citation:** Li, X.; Ji, Z.; Yang, F.; Dou, Z.; Zhang, C.; Chen, L. A Distributed Two-Level Control Strategy for DC Microgrid Considering Safety of Charging Equipment. _Energies_ **2022**, _15_, 8600. [https://doi.org/10.3390/en15228600](https://doi.org/10.3390/en15228600)

Academic Editor: Surender Reddy Salkuti

Received: 25 October 2022; Accepted: 14 November 2022; Published: 17 November 2022
**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: A direct current (DC) microgrid containing a photovoltaic (PV) system, energy storage and**
charging reduces the electric energy conversion link and improves the operational efficiency of the
system, which has a broad development prospect. The instability and randomness of PV and charging
loads pose a challenge to the safe operation of DC microgrid systems. The safety of grid operation and
charging need to be taken into account. However, few studies have integrated the safety of charging
devices with grid operation. In this paper, a two-level control strategy is used for the DC microgrid
equipped with hybrid energy storage systems (ESSs) with the charging equipment’s safety as the
entry point. The primary control strategy combines the health of the charging equipment with droop
control to effectively solve the problem of common DC bus voltage deviation and power distribution.
The consistency-based control algorithm for multiple groups of hybrid ESSs maintains the local-side
DC bus voltage level and ensures reasonable power distribution among the ESSs. The simulation
results in MATLAB/Simulink show that the control strategy can achieve power allocation with stable
voltage levels in the case of fluctuating health of the charging equipment, which guarantees the safe
operation of the microgrid and charging equipment.
**Keywords: DC microgrid; two-level control; charging safety; droop control; consistency algorithm**
**1. Introduction**
A typical application of a direct current (DC) microgrid is the inclusion of photovoltaic (PV) power generation systems, energy storage systems (ESSs), electric vehicle
(EV) charging systems, etc. It is of great significance to promote energy conservation and
emission reduction and achieve sustainable energy development. If the DC microgrid
integrating these systems is operated in an uncoordinated way, then it will inevitably affect
the power quality and voltage stability [1], and endanger the safe operation of charging
devices and cause charging accidents. The safety performance of the charging devices
gradually decreases with the increase in the usage time. There has been considerable
research on charging device safety by relevant scholars, and a comprehensive review of
state assessment methods for charging devices was presented in [2], but rarely was the safe
operation of charging devices studied in conjunction with the operation of the power grid.
To ensure the reliable operation and power quality of microgrids and the safe operation of charging devices, it is important to mitigate the power fluctuations caused by these
renewable energy sources and provide stable DC bus voltages. The energy management
strategies applied in conventional DC microgrids are mainly classified into three categories:
centralized control, distributed control and hierarchical control. Distributed control requires only local communication to achieve self-management and control; droop control
enables current sharing by adding a virtual resistance control loop with plug-and-play
capability [3,4]. Distributed control combined with the characteristics of droop control
can achieve load power distribution simply and reliably. Distributed control based on
droop characteristics has become a focus of research for scholars at home and abroad, but it is difficult for this control method to resolve the conflict between DC bus voltage deviation and accurate current distribution [5,6]. A nonlinear droop control method was proposed in [7] that finds nonlinear droop coefficients allowing the DC microgrid system to satisfy both the voltage regulation and the current-sharing accuracy. Reference [8] proposed an advanced distributed secondary control scheme based on droop control and fuzzy logic control for an isolated DC microgrid with multi-group distributed generation (DG), which also resolves the above conflict well. A grasshopper-optimized intelligent algorithm has likewise been added to droop control to optimally tune the PI controller parameters and ensure the power quality of the islanded DC microgrid [9]. Aluko, A. et al. used an artificial peak swarm optimization algorithm to optimize the weighting parameters that balance mean-current sharing and voltage regulation [10]. Liu, X.K. et al. proposed iterative learning algorithms in a game-theoretic framework to solve the current-equalization and voltage regulation problems [11]. However, all of these intelligent algorithms are limited by a high computational cost and added method complexity. It can be seen that there have been many
research results on relevant control methods in DC microgrids without energy storage.
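The virtual-resistance droop principle discussed above can be illustrated with a minimal steady-state calculation. All numbers below (`V_REF`, the droop resistances `RD1`/`RD2`, and the load current `I_LOAD`) are illustrative assumptions, not values from any of the cited works:

```python
V_REF = 400.0          # nominal DC bus voltage (V) -- assumed value
RD1, RD2 = 0.5, 1.0    # virtual (droop) resistances of converters 1 and 2 (ohm)
I_LOAD = 30.0          # total load current drawn from the common bus (A)

# At steady state both converters see the same bus voltage:
#   v = V_REF - RD1*i1 = V_REF - RD2*i2   and   i1 + i2 = I_LOAD
# Solving gives currents inversely proportional to the droop resistances.
i1 = I_LOAD * RD2 / (RD1 + RD2)
i2 = I_LOAD * RD1 / (RD1 + RD2)
v_bus = V_REF - RD1 * i1

print(f"i1 = {i1:.1f} A, i2 = {i2:.1f} A, bus voltage = {v_bus:.1f} V")
```

With these numbers the converter with the smaller virtual resistance carries twice the current, while the bus settles 10 V below nominal; shrinking that deviation (smaller droop resistances) worsens sharing accuracy, which is exactly the droop-control conflict noted in [5,6].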
ESSs are usually an integral part of DC microgrids to balance the power flow between
renewable energy sources and load systems [12–15]. In islanded DC microgrids with ESSs,
droop control is also commonly used to achieve power sharing in ESSs [16,17]. The DC
microgrid studied in [18] not only connected PV panels, an external grid and loads, but
also considered electric vehicles and distributed ESSs. It mainly addressed the dynamic
load distribution of ESSs in the microgrid but did not consider the issue of electric vehicle
charging safety. A distributed secondary control scheme was proposed in [19] for voltage
restoration and accurate power distribution in an isolated DC microgrid with a single
group of ESSs. For centralized ESSs, droop control can achieve ideal power distribution
performance [20], which is less applicable for distributed energy storage. In [21], a control
method based on multiagent was proposed, which incorporated voltage regulation with
a state-of-charge (SOC) balancing control method. It regulated the droop parameters by
balancing the SOC, which can achieve good results. In DC microgrids with a PV system
and an ESS, the control method based on nonlinear theory overcame the drawbacks of
droop control and ensured accurate voltage regulation [22]. Choi, J.-S. et al. directly used
the distributed ESS to achieve the regulation of the DC bus voltage, which improved the
reliability of the DC microgrid [23]. In [24], a fuzzy logic algorithm was used to adjust the
droop factor, and it could achieve SOC balancing and power balancing for a single group
of PV energy storage. However, these studies did not consider the power distribution
problem of a multi-group system connected in parallel.
Supercapacitors (SCs) are characterized by a fast response time with high instantaneous output power, which can well compensate for the slow dynamic performance of
energy storage batteries (ESBs). Rocabert, J. et al. illustrated the advantages displayed
by ESS applications configured with SCs in microgrids [25–27]. In contrast, it is difficult
to achieve coordinated control between multiple systems using conventional droop control [28]. For this reason, Zhang, X. et al. proposed a hybrid algorithm based on model
predictive control (MPC) and iterative learning control to cope with sudden load changes in
a PV islanded DC microgrid with a single group of hybrid ESSs [29]. Wu, X. et al. proposed
an adaptive energy optimization method for hybrid ESSs in order to maintain the stability of the DC bus voltage, but only for single-group PV hybrid ESSs, without involving
multi-system coordinated control [30]. Mathew, P. et al. proposed a multi-stage hybrid
control scheme for DC microgrids with hybrid ESSs, combining a central controller with
distributed control, while briefly considering the interaction between EV charging stations
and the grid [31], but the communication topology of this control scheme is more complex.
In summary, there are many existing studies on the voltage stabilization and current
distribution strategies of the DC microgrid, but they seldom involve multi-group distributed hybrid ESSs; moreover, they seldom consider and integrate them into the control
strategies of the DC microgrid from the perspective of the safe operation of electric vehicle
charging equipment. To this end, this paper systematically considers the safety states of
PV power generation systems, multi-group hybrid ESSs and charging devices, establishes
the structure and model of the studied PV energy storage and charging DC microgrid, and
proposes a distributed secondary hierarchical control strategy. In the distributed first-level
control strategy based on the droop control characteristics, the safety state of EV charging
equipment is incorporated. The microgrid can still respond quickly to the power distribution as well as maintain the stability of the common DC bus voltage when the safety
state of charging equipment changes. At the same time, the load power allocation of local
charging equipment is also fully considered in order to be closer to reality. The distributed
secondary control strategy is based on the consistency algorithm to achieve coordinated
control among multiple groups of hybrid energy storage to balance the system power and
stabilize the DC voltage to ensure the safe operation of the charging equipment and the
grid. To sum up, the main contributions of this work are:
- In this paper, a distributed two-level control strategy applicable to the DC microgrid is investigated, which includes the system-level control strategy of the DC microgrid and the control method of the multi-group distributed hybrid ESSs based on the consistency algorithm.
- Starting from the safety of the charging equipment, the health measure of charging-device safety is incorporated into the droop control to maintain system stability and safe operation. In addition, to eliminate the influence of local charging loads, the local charging loads are equated to the public charging loads. Based on this, the control method is improved.
- Hybrid ESSs can regulate the local DC bus voltage deviation. To fully utilize the technical characteristics of different energy storage types, a DC microgrid model containing ESBs and SCs is developed. The effectiveness of the above control strategy under this model is verified by simulation.
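The "consistency algorithm" named in the first contribution is, in essence, a distributed average-consensus iteration among the ESS controllers. The paper's exact update law is not reproduced in this excerpt, so the sketch below is a generic discrete-time consensus over an assumed ring topology; `x` stands in for whatever local quantity (e.g. normalized ESS power) is being agreed upon, and the gain `eps` is an assumed value:

```python
# Generic discrete-time average-consensus iteration over a ring of four ESS
# controllers: x_i <- x_i + eps * sum_j (x_j - x_i) over the neighbors of i.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # ring graph
x = [0.9, 0.4, 0.7, 0.2]   # initial local estimates (e.g. per-unit ESS power)
eps = 0.2                   # consensus gain; needs eps < 1/max_degree for stability

for _ in range(200):
    # synchronous update: every term on the right uses the previous iterate
    x = [xi + eps * sum(x[j] - xi for j in neighbors[i])
         for i, xi in enumerate(x)]

print(x)  # all entries converge to the initial average, 0.55
```

Because the update is symmetric, the sum of the states is conserved, so every node converges to the network-wide average — the property that lets the secondary layer equalize power among ESS groups using only local communication.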
The remainder of this paper is organized as follows. The system modeling is described
in Section 2. Section 3 introduces the two-level control strategy for the DC microgrid. The
simulation results are presented in Section 4. Finally, conclusions and perspectives are
drawn in Section 5.
**2. Structure and Model of a DC Microgrid**
_2.1. Structure_
The structure of the DC microgrid with integrated PV energy storage and charging
studied in this paper is given in Figure 1. To smooth out the power fluctuation and improve
the reliability and stability of the DC microgrid system, the hybrid ESS is configured in the
DC microgrid system studied in this paper. The DC microgrid operates in the islanding
case and consists of a PV system, ESSs, DC–DC converters, and DC charging loads. The PV
array and ESS are connected by the DC–DC converter to form a PV energy storage unit. To
improve the robustness of the system, this paper considers multiple groups of PV energy
storage and charging units connected in parallel to a common DC bus, to realize the flow
of power between different units and achieve a reasonable power distribution.
**Figure 1. Structure diagram of the DC microgrid with integrated PV energy storage and charging.**

The following is an example of a set of PV energy storage and charging DC microgrids, as shown in Figure 2.

The PV power generation system is connected to the DC bus via a boost-type unidirectional DC–DC converter. To capture as much solar energy as possible, it works in maximum power point tracking (MPPT) mode so that it operates at the highest efficiency. When solar energy is abundant and PV output is high, the DC bus voltage increases, which will directly affect the charging safety of electric vehicles. So, the PV system needs to coordinate with the ESS to regulate the system power and maintain voltage stability.

**Figure 2. Single-group PV energy storage and charging system model with a boost converter and DC–DC converters.**
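MPPT mode is commonly realized with a perturb-and-observe (P&O) loop. The paper does not specify which tracker it uses, so the following is a generic P&O sketch on a made-up concave power–voltage curve (`pv_power`, with its maximum at 30 V, is purely illustrative):

```python
def pv_power(v):
    """Toy concave P-V curve with a maximum at v = 30 V (illustrative only)."""
    return max(0.0, 120.0 - 0.4 * (v - 30.0) ** 2)

def perturb_and_observe(v0, step=0.5, iters=200):
    """Classic P&O: keep perturbing in the direction that last increased power."""
    v, p, direction = v0, pv_power(v0), 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:            # power dropped -> reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v

v_mpp = perturb_and_observe(20.0)
print(f"Converged operating voltage: {v_mpp:.1f} V (true MPP at 30 V)")
```

The tracker climbs to the maximum and then oscillates within one perturbation step of it, which is the well-known steady-state ripple of fixed-step P&O.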
The energy storage unit includes ESBs and SCs, which are connected to the DC bus in parallel via a bidirectional DC–DC converter. Due to the uncertainty and fluctuation of the charging load, to ensure the normal and safe operation of the charging equipment, a suitable control strategy is used for the ESU to maintain the power balance and stabilize the DC bus voltage. The ESB in the hybrid ESS has a high energy density and the SC has a fast dynamic performance, so together they can respond to power fluctuations with different characteristics.

The DC charging load can be considered a constant power load and is divided into local charging load as well as public charging load, which are hooked up to the DC bus and public DC bus, respectively. With the increasing power of charging equipment, the safety of charging equipment must be given attention. In reference [32], the authors combined the fuzzy comprehensive evaluation method with a neural network algorithm to construct a comprehensive vehicle–pile–net safety state evaluation system and divided the charging equipment into five safety levels. In this paper, operational wear parameters and operational parameters are taken into account in the evaluation of charging devices. The operational wear parameters include the degree of wear and tear of the equipment, the total number of charging hours and the number of years of use. The operating parameters include equipment insulation monitoring parameters, power monitoring parameters and temperature monitoring parameters of key components of the charging equipment. Obviously, there is a relationship between each parameter and the operating power of the charging equipment. The charging equipment health degree factor m, which is divided into five grades, is introduced as the weight of the operating power of the charging equipment. The correspondence between the health degree and the safety level is obtained by using the integrated fuzzy evaluation method and the fuzzy statistical method. The specific algorithm is shown in Figure 3 and the results are shown in Table 1 below.

**Figure 3. Flow chart of fuzzy comprehensive evaluation method of charging equipment safety.**
**Table 1. Correspondence between charging equipment health degree and safety grade.**

| _m_ | Safety Level | Safety Evaluation Score |
|-----|--------------|-------------------------|
| 1   | Very good    | >90                     |
| 0.8 | Good         | 80–90                   |
| 0.5 | Moderate     | 70–80                   |
| 0.2 | Poor         | 60–70                   |
| 0   | Very poor    | <60                     |
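Table 1 amounts to a piecewise mapping from safety-evaluation score to the health degree factor m. It can be encoded directly as a lookup; how the boundary scores (90, 80, 70, 60) are binned is our assumption, since the table leaves the edges ambiguous:

```python
def health_degree(score):
    """Map a charging-equipment safety evaluation score (0-100) to the health
    degree factor m of Table 1. Edge binning at 90/80/70/60 is an assumption."""
    if score > 90:
        return 1.0      # very good
    if score >= 80:
        return 0.8      # good
    if score >= 70:
        return 0.5      # moderate
    if score >= 60:
        return 0.2      # poor
    return 0.0          # very poor

for s in (95, 85, 75, 65, 40):
    print(s, health_degree(s))
```

In the primary control layer, m then acts as the weight on the permissible operating power of the charging equipment inside the droop loop.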
_2.2. Boost Converter Model_
To simplify the analysis, only the two ideal states of the boost converter, with the switching tube off or on, are considered, as shown in Figure 4.
(a) (b)
**Figure 4. Equivalent circuit schematic of boost circuit in two working modes. (a) Equivalent circuit**
when the switch is off in the boost circuit; (b) Equivalent circuit when the switch is on in the
boost circuit.
###### (a) (b)
(a)
**Figure 4.**
when the switch is off in the boost circuit; (
According to the diagram, the differential equations of the circuit in the two states are
###### Figure 4. Equivalent circuit schematic of boost circuit in two working modes. (a) Equivale
listed as shown in Equations (1) and (2).
###### when the switch is off in the boost circuit; (b) Equivalent circuit when the switch is on in circuit. dipv
dt [=][ u]L[pv]
(1)
dduto [=][ −] _R[u]L[o]C_
(b)
**a) Equivalent circuit**
) Equivalent circuit when the switch is on in the
-----
pv = pv − _uo_
_Energies 2022, 15, 8600_ dt _L_ _L_ 6 of 20
(2)
duo _ipv_ _uo_
= −
dt _C_ _R CL_
dipv
The state of the switching tube is represented by the switching function dt [=][ u]L[pv] _L_ _d, which is_
_[−]_ _[u][o]_ (2)
the duty cycle of the converter. Therefore, using the state space representation and com-duo _uo_
bining Equations (1) and (2), the boost circuit is represented as follows in the state space dt [=][ i][pv]C _[−]_ _RLC_
model [26]: The state of the switching tube is represented by the switching function d, which
is the duty cycle of the converter. Therefore, using the state space representation and
_x_ = **A** _x_ + **Bu**
combining Equations (1) and (2), the boost circuit is represented as follows in the state
(3)
space model [26]: _y_ = **C** _x_ + **Du**
.
where _u_ = upv is the input voltage of the boost converter, →x = A→x + B→u _y_ = _uo_ is the output voltage
. (3)
_T_
of the converter, and _x_ = ipv _uo_ is the state vector. →y = C→x + D→u **A, B, C and D are the system pa-**
rameter matrices, expressed as follows: where _→u =_ �upv� is the input voltage of the boost converter, _→y = uo is the output voltage_
of the converter, and _→x =0_ �ipv− 1u−od�T is the state vector.1 **A, B, C and D are the system**
parameter matrices, expressed as follows:L
**A** = 1 − _d_ 1 , **B** = L, **C** = 0 1, **D** = 0 (4)
**A =�** _C0_ −−R C[1]L[−]L _[d]�, B =_ �0L1 �, C = �0 1�, D = 0 (4)
1−Cd _−_ _R1LC_ 0
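The matrices of Equation (4) can be assembled numerically for later analysis. A minimal sketch; the function name and the component values used in the usage example are illustrative, not taken from the paper:

```python
def boost_state_matrices(L, C, R_L, d):
    """Averaged state-space matrices of the boost converter, Equation (4).
    State x = [i_pv, u_o]; input u = u_pv; output y = u_o."""
    A = [[0.0,            -(1.0 - d) / L],
         [(1.0 - d) / C,  -1.0 / (R_L * C)]]
    B = [[1.0 / L],
         [0.0]]
    Cm = [[0.0, 1.0]]   # output picks u_o out of the state vector
    D = [[0.0]]
    return A, B, Cm, D
```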
_2.3. Energy Storage Converter Model_

The energy storage converter is a bidirectional DC–DC converter operating in two modes. When operating in boost mode, the ESU is discharged and the mathematical model of the converter is basically the same as that of the above boost converter, so this section will not go into details. When the energy storage converter operates in buck mode, the ESU is in the charging state, as shown in Figure 5.

**Figure 5.** Equivalent circuit schematic of buck circuit in two working modes. (a) Equivalent circuit when the switch is off in the buck circuit; (b) Equivalent circuit when the switch is on in the buck circuit.

According to Figure 5, the differential equations in the steady state are shown in Equations (5) and (6) below:

$$L\frac{di_L}{dt} = u_2 - u_C, \qquad C\frac{du_C}{dt} = i_L - \frac{u_C}{R} \tag{5}$$

$$L\frac{di_L}{dt} = -u_C, \qquad C\frac{du_C}{dt} = i_L - \frac{u_C}{R} \tag{6}$$

Introducing the duty cycle $q$ of the buck circuit, the state space equation combining Equations (5) and (6) is shown below:

$$L\frac{di_L}{dt} = qu_2 - u_C, \qquad C\frac{du_C}{dt} = i_L - \frac{u_C}{R} \tag{7}$$
**3. Control Strategy**
_3.1. MPC-Based MPPT_
The PV system has an irreplaceable role in the DC microgrid system studied in
this paper, and the unstable output characteristics of the PV array when the external
environment changes abruptly pose a serious challenge to the stability and robustness of
the system. In this paper, a PV maximum power point tracking algorithm based on MPC is
used to balance the control accuracy and tracking speed of the PV system under the change
of external factors.
According to Equation (3), the discrete state space equation of the boost circuit is obtained by applying the forward Euler method, as shown in Equation (8):

$$\begin{bmatrix} i_{pv}(k+1) \\ u_o(k+1) \end{bmatrix} = \begin{bmatrix} 1 & -\dfrac{(1-S)T_S}{L} \\ \dfrac{(1-S)T_S}{C} & 1-\dfrac{T_S}{R_L C} \end{bmatrix} \begin{bmatrix} i_{pv}(k) \\ u_o(k) \end{bmatrix} + \begin{bmatrix} \dfrac{T_S}{L} \\ 0 \end{bmatrix} u_{pv}(k) \tag{8}$$
where $T_S$ is the sampling period and $k$ is the sampling moment. $S$ is the switching state, which is defined as:

$$S = \begin{cases} 0, & \text{switch is off} \\ 1, & \text{switch is on} \end{cases} \tag{9}$$
Based on Equation (8), the MPPT algorithm is optimized by using the parameter values of the current moment. By solving the model, it is possible to predict the voltage and current values of the next moment, thus predicting the future actions of the control variables. In addition, the minimum value of the error between the reference and predicted values is used as a constraint and is expressed as the cost function $F_S$:

$$F_S = \alpha\left|u_{oS}(k+1) - u^*\right| + \beta\left|i_{pvS}(k+1) - i^*\right| \tag{10}$$

where $u_{oS}(k+1)$ is the predicted output voltage; $i_{pvS}(k+1)$ is the predicted PV current; $u^*$ and $i^*$ are the reference values.
According to Equation (10), the algorithm requires both voltage and current sensors. In this paper, the PV output current can be calculated by combining the output characteristics of the PV as shown in Equation (11), so that the current sensor can be eliminated [33,34].

$$i_{pv} = i_{ph} - i_o\left(e^{\frac{v_{pv} + i_{pv}R_s}{nN_s v_{th}}} - 1\right) - \frac{v_{pv} + i_{pv}R_s}{R_{sh}} \tag{11}$$

where $i_{ph}$ is the photogenerated current of the cell panel, $i_o$ is the reverse saturation current of the equivalent diode, $R_s$ is the series resistance of the module, $R_{sh}$ is the equivalent bypass shunt resistance, and the rest of the quantities in the formula are coefficients with fixed values or constants.
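Equation (11) is implicit in $i_{pv}$ (the current appears on both sides), so it must be solved iteratively. A minimal damped fixed-point sketch; all cell parameters below are illustrative placeholders, not values from the paper:

```python
import math

def pv_current(v_pv, i_ph=8.0, i_o=1e-9, R_s=0.2, R_sh=300.0,
               n=1.3, N_s=60, v_th=0.02585, tol=1e-9, max_iter=200):
    """Solve Equation (11) for i_pv by damped fixed-point iteration:
    i_pv = i_ph - i_o*(exp((v_pv + i_pv*R_s)/(n*N_s*v_th)) - 1)
                - (v_pv + i_pv*R_s)/R_sh."""
    i = i_ph  # initial guess: the photogenerated current
    for _ in range(max_iter):
        v = v_pv + i * R_s
        i_new = i_ph - i_o * (math.exp(v / (n * N_s * v_th)) - 1.0) - v / R_sh
        if abs(i_new - i) < tol:
            return i_new
        i = 0.5 * (i + i_new)  # damped update for robustness
    return i
```

Near short circuit the solution stays close to $i_{ph}$, and it decreases as the operating voltage rises, as expected from the single-diode characteristic.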
The MPPT algorithm using MPC is shown in Figure 6 below. In this case, the perturb
and observe (P&O) algorithm generates the reference voltage, which is based on the
principle of setting a perturbation on the original output voltage and determining the
maximum power point by comparing the output power before and after the perturbation.
The output value of the P&O is fed into the MPC controller as a reference quantity. Then the
voltage at the next moment can be predicted and compared to select the optimal switching
state. This method eliminates the need for a PI controller as well as a PWM generator.
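The predict-and-select step of Equations (8)–(10) can be sketched as follows; the component values, cost weights and function name are illustrative assumptions, not the paper's implementation:

```python
def mpc_mppt_step(i_pv, u_o, u_pv, u_ref, i_ref,
                  L=0.35e-3, C=3000e-6, R_L=10.0, Ts=1e-5,
                  alpha=1.0, beta=1.0):
    """One MPC-MPPT step: forward-Euler prediction (Eq. (8)) for each
    switching state S in {0, 1}, then selection of the state that
    minimizes the cost F_S of Eq. (10)."""
    best_S, best_cost = 0, float("inf")
    for S in (0, 1):
        # Equation (8): x(k+1) = (I + Ts*A(S)) x(k) + Ts*B u(k)
        i_next = i_pv + Ts * (u_pv / L - (1 - S) * u_o / L)
        u_next = u_o + Ts * ((1 - S) * i_pv / C - u_o / (R_L * C))
        # Equation (10): weighted absolute tracking errors
        cost = alpha * abs(u_next - u_ref) + beta * abs(i_next - i_ref)
        if cost < best_cost:
            best_S, best_cost = S, cost
    return best_S
```

With the current below its reference the controller closes the switch to build inductor current; with the current above it, the switch opens.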
**Figure 6. MPC-based MPPT algorithm for PV boost converter.**
_3.2. A Distributed Two-Level Control Strategy for the DC Microgrid_

3.2.1. Primary Control Strategy for the DC Microgrid
In DC microgrids, simple and reliable droop control is applied as a primary control strategy to ensure stable system operation [35]. Droop control is the control of a DC converter based on the relationship between voltage and power or voltage and current, using the DC bus voltage as the input signal. In the PV energy storage and charging DC microgrid, good voltage quality, reasonable distribution, and a fast response of the charging equipment power among system units are the prerequisites for the safe operation of the charging equipment. For the steady-state operation of the DC microgrid, a set of a PV power generation unit and a hybrid ESU is equated to an ideal voltage source in series with a resistor, as shown in Figure 7. The output characteristics using droop control can be expressed as:

$$u_n = u_n^* - R_n i_n, \quad n \in N \tag{12}$$

where $u_n$ is the output voltage of the $n$th group of the PV energy storage system; $u_n^*$ is the rated output voltage of the PV energy storage system, i.e., the open-circuit voltage of the PV energy storage system; $i_n$ is the output current of the PV energy storage system; $R_n$ is the virtual resistance of the system. In Figure 7, $R_{ln}$ is the line impedance of the $n$th line.

**Figure 7.** DC microgrid equivalent model with multiple DGs.

In general, the voltage reference value of each group of the PV energy storage system should be set to the same; according to Figure 7, the outlet current of each unit can then be obtained as:

$$i_n = \frac{u_n^* - u_{dc}}{R_n + R_{ln}} \tag{13}$$

The outlet current relationship between any two groups of PV energy storage systems is:

$$\frac{i_i}{i_j} = \frac{R_j + R_{lj}}{R_i + R_{li}}, \quad i, j \in N \text{ and } i \neq j \tag{14}$$

The outlet power of a single group of PV energy storage systems is expressed as:

$$P_n = \frac{u_n(u_n^* - u_{dc})}{R_n + R_{ln}} \tag{15}$$

According to Equation (14) above, the system output current is inversely proportional to the sum of the virtual impedance and the line impedance. As shown in Figure 8 below, the mismatch of the line parameters results in the designed conventional droop controller not being able to find a deviation-free solution to the inherent contradiction between accurate power distribution and voltage.

**Figure 8.** Illustrative diagram of the limitations of traditional droop control containing the differences between the two groups of DG systems.

To solve the power distribution problem, this paper introduces the adaptive virtual resistance $g_n$ to correct the droop control curve so that $\Delta I$ is 0; then Equation (12) is modified as:

$$u_n = u_n^* - R_n i_n + g_n i_n \tag{16}$$
At this time, the power emitted by the PV energy storage unit is:

$$P_n = \frac{u_n(u_n^* - u_{dc})}{R_n + R_{ln} - g_n} \tag{17}$$
In order to realize the power distribution of the DC microgrid system among the
individual PV energy storage units, it should therefore satisfy:
$$g_n = R_{ln} \tag{18}$$

Additionally, because

$$R_n = \frac{u_n - u_{dc}}{i_n} \tag{19}$$
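The effect of choosing $g_n = R_{ln}$ (Equation (18)) on the current sharing of Equation (13) can be checked numerically. A minimal sketch; the voltages and impedances below are illustrative:

```python
def outlet_current(u_star, u_dc, R_n, R_ln, g_n=0.0):
    """Outlet current per Eq. (13), with the adaptive virtual resistance
    of Eq. (16) folded in: i_n = (u_n* - u_dc) / (R_n - g_n + R_ln)."""
    return (u_star - u_dc) / (R_n - g_n + R_ln)

# Mismatched lines, no correction: currents differ between units.
i1_plain = outlet_current(500.0, 490.0, 2.0, 1.0)
i2_plain = outlet_current(500.0, 490.0, 2.0, 0.7)

# With g_n = R_ln (Eq. (18)) the line impedance cancels and
# sharing is set by the virtual resistances alone.
i1 = outlet_current(500.0, 490.0, 2.0, 1.0, g_n=1.0)
i2 = outlet_current(500.0, 490.0, 2.0, 0.7, g_n=0.7)
```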
Then the control equation with the common DC bus voltage as the reference quantity
can be obtained as:
$$u_n^{ref} = u_n^* - R_n i_n + u_n - u_{dc} \tag{20}$$

where $u_n^{ref}$ is the control reference value of the bus voltage of each PV energy storage unit.
The charging equipment is considered a constant power load in the DC microgrid, both as a local load and a public load. In order to simplify the model, the local charging equipment load is equated to the public load and the power is distributed uniformly according to the public load; the schematic and equivalent diagrams are shown in Figure 9.
**Figure 9.** Equivalent schematic of microgrid for equating local charging load to public load.
According to Figure 9, it is easy to obtain:

$$P_n = \frac{u_n(u_n - u_{dc})}{R_{ln}} + P_c, \qquad P_n = \frac{u_n(u_n - u_{dc})}{R_{ln}^{eq}} \tag{21}$$

Solving the above equation yields:

$$R_{ln}^{eq} = \frac{P_n - P_c}{P_n} R_{ln} \tag{22}$$
Considering the safety of the charging equipment, the charging equipment conducts the safety assessment based on the operating status of the equipment, which is used to adjust the power and share the information with the grid. Based on the safety operation information of the charging equipment, the PV energy storage system coordinates the power output of the control system:

$$P_n^* = m \cdot P_e \tag{23}$$

where $P_n^*$ is the rated reference output of the $n$th group of PV energy storage units; $P_e$ is the rated power of the charging equipment.
To facilitate the characteristic relationship between voltage and power, Equation (19) is transformed into:

$$u_n^{ref} = u_n^* + \chi_n(mP_e - P_n) + \frac{P_n - P_c}{P_n}(u_n - u_{dc}) \tag{24}$$

where $\chi_n$ is the droop factor.

For the improved droop control strategy shown in the above Equation (24), only the DC bus voltage needs to be shared, which can realize the power distribution of the charging equipment in a safe operating condition and improve the operating efficiency of the DC microgrid system. However, there is still a common DC bus voltage offset, which is:

$$\Delta u_{dc} = u_{dc}^{ref} - u_{dc} \tag{25}$$
In this paper, the voltage compensation strategy is adopted to compensate the drop of the bus voltage within a limited time $T$, that is:

$$\lim_{t \to T} \Delta u_{dc}(t) = 0 \tag{26}$$

In the equivalent circuit shown in Figure 7, it is obtained by Kirchhoff's laws:

$$\sum_n \frac{u_n - u_{dc}}{R_{ln}} = \frac{u_{dc}}{R_{eq}} \tag{27}$$

Simplified to obtain the common DC bus voltage as:

$$u_{dc} = \frac{\sum_n u_n \prod_{i \neq n} R_{li} \big/ \prod_i R_{li}}{\dfrac{1}{R_{eq}} + \sum_n \prod_{i \neq n} R_{li} \big/ \prod_i R_{li}} \tag{28}$$

It can be seen from Equation (24) that the DC voltage of each PV energy storage unit can track its control reference value. Combining with Equation (28), it is easy to find that the DC bus voltage is related to the control reference value of the voltage, line resistance, droop factor and equivalent load resistance. For a fixed DC microgrid system, the line resistance is usually constant. Therefore, the method of rectifying the voltage reference value can be chosen.

The derivation from Equation (28) is:

$$\frac{\partial u_{dc}}{\partial u_n^{ref}} = \frac{1/(R_n + R_{ln})}{\dfrac{1}{R_{eq}} + \sum_n \prod_{i \neq n} (R_i + R_{li}) \big/ \prod_i (R_i + R_{li})} \tag{29}$$

Since the above Equation (29) is a constant, the amount of compensation for the bus voltage can be expressed as:

$$\partial u = -K \int \Delta u_{dc}\, dt \tag{30}$$

where $K$ is the compensation factor. Accordingly, Equation (24) can be modified as:

$$u_n^{ref} = u_n^* + \chi_n(mP_e - P_n) + \frac{P_n - P_c}{P_n}(u_n - u_{dc}) + \partial u \tag{31}$$
In this section, by considering the charging safety and health factors of charging
devices, an improved droop control strategy for the DC microgrid applicable to charging
devices is proposed, as shown in Figure 10. The control strategy consists of two parts: the
droop control considering charging safety and the voltage compensation control, and it
takes into account the impact of the local charging load on the microgrid. To implement the
method, it generates reference voltage signals and provides them to the secondary control.
This control strategy is locally distributed control, which requires the health information
of the charging equipment and the DC bus voltage information. The microgrid system
reasonably distributes the power according to the health information of the charging
equipment and ensures the voltage quality of the DC bus.
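The reference generation of Equations (30) and (31) can be sketched as a discrete-time update; the time step, gains and operating values below are illustrative assumptions, not the paper's tuning:

```python
def droop_reference(u_star, chi, m, P_e, P_n, P_c, u_n, u_dc,
                    u_dc_ref, integ, K=5.0, dt=1e-4):
    """One update of the improved droop reference, Eqs. (30)-(31):
    du = -K * integral of delta_u_dc (sign as printed in Eq. (30)),
    u_ref = u* + chi*(m*P_e - P_n) + (P_n - P_c)/P_n * (u_n - u_dc) + du.
    Returns the new reference and the updated integrator state."""
    integ += (u_dc_ref - u_dc) * dt            # accumulate delta_u_dc, Eq. (25)
    du = -K * integ                            # Eq. (30)
    u_ref = (u_star + chi * (m * P_e - P_n)
             + (P_n - P_c) / P_n * (u_n - u_dc) + du)
    return u_ref, integ
```

The integrator state `integ` is carried between calls, so the compensation term accumulates until the bus deviation is driven to zero.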
3.2.2. Secondary Control Strategy of the DC Microgrid
To make a reasonable distribution of power among ESUs, this paper investigates
the hybrid energy storage coordination control strategy based on consistency theory as a
secondary control strategy for islanded DC microgrids.
**Figure 10. Primary control strategy for the DC microgrid based on improved droop control.**
Considering each ESU as a node, the access point voltage of each node is not the same, and the node voltage needs to be exchanged with its neighboring nodes for information [36]. The communication architecture is shown in Figure 11 and expressed by the adjacency matrix as:

$$A_G = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \tag{32}$$
**Figure 11. Bidirectional ring network communication structure for the DC microgrid with ESSs.**
To obtain the average value of the bus voltage, the output value of the next moment is updated using the local measurement node voltage and the shared voltage information for calculation as:

$$u_i^{avg} = u_i - \sum_{j=1}^{N} a_{ij}\left(u_i^{avg} - u_j^{avg}\right) \tag{33}$$

where $u_i^{avg}$ and $u_j^{avg}$ are the global average bus voltages at the access points of group $i$ and $j$ hybrid ESUs.

The SC has high power density and a fast response, and its controller aims to suppress the fluctuation of the DC bus voltage. On this basis, it is required that each SC is uniformly discharged to keep the terminal voltage of the SC consistent. The control strategy of the SC based on the proportional–integral (PI) controller is shown in Figure 12 below. The difference between the average bus voltage and the control voltage reference is used as the input, and the bus voltage can be corrected by the control of the current inner loop. On the other hand, the difference in the SC terminal voltage is the input to the controller to correct the SC terminal voltage.
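Equation (33) describes a consensus update over the communication graph of Equation (32); a standard discrete average-consensus iteration can be sketched as follows. The step size and node voltages are illustrative; convergence to the global average requires the step size to be below the inverse of the maximum node degree:

```python
def consensus_average(u, A, eps=0.3, iters=500):
    """Discrete consensus iteration in the spirit of Eq. (33):
    u_i <- u_i - eps * sum_j a_ij * (u_i - u_j).
    For a connected undirected graph this converges to the average
    of the initial node voltages."""
    x = list(u)
    n = len(x)
    for _ in range(iters):
        x = [x[i] - eps * sum(A[i][j] * (x[i] - x[j]) for j in range(n))
             for i in range(n)]
    return x

# Communication graph of Eq. (32): three ESU nodes, neighbors exchanged
A_G = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
```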
**Figure 12.** Control scheme of the SC controller for bidirectional DC–DC converter 1.

Based on the consistency algorithm, the ESB controller is used to balance the charge state of the ESB. However, when the bus voltage fluctuates, the SC responds quickly to stabilize the bus voltage, and a voltage control loop needs to be incorporated to maintain the end voltage level of the SC. The resulting control strategy to improve the voltage outer loop and current inner loop is shown in Figure 13. The SC voltage control is simultaneously added to the controller outer loop with the SOC consistency control. Its output value corrects the reference current of the ESB, and finally the control target can be achieved.
**Figure 13.** Control scheme of the ESB controller for bidirectional DC–DC converter 2.
The overall control strategy of the islanded DC microgrid studied in this paper is shown in Figure 14 below. It can be seen that the two-level control strategy consists of the improved droop control strategy and the consistent energy storage control strategy, and the influence of charging equipment health is considered in the primary level of control. In the primary control, the droop control is used as a basis for improvement, which can provide voltage reference values for the secondary control. In the secondary control, the consistency algorithm is added to the control method of the voltage outer loop to provide reference signals for the energy storage converters. The control strategy is local decentralized control, which can improve the DC voltage quality, adapt to the changes in charging equipment health, and ensure the safe and reliable operation of the charging equipment from the network side.
**Figure 14.** Overall control strategy for the DC microgrid considering the safety of the charging equipment.
_3.3. Small-Signal Models_

The differential equation for the shunt voltage regulator capacitor on the common DC bus is:

$$C_{dc}\frac{du_{dc}}{dt} = \sum_{n=1}^{N} i_n - i_{dc} \tag{34}$$

In general, the measured power needs to be filtered by a first-order low-pass filter to filter out the high-frequency signal of the instantaneous power. Combining Equation (31), it can be obtained:

$$\Delta u_n(s) = \frac{b_1 s + b_0}{a_2 s^2 + a_1 s + a_0}\Delta P_n(s) \tag{35}$$

where the coefficients $a_2$, $a_1$, $a_0$, $b_1$ and $b_0$ of Equation (36) are functions of the filter parameter $\omega_c$, the droop factor $\chi_n$, the compensation factor $K$, the line impedance $R_{ln}$ and the operating-point voltages $u_n$ and $u_{dc}^{ref}$.

Combining the DC–DC converter, primary controller and secondary controller to build a global small-signal model:

$$\Delta\dot{X} = A_{sys}\Delta X \tag{37}$$

where

$$\Delta X = \left[\Delta u_{dc1}\ \Delta u_1\ \Delta u_{SC1}\ \Delta i_{SC1}\ \Delta x_{SOC1}\ \Delta i_{SB1},\ \cdots,\ \Delta u_{dcn}\ \Delta u_n\ \Delta u_{SCn}\ \Delta i_{SCn}\ \Delta x_{SOCn}\ \Delta i_{SBn}\right] \tag{38}$$

$A_{sys}$ is the system characteristic matrix. For a microgrid with a known topology and state, the small-signal stability of the system can be judged based on the eigenvalues of matrix $A_{sys}$, which provides a basis for the design of the system parameters.

**4. Simulation Results**

To verify the effectiveness of the proposed control strategy, a simulation model is built in MATLAB/Simulink based on the topological model shown in Figures 1 and 2. The system parameters of the simulation model are shown in Table 2 below. In the control strategy above, the local charging load is equated to the public charging load in consideration. In
order to fully verify the effectiveness of its control strategy, the systems with only public
load and with both local load and public load are set in the two calculations, respectively.
**Table 2.** Simulation model parameters table.

| Parameters | Numerical Values | Parameters | Numerical Values |
|---|---|---|---|
| Common bus DC reference voltage $u_{dc}^*$/V | 500 | $\chi_n$ | 0.005 |
| Line impedance 1 $R_{l1}$/Ω | 1 | $K$ | 5 |
| Line impedance 2 $R_{l2}$/Ω | 0.8 | $C_{pv1}$/µF | 2200 |
| Line impedance 3 $R_{l3}$/Ω | 0.7 | $C_{pv2}$/µF | 3000 |
| Capacity of ESB 1/(kWh) | 30 | $L_{pv}$/mH | 0.35 |
| Capacity of ESB 2/(kWh) | 30 | $C_b$/µF | 3000 |
| Capacity of ESB 3/(kWh) | 60 | $C_c$/µF | 3000 |
| Voltage of SC 1/(V) | 320 | $L_b$/mH | 0.32 |
| Voltage of SC 2/(V) | 320 | $L_c$/mH | 0.3 |
| Voltage of SC 3/(V) | 500 | The cycle of PWM/s | 10 × 10⁻³ |
_4.1. Simulation Example 1_
In this simulation example, the output power of PV1, PV2 and PV3 are set to 10 kW,
10 kW and 20 kW. A charging device with a rated power of 20 kW at the common DC bus is
charging and the health of the charging device drops to 0.5 at 5 s. No local charging devices
are operating in each distributed generation (DG). Figure 14 shows the simulation results
of the DC microgrid.
In Figure 15, the DGs should reasonably allocate the load power according to the
capacity ratio, so the output power of DG1 and DG2 should be the same and should be 0.5
of DG3. From Figure 15a, it can be seen that before 5 s, DG1, DG2 and DG3 provide 5 kW,
5 kW, and 10 kW load power, respectively. After 5 s, in order to ensure the charging safety
of the charging equipment, the charging equipment is operated at reduced power, so DG1,
DG2 and DG3 provide 2.5 kW, 2.5 kW and 5 kW load power, respectively. However, as
can be seen in Figure 15b, the relative deviation of power distribution reaches more than
30% under the conventional droop control without considering the safety of the charging
equipment. Figure 15c shows that the ESS dissipates the excess PV power, the SC can
quickly respond to the load change, and the storage battery responds reasonably to the
power distribution at the steady state after a certain time. In Figure 15d,e, it can be seen that
after the consistency algorithm, the SOC gap between different ESBs becomes smaller. Due
to the outer loop control of the SC voltage in the ESB controller, the SOC of the SC remains
consistent during the load change. From Figure 15f, it can be seen that the common DC bus
voltage is maintained at the rated value of 500 V, which ensures the power supply quality
of the charging equipment. On the contrary, the common DC bus voltage with traditional
droop control cannot be maintained at the rated value, which reduces the power supply
quality and affects the charging safety performance of the charging equipment.
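The capacity-proportional sharing with health-based derating described above can be sketched as follows. The linear health-to-power mapping is an assumption that happens to match the numbers of Example 1 (health 0.5 halves the 20 kW charging power); the paper's actual allocation runs inside the droop controllers.

```python
def allocate_charging_load(total_load_kw, capacities_kwh, health=1.0):
    """Derate the charging load by the equipment health factor (assumed
    linear here), then split it among the DGs in proportion to their
    energy-storage capacities."""
    derated = total_load_kw * health
    total_capacity = sum(capacities_kwh)
    return [derated * c / total_capacity for c in capacities_kwh]

# Example 1: 20 kW public load, DG capacities 30/30/60 kWh (Table 2)
print(allocate_charging_load(20, [30, 30, 60]))              # [5.0, 5.0, 10.0]
print(allocate_charging_load(20, [30, 30, 60], health=0.5))  # [2.5, 2.5, 5.0]
```

With full health the 20 kW load splits 5/5/10 kW as in Figure 15a; after the health drop to 0.5 at 5 s the derated 10 kW splits 2.5/2.5/5 kW.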
To better validate the method in this paper, the advanced method proposed in [8]
is used for comparison, and the simulation results are shown in Figure 16. The output
powers shown in Figure 16a show that the powers can also be well distributed with this
advanced method. By comparing Figures 15a and 16a, it can be seen that the time to reach
the steady state is longer due to the more complex nature of this advanced method. From
Figure 16b, it is observed that the voltage of the common DC bus can also be maintained at
500 V. However, a comparison of Figures 15f and 16b shows that when the charging load
changes suddenly, the bus voltage takes a longer period of time to return to its rated value.
The voltages of the local DC buses are higher than the voltage of the common DC bus due
to the presence of line impedance. The voltages of the local DC buses also remain stable
before and after the sudden load change. Therefore, the method in this paper also has good
results and even better performance.
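The contrast drawn above between conventional droop (the bus sags under load) and droop with secondary voltage compensation can be illustrated with a one-line model; the droop gain and load current below are illustrative values, not parameters from Table 2.

```python
def droop_bus_voltage(u_rated, r_droop, i_out, delta_u=0.0):
    """Droop reference u = u* - R_d * i + delta_u. With delta_u = 0
    (conventional droop) the bus deviates from u* under load; a secondary
    compensation term delta_u = R_d * i restores the rated voltage."""
    return u_rated - r_droop * i_out + delta_u

r_d, i_load = 0.5, 40.0  # illustrative droop gain (ohm) and load current (A)
print(droop_bus_voltage(500, r_d, i_load))                        # 480.0: sag under load
print(droop_bus_voltage(500, r_d, i_load, delta_u=r_d * i_load))  # 500.0: restored to rated
```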
**Figure 15. Simulation results in Example 1: (a) output powers of DGs with the proposed method; (b) output powers of DGs with traditional droop control without considering charging safety; (c) input powers of ESSs; (d) SOC of ESBs; (e) SOC of SCs; (f) common DC bus voltages.**
**Figure 16. Simulation results of Example 1 under the control strategy proposed in [8]: (a) output powers of DGs; (b) DC bus voltages.**
_4.2. Simulation Example 2_

In this simulation example, both the public charging load attached to the common DC bus and the local charging load contained in each DG of the DC microgrid system are considered. In the simulation parameter setting, the common charging device is running at a rated power of 10 kW. In 0–5 s, the output power of PV1, PV2, and PV3 is 10 kW, 10 kW, and 20 kW, and local charging equipment 1, local charging equipment 2 and local charging equipment 3 are running at rated powers of 5 kW, 5 kW and 20 kW, respectively. At 5 s, the output power of PV3 suddenly drops to 10 kW, and the health of local charging equipment 1, local charging equipment 2 and local charging equipment 3 drops to 0.2, 0.8 and 0.5, respectively. Figure 17 shows the simulation results of simulation example 2.
**Figure 17. Simulation results in Example 2: (a) output power of PV3; (b) output powers of DGs with the proposed method; (c) local DC bus voltages with the proposed method; (d) local DC bus voltages of droop control without considering charging safety; (e) SOC of ESBs; (f) common DC bus voltages.**
Figure 17a shows that the output power of PV3 suddenly drops at 5 s and quickly reaches the steady state, which shows good dynamic performance. Figure 17b shows that the local charging load and the public charging load jointly participate in the load distribution, and each DG can still distribute the power proportionally to its capacity when the health of the charging equipment decreases. However, in Figure 17c, the local DC bus voltage is unstable and the safe operation of the charging equipment is not guaranteed due to the line impedance and the presence of the local charging equipment. In Figure 17d,e, the outlet voltage of each DG system and the SOC of the ESBs are balanced due to the consistency control. Meanwhile, it is known from Figure 17f that the common DC bus voltage is stabilized at 500 V, which guarantees the power quality of the charging equipment. In contrast, the use of traditional droop control without considering charging safety results in the common DC bus voltage deviating from the rated value.

The simulation results under the control strategy proposed in [8] are given in Figure 18. The relative deviations of the power distribution from DG1 to DG3 after 5 s are about 15%, 45%, and 17%. Compared with Figure 17b, the results in Figure 18a show a large deviation in power distribution because the effect of the local load is not considered. In Figure 18b, the common DC bus voltage can still maintain the rated value because of the inclusion of voltage compensation, which is better than the common DC bus voltage obtained by the traditional method in Figure 17f. Comparing Figure 17c,d, it is obvious that the local DC bus voltages vary more. This is because the local DC bus voltages are affected in order to meet the load demand. Therefore, the method in this paper is more applicable to microgrid systems where local charging loads exist.

**Figure 18. Simulation results of Example 2 under the control strategy proposed in [8]: (a) output powers of DGs; (b) DC bus voltages.**
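The consistency (consensus) control that narrows the SOC gap between ESBs in both examples can be sketched as a plain averaging iteration over a communication graph. The step size, topology, and initial SOCs below are illustrative; the paper's algorithm additionally couples the update to the droop coefficients and storage capacities.

```python
def soc_consensus_step(soc, neighbors, eps=0.2):
    """One first-order consensus update: each ESB nudges its SOC toward
    its communication neighbors', so all SOCs converge to the average."""
    return [s + eps * sum(soc[j] - s for j in neighbors[i])
            for i, s in enumerate(soc)]

soc = [0.60, 0.50, 0.40]                 # illustrative initial SOCs
neighbors = {0: [1], 1: [0, 2], 2: [1]}  # line topology: ESB 2 talks to both others
for _ in range(50):
    soc = soc_consensus_step(soc, neighbors)
print([round(s, 3) for s in soc])  # all three converge to the average, 0.5
```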
**5. Conclusions**

In this paper, a two-level control strategy is proposed for the safe operation of the DC microgrid incorporating a PV system, energy storage and charging. Based on the droop control, a power allocation algorithm for charging loads considering the health of the charging equipment is incorporated, along with a common DC bus voltage compensation strategy. This ensures the load power balance and the voltage level of the common DC bus. Furthermore, the unbalanced power of the system is compensated by the energy storage coordination method based on the consistency control algorithm to eliminate the deviation of the local DC bus voltage, and the SOC of the ESSs converges. The control strategy is based on the information interaction between the microgrid and the charging equipment, which ensures the operational safety of the charging equipment from the grid side. Finally, the simulation results verify the effectiveness of this control strategy.

However, the research in this paper also has the following limitations. Firstly, the communication between the systems is not considered in detail, and communication delays and communication failures may have some impact on microgrid operation. Secondly, the DC microgrid control strategy studied in this paper is based on microgrid operation in islanding mode, and the control method during grid-connected operation is not studied. In-depth research will be conducted on these two aspects in the future.
**Author Contributions: Conceptualization, X.L.; Formal analysis, X.L.; Methodology, X.L., Z.J. and**
F.Y.; Visualization, X.L. and Z.J.; Writing—original draft, X.L. and Z.J.; Funding acquisition, Z.J.; Data
curation, F.Y.; Resources, F.Y. and Z.D.; Investigation, Z.D. and C.Z.; Validation, C.Z.; Supervision,
C.Z.; Writing—review & editing, L.C. All authors have read and agreed to the published version of
the manuscript.
**Funding: This work was financially supported by the Science and Technology Project of State Grid**
Corporation of China (Grant No. 52094021N00S).
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: Not applicable.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Cucuzzella, M.; Kosaraju, K.C.; Scherpen, J.M.A. Voltage Control of DC Microgrids: Robustness for Unknown ZIP-Loads. IEEE
_[Control Syst. Lett. 2022, 7, 139–144. [CrossRef]](http://doi.org/10.1109/LCSYS.2022.3187925)_
2. Chen, X.; Jiang, T.; Bi, M.; Wang, Y.; Gao, H. A review of condition assessment of charging infrastructure for electrical vehicles.
In Proceedings of the IET International Conference on Intelligent and Connected Vehicles (ICV 2016), Chongqing, China,
[22–23 September 2016; pp. 1–4. [CrossRef]](http://doi.org/10.1049/cp.2016.1178)
3. Zisen, L.; Shangzhi, P.; Minglong, W.; Wenqiang, L.; Jinwu, G.; Liangzhong, Y.; Praveen, J. A Three-Port LCC Resonant Converter
for the 380-V/48-V Hybrid DC System. IEEE Trans. Power Electron. 2022, 37, 10864–10876.
4. Yang, R.; Jin, J.; Chen, X.; Zhang, T.; Ming, S.; Zhang, S.; Xing, Y. A Battery-Energy-Storage-Based DC Dynamic Voltage Restorer
[for DC Renewable Power Protection. IEEE Trans. Sustain. Energy 2022, 13, 1707–1721. [CrossRef]](http://doi.org/10.1109/TSTE.2022.3164795)
5. Prabhakaran, P.; Goyal, Y.; Agarwal, V. A Novel Communication-Based Average Voltage Regulation Scheme for a Droop
[Controlled DC Microgrid. IEEE Trans. Smart Grid 2019, 10, 1250–1258. [CrossRef]](http://doi.org/10.1109/TSG.2017.2761864)
6. Nahata, P.; La Bella, A.; Scattolini, R.; Ferrari-Trecate, G. Hierarchical control in islanded DC microgrids with flexible structures.
_[IEEE Trans. Control Syst. Technol. 2020, 29, 2379–2392. [CrossRef]](http://doi.org/10.1109/TCST.2020.3038495)_
7. Zhang, Y.; Qu, X.; Tang, M.; Yao, R.; Chen, W. Design of Nonlinear Droop Control in DC Microgrid for Desired Voltage Regulation
[and Current Sharing Accuracy. IEEE J. Emerg. Sel. Top. Circuits Syst. 2021, 11, 168–175. [CrossRef]](http://doi.org/10.1109/JETCAS.2021.3049810)
8. Aluko, A.; Buraimoh, E.; Oni, O.E.; Davidson, I.E. Advanced Distributed Cooperative Secondary Control of Islanded DC
[Microgrids. Energies 2022, 15, 3988. [CrossRef]](http://doi.org/10.3390/en15113988)
9. Jumani, T.A.; Mustafa, M.W.; Md Rasid, M.; Mirjat, N.H.; Leghari, Z.H.; Saeed, M.S. Optimal Voltage and Frequency Control of
[an Islanded Microgrid Using Grasshopper Optimization Algorithm. Energies 2018, 11, 3191. [CrossRef]](http://doi.org/10.3390/en11113191)
10. Aluko, A.; Swanson, A.; Jarvis, L.; Dorrell, D. Modeling and Stability Analysis of Distributed Secondary Control Scheme for
[Stand-Alone DC Microgrid Applications. Energies 2022, 15, 5411. [CrossRef]](http://doi.org/10.3390/en15155411)
11. Liu, X.K.; Jiang, H.; Wang, Y.W.; He, H. A distributed iterative learning framework for DC microgrids: Current sharing and
[voltage regulation. IEEE Trans. Emerg. Top. Comput. Intell. 2020, 4, 119–129. [CrossRef]](http://doi.org/10.1109/TETCI.2018.2863747)
12. Shen, X.; Shahidehpour, M.; Han, Y.; Zhu, S.; Zheng, J. Expansion planning of active distribution networks with centralized and
[distributed energy storage systems. IEEE Trans. Sustain. Energy 2017, 8, 126–134. [CrossRef]](http://doi.org/10.1109/TSTE.2016.2586027)
13. Byrne, R.H.; Nguyen, T.A.; Copp, D.A.; Chalamala, B.R.; Gyuk, I. Energy Management and Optimization Methods for Grid
[Energy Storage Systems. IEEE Access 2018, 6, 13231–13260. [CrossRef]](http://doi.org/10.1109/ACCESS.2017.2741578)
14. Zhi, N.; Ding, K.; Du, L.; Zhang, H. An SOC-Based Virtual DC Machine Control for Distributed Storage Systems in DC Microgrids.
_[IEEE Trans. Energy Convers. 2020, 35, 1411–1420. [CrossRef]](http://doi.org/10.1109/TEC.2020.2975033)_
15. Liang, Y.; Zhang, H.; Du, M.; Sun, K. Parallel coordination control of multi-port DC-DC converter for stand-alone photovoltaic-energy storage systems. CPSS Trans. Power Electron. Appl. 2020, 5, 235–241. [CrossRef](http://doi.org/10.24295/CPSSTPEA.2020.00020)
16. Li, X.; Guo, L.; Zhang, S.; Wang, C.; Li, Y.W.; Chen, A.; Feng, Y. Observer-based DC voltage droop and current feed-forward
[control of a DC microgrid. IEEE Trans. Smart Grid 2018, 9, 5207–5216. [CrossRef]](http://doi.org/10.1109/TSG.2017.2684178)
17. Majumder, R.; Chaudhuri, B.; Ghosh, A.; Majumder, R.; Ledwich, G.; Zare, F. Improvement of stability and load sharing in an
[autonomous microgrid using supplementary droop control loop. IEEE Trans. Power Syst. 2010, 25, 796–808. [CrossRef]](http://doi.org/10.1109/TPWRS.2009.2032049)
18. Xiaonan, L.; Kai, S.; Lipei, H. Dynamic Load Power Sharing Method with Elimination of Bus Voltage Deviation for Energy Storage
Systems in DC Micro-grids. Proc. CSEE 2013, 3, 37–46.
19. Guo, F.; Xu, Q.; Wen, C.; Wang, L.; Wang, P. Distributed Secondary Control for Power Allocation and Voltage Restoration in
[Islanded DC Microgrids. IEEE Trans. Sustain. Energy 2018, 9, 1857–1869. [CrossRef]](http://doi.org/10.1109/TSTE.2018.2816944)
20. Hoang, K.D.; Lee, H. Accurate power sharing with balanced battery state of charge in distributed DC microgrid. IEEE Trans. Ind.
_[Electron. 2019, 66, 1883–1893. [CrossRef]](http://doi.org/10.1109/TIE.2018.2838107)_
21. Wu, T.; Xia, Y.; Wang, L.; Wei, W. Multiagent Based Distributed Control with Time-Oriented SoC Balancing Method for DC
[Microgrid. Energies 2020, 13, 2793. [CrossRef]](http://doi.org/10.3390/en13112793)
22. Sun, J.; Lin, W.; Hong, M.; Loparo, K.A. Voltage Regulation of DC-microgrid with PV and Battery. IEEE Trans. Smart Grid 2020, 11,
[4662–4675. [CrossRef]](http://doi.org/10.1109/TSG.2020.3005415)
23. Choi, J.-S.; Oh, S.-Y.; Cha, D.-S.; Ko, B.-S.; Kim, M. Autonomous DC-Bus Voltage Regulation in DC Microgrid Using Distributed
[Energy Storage Systems. Energies 2022, 15, 4559. [CrossRef]](http://doi.org/10.3390/en15134559)
24. Tian, G.; Zheng, Y.; Liu, G.; Zhang, J. SOC Balancing and Coordinated Control Based on Adaptive Droop Coefficient Algorithm
[for Energy Storage Units in DC Microgrid. Energies 2022, 15, 2943. [CrossRef]](http://doi.org/10.3390/en15082943)
25. Rocabert, J.; Capó-Misut, R.; Muñoz-Aguilar, R.S.; Candela, J.I.; Rodriguez, P. Control of Energy Storage System Integrating
Electrochemical Batteries and Supercapacitors for Grid-Connected Applications. IEEE Trans. Ind. Appl. 2019, 55, 1853–1862.
[[CrossRef]](http://doi.org/10.1109/TIA.2018.2873534)
26. Liu, Y.; Yang, Z.; Wu, X.; Sha, D.; Lin, F.; Fang, X. An Adaptive Energy Management Strategy of Stationary Hybrid Energy Storage
[System. IEEE Trans. Transp. Electrif. 2022, 8, 2261–2272. [CrossRef]](http://doi.org/10.1109/TTE.2022.3150149)
27. Bharatee, A.; Ray, P.K.; Ghosh, A. A Power Management Scheme for Grid-connected PV Integrated with Hybrid Energy Storage
[System. J. Mod. Power Syst. Clean Energy 2022, 10, 954–963. [CrossRef]](http://doi.org/10.35833/MPCE.2021.000023)
28. Dragiˇcevi´c, T.; Lu, X.; Vasquez, J.C.; Guerrero, J.M. DC Microgrids—Part I: A Review of Control Strategies and Stabilization
[Techniques. IEEE Trans. Power Electron. 2016, 31, 4876–4891. [CrossRef]](http://doi.org/10.1109/TPEL.2015.2478859)
29. Zhang, X.; Wang, B.; Gamage, D.; Ukil, A. Model Predictive and Iterative Learning Control Based Hybrid Control Method for
[Hybrid Energy Storage System. IEEE Trans. Sustain. Energy 2021, 12, 2146–2158. [CrossRef]](http://doi.org/10.1109/TSTE.2021.3083902)
30. Wu, X.; Li, S.; Gan, S.; Hou, C. An Adaptive Energy Optimization Method of Hybrid Battery-Supercapacitor Storage System for
[Uncertain Demand. Energies 2022, 15, 1765. [CrossRef]](http://doi.org/10.3390/en15051765)
31. Mathew, P.; Madichetty, S.; Mishra, S. A Multilevel Distributed Hybrid Control Scheme for Islanded DC Microgrids. IEEE Syst. J.
**[2019, 13, 4200–4207. [CrossRef]](http://doi.org/10.1109/JSYST.2019.2896927)**
32. Gao, H.; Zang, B.; Sun, L.; Chen, L. Evaluation of Electric Vehicle Integrated Charging Safety State Based on Fuzzy Neural
[Network. Appl. Sci. 2022, 12, 461. [CrossRef]](http://doi.org/10.3390/app12010461)
33. Ahmed, M.; Abdelrahem, M.; Kennel, R. Highly efficient and robust grid-connected photovoltaic system based model predictive control with Kalman filtering capability. Sustainability 2020, 12, 4542. [CrossRef](http://doi.org/10.3390/su12114542)
34. Shongwe, S.; Hanif, M. Comparative analysis of different single-diode PV modeling methods. IEEE J. Photovolt. 2015, 5, 938–946.
[[CrossRef]](http://doi.org/10.1109/JPHOTOV.2015.2395137)
35. Kafeel, A.; Mehdi, S.; Saad, M. A review on primary and secondary controls of inverter-interfaced microgrid. J. Mod. Power Syst.
_[Clean Energy 2021, 9, 969–985. [CrossRef]](http://doi.org/10.35833/MPCE.2020.000068)_
36. Bin, Z.; Jianting, Z.; Chiyung, C. Multi-microgrid energy management systems: Architecture, communication, and scheduling
[strategies. J. Mod. Power Syst. Clean Energy 2021, 9, 463–476. [CrossRef]](http://doi.org/10.35833/MPCE.2019.000237)
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/en15228600?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/en15228600, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/1996-1073/15/22/8600/pdf?version=1668671651"
}
| 2,022
|
[] | true
| 2022-11-17T00:00:00
|
[
{
"paperId": "103e9c782288519581bbf524fd4193b301cad644",
"title": "A Three-Port LCC Resonant Converter for the 380-V/48-V Hybrid DC System"
},
{
"paperId": "aa728fd52d06f1a08026f256af737407b903130f",
"title": "Modeling and Stability Analysis of Distributed Secondary Control Scheme for Stand-Alone DC Microgrid Applications"
},
{
"paperId": "6cee5eda131195d3462ff007853f56594316a4b9",
"title": "A Battery-Energy-Storage-Based DC Dynamic Voltage Restorer for DC Renewable Power Protection"
},
{
"paperId": "6d30bcd9685a7105215084f6b5bcd44298bc30bc",
"title": "Autonomous DC-Bus Voltage Regulation in DC Microgrid Using Distributed Energy Storage Systems"
},
{
"paperId": "66c5b5802a9da605a3c75eea576dc60dc2b38a80",
"title": "An Adaptive Energy Management Strategy of Stationary Hybrid Energy Storage System"
},
{
"paperId": "e2bb406b2fa28008a1a8138aa838ef21c8752c87",
"title": "Advanced Distributed Cooperative Secondary Control of Islanded DC Microgrids"
},
{
"paperId": "9627dfa148cde829a0fc880a0cf8d7429b7dc50e",
"title": "SOC Balancing and Coordinated Control Based on Adaptive Droop Coefficient Algorithm for Energy Storage Units in DC Microgrid"
},
{
"paperId": "fe9f62b70218474eb6aa4c384aa20946bfcf271a",
"title": "An Adaptive Energy Optimization Method of Hybrid Battery-Supercapacitor Storage System for Uncertain Demand"
},
{
"paperId": "1b93dbaccc9524b52800650a50f1657dbf407b11",
"title": "Model Predictive and Iterative Learning Control Based Hybrid Control Method for Hybrid Energy Storage System"
},
{
"paperId": "7f5ffef53a4ba8574b42f26689168d6ae0f04ec9",
"title": "Design of Nonlinear Droop Control in DC Microgrid for Desired Voltage Regulation and Current Sharing Accuracy"
},
{
"paperId": "1bc116c3d83bf12017f022d19adb60ea961e5423",
"title": "Multi-microgrid Energy Management Systems: Architecture, Communication, and Scheduling Strategies"
},
{
"paperId": "4e243218b39668018e9aa7dd467ad5f1027147b7",
"title": "A Review on Primary and Secondary Controls of Inverter-interfaced Microgrid"
},
{
"paperId": "682e149fd3b6f163adcc4bc22525b4d182dffbae",
"title": "Voltage Regulation of DC-Microgrid With PV and Battery"
},
{
"paperId": "1039fe8deab154b11d7c525d7dd4ce833d0dcf0a",
"title": "Parallel Coordination Control of Multi-Port DC-DC Converter for Stand-Alone Photovoltaic-Energy Storage Systems"
},
{
"paperId": "97ca185ec796735243164e27d210a551069f821b",
"title": "An SOC-Based Virtual DC Machine Control for Distributed Storage Systems in DC Microgrids"
},
{
"paperId": "53b6a1f2f08d14a4f56756656b094f2ad8b5d707",
"title": "Highly Efficient and Robust Grid Connected Photovoltaic System Based Model Predictive Control with Kalman Filtering Capability"
},
{
"paperId": "b3908e99161d219bb384beb3e80e207e0dfa1fba",
"title": "Multiagent Based Distributed Control with Time-Oriented SoC Balancing Method for DC Microgrid"
},
{
"paperId": "f12a1b9afafd7f1d43143363bc6ce9886bf3850c",
"title": "A Distributed Iterative Learning Framework for DC Microgrids: Current Sharing and Voltage Regulation"
},
{
"paperId": "2cd084a1a3c2e38c98f9bab9f60886b88c90050a",
"title": "Hierarchical Control in Islanded DC Microgrids With Flexible Structures"
},
{
"paperId": "18c643f6ec635602855ba3e58ffb174161dd8119",
"title": "Accurate Power Sharing With Balanced Battery State of Charge in Distributed DC Microgrid"
},
{
"paperId": "f5e9b8fc137f18c9a05754bfc898d04fd422aa69",
"title": "A Novel Communication-Based Average Voltage Regulation Scheme for a Droop Controlled DC Microgrid"
},
{
"paperId": "33a701fe6cbecf812240197accf14422c6a18f09",
"title": "Control of Energy Storage System Integrating Electrochemical Batteries and Supercapacitors for Grid-Connected Applications"
},
{
"paperId": "bb761b88f3f19d76c02369fb7d73734a9f6d215a",
"title": "A Multilevel Distributed Hybrid Control Scheme for Islanded DC Microgrids"
},
{
"paperId": "627f0e61110455359bf3274f8ccb1c74caee0070",
"title": "Optimal Voltage and Frequency Control of an Islanded Microgrid using Grasshopper Optimization Algorithm"
},
{
"paperId": "0861dc3ea57c1eb32a6b95b124445ea8205597f6",
"title": "Observer-Based DC Voltage Droop and Current Feed-Forward Control of a DC Microgrid"
},
{
"paperId": "981eb41089f45710685c14e43e3ad3cfaa78c5f6",
"title": "Distributed Secondary Control for Power Allocation and Voltage Restoration in Islanded DC Microgrids"
},
{
"paperId": "000153cd09823a5b65d904553b9c1d5c97054d0a",
"title": "DC Microgrids—Part I: A Review of Control Strategies and Stabilization Techniques"
},
{
"paperId": "fd6d5e9587791094fd759990b4898c745322d7e0",
"title": "Comparative Analysis of Different Single-Diode PV Modeling Methods"
},
{
"paperId": "8d03633778342d896fed19dfb411f170dfbeb783",
"title": "Improvement of stability and load sharing in an autonomous microgrid using supplementary droop control loop"
},
{
"paperId": "8c91a9230fea610f779bcda4bbe4c2ab2f286794",
"title": "Voltage Control of DC Microgrids: Robustness for Unknown ZIP-Loads"
},
{
"paperId": "fad39e0055e91f5800e9fab070d74fd5ea133495",
"title": "A Power Management Scheme for Grid-connected PV Integrated with Hybrid Energy Storage System"
},
{
"paperId": "efe41761c2a6361985420e4c0f03746332ab93d7",
"title": "Energy Management and Optimization Methods for Grid Energy Storage Systems"
},
{
"paperId": "ed1f5bc69f387f4b234ba8fd8549d0c19fbc8c12",
"title": "Expansion Planning of Active Distribution Networks With Centralized and Distributed Energy Storage Systems"
},
{
"paperId": "85e198256f80ad51e203e0bdb60eb820d3df2060",
"title": "Dynamic Load Power Sharing Method With Elimination of Bus Voltage Deviation for Energy Storage Systems in DC Micro-grids"
},
{
"paperId": null,
"title": "Evaluation of Electric Vehicle Integrated Charging Safety State Based on Fuzzy Neural Network"
}
] | 22,095
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Physics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/032f94129a045e5e22f7d7dfdbc37052ca52bf6b
|
[
"Computer Science"
] | 0.877402
|
A Modified Secure Scheme of Quantum Key Distribution Without Public Announcement Bases
|
032f94129a045e5e22f7d7dfdbc37052ca52bf6b
|
Journal of Computer Science
|
[
{
"authorId": "21007759",
"name": "Es-Said Chanigui"
},
{
"authorId": "2078271626",
"name": "A. Azizi"
}
] |
{
"alternate_issns": [
"1877-7503",
"2362-0110"
],
"alternate_names": [
"Journal of Computational Science",
"J Comput Sci"
],
"alternate_urls": [
"http://ansinet.org/sciencepub/c4p.php?j_id=jcs",
"https://www.journals.elsevier.com/journal-of-computational-science",
"http://journals.sjp.ac.lk/index.php/jcs",
"http://www.sciencedirect.com/science/journal/18777503"
],
"id": "449b5e8b-cda2-4a8e-b27b-9b7917ded9eb",
"issn": "1549-3636",
"name": "Journal of Computer Science",
"type": "journal",
"url": "https://thescipub.com/journals/jcs/"
}
|
This study provides a simple variation of the protocol of quantum key expansion proposed by Hwang. Some weaknesses relating to the step of public discussions for error detection are analyzed and an attack strategy, allowing the eavesdropper to get partial information about the used bases, is put forward. Using the One-Time Pad cipher, we propose a possible scheme which is secure against the presented attack.
|
## Journal of Computer Science
Original Research Paper
# A Modified Secure Scheme of Quantum Key Distribution Without Public Announcement Bases
**1Es-Said Chanigui and 2Abdelmalek Azizi**
_1Department of Mathematics and Computer Science, FSO, University Mohamed I, Oujda, Morocco_
_2Academy Hassan II of Sciences and Technology, Rabat, Morocco_
_Article history_ **Abstract: This study provides a simple variation of the protocol of**
Received: 25-11-2013 quantum key expansion proposed by Hwang. Some weaknesses
Revised: 17-04-2014
relating to the step of public discussions for error detection are
Accepted: 23-10-2014
analyzed and an attack strategy, allowing the eavesdropper to get
Corresponding Author: partial information about the used bases, is put forward. Using the
Es-said Chanigui One-Time Pad cipher, we propose a possible scheme which is secure
Department of Mathematics and against the presented attack.
Computer Science, FSO,
University Mohamed I, Oujda,
**Keywords: Quantum Key Distribution, Quantum Key Expansion,**
Morocco
Email: chanigui@yahoo.fr Quantum Cryptography, One-Time Pad
## Introduction
Key distribution is always an important issue in
cryptography. One of the earliest discoveries in
quantum computation and quantum information was
that quantum mechanics can be used to do key
distribution in such a way that communication security
cannot be compromised. The basic idea is to exploit the
quantum mechanical principle that observation disturbs
the system being observed. This procedure is known as
Quantum Key Distribution (QKD).
QKD protocol enables two remote communicating
parties (Bob and Alice) who are authenticated to share a
perfectly secure key even in the presence of an
Eavesdropper (Eve). The first QKD scheme, BB84
protocol, was proposed by Bennett and Brassard
(1984). Since then, many QKD protocols had been
suggested, among them the two famous protocols: EPR
protocol (Ekert, 1991) based on EPR entangled states
and B92 protocol (Bennett, 1992) based on nonorthogonal states. These protocols have been proved
secure (Lutkenhaus and Barnett, 1996). Over the last two
decades, other QKD protocols (Goldenberg and Vaidman,
1995; Huttner _et al., 1995; Bechmann-Pasquinucci and_
Peres, 2000; Gisin et al., 2001; Lo et al., 2005; Zhao et al.,
2008; Xiu _et al., 2009; Sun_ _et al., 2009; Sheridan_ _et al.,_
2010) have been proposed and QKD experiments have
been demonstrated (Gobby et al., 2004; Scheidl et al., 2009;
Rosenberg et al., 2009).
Hwang _et al. (1998) proposed a variation of the_
basic ideas of the BB84 protocol in which the public
announcement of bases is eliminated. The Hwang protocol
provides a higher key generation rate (100%) compared
with the BB84 protocol (50%); the efficiency of the
scheme is 100% except for the error-checking step. The
protocol's security has been discussed and proved under
ideal conditions (Hwang _et al., 2001; 2003; Wen and_
Long, 2005), and its security in realistic circumstances is
studied in (Lin and Liu, 2012), where two attacks are
presented. However, the previous discussions of the
Hwang protocol's security (Hwang _et al., 2001; 2003;_
Wen and Long, 2005) did not consider whether partial
information about the encoding bases may be
eavesdropped during the error check; this is what will be
discussed in greater detail over the course of this article.
This study is organized as follows. In section 2, a
brief description of the Hwang protocol is given, the
protocol is analyzed and an attack on it is proposed.
Taking into account this flaw of the Hwang protocol,
we propose a new secure scheme in section 3, where
the subset of cbits (classical bits) that Alice and Bob
intend to discuss publicly is encrypted with the
One-Time Pad cipher. In section 4, we show that the
modified protocol is more efficient than the original
protocol and can be used securely against the
presented attack. Finally, section 5 presents our
conclusions.
© 2015 Es-Said Chanigui and Abdelmalek Azizi. This open access article is distributed under a Creative Commons
Attribution (CC-BY) 3.0 license.
Es-said Chanigui and Abdelmalek Azizi / Journal of Computer Science 2015, 11 (1): 75.81
**DOI: 10.3844/jcssp.2015.75.81**
## Eavesdropping on the Hwang Protocol
### Hwang Protocol
Let us start with a brief description of the Hwang
protocol (Lin and Liu, 2012).
Alice and Bob share a secure binary random
sequence B = (b_1, b_2, ..., b_n), known to nobody else,
obtained via the BB84 scheme or by courier, and repeat it
t times to construct a string C = (c_1^1, c_2^1, ..., c_n^1,
c_1^2, c_2^2, ..., c_n^2, ..., c_1^t, c_2^t, ..., c_n^t),
where c_j^i = b_j (for i = 1, ..., t).
Alice creates a random N = n×t cbit string X =
(x_1^1, x_2^1, ..., x_n^1, x_1^2, x_2^2, ..., x_n^2, ...,
x_1^t, x_2^t, ..., x_n^t) and keeps it as the secret key.
With the knowledge of the two binary strings X and C,
Alice prepares a qubit (quantum bit) string |φ_{x_j^i, c_j^i}>,
where each qubit is one of the four states |φ_{0,0}> = |0>,
|φ_{1,0}> = |1>, |φ_{0,1}> = |+>, |φ_{1,1}> = |−>; that is,
each x_j^i is encoded in the rectilinear basis
B⊕ = {|0>, |1>} if the corresponding cbit of C is 0, or in
the diagonal basis B⊗ = {|+>, |−>} if it is 1 (the
association between the information cbit and the basis is
described in Table 1). Then, Alice sends the qubit string to Bob.
After receiving these N qubits, Bob measures them
in the basis B⊕ or B⊗ according to the binary string C.
Once all qubits have been sent, Alice and Bob
compare some randomly chosen subset of their key:
Bob publicly informs Alice whether he obtained 0 or 1
at the chosen instances, and Alice compares these data
with her own and checks for errors. Note that Bob
announces only the cbit that the qubit represents, not
the exact state of the qubit, which is meant to prevent
the leakage of information.
In this protocol, Alice and Bob have a common
random sequence B, so there will be perfect
correlation between their measurement results unless
the quantum states were perturbed by noise or by
Eve's attempt at eavesdropping. Thus, it is unnecessary to
perform a public basis-announcement process, which
reduces the information about the bases obtainable by
Eve. However, the announcement of the cbit 0 or 1 for
the error check will also leak information about the bases,
which we will discuss later.
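The 100% key rate of an honest round can be illustrated with a small classical simulation (an illustrative sketch, not the authors' code; since Bob always measures in the preparation basis, each qubit behaves deterministically and can be modeled as a classical bit, assuming a noiseless channel and no eavesdropper):

```python
import random

def hwang_round(n=8, t=4, seed=1):
    """One honest round of the Hwang scheme: the pre-shared basis
    string B of length n is reused for t blocks, so Bob measures every
    qubit in its preparation basis and no positions are sifted away."""
    rng = random.Random(seed)
    B = [rng.randint(0, 1) for _ in range(n)]      # shared basis sequence
    C = B * t                                      # bases repeated t times
    X = [rng.randint(0, 1) for _ in range(n * t)]  # Alice's secret cbits

    # With matching bases the measurement outcome equals the encoded
    # bit, so transmission is modeled as an identity on (bit, basis).
    X_bob = [x for x, c in zip(X, C)]
    return X, X_bob

alice_key, bob_key = hwang_round()
print(alice_key == bob_key)  # every position agrees: 100% key rate
```

Contrast this with BB84, where on average half of the positions would be discarded because Bob's randomly chosen basis disagrees with Alice's.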
### Attack Strategy
Now, let us turn to our eavesdropping scheme, the
"Sieving By Difference" (SBD) attack, which consists of
several rounds. In the first rounds, Eve will be detected
by Alice and Bob and the communication will be
abandoned; but in the later rounds Eve will no longer
be detected and will be able to obtain all the
information exchanged subsequently.
Table 1. Qubit preparation according to the choice of basis
and cbit value x
B⊕: |φ0,0> = |0> (x = 0), |φ1,0> = |1> (x = 1)
B⊗: |φ0,1> = |+> (x = 0), |φ1,1> = |−> (x = 1)
Suppose that Eve knows in advance the length of
the basis sequence between Alice and Bob (n cbits). If
the length of the key that Alice and Bob want to
establish is N cbits (N = n × t), then the basis
sequence will be used at least t times. Now, let us
describe the attack method. In the first attack, Eve
intercepts all of the photons from Alice to Bob and
measures every photon along a single basis, B⊕ or B⊗,
chosen at random by Eve (say, B⊕ is chosen), and
she resends the measured photons to Bob. Two cases
may happen: In the first, which occurs with
probability 1/2, the qubit representing the state of the
photon sent from Alice to Bob is encoded with a
rectilinear basis. In this case, the qubit will not change
after measuring by Eve. In the second, which also
occurs with half probability, the qubit representing the
state of the photon sent from Alice to Bob is |+⟩ =
1/√2×(|0⟩+|1⟩) or |−⟩ = 1/√2×(|0⟩-|1⟩) (the photon state
is encoded with a diagonal basis B⊗). Then, when Eve
measures this qubit along the basis B⊕ she will get |0⟩
or |1⟩ with probability 1/2. After receiving the photon,
Bob should measure it along B⊗ according to the
sharing sequence, he will get |+⟩ or |−⟩ with
probability 1/2. By the announcement of Bob’s result
(0 or 1), Eve will find that her measuring result is
different from Bob’s result with half probability and
she will be sure that the basis with which this photon
was encoded is B⊗. Then, Eve can get the base state
with probability 1/2×1/2 = ¼ like the example shown
in Fig. 1. So, to recapitulate, after the first attack
averagely 1/4 of the bases corresponding to cbits
sacrificed to check for the eavesdropper’s activity will
be known by Eve.
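The 1/4 figure can be reproduced with a small Monte-Carlo experiment (an illustrative sketch, not from the paper; photon measurements are modeled by their textbook outcome probabilities):

```python
import random

def first_attack(trials=40000, seed=7):
    """Monte-Carlo model of the first SBD round: Eve measures every
    intercepted photon in the rectilinear basis and resends it.  Her
    record can differ from Bob's announced bit only for diagonally
    encoded photons, where both outcomes are independent coin flips."""
    rng = random.Random(seed)
    mismatches = 0
    for _ in range(trials):
        basis = rng.randint(0, 1)      # 0: rectilinear, 1: diagonal
        bit = rng.randint(0, 1)        # Alice's encoded cbit
        if basis == 0:
            eve = bob = bit            # Eve's basis matches: no disturbance
        else:
            eve = rng.randint(0, 1)    # wrong basis: random outcome for Eve
            bob = rng.randint(0, 1)    # resent photon is now rectilinear, so
                                       # Bob's diagonal measurement is random
        if eve != bob:
            assert basis == 1          # a mismatch certifies the diagonal basis
            mismatches += 1
    return mismatches / trials

rate = first_attack()
print(round(rate, 3))                  # close to 1/4
```

Each mismatch also certifies the basis with certainty, which is exactly what makes the sieving step work.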
In the following attacks, Eve exploits her
knowledge of the basis sequence: she measures the
photons whose bases she knows in the correct bases,
and the rest along a randomly chosen basis (B⊕ or B⊗).
Eve thus accumulates more and more information about
the shared sequence B, and after a certain number of
attacks the error rate she induces becomes low enough
for her to eavesdrop without being detected.
Suppose Alice and Bob check the error rate for every _n_
cbits. Eve then proceeds through the following steps,
with the following results:
- By eavesdropping on the quantum channel, Eve
intercepts all n photons that Alice sends to Bob.
She measures them along the basis B⊕ and forwards
them to Bob, recording the measurement results of
all the photons
- Eve eavesdrops on the classical channel to obtain
Bob's announcement of the q cbits used for the error
check, and compares her records with Bob's results.
If the results for one of the photons differ, Eve can
be sure that the basis corresponding to this photon
must be B⊗ (hence "Sieving By Difference").
As noted above, Eve's results will differ from Bob's
for, on average, r_1 = q/4 photons. The error rate
induced by Eve in this first attack is 25% (e_1 = 0.25),
so Eve will be detected and Alice and Bob will
abandon this communication; nevertheless, Eve is
now certain that the bases corresponding to those
q/4 photons are B⊗
- During another communication between Alice and
Bob, Eve performs a second attack. This time, Eve
knows that q/4 of the bases are B⊗, so she measures
the q/4 qubits corresponding to these bases along B⊗,
and she also measures the remaining n − q/4 qubits
along B⊗, as indicated in Fig. 2 (in each attack, Eve
alternates between B⊕ and B⊗ for the photons
encoded in unknown bases, in order to increase the
chance of finding more results that differ from
Bob's). Then Eve sends all the photons to Bob and
proceeds exactly as in the first attack. Note that,
each time, Alice and Bob choose q photons at random
for the error check, so on average q/4 × q/n of the
chosen photons fall in the subset of known bases.
The bases of these q/4 × q/n photons are thus known
to Eve, while the remaining q − q/4 × q/n photons
chosen for the error check are still unknown.
Similarly, there will be on average
(q − q/4 × q/n) × 1/4 = (1 − q/4n) × q/4 photons for
which Eve's results differ from Bob's, which means
that the bases corresponding to these photons are B⊕.
At the same time, the error rate induced by Eve is on
average e_2 = (1 − q/4n) × 1/4. After the second
attack, Eve will know on average
r_2 = (1 − q/4n) × q/4 + q/4 = (2 − q/4n) × q/4 bases
from the basis sequence
- For the subsequent attacks, the results can be
deduced similarly. Let r_i be the number of bases
that Eve knows after the i-th attack, and let e_i be
the error rate induced in the i-th attack. Then we
have r_{i+1} = r_i × (1 − q/4n) + q/4 and
e_{i+1} = 1/4 × (1 − r_i/n), for i = 1, 2, 3, ...,
where q is the number of photons used for the error
check and n is the length of the basis sequence
### Example
Suppose the length of the key that Alice and Bob
want to establish is N = 10^5 and the length of their
shared basis sequence is n = 10^3, where q cbits (q = 100,
200, 300) are announced and compared over the classical
channel for the error check. Suppose, also, that Eve uses
our strategy to attack the Hwang protocol. The results
for the first 100 attacks are illustrated in Fig. 3 and 4.
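The recursion above is easy to evaluate numerically. The following sketch (illustrative, using the example parameters n = 1000 and q = 100) reproduces the qualitative behaviour of Fig. 3 and 4: the induced error rate falls from 25% toward zero while Eve's knowledge of the basis sequence approaches n:

```python
def sbd_progress(n=1000, q=100, attacks=100):
    """Iterate the recursion r_{i+1} = r_i*(1 - q/4n) + q/4 and
    e_{i+1} = (1 - r_i/n)/4 from the attack analysis, starting from
    r_1 = q/4 known bases and e_1 = 1/4."""
    r = q / 4                      # bases Eve knows after attack 1
    errors = [0.25]                # e_1: error rate of the first attack
    for _ in range(attacks - 1):
        errors.append((1 - r / n) / 4)       # error rate of the next attack
        r = r * (1 - q / (4 * n)) + q / 4    # bases known after it
    return r, errors

known, errors = sbd_progress()
print(round(errors[-1], 4), round(known, 1))
```

The fixed point of the recursion is r = n, so Eve's knowledge converges to the whole basis sequence while the induced error rate decays toward zero.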
Fig. 1. All possibilities when Alice sends the qubit |φ1,0>
Fig. 2. The first four rounds of the SBD attack
Fig. 3. The error rate as a function of the number of attacks when Eve uses our eavesdropping strategy against the Hwang
protocol, for n = 1000 and q = 10%, 20% and 30% of n
Fig. 4. Eve’s information about the basis sequence as a function of the number of attacks when Eve uses our eavesdropping
strategy against the Hwang protocol, for n = 1000 and q = 10%, 20% and 30% of n
## Modified Protocol
In this section, a modified protocol is proposed,
which can withstand the attacks described in the
previous section. Here, the One-Time Pad, also called
the Vernam cipher (Vernam, 1926), which is a provably
secure cryptosystem (Shannon, 1949), is used to encrypt
the public announcement of cbits for the error check
between Alice and Bob.
The one-time pad is a symmetric encryption
system in which a randomly generated private key is
used only once to encrypt a message, which is then
decrypted by the receiver using the matching
one-time key.
The modified protocol is described as follows:
- Alice and Bob share two prior random cbit strings.
One is the basis sequence B = (b_1, b_2, ..., b_n),
from which they construct a cbit string
C = (c_1^1, c_2^1, ..., c_n^1, c_1^2, c_2^2, ..., c_n^2,
..., c_1^t, c_2^t, ..., c_n^t) where c_j^i = b_j for
i = 1, ..., t. The other is a short secret key
S = (s_1, s_2, ..., s_q), which will be used to encrypt
a randomly chosen subset of cbits before it is
exchanged publicly during the first error check
- For i = 1 to t:
- Alice creates a random cbit string
X_i = (x_1^i, x_2^i, ..., x_n^i) as the round key and,
with the knowledge of the two binary strings X_i and
C_i, prepares a qubit string |φ_{x_j^i, c_j^i}> as
described in Table 1 and sends it to Bob
- After receiving these n qubits, Bob measures them
in the basis B⊕ or B⊗ according to the binary string
C_i, obtaining X′_i
- Let S_1 = S and, for i > 1, let
S_i = (s_1^i, s_2^i, ..., s_q^i) be a subset of q cbits
randomly chosen by Alice and Bob from the shared key
X′′_{i−1} (X′′ is the shared key formed after error
correction and privacy amplification)
In order to detect Eve’s intervention, Alice and Bob
compare some randomly chosen subset of the received
cbits X′_i as follows:
- First, Bob constructs a string
T_i = (x′_1^i, x′_2^i, ..., x′_q^i) by choosing q cbits
at random from X′_i and recording their positions.
Then, he encrypts the cbits x′_j^i ∈ T_i, j = 1, ..., q,
using the shared key S_i and the One-Time Pad cipher.
Finally, Bob sends the ciphertext (x′_j^i ⊕ s_j^i),
j = 1, ..., q, publicly to Alice and tells her the
positions of the chosen cbits
- Alice applies XOR to every cbit of the encrypted
message she receives with the corresponding cbit of
the one-time key S_i, that is,
x′_j^i ⊕ s_j^i ⊕ s_j^i = x′_j^i, j = 1, ..., q.
Next, Alice compares these data with her own
(x_j^i, j = 1, ..., q) and checks for errors
- According to the threshold error rate, Alice and
Bob either abort the process or execute error
correction and privacy amplification to generate the
secure key X′′_i
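The masking step can be sketched as follows (an illustrative toy implementation, not the authors' code; the variable names are ours, and a noiseless channel is assumed so that Bob's measured string equals Alice's):

```python
import random

rng = random.Random(42)
n, q = 32, 8
x_bob = [rng.randint(0, 1) for _ in range(n)]  # Bob's measured string X'
x_alice = list(x_bob)                          # noiseless channel: strings match
positions = rng.sample(range(n), q)            # random check positions (public)
pad = [rng.randint(0, 1) for _ in range(q)]    # pre-shared one-time key S

def mask(bits, positions, pad):
    """Bob's announcement: the sampled cbits XOR-masked with the pad."""
    return [bits[p] ^ s for p, s in zip(positions, pad)]

def count_errors(bits, positions, pad, ciphertext):
    """Alice unmasks with the same pad and counts disagreements."""
    return sum((c ^ s) != bits[p]
               for c, s, p in zip(ciphertext, pad, positions))

ct = mask(x_bob, positions, pad)
print(count_errors(x_alice, positions, pad, ct))  # 0 on a noiseless channel
```

Because the pad bits are uniform and used only once, the announced ciphertext is itself uniformly distributed, so an eavesdropper learns nothing about the sampled cbits (Shannon, 1949).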
## Discussion
It is important to note that, in the modified protocol,
the subset S_i used to encrypt the exchanged cbits
during the error check should be discarded at the end
of each round. The ongoing need for the short keys S_i
may appear to be a deficiency of our protocol, but it
is not: in all QKD protocols, and in the Hwang protocol
in particular, a subset of cbits used in the error check
step (of the same length as S_i) is discarded as well.
In our case, the subset S_i, with which we encrypt the
announcement in the (i+1)-th round, is simply not
discarded until it has been used to further increase
the protocol's security.
In the modified protocol, nothing is changed
except the error check process. Hence, the security of
the modified protocol is the same as that of the Hwang
protocol under ideal conditions (i.e., setting aside
the weakness due to the public error check). In
addition, by using One-Time Pad encryption, the
proposed protocol makes the public comparison between
Alice and Bob secure and deprives Eve of any
information about Bob's measurements. Eve cannot
judge whether her measurement result differs from
Bob's because, even by intercepting an encrypted
message T_i ⊕ S_i exchanged publicly between Alice
and Bob during the error check, she cannot obtain any
information about the subset T_i. She is therefore
unable to draw any conclusion about the preparation
bases. Our scheme is thus secure against the SBD
attack presented in the previous section.
## Conclusion
In summary, we have analyzed the Hwang protocol
and found that the announcement of cbits over the
classical channel for the error check is a weakness of
the protocol, because it leaks information about the
basis sequence. We proposed an eavesdropping strategy
allowing Eve to attack the protocol and showed how
she can obtain more and more information about the
key shared between Alice and Bob. To overcome this
flaw, we proposed a new scheme in which the subset of
cbits that Alice and Bob intend to discuss publicly is
encrypted with the One-Time Pad cipher. The security
of the proposed protocol was discussed, and the new
protocol was shown to be secure against the presented
attack. Unfortunately, there is no known way to
initiate the modified protocol without initially
exchanging a secret key S, which is a weakness.
Finding an efficient QKD protocol without public
announcement of bases that avoids leaking information
during a public error check and does not require a
pre-shared key would therefore be an interesting issue
to study.
## Funding Information
The authors have no support or funding to report.
## Author’s Contributions
All authors equally contributed in this work.
## Ethics
This article is original and contains unpublished
material. The corresponding author confirms that all of
the other authors have read and approved the manuscript
and no ethical issues involved.
## References
Bechmann-Pasquinucci, H. and A. Peres, 2000. Quantum
cryptography with 3-state systems. Phys. Rev. Lett.,
85: 3313. DOI: 10.1103/PhysRevLett.85.3313
Bennett, C.H. and G. Brassard, 1984. Quantum
cryptography: Public key distribution and coin
tossing. Proceedings of the IEEE International
Conference on Computers, Systems and Signal
Processing, (SP ‘84), IEEE Press, New York, pp:
175-179. DOI: 10.1016/j.tcs.2011.08.039
Bennett, C.H., 1992. Quantum cryptography using any
two nonorthogonal states. Phys. Rev. Lett., 68:
3121-3124. DOI: 10.1103/PhysRevLett.68.3121
Ekert, A.K., 1991. Quantum cryptography based on
Bell’s theorem. Phys. Rev. Lett., 67: 661-663.
DOI: 10.1103/PhysRevLett.67.661
Gisin, N., G. Ribordy, W. Tittel and H. Zbinden, 2001.
Quantum cryptography. Rev. Mod. Phys., 74: 145-195.
DOI: 10.1103/RevModPhys.74.145
Gobby, C., Z.L. Yuan and A.J. Shields, 2004.
Quantum key distribution over 122 km of
standard telecom fiber. Appl. Phys. Lett., 84:
3762-3762. DOI: 10.1063/1.1738173
Goldenberg, L. and L. Vaidman, 1995. Quantum
cryptography based on orthogonal states. Phys.
Rev. Lett., 75: 1239-1243.
DOI: 10.1103/PhysRevLett.75.1239
Huttner, B., N. Imoto, N. Gisin and T. Mor, 1995. Quantum
cryptography with coherent state. Phys. Rev. A 51:
1863-1869. DOI: 10.1103/PhysRevA.51.1863
Hwang, W.Y., D. Ahn and S.W. Hwang, 2001.
Eavesdropper’s optimal information in variations of
Bennett-Brassard 1984 quantum key distribution in
the coherent attacks. Phys. Lett. A, 279: 133-138.
DOI: 10.1016/S0375-9601(00)00825-2
Hwang, W.Y., I.G. Koh and Y.D. Han, 1998. Quantum
cryptography without public announcement of
bases. Phys. Lett., A244: 489-494.
DOI: 10.1016/S0375-9601(98)00358-2
Hwang, W.Y., X.B. Wang, K. Matsumoto, J. Kim and
H.W. Lee, 2003. Shor Preskill-type security proof for
quantum key distribution without public announcement
of bases. Phys. Rev., A67: 012302-012302.
DOI: 10.1103/PhysRevA.67.012302
Lin, S. and X.F. Liu, 2012. A modified quantum key
distribution without public announcement bases
against photon-number-splitting attack. Int. J.
Theor. Phys., 51: 2514-2523.
DOI: 10.1007/s10773-012-1131-9
Lo, H.K., X. Ma and K. Chen, 2005. Decoy state
quantum key distribution. Phys. Rev. Lett., 94:
230504-230504.
DOI: 10.1103/PhysRevLett.94.230504
Lutkenhaus, N. and S.M. Barnett, 1996. Security against
eavesdropping in quantum cryptography. Phys.
Rev., A54: 97-111. DOI: 10.1103/PhysRevA.54.97
Rosenberg, D., C.G. Peterson and J.W. Harrington,
2009. Practical long distance quantum key
distribution system using decoy levels. New J.
Phys., 11: 045009-045009.
DOI: 10.1088/1367-2630/11/4/045009
Scheidl, T., R. Ursin, A. Fedrizzi and S. Ramelow, 2009.
Feasibility of 300 km quantum key distribution with
entangled states. New J. Phys., 11: 085002-085002.
DOI: 10.1088/1367-2630/11/8/085002
Shannon, C.E., 1949. Communication theory of secrecy
systems. Bell Syst. Technical J., 28: 656-715.
DOI: 10.1002/j.1538-7305.1949.tb00928.x
Sheridan, L., T.P. Le and V. Scarani, 2010. Finite-key
security against coherent attacks in quantum key
distribution. New J. Phys., 12: 123019-123019.
DOI: 10.1088/1367-2630/12/12/123019
Sun, S.H., L.M. Liang and C.Z. Li, 2009. Decoy state
quantum key distribution with finite resources.
Phys. Lett. A, 373: 2533-2536.
DOI: 10.1016/j.physleta.2009.05.016
Vernam, G.S., 1926. Cipher printing telegraph
systems for secret wire and radio telegraphic
communications. J. IEEE, 55: 109-115.
DOI: 10.1109/T-AIEE.1926.5061224
Wen, K. and G.L. Long, 2005. Modified Bennett-Brassard
1984 quantum key distribution protocol
with two-way classical communications. Phys.
Rev. A, 72: 022336.
Xiu, X.M., L. Dong, Y.J. Gao and F. Chi, 2009.
Quantum key distribution protocols with six-photon
states against collective noise. Opt. Commun., 282:
4171-4174. DOI: 10.1016/j.optcom.2009.07.012
Zhao, Y., B. Qi and H.K. Lo, 2008. Quantum key
distribution with an unknown and untrusted
source. Phys. Rev. A, 77: 052327-052340.
DOI: 10.1103/PhysRevA.77.052327
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3844/jcssp.2015.75.81?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3844/jcssp.2015.75.81, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "HYBRID",
"url": "http://thescipub.com/pdf/10.3844/jcssp.2015.75.81"
}
| 2,015
|
[
"JournalArticle"
] | true
| 2015-01-22T00:00:00
|
[
{
"paperId": "659eb477e924c33df62325119ec14c54fad70d5c",
"title": "Self-Organizing Maps for Fingerprint Image Quality Assessment"
},
{
"paperId": "527ad08937ada942e4facbe8ee59761c19c1791e",
"title": "Is there a fingerprint pattern in the image?"
},
{
"paperId": "40509b99e23eb3c1450c8825bc9b6d1addcfe05e",
"title": "PDE-based regularization of orientation field for low-quality fingerprint images"
},
{
"paperId": "5f10b1c96f8016b5504592bc11124187fff87c63",
"title": "Fingerprint orientation modeling by sparse coding"
},
{
"paperId": "48e1034d03863f01d67f2ec76e1eb4241e3803f9",
"title": "A Modified Quantum Key Distribution Without Public Announcement Bases Against Photon-Number-Splitting Attack"
},
{
"paperId": "2666bde37a65a9736bad1da3157f5a504505c1ff",
"title": "Altered Fingerprints: Analysis and Detection"
},
{
"paperId": "fd6c30defa393f83b951e4ca545a02b67e45a0a5",
"title": "Fast and Accurate Fingerprint Indexing Based on Ridge Orientation and Frequency"
},
{
"paperId": "98d5da46f47030a91683e15ccd1c054c35b34324",
"title": "Latent fingerprint enhancement via robust orientation field estimation"
},
{
"paperId": "ec39f2bfd4ba7494f551cbe6aec45da693211958",
"title": "Fingerprint Reconstruction: From Minutiae to Phase"
},
{
"paperId": "6cd7d883efe24f6acdf667ea779aba1b63061b23",
"title": "Fingerprint identification using Cross Correlation of Field Orientation"
},
{
"paperId": "773e7dd0fb4a0b901063634bce36fda5631b167e",
"title": "Estimation of Fingerprint Orientation Field by Weighted 2D Fourier Expansion Model"
},
{
"paperId": "83edc3c8a7e894c5aa8ce8e563af7c1b83852282",
"title": "Finite-key security against coherent attacks in quantum key distribution"
},
{
"paperId": "745b56d61a3dd50ec512e983c48a1f4f9ea458d4",
"title": "An orientation-based ridge descriptor for fingerprint image matching"
},
{
"paperId": "272db4bb443cb2745b12b2da2e4b87d93fba39e7",
"title": "The use of SOM for fingerprint classification"
},
{
"paperId": "d048f7e8befd660cda1f9b1636b76be8f63c1522",
"title": "Quantum key distribution protocols with six-photon states against collective noise"
},
{
"paperId": "09307ab4c3902532a181b9671d3040386999f431",
"title": "Feasibility of 300 km quantum key distribution with entangled states"
},
{
"paperId": "0537c2d16e3203cba93fcf13a8c863c2684d643c",
"title": "Decoy state quantum key distribution with finite resources"
},
{
"paperId": "174950f2f48a7ec69cfc208ba8ee8376f1866a3b",
"title": "Practical long-distance quantum key distribution system using decoy levels"
},
{
"paperId": "55c6234546881dd8179ef0421c82ccc8c136f9b1",
"title": "Quantum key distribution with an unknown and untrusted source"
},
{
"paperId": "2e7b0b3a885f6c27d871d24f5ee2e28ae82a2ce9",
"title": "Modified Bennett-Brassard 1984 quantum key distribution protocol with two-way classical communications"
},
{
"paperId": "c45f3b85a2f7efccf834de721e4183992312f859",
"title": "Decoy state quantum key distribution."
},
{
"paperId": "0c01e5fe4fc65c34ca8021a8837d00ca277aa6db",
"title": "Quantum key distribution over 122 km of standard telecom fiber"
},
{
"paperId": "69b3153322e9eb374709a15e5043d0e7be930c47",
"title": "Shor-Preskill-type security proof for quantum key distribution without public announcement of bases"
},
{
"paperId": "dff721222915cc4bee4089bf5e7e456148f1da91",
"title": "Quantum Cryptography"
},
{
"paperId": "6a61ed82209ac5d394fe3021e812e625cca25827",
"title": "Eavesdropper's optimal information in variations of Bennett–Brassard 1984 quantum key distribution in the coherent attacks"
},
{
"paperId": "ccc2d2683e45a3556bc177b9e2229bdb0406febb",
"title": "Quantum cryptography with 3-state systems."
},
{
"paperId": "f64a32b195d436892d72bdb1d37016365b5510eb",
"title": "Quantum cryptography without public announcement of bases"
},
{
"paperId": "a4950155228f32e06da3eb2ee0b98726ffd3ae1f",
"title": "Security against eavesdropping in quantum cryptography."
},
{
"paperId": "d5ca8f08f19f04533bbec8e0a5dfb7ed32c2a4ba",
"title": "Quantum cryptography based on orthogonal states."
},
{
"paperId": "a9b48480086901f21897c2fd2395225c5a7e5386",
"title": "Quantum cryptography with coherent states"
},
{
"paperId": "73b97e8529490e0d38a6a36dc1491888496f6c9b",
"title": "Quantum cryptography using any two nonorthogonal states."
},
{
"paperId": "f8dcc3047eef8da135bca13b926b1e6cf50e7f3a",
"title": "Quantum cryptography based on Bell's theorem."
},
{
"paperId": "e073a7c5a6418d96fc16d8337a6056a457e75c1e",
"title": "Communication theory of secrecy systems"
},
{
"paperId": "84cc7a17986dfd6e20f6fbdda1055ac24f75592f",
"title": "Quantum cryptography : Public key distribution and coin tossing"
},
{
"paperId": "37d065a715e0503a921b723533f1609f91fbc9ed",
"title": "Cipher Printing Telegraph Systems For Secret Wire and Radio Telegraphic Communications"
}
] | 6,655
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Mathematics",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/0332b91195c9d1216057538d9ff00b098faf0cc0
|
[
"Computer Science",
"Mathematics"
] | 0.82487
|
Coded Data Rebalancing for Distributed Data Storage Systems with Cyclic Storage
|
0332b91195c9d1216057538d9ff00b098faf0cc0
|
Information Theory Workshop
|
[
{
"authorId": "2151206376",
"name": "Athreya Chandramouli"
},
{
"authorId": "2164989176",
"name": "Abhinav Vaishya"
},
{
"authorId": "47145483",
"name": "Prasad Krishnan"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"ITW",
"Inf Theory Workshop"
],
"alternate_urls": null,
"id": "d8b35857-6ad1-4a4c-a3b3-2e651a0e3666",
"issn": null,
"name": "Information Theory Workshop",
"type": "conference",
"url": null
}
|
We consider replication-based distributed storage systems in which each node stores the same quantum of data and each data bit stored has the same replication factor across the nodes. Such systems are referred to as balanced distributed databases. When existing nodes leave or new nodes are added to this system, the balanced nature of the database is lost, either due to the reduction in the replication factor, or the non-uniformity of the storage at the nodes. This triggers a rebalancing algorithm, that exchanges data between the nodes so that the balance of the database is reinstated. The goal is then to design rebalancing schemes with minimal communication load. In a recent work by Krishnan et al., coded transmissions were used to rebalance a carefully designed distributed database from a node removal or addition. These coded rebalancing schemes have optimal communication load, however, require the file-size to be at least exponential in the system parameters. In this work, we consider a cyclic balanced database (where data is cyclically placed in the system nodes) and present coded rebalancing schemes for node removal and addition in such a database. These databases (and the associated rebalancing schemes) require the file-size to be only cubic in the number of nodes in the system. We bound the advantage of our node removal rebalancing scheme over the uncoded scheme, and show that our scheme has a smaller communication load. In the node addition scenario, the rebalancing scheme presented is a simple uncoded scheme, which we show has optimal load.Due to space restrictions, the current version of this paper contains only a subset of the results concerning the node removal scenario. The full version of this paper, including additional results and examples, is available online [1].
|
# Coded Data Rebalancing for Distributed Data
Storage Systems with Cyclic Storage
### Athreya Chandramouli, Abhinav Vaishya, Prasad Krishnan
Abstract
We consider replication-based distributed storage systems in which each node stores the same quantum of data
and each data bit stored has the same replication factor across the nodes. Such systems are referred to as balanced
distributed databases. When existing nodes leave or new nodes are added to this system, the balanced nature of the
database is lost, either due to the reduction in the replication factor, or the non-uniformity of the storage at the nodes.
This triggers a rebalancing algorithm, that exchanges data between the nodes so that the balance of the database
is reinstated. The goal is then to design rebalancing schemes with minimal communication load. In a recent work
by Krishnan et al., coded transmissions were used to rebalance a carefully designed distributed database from a
node removal or addition. These coded rebalancing schemes have optimal communication load, however, require the
file-size to be at least exponential in the system parameters. In this work, we consider a cyclic balanced database
(where data is cyclically placed in the system nodes) and present coded rebalancing schemes for node removal and
addition in such a database. These databases (and the associated rebalancing schemes) require the file-size to be only
cubic in the number of nodes in the system. We bound the advantage of our node removal rebalancing scheme over
the uncoded scheme, and show that our scheme has a smaller communication load. In the node addition scenario,
the rebalancing scheme presented is a simple uncoded scheme, which we show has optimal load.
I. INTRODUCTION
In replication-based distributed storage systems, the available data is stored in a distributed fashion in the storage
nodes with some replication factor. Doing this helps prevent data loss in case of node failures, and also provides for
greater data availability and thus higher throughputs. In [1], replication-based distributed storage systems in which
(A) each bit is available in the same number of nodes (i.e., the replication factor of each bit is the same) and (B)
each node stores the same quantum of data, were referred to as balanced distributed databases. In such databases,
when a storage node fails, or when a new node is added, the ‘balanced’ nature of the database is disturbed (i.e.,
the properties (A) or (B) do not hold anymore); this is known as data skew. Data skew results in various issues in
the performance of such distributed databases. Correcting such data skew requires some communication between
the nodes of the database. In distributed systems literature, this communication phase is known as data rebalancing
(see, for instance, [2]–[5]). In traditional distributed storage systems, an uncoded rebalancing phase is initiated,
where uncoded bits are exchanged between the nodes to recreate the balanced conditions in the new collection of nodes. Clearly, a primary goal in such a scenario would be to minimize the communication load on the network during the rebalancing phase.

(Author affiliations: Abhinav Vaishya and Prasad Krishnan are with the Signal Processing & Communications Research Center, International Institute of Information Technology Hyderabad, India, email: {abhinav.vaishya@research., prasad.krishnan}@iiit.ac.in. Athreya Chandramouli is with the Center for Security, Theory and Algorithmic Research, International Institute of Information Technology Hyderabad, India, email: athreya.chandramouli@research.iiit.ac.in.)
The rebalancing problem was formally introduced in the information theoretic setting in [1] by Krishnan et al.
The idea of coded data rebalancing was presented in [1], based on principles similar to the landmark paper on
coded caching [6]. In coded data rebalancing, coded data bits are exchanged between the nodes; the decoding of
the required bits can then be done by using the prior stored data available. In [1], coded data rebalancing schemes
were presented to rectify the data skew and reinstate the replication factor, in case of a single node removal or
addition, for a carefully designed balanced database. The communication loads for these rebalancing schemes were
characterized and shown to offer multiplicative benefits over the communication required for uncoded rebalancing.
Information theoretic converse results for the communication loads were also presented in [1], proving the optimality
of the achievable loads. These results were extended to the setting of decentralized databases in [7], where each
bit of the file is randomly placed in some subset of the K nodes.
While Krishnan et al. [1] present an optimal scheme for the coded data rebalancing problem, the centralized
database design in [1] requires that the number of segments in the data file be a large function of the number
of nodes (denoted by K) in the system. In fact, as K grows, the number of file segments, and thus also the file
size, have to grow exponentially in K. Thus, this scheme would warrant a high level of coordination to construct
the database and perform the rebalancing. Because of these reasons, the scheme in [1] could be impractical in
real-world settings. Motivated by this, in the present work, we study the rebalancing problem for cyclic balanced
databases, in which each segment of the data file is placed in a consecutive set of nodes, in a wrap-around fashion.
For such cyclic placement, the number of segments of the file could be as small as linear in K. Constructing such
cyclic databases and designing rebalancing schemes for them also may not require much coordination owing to the
simplicity of the cyclic placement technique. Such cyclic storage systems have been proposed for use in distributed
systems [8], [9], as well as in recent works on information theoretic approaches to private information retrieval [10]
and distributed computing [11].
We now describe the organization and contributions of this work. Section II sets up the formalism and gives
the main result (Theorem 1) of this work on rebalancing for single node removal or addition in a cyclic balanced
database. Sections III and IV are devoted to proving this main result. In Section III we present a coded data
rebalancing algorithm (Algorithm 1) for node removal in balanced cyclic databases. Algorithm 1 chooses between
two coded transmission schemes (Scheme 1 and Scheme 2) based on the system parameters. Each of the two
schemes has lower communication load than the other in certain parameter regimes determined by the values of
K and the replication factor r. We bound the advantage of our schemes over the uncoded scheme, and show that
the minimum of their communication loads is always strictly smaller than the uncoded rebalancing scheme, which
does not permit coded transmissions. Further, the segmentation required for the scheme is only quadratic in K, and
the size of the file itself is required only to be cubic in K (thus much smaller than that of [1]). In Section IV, we
present a rebalancing scheme for addition of a single node to the cyclic database, and show that its load is optimal.
We conclude this work in Section V with a short discussion on future work.
Notation: For a non-negative integer n, we use [n] to denote the set {1, . . ., n}. We also define [0] ≜ φ. Similarly, for a positive integer n, ⟨n⟩ denotes the set {0, 1, . . ., n − 1}. To describe operations involving wrap-arounds, we define two operators. For positive integers i, j, K such that i, j ≤ K,

i ⊞K j = i + j, if i + j ≤ K; and i ⊞K j = i + j − K, if i + j > K.

Similarly,

i ⊟K j = i − j, if i − j > 0; and i ⊟K j = i − j + K, if i − j ≤ 0.

We also extend these operations to sets. For A ⊆ [K] and i ∈ [K], we use {i ⊞K A} to denote the set {i ⊞K a : a ∈ A}. Similarly, {i ⊟K A} denotes {i ⊟K a : a ∈ A}. For a binary vector X, we use |X| to denote its size. The concatenation of two binary vectors X1 and X2 is denoted by X1|X2.
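To make the wrap-around operators concrete, here is a minimal Python sketch; the helper names `box_plus` and `box_minus` are ours, not the paper's.

```python
def box_plus(i, j, K):
    """Wrap-around addition on {1, ..., K}: i ⊞_K j."""
    return i + j if i + j <= K else i + j - K

def box_minus(i, j, K):
    """Wrap-around subtraction on {1, ..., K}: i ⊟_K j."""
    return i - j if i - j > 0 else i - j + K

def box_plus_set(i, A, K):
    """Set extension {i ⊞_K a : a ∈ A}."""
    return {box_plus(i, a, K) for a in A}
```

Note that both operators land back in {1, . . ., K} rather than {0, . . ., K − 1}, matching the 1-indexed node labels used throughout the paper.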
II. MAIN RESULT: REBALANCING SCHEMES FOR CYCLIC DATABASES
In this section, we give the formal definition of cyclic databases and present our main result (Theorem 1) on
rebalancing schemes for node removal and addition for such databases. Towards that end, we recall the formal
system model and other relevant definitions from [1].
Consider a binary file W consisting of a set of N equal-sized segments, where the i-th segment is denoted by Wi, with |Wi| = T bits. The system consists of K nodes indexed by [K], and each node k ∈ [K] is connected to every other node in [K] \ {k} via a bus link that allows a noise-free broadcast between the nodes. A distributed database of W across the nodes indexed by [K] is a collection of subsets D = {Dk ⊆ {Wi : i ∈ [N]} : k ∈ [K]}, such that ⋃k∈[K] Dk = W, where Dk denotes the set of segments stored at node k. We will denote the set of nodes where Wi is stored as Si. The replication factor of the segment Wi is then |Si|. The distributed database D is said to be r-balanced if |Si| = r, ∀i, and |Dk| = rN/K, ∀k. That is, each segment is stored in r nodes, and each node stores an equal r/K-fraction of the N segments. We may assume without loss of generality that 2 ≤ r ≤ K − 1, since if r = 1 no rebalancing is possible after node removal, and if r = K no rebalancing is required.

When a node k is removed from the r-balanced database, the replication factor of the segments Dk stored in node k drops by one, thus disturbing the ‘balanced’ state of the database. If a new empty node K + 1 is added to the database, once again, the new database is not balanced. To reinstate the balanced state, a rebalancing scheme is initiated. Formally, a rebalancing scheme is a collection of transmissions between the nodes present, such that upon decoding the transmissions, the final database, denoted by D′ (on nodes [K] \ {k} in case of node removal, or on nodes [K + 1] in case of node addition), is another r-balanced database. Let Xk′ = φk′(Dk′) be the transmission by node k′ during the rebalancing phase, where φk′ represents some encoding function. The communication loads of rebalancing (denoted by Lrem(r) for the case of node removal, Ladd(r) for node addition) are then defined as

Lrem(r) = ( Σk′∈[K]\{k} |Xk′| ) / T,

Ladd(r) = ( Σk∈[K] |Xk| ) / T.

The normalized communication loads are then defined as L̄rem(r) = Lrem(r)/N and L̄add(r) = Ladd(r)/N. The optimal normalized communication loads for the node removal and addition scenarios are denoted by L̄*rem(r) and L̄*add(r) respectively. Here, the optimality is by minimization across all possible initial and target (final) databases,
Fig. 1: r-balanced cyclic database on nodes [K]

and all possible rebalancing schemes. In [1], it was shown¹ that L̄*rem(r) ≥ r/(K(r − 1)) and L̄*add(r) ≥ r/(K + 1). Further, schemes for rebalancing were presented for node removal and addition for a carefully designed database which required N = (K + 1)!/r! and T = r − 1, which achieve these optimal loads. Observe that, in these achievable schemes, the file size NT grows (at least) exponentially in K as K grows, for any fixed replication factor r, which is one of the main drawbacks of this result. Therefore, our interest lies in databases where N and T are small. Towards this end, we now define cyclic databases.
Definition 1. A distributed database is an r-balanced cyclic database if N = K and a segment labelled Wi is
stored precisely in the nodes Si = {i ⊞K ⟨r⟩}.
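Definition 1 is easy to instantiate programmatically; the sketch below (hypothetical helper names) builds the cyclic placement and checks the two balance conditions, namely that every segment is replicated r times and every node stores rN/K segments.

```python
def cyclic_placement(K, r):
    """Placement of Definition 1: segment W_i is stored in S_i = {i ⊞_K a : a ∈ ⟨r⟩}."""
    def box_plus(i, j):
        return i + j if i + j <= K else i + j - K
    return {i: {box_plus(i, a) for a in range(r)} for i in range(1, K + 1)}

def is_r_balanced(placement, K, r):
    """Check conditions (A) and (B): |S_i| = r for all i, each node stores rN/K segments."""
    replication_ok = all(len(S) == r for S in placement.values())
    per_node = [sum(k in S for S in placement.values()) for k in range(1, K + 1)]
    return replication_ok and all(c == r * len(placement) // K for c in per_node)
```

Since N = K here, each node stores exactly r of the K segments, which the balance check confirms.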
Fig. 1 depicts an r-balanced cyclic balanced database on K nodes as defined above. In this work, we present
rebalancing schemes for node removal and addition, for such a cyclic database on K nodes. Specifically, we prove
the following result.
Theorem 1. For an r-balanced cyclic database having K nodes and r ∈ {3, . . ., K − 1}, if the segment size T is divisible by 2(K² − 1), then rebalancing schemes for node removal and addition exist which achieve the respective communication loads

Lrem(r) = (K − r)/(K − 1) + min(L1(r), L2(r)),

Ladd(r) = rK/(K + 1),

where L1(r) = (K − r)(2r − 1)/(K − 1) and L2(r) = (1/(2(K − 1))) · ( K(r − 1) + ⌈(r² − 2r)/2⌉ ). Also, the following relationship holds between Lrem(r) and the load Lu(r) of the uncoded rebalancing scheme for node removal:

Lrem(r)/Lu(r) < min( 2 − (r − 1)/(K − 1), 1/2 + 1/(2r) + r/(4(K − 1)) ) < 1.

Further, the rebalancing scheme for node addition is optimal (i.e., Ladd(r)/N = L̄*add(r) = r/(K + 1)).
¹The size of the file in the present work is NT bits; whereas in [1], the notation N represents the file size in bits, thus absorbing both the segmentation and the size of each segment. The definitions of communication loads in [1] are also slightly different, involving a normalization by the storage size of the removed (or added) node. The results of [1] are presented here according to our current notations and definitions.
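The load expressions of Theorem 1 can be evaluated directly. The sketch below (exact rational arithmetic; the function names are ours) computes them; the values at (K, r) = (8, 6) and (6, 3) match the loads 24/7 and 2 obtained in the worked examples of Section III-B.

```python
from fractions import Fraction

def ceil_frac(a, b):
    """Integer ceiling ⌈a/b⌉ without floating point."""
    return -((-a) // b)

def L1(K, r):
    return Fraction((K - r) * (2 * r - 1), K - 1)

def L2(K, r):
    return Fraction(K * (r - 1) + ceil_frac(r * r - 2 * r, 2), 2 * (K - 1))

def L_rem(K, r):
    # Node-removal load of Theorem 1: (K - r)/(K - 1) + min(L1, L2).
    return Fraction(K - r, K - 1) + min(L1(K, r), L2(K, r))

def L_add(K, r):
    # Node-addition load of Theorem 1: rK/(K + 1).
    return Fraction(r * K, K + 1)
```

Dividing L_add(K, r) by N = K recovers the optimal normalized load r/(K + 1) stated in the theorem.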
Remark 1. In the proof of the node removal part of Theorem 1, we assume that the target database is also cyclic. For the case of r = 2, with this target database, our rebalancing scheme does not apply, as coding opportunities do not arise. Hence, we restrict our result to the scenario of r ∈ {3, . . ., K − 1}.
Theorem 1 is proved via Sections III and IV. In Section III, we prove the result in Theorem 1 regarding the node removal scenario. We present a rebalancing algorithm, the core of which is a transmission phase in which coded subsegments are communicated between the nodes. In the transmission phase, the algorithm chooses between two schemes, Scheme 1 and Scheme 2. Scheme 1 has the communication load (K − r)/(K − 1) + L1(r) and Scheme 2 has the load (K − r)/(K − 1) + L2(r). We identify a threshold value for r, denoted rth, beyond which Scheme 1 is found to perform better than Scheme 2, as shown by the following claim; thus the rebalancing algorithm chooses between the two schemes based on whether r ≥ rth or otherwise. The proof of the claim below is in Appendix A.

Claim 1. Let rth = ⌈(2K + 2)/3⌉. If r ≥ rth, then min(L1(r), L2(r)) = L1(r) (thus, Scheme 1 has a smaller load), and if r < rth, we have min(L1(r), L2(r)) = L2(r) (thus, Scheme 2 has a lower communication load).
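Claim 1 can be sanity-checked numerically. The sketch below (our helper names) evaluates L1, L2 and the threshold rth = ⌈(2K + 2)/3⌉, and confirms which scheme wins on either side of the threshold for small K.

```python
from fractions import Fraction

def L1(K, r):
    return Fraction((K - r) * (2 * r - 1), K - 1)

def L2(K, r):
    # (r*r - 2*r + 1) // 2 equals ⌈(r² − 2r)/2⌉ for integer r ≥ 2.
    return Fraction(K * (r - 1) + (r * r - 2 * r + 1) // 2, 2 * (K - 1))

def r_th(K):
    return (2 * K + 4) // 3  # integer form of ⌈(2K + 2)/3⌉

# Scheme 1 (load L1) should win for r ≥ r_th, Scheme 2 (load L2) otherwise.
for K in (8, 15):
    for r in range(3, K):
        if r >= r_th(K):
            assert L1(K, r) <= L2(K, r)
        else:
            assert L2(K, r) <= L1(K, r)
```

For K = 15 this gives r_th = 11, agreeing with the crossover visible in Fig. 2.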
A comparison of these schemes is shown in Fig. 2 for the case of K = 15 (as r varies), along with the load Lu(r) of the uncoded rebalancing scheme, and the lower bound based on the results of [1] (L̄*rem(r) · N = Kr/(K(r − 1)) = r/(r − 1)).
We observe that the minimum file size required in the case of cyclic databases in conjunction with the above
Fig. 2: For K = 15, the figure shows comparisons of communication loads of Scheme 1 and Scheme 2 with the load of
uncoded transmission scheme and the optimal load achieved by the scheme in [1], for varying r. While all curves are relevant
only for r ∈{3, . . ., K − 1}, Scheme 2 is represented using two curves (which are almost superimposed on each other), one
relevant for the even values of r and the other for the odd values. We see that the minimum of the loads of the two schemes
is always less than the uncoded load. Further, for any integer value of r ≥ 11, we see that Scheme 1 has smaller load than
Scheme 2, and the reverse is true otherwise.
schemes is NT = 2(K² − 1)K, i.e., it is cubic in K and thus much smaller than the file size requirement for the schemes of [1].
In Section IV, we show a rebalancing scheme for node addition, as given by Theorem 1. We also calculate the
communication load of this scheme, and show that it is optimal.
III. REBALANCING SCHEMES FOR SINGLE NODE REMOVAL IN CYCLIC DATABASES
In Subsection III-A, we provide some intuition for the rebalancing algorithm, which covers both transmission schemes, Scheme 1 and Scheme 2. Then, in Subsection III-B, we describe how the algorithm works for two example parameter settings (one for Scheme 1, and another for Scheme 2). In Subsection III-C, we formally describe the complete details of the rebalancing algorithm. In Subsection III-D, we prove the correctness of the two transmission schemes and the rebalancing algorithm. In Subsection III-E, we calculate the communication loads of our schemes. Finally, in Subsection III-F, we bound the advantage of our schemes over the uncoded scheme, and show that our schemes perform strictly better, thus completing the arguments for the node-removal part of Theorem 1.
Remark 2. Note that throughout this section, we describe the scheme when the node K is removed from the system.
A scheme for the removal of a general node can be extrapolated easily by permuting the labels of the subsegments.
Further details are provided in Subsection III-C (see Remark 3).
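As one concrete way to realize Remark 2 (an illustrative choice on our part; the formal details are in Subsection III-C), the node labels can be rotated so that the removed node k plays the role of node K:

```python
def relabel(j, k, K):
    """Rotation j ↦ j ⊟_K k, which sends the removed node k to label K.
    Illustrative only; the paper's Subsection III-C gives the formal scheme."""
    return j - k if j - k > 0 else j - k + K
```

Since this map is a rotation, it sends every cyclic interval of node indices to another cyclic interval, so the relabelled database is again an r-balanced cyclic database in which the removed node carries the label K.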
A. Intuition for the Rebalancing Scheme

Consider an r-balanced cyclic database as shown in Fig. 1. Without loss of generality, consider that the node K is removed. Now the segments that were present in node K, i.e., DK = {WK−r+1, . . ., WK}, no longer have replication factor r. In order to restore the replication factor of these segments, we must reinstate each bit of these segments, via rebalancing, into a node where it was not present before. We fix the target database post-rebalancing to also be an r-balanced cyclic database. Recall that Si = {i ⊞K ⟨r⟩} represents the nodes where Wi was placed in the initial database. We represent the K − 1 file segments in this target cyclic database as W̃i : i ∈ [K − 1], and the set of nodes where W̃i is placed is denoted as S̃i = {i ⊞K−1 ⟨r⟩}. This target database is depicted in Fig. 3.

Fig. 3: Target cyclic database on nodes [K − 1]
Our rebalancing algorithm involves three phases: (a) a splitting phase, where the segments in DK are split into subsegments, (b) a transmission phase, in which coded subsegments are transmitted, and (c) a merge phase, where the decoded subsegments are merged with existing segments, and appropriate deletions are carried out, to create the target database. Further, the algorithm will choose one of two transmission schemes, Scheme 1 and Scheme 2, in the transmission phase. Our discussion here pertains to both these schemes.

The design of the rebalancing algorithm is driven by two natural motives: (a) move the subsegments as minimally as possible, and (b) exploit the available coding opportunities. Based on this, we give the three generic principles below.

- Principle 1: The splitting and merging phases are unavoidable for maintaining the balanced nature of the target database and reducing the communication load. In our merging phase, the target segment W̃j : j ∈ [K − 1] is constructed by merging (a) either a subsegment of Wj or the complete Wj, along with (b) some other subsegments of the segments in DK.

- Principle 2: In particular, for each Wi ∈ DK, we seek to split Wi into subsegments and merge these into those W̃j : j ∈ [K − 1] such that |S̃j ∩ Si| is as large as possible, while trying to ensure the balanced condition of the target database. Observe that the maximum cardinality of such an intersection is r − 1. We denote the subsegment of segment Wi which is to be merged into W̃j, and thus to be placed in the nodes S̃j \ Si, as Wi^{S̃j \ Si}. As making |S̃j ∩ Si| large reduces |S̃j \ Si|, we see that this principle reduces the movement of subsegments during rebalancing.

- Principle 3: Because of the structure of the cyclic placement, there exist ‘nice’ subsets of nodes whose indices are separated (cyclically) by K − r, which provide a coding opportunity. In other words, there is a set of subsegments of segments in DK, each of which is present in all-but-one of the nodes in any such ‘nice’ subset, and is to be delivered to the remaining node. Transmitting the XOR of these subsegments ensures successful decoding at the respective nodes they are to be delivered to (given by the subsegments’ superscripts), because of this ‘nice’ structure.
We shall illustrate the third principle, which guides the design of our transmission schemes, via the examples and the algorithm itself. We now elaborate on how the first two principles are reflected in our algorithms. Consider the segment WK−r+1. We call this a corner segment of the removed node K. Following Principles 1 and 2, this segment WK−r+1 will be split into ⌈(K − r + 2)/2⌉ subsegments, out of which one large subsegment is to be merged into W̃K−r+1 (as |SK−r+1 ∩ S̃K−r+1| = r − 1). In order to maintain a balanced database, the remaining ⌈(K − r + 2)/2⌉ − 1 are to be merged into the ⌊(K − r)/2⌋ target segments W̃⌈(K−r)/2⌉+1, . . ., W̃K−r, and additionally into the segment W̃(K−r+1)/2 if K − r is odd. The other corner segment of K is WK, for which a similar splitting is followed. One large subsegment of WK will be merged into W̃K−1 (again, as |SK ∩ S̃K−1| = r − 1) and the remaining ⌈(K − r + 2)/2⌉ − 1 will be merged into the ⌊(K − r)/2⌋ target segments W̃1, . . ., W̃⌊(K−r)/2⌋, and additionally into the segment W̃(K−r+1)/2 if K − r is odd.

Now, consider the segment WK−r+2. This is not a corner segment; hence we refer to it as a middle segment. It was available in the nodes SK−r+2 = {K − r + 2, . . ., K, 1}. Following Principles 1 and 2, this segment WK−r+2 will be split into two subsegments: one to be merged into W̃K−r+1 (for which S̃K−r+1 = {K − r + 1, . . ., K − 1, 1}) and the other into W̃K−r+2 (for which S̃K−r+2 = {K − r + 2, . . ., K − 1, 1, 2}). Observe that |SK−r+2 ∩ S̃K−r+1| = r − 1 and |SK−r+2 ∩ S̃K−r+2| = r − 1. In the same way, each middle segment WK−r+i+1 : i ∈ [r − 2] is split into two subsegments, which will be merged into W̃K−r+i and W̃K−r+i+1 respectively.
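The subsegment counts implied by this splitting rule can be tabulated mechanically; the sketch below (our helper names) reproduces the counts that appear in the two worked examples of Section III-B.

```python
def corner_subsegments(K, r):
    """A corner segment (W_{K-r+1} or W_K) is split into ⌈(K - r + 2)/2⌉ subsegments."""
    return (K - r + 2 + 1) // 2  # integer ceiling of (K - r + 2)/2

def middle_subsegments():
    """Each of the r - 2 middle segments is split into exactly two subsegments."""
    return 2

def total_subsegments(K, r):
    # Two corner segments plus r - 2 middle segments make up the removed node's D_K.
    return 2 * corner_subsegments(K, r) + (r - 2) * middle_subsegments()
```

For (K, r) = (8, 6) each corner splits in two (12 subsegments in total), while for (6, 3) each corner splits in three (8 subsegments in total), matching the examples below.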
B. Examples
We now provide two examples illustrating our rebalancing algorithm, one corresponding to each of the two transmission schemes.
Example illustrating Scheme 1: Consider a database with K = 8 nodes satisfying the r-balanced cyclic storage condition with replication factor r = 6. A file W is thus split into segments W1, . . ., W8 such that each segment Wi is stored in the nodes Si = {i ⊞8 ⟨6⟩}; for instance, W1 is stored in nodes {1, 2, 3, 4, 5, 6} and W8 in nodes {8, 1, 2, 3, 4, 5}.
Node 8 is removed from the system and its contents, namely W3, W4, W5, W6, W7, W8, must be restored. The
rebalancing algorithm performs the following steps.
Splitting: The splitting is guided by Principles 1 and 2. Each node splits the segments it contains into subsegments as follows:

- W3 is a corner segment with respect to the removed node 8. Thus, it is split into two subsegments. The larger is labelled W3^{1} and is of size 12T/14. This is to be merged into W̃3 since |S3 ∩ S̃3| = 5 = (r − 1). The other subsegment is labelled W3^{2} and is of size 2T/14. As before, the idea is to merge this into W̃2 to maintain a balanced database.
- The other corner segment W8 is handled similarly. It is split into two subsegments labelled W8^{7} and W8^{6} of sizes 12T/14 and 2T/14 respectively.
- W4 is a middle segment for node 8. It is split into two subsegments labelled W4^{2} and W4^{3} of sizes 10T/14 and 4T/14 respectively. The intent once again is to merge W4^{2} into W̃4 and W4^{3} into W̃3, since both |S4 ∩ S̃3| = |S4 ∩ S̃4| = 5 = (r − 1). The remaining middle segments are treated similarly.
- W5 is split into two subsegments labelled W5^{3} and W5^{4} of sizes 8T/14 and 6T/14 respectively.
- W6 is split into two subsegments labelled W6^{4} and W6^{5} of sizes 6T/14 and 8T/14 respectively.
- W7 is split into two subsegments labelled W7^{5} and W7^{6} of sizes 4T/14 and 10T/14 respectively.

The superscript represents the set of nodes to which the subsegment is to be delivered.
Coding and Transmission: Now, to deliver these subsegments, the nodes make use of coded broadcasts. The design of these broadcasts is guided by Principle 3. We elucidate the existence of the ‘nice’ subsets given in Principle 3 using a matrix form (referred to as matrix M) in Fig. 4. We note that this representation is similar to the combinatorial structure defined for coded caching in [12], called a placement delivery array. Consider a submatrix of M described by distinct rows i1, . . ., il and distinct columns j1, . . ., jl+1. If this submatrix is equivalent to the l × (l + 1) matrix

s ∗ · · · ∗ ∗
∗ s · · · ∗ ∗
⋮ ⋮ ⋱ ⋮ ⋮
∗ ∗ · · · s ∗        (1)
Fig. 4: The matrix M for K = 8, r = 6. The rows correspond to subsegments and the columns correspond to nodes. Entry Mi,j = ‘∗’ if the i-th subsegment is contained in the j-th node, and Mi,j = ‘s’ if the i-th subsegment must be delivered to the j-th node. For each shape enclosing a set of entries, the rows and columns corresponding to the entries with that shape give a valid XOR-coded transmission.
up to some row/column permutation, then each of the nodes j1, . . ., jl can decode its required subsegment from the XOR of the i1-th, . . ., il-th subsegments, which can be broadcast by the node jl+1. Our rebalancing algorithm makes use of this property to design the transmissions. To denote the submatrices, we make use of shapes enclosing each requirement (represented using an ‘s’) in the matrix. For each shape, the rows and columns corresponding to each ‘s’ result in a XOR-coded transmission. Before such XOR-coding, padding the ‘shorter’ subsegments with 0s to match the length of the longest subsegment is required. There are other ‘s’ entries in the matrix M which are not covered by submatrices of type (1). These correspond to uncoded broadcasts. Thus, we get the following transmissions from the matrix:

- Node 1 pads W6^{5} and W4^{3} to size 12T/14 and broadcasts W8^{7} ⊕ W6^{5} ⊕ W4^{3}.
- Similarly, Node 7 pads W5^{3} and W7^{5} and broadcasts W3^{1} ⊕ W5^{3} ⊕ W7^{5}.
- Node 1 pads W5^{4} to size 10T/14 and broadcasts W5^{4} ⊕ W7^{6}.
- Similarly, Node 7 pads W6^{4} and broadcasts W4^{2} ⊕ W6^{4}.
- Finally, Node 1 broadcasts W8^{6} and Node 7 broadcasts W3^{2}.

The total communication load incurred in performing these broadcasts is (1/T)(2 · 12T/14 + 2 · 10T/14 + 2 · 2T/14) = 24/7.
Decoding: The uncoded subsegments are directly received by the respective nodes. The nodes present in the superscripts of a XORed transmission proceed to decode their respective required subsegments as follows.

- From the transmission W8^{7} ⊕ W6^{5} ⊕ W4^{3}, Node 7 contains W6 and W4 and can hence recover W8^{7} by XORing away the other subsegments. Similarly, Nodes 3 and 5 can recover W4^{3} and W6^{5} respectively.
- From the transmission W3^{1} ⊕ W5^{3} ⊕ W7^{5}, Node 1 contains W5 and W7 and can recover W3^{1}. Similarly, Nodes 3 and 5 can recover W5^{3} and W7^{5} respectively.
- From the broadcast W5^{4} ⊕ W7^{6}, Node 4 contains W7 and can hence recover W5^{4}. Similarly, Node 6 can recover W7^{6}.
- From the broadcast W4^{2} ⊕ W6^{4}, Node 2 contains W6 and can hence recover W4^{2}. Similarly, Node 4 can recover W6^{4}.
Merging and Relabelling: To restore the cyclic storage condition, each node k ∈ [K − 1] merges and relabels all segments that must be stored in it in the final database. These are W̃j for j ∈ {k ⊟7 ⟨6⟩}.

- W̃1 = W1|W8^{6} of size T + 2T/14 = 8T/7 is obtained at nodes {1, 2, 3, 4, 5, 6}.
- W̃2 = W2|W3^{2} of size T + 2T/14 = 8T/7 is obtained at nodes {2, 3, 4, 5, 6, 7}.
- W̃3 = W3^{1}|W4^{3} of size 12T/14 + 4T/14 = 8T/7 is obtained at nodes {3, 4, 5, 6, 7, 1}.
- W̃4 = W4^{2}|W5^{4} of size 10T/14 + 6T/14 = 8T/7 is obtained at nodes {4, 5, 6, 7, 1, 2}.
- W̃5 = W5^{3}|W6^{5} of size 8T/14 + 8T/14 = 8T/7 is obtained at nodes {5, 6, 7, 1, 2, 3}.
- W̃6 = W6^{4}|W7^{6} of size 6T/14 + 10T/14 = 8T/7 is obtained at nodes {6, 7, 1, 2, 3, 4}.
- W̃7 = W7^{5}|W8^{7} of size 4T/14 + 12T/14 = 8T/7 is obtained at nodes {7, 1, 2, 3, 4, 5}.

After merging and relabelling, each node keeps only the required segments mentioned previously and discards any extra data present. Since each node now stores 6 segments, each of size 8T/7, the total data stored is still 48T = rNT. Thus, the cyclic storage condition is satisfied.
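The pad-and-XOR step of the transmission phase can be simulated directly. The sketch below picks T = 14 bits and an assumed bit layout for the subsegments (the example fixes only their sizes, not which bits go where), and verifies that node 7 recovers W8^{7} from the coded broadcast.

```python
import random

# Scheme 1 example parameters: K = 8, r = 6, removed node 8, T = 14 bits.
T = 14
random.seed(1)
W4 = [random.randint(0, 1) for _ in range(T)]
W6 = [random.randint(0, 1) for _ in range(T)]
W8 = [random.randint(0, 1) for _ in range(T)]

# Assumed layout: which bits form which subsegment is our choice for this sketch.
W8_7 = W8[:12]   # W8^{7}: 12T/14 bits, wanted by node 7
W6_5 = W6[-8:]   # W6^{5}: 8T/14 bits, wanted by node 5
W4_3 = W4[-4:]   # W4^{3}: 4T/14 bits, wanted by node 3

def xor_padded(*subsegments):
    """Zero-pad all subsegments to the longest length and XOR them bitwise."""
    n = max(len(s) for s in subsegments)
    out = [0] * n
    for s in subsegments:
        for i, bit in enumerate(s):
            out[i] ^= bit
    return out

# Node 1 (which stores W4, W6 and W8) broadcasts W8^{7} ⊕ W6^{5} ⊕ W4^{3}.
broadcast = xor_padded(W8_7, W6_5, W4_3)

# Node 7 stores W4 and W6, so it regenerates W6^{5} and W4^{3} and cancels them.
recovered_at_node_7 = xor_padded(broadcast, W6_5, W4_3)
assert recovered_at_node_7 == W8_7
```

The same cancellation argument applies symmetrically at nodes 3 and 5 for their required subsegments.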
Example illustrating Scheme 2: Consider a database with K = 6 nodes satisfying the r-balanced cyclic storage
condition with replication factor r = 3. A file W is split into segments W1, . . ., W6 such that the segment W1 is
stored in nodes 1, 2 and 3, W2 in nodes 2, 3 and 4, W3 in nodes 3, 4 and 5, W4 in nodes 4, 5 and 6, W5 in nodes
5, 6 and 1, and W6 in nodes 6, 1 and 2.
Suppose the node 6 is removed from the system. The contents of node 6, namely W4, W5, W6 must be restored.
To do so, the rebalancing algorithm performs the following steps.
Splitting: Again, each node that contains these segments splits them into subsegments as per Principles 1 and 2.

- W4 is a corner segment for the removed node 6. It is split into three subsegments. The largest is labelled W4^{1} and is of size 7T/10. This subsegment is to be merged into W̃4 since |S4 ∩ S̃4| = 2 = (r − 1). Observe that the superscript of W4^{1} represents the set of nodes to which the subsegment is to be delivered, i.e., S̃4 \ S4. The remaining two subsegments are labelled W4^{3} and W4^{2,3} and are of sizes 2T/10 and T/10 respectively. In order to maintain a balanced database, these are to be merged into W̃3 and W̃2 respectively.
- Similarly, the other corner segment W6 is split into three subsegments labelled W6^{5}, W6^{3} and W6^{3,4} of sizes 7T/10, 2T/10 and T/10, which are to be merged with W̃5, W̃1 and W̃2 respectively.
- W5 is a middle segment of node 6. It is split into two subsegments labelled W5^{2} and W5^{4} of size 5T/10 each. W5^{2} is to be merged into W̃5, and W5^{4} into W̃4, since both |S5 ∩ S̃5| = |S5 ∩ S̃4| = 2 = (r − 1).
Coding and Transmission: As before, we make use of the placement matrix shown in Fig. 5 to explain how nodes perform coded broadcasts as per Principle 3. Consider the submatrix denoted by the circles in Fig. 5, described by columns 1, 4, 5 and rows 1, 3: each of the subsegments corresponding to these rows is present in all but one of the nodes corresponding to these columns. Further, node 5 contains both of these
Fig. 5: Matrix M for the K = 6, r = 3 case. The rows of this matrix M correspond to subsegments and the columns correspond to nodes. Entry Mi,j = ‘∗’ if the i-th subsegment is contained in the j-th node, and Mi,j = ‘s’ if the i-th subsegment must be delivered to the j-th node. For each shape enclosing a set of entries, the rows and columns corresponding to the entries with that shape lead to a XOR-coded transmission.
subsegments, and thus node 5 can broadcast their XOR, from which each of the nodes 1 and 4 can recover its respective subsegment denoted by the rows. Further, those ‘s’ entries in the matrix not covered by any shape lead to uncoded broadcasts. Following these ideas, we get the following transmissions.

- Node 1 pads W5^{2} to size 7T/10 and broadcasts W5^{2} ⊕ W6^{5}.
- Similarly, Node 4 pads W5^{4} and broadcasts W4^{1} ⊕ W5^{4}.
- Finally, Node 1 broadcasts W6^{3} and W6^{3,4}, and Node 4 broadcasts W4^{3} and W4^{2,3}.

The total communication load incurred in performing these broadcasts is (1/T)(2 · 7T/10 + 2 · T/10 + 2 · 2T/10) = 2.
Decoding: The uncoded broadcast subsegments are received as-is by the superscript nodes. With respect to any XOR-coded transmission, the nodes present in the superscripts of the involved subsegments can decode their required subsegments, due to the careful design of the broadcasts as per Principle 3. For this example, we have the following.

- From the transmission W5^{2} ⊕ W6^{5}, node 2 can decode W5^{2} by XORing away W6^{5}, and similarly node 5 can decode W6^{5}.
- Similarly, from W4^{1} ⊕ W5^{4}, nodes 1 and 4 can recover W4^{1} and W5^{4} respectively.
Merging and Relabelling: To restore the cyclic storage condition, we merge and relabel the subsegments to form $\tilde{W}_1, \ldots, \tilde{W}_5$. Each node $k \in [K-1]$ merges and relabels all segments that must be stored in it in the final database. These are $\tilde{W}_j : j \in \{k \boxminus_5 \langle 3 \rangle\}$.
- Observe that $W_1$ and $W_6^{\{3\}}$ are available at nodes $\{1, 2, 3\}$, from either the prior storage or due to decoding. Thus, the segment $\tilde{W}_1 = W_1 | W_6^{\{3\}}$ of size $T + \frac{2T}{10} = \frac{6T}{5}$ is obtained and stored at nodes $\{1, 2, 3\}$. Similarly, we have the other merge operations as follows.
- $\tilde{W}_2 = W_2 | W_4^{\{2,3\}} | W_6^{\{3,4\}}$ of size $T + \frac{T}{10} + \frac{T}{10} = \frac{6T}{5}$ is obtained at nodes $\{2, 3, 4\}$.
- $\tilde{W}_3 = W_3 | W_4^{\{3\}}$ of size $T + \frac{2T}{10} = \frac{6T}{5}$ is obtained at nodes $\{3, 4, 5\}$.
- $\tilde{W}_4 = W_4^{\{1\}} | W_5^{\{4\}}$ of size $\frac{7T}{10} + \frac{5T}{10} = \frac{6T}{5}$ is obtained at nodes $\{4, 5, 1\}$.
- $\tilde{W}_5 = W_5^{\{2\}} | W_6^{\{5\}}$ of size $\frac{5T}{10} + \frac{7T}{10} = \frac{6T}{5}$ is obtained at nodes $\{5, 1, 2\}$.
After merging and relabelling, each node keeps only the required segments mentioned previously and discards any extra data present. Since each node now stores 3 segments, each of size $\frac{6T}{5}$, the total data stored is still $18T$, the same as before rebalancing. Thus the cyclic storage condition is satisfied.
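The bookkeeping of this example can be verified mechanically. The following sketch (our own helper, with sizes in units of $T$) checks that the broadcasts above incur a total load of exactly 2 and that every merged target segment has size $\frac{6T}{5}$:

```python
from fractions import Fraction

T = Fraction(1)  # segment size, normalized

# Subsegment sizes from the example, keyed by (segment index, superscript set).
sz = {
    (4, frozenset({1})): Fraction(7, 10) * T,
    (4, frozenset({3})): Fraction(2, 10) * T,
    (4, frozenset({2, 3})): Fraction(1, 10) * T,
    (5, frozenset({2})): Fraction(5, 10) * T,
    (5, frozenset({4})): Fraction(5, 10) * T,
    (6, frozenset({3})): Fraction(2, 10) * T,
    (6, frozenset({3, 4})): Fraction(1, 10) * T,
    (6, frozenset({5})): Fraction(7, 10) * T,
}

# An XOR broadcast costs the size of the larger (zero-padded) operand.
coded = [((5, frozenset({2})), (6, frozenset({5}))),
         ((4, frozenset({1})), (5, frozenset({4})))]
uncoded = [(6, frozenset({3})), (6, frozenset({3, 4})),
           (4, frozenset({3})), (4, frozenset({2, 3}))]
load = sum(max(sz[a], sz[b]) for a, b in coded) + sum(sz[u] for u in uncoded)
assert load / T == 2  # matches the load computed above

# Merged target segments, exactly as in the list above.
merged = [
    T + sz[(6, frozenset({3}))],                                  # W~1
    T + sz[(4, frozenset({2, 3}))] + sz[(6, frozenset({3, 4}))],  # W~2
    T + sz[(4, frozenset({3}))],                                  # W~3
    sz[(4, frozenset({1}))] + sz[(5, frozenset({4}))],            # W~4
    sz[(5, frozenset({2}))] + sz[(6, frozenset({5}))],            # W~5
]
assert all(m == Fraction(6, 5) * T for m in merged)
```

Exact rational arithmetic (`fractions.Fraction`) is used so that the size checks are exact rather than floating-point approximations.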
C. Algorithm
In this section, following the intuition built in Subsection III-A, we give our complete rebalancing algorithm (Algorithm 1) for the removal of a node from an $r$-balanced cyclic database on $K$ nodes (with $r \in \{2, \ldots, K-1\}$). We thus prove the node-removal result in Theorem 1. Algorithm 1 initially invokes the SPLIT routine (described in Algorithm 2), which gives the procedure to split segments into subsegments. Each subsegment's size is assumed to be an integral multiple of $\frac{1}{2(K-1)}$. This is without loss of generality, by the condition on the size $T$ of each segment as in Theorem 1. This splitting scheme is also illustrated in Figs. 6-8. Guided by Claim 1, based on the value of $r$, Algorithm 1 selects between two routines that correspond to the two transmission schemes: SCHEME 1 and SCHEME 2. These schemes are given in Algorithms 3 and 4. We note that, since the sizes of the subsegments may not be the same after splitting, appropriate zero-padding (up to the size of the larger subsegment) is done before the XOR operations are performed in the two schemes. Finally, the MERGE routine (given in Algorithm 5) is run at the end of Algorithm 1. This merges the subsegments and relabels the merged segments as the target segments, thus resulting in the target $r$-balanced cyclic database on $K-1$ nodes.
Remark 3. Note that, for ease of understanding, we describe the algorithms for the removal of node $K$ from the system. The scheme for the removal of a general node $i$ can be obtained as follows. Consider the set of permutations $\varphi_i : [K] \to [K]$, where $\varphi_i(j) = j \boxminus_K (K-i)$, for $i \in [K]$. If a node $i$ is removed instead of $K$, we replace each label $j$ in the subscript and superscript of the subsegments in our Algorithms 2-5 with $\varphi_i(j)$. Due to the cyclic nature of both the input and target databases, we naturally obtain the rebalancing scheme for the removal of node $i$.
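This relabelling can be sanity-checked with a small sketch (the function names are ours), using the convention that $a \boxminus_K b$ denotes modular subtraction with representatives in $\{1, \ldots, K\}$:

```python
def boxminus(a, b, K):
    """a ⊟_K b: subtraction modulo K, with representatives in {1, ..., K}."""
    return (a - b - 1) % K + 1

def phi(i, K):
    """Relabelling permutation phi_i(j) = j ⊟_K (K - i) from Remark 3."""
    return {j: boxminus(j, K - i, K) for j in range(1, K + 1)}

K = 6
# phi_K is the identity, so removing node K recovers the original scheme.
assert phi(K, K) == {j: j for j in range(1, K + 1)}
# Each phi_i is a bijection on [K] that maps label K to the removed node i.
for i in range(1, K + 1):
    perm = phi(i, K)
    assert sorted(perm.values()) == list(range(1, K + 1))
    assert perm[K] == i
```

In particular, the label $K$ of the generically removed node is mapped to the actually removed node $i$, which is exactly what the relabelling in Remark 3 requires.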
Algorithm 1 Rebalancing Scheme for Node Removal from Cyclic Database
1: procedure TRANSMIT
2: SPLIT() ⊲ Call SPLIT
3: if $r \geq r_{th} = \lceil \frac{2K+2}{3} \rceil$ then
4: SCHEME 1() ⊲ Call SCHEME 1
5: else
6: SCHEME 2() ⊲ Call SCHEME 2
7: end if
8: MERGE() ⊲ Call MERGE
9: end procedure
Fig. 6: For each $i \in [r-2]$, $W_{K-r+1+i}$ is split into two parts labelled $W_{K-r+1+i}^{\{i+1\}}$ and $W_{K-r+1+i}^{\{i+K-r\}}$, of sizes $\frac{K+r-2i-2}{2(K-1)}$ and $\frac{K-r+2i}{2(K-1)}$ respectively.
Fig. 7: Let $p = \lfloor \frac{K-r}{2} \rfloor$. When $K-r$ is odd, $W_{K-r+1}$ is split into $p+2$ parts labelled $W_{K-r+1}^{\{1\}}$, $W_{K-r+1}^{\{(K-r-p) \boxplus_{K-1} \langle \min(r,\,p+1) \rangle\}}$, and $W_{K-r+1}^{\{(K-r+1-j) \boxplus_{K-1} \langle \min(r,\,j) \rangle\}}$ for $j = 1, \ldots, p$; of sizes $\frac{K+r-2}{2(K-1)}, \frac{1}{2(K-1)}, \frac{2}{2(K-1)}, \ldots, \frac{2}{2(K-1)}$. Similarly, $W_K$ is split into $p+2$ parts labelled $W_K^{\{K-1\}}$, $W_K^{\{(r+p) \boxminus_{K-1} \langle \min(r,\,p+1) \rangle\}}$, and $W_K^{\{(r-1+j) \boxminus_{K-1} \langle \min(r,\,j) \rangle\}}$ for $j = 1, \ldots, p$; of sizes $\frac{K+r-2}{2(K-1)}, \frac{1}{2(K-1)}, \frac{2}{2(K-1)}, \ldots, \frac{2}{2(K-1)}$ respectively.
Fig. 8: Let $p = \lfloor \frac{K-r}{2} \rfloor$. When $K-r$ is even, $W_{K-r+1}$ is split into $p+1$ parts labelled $W_{K-r+1}^{\{1\}}$ and $W_{K-r+1}^{\{(K-r+1-j) \boxplus_{K-1} \langle \min(r,\,j) \rangle\}}$ for $j = 1, \ldots, p$; of sizes $\frac{K+r-2}{2(K-1)}, \frac{2}{2(K-1)}, \ldots, \frac{2}{2(K-1)}$. Similarly, $W_K$ is split into $p+1$ parts labelled $W_K^{\{K-1\}}$ and $W_K^{\{(r-1+j) \boxminus_{K-1} \langle \min(r,\,j) \rangle\}}$ for $j = 1, \ldots, p$; of sizes $\frac{K+r-2}{2(K-1)}, \frac{2}{2(K-1)}, \ldots, \frac{2}{2(K-1)}$ respectively.
Algorithm 2 Splitting Scheme
1: procedure SPLIT
2: for each $i \in [r-2]$ do
3: Split $W_{K-r+1+i}$ into subsegments labelled $W_{K-r+1+i}^{B}$ for $B \in \{\{i+1\}, \{i+K-r\}\}$, where the size of the subsegment is $\frac{K+r-2i-2}{2(K-1)}$ if $B = \{i+1\}$, and $\frac{K-r+2i}{2(K-1)}$ if $B = \{i+K-r\}$
4: end for
5: if $K-r$ is odd then
6: Let $p = \lfloor \frac{K-r}{2} \rfloor$
7: Split $W_{K-r+1}$ into $p+2$ subsegments labelled $W_{K-r+1}^{B}$, for $B \in \{\{1\}, \{(K-r-p) \boxplus_{K-1} \langle \min(r,\,p+1) \rangle\}\} \cup \{\{(K-r+1-j) \boxplus_{K-1} \langle \min(r,\,j) \rangle\} : j \in [p]\}$, where the size of the subsegment is $\frac{K+r-2}{2(K-1)}$ if $B = \{1\}$, $\frac{1}{2(K-1)}$ if $B = \{(K-r-p) \boxplus_{K-1} \langle \min(r,\,p+1) \rangle\}$, and $\frac{2}{2(K-1)}$ otherwise
8: Split $W_K$ into $p+2$ subsegments labelled $W_K^{B}$, for $B \in \{\{K-1\}, \{(r+p) \boxminus_{K-1} \langle \min(r,\,p+1) \rangle\}\} \cup \{\{(r-1+j) \boxminus_{K-1} \langle \min(r,\,j) \rangle\} : j \in [p]\}$, where the size of the subsegment is $\frac{K+r-2}{2(K-1)}$ if $B = \{K-1\}$, $\frac{1}{2(K-1)}$ if $B = \{(r+p) \boxminus_{K-1} \langle \min(r,\,p+1) \rangle\}$, and $\frac{2}{2(K-1)}$ otherwise
9: else
10: Let $p = \frac{K-r}{2}$
11: Split $W_{K-r+1}$ into $p+1$ subsegments labelled $W_{K-r+1}^{B}$, for $B \in \{\{1\}\} \cup \{\{(K-r+1-j) \boxplus_{K-1} \langle \min(r,\,j) \rangle\} : j \in [p]\}$, where the size of the subsegment is $\frac{K+r-2}{2(K-1)}$ if $B = \{1\}$, and $\frac{2}{2(K-1)}$ otherwise
12: Split $W_K$ into $p+1$ subsegments labelled $W_K^{B}$, for $B \in \{\{K-1\}\} \cup \{\{(r-1+j) \boxminus_{K-1} \langle \min(r,\,j) \rangle\} : j \in [p]\}$, where the size of the subsegment is $\frac{K+r-2}{2(K-1)}$ if $B = \{K-1\}$, and $\frac{2}{2(K-1)}$ otherwise
13: end if
14: end procedure
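As a consistency check on SPLIT, the sketch below (our own helper, with segment sizes normalized to 1) generates the subsegment sizes per Figs. 6-8 and confirms that the pieces of every split segment add back up to a full segment:

```python
from fractions import Fraction

def split_sizes(K, r):
    """Subsegment sizes produced by SPLIT for each split segment,
    following Figs. 6-8 (segment size normalized to 1)."""
    out = {}
    denom = 2 * (K - 1)
    # Fig. 6: W_{K-r+1+i} for i in [r-2] splits into two parts.
    for i in range(1, r - 1):
        out[K - r + 1 + i] = [Fraction(K + r - 2 * i - 2, denom),
                              Fraction(K - r + 2 * i, denom)]
    p = (K - r) // 2
    if (K - r) % 2 == 1:  # Fig. 7: p + 2 parts each for W_{K-r+1} and W_K
        parts = [Fraction(K + r - 2, denom), Fraction(1, denom)] \
                + [Fraction(2, denom)] * p
    else:                 # Fig. 8: p + 1 parts each
        parts = [Fraction(K + r - 2, denom)] + [Fraction(2, denom)] * p
    out[K - r + 1] = parts
    out[K] = list(parts)
    return out

# The pieces of every split segment reassemble to the full segment.
for K in range(4, 12):
    for r in range(3, K):
        assert all(sum(v) == 1 for v in split_sizes(K, r).values())
```

For $K = 6$, $r = 3$ this reproduces the sizes of the running example: $W_6$ splits into parts of sizes $\frac{7}{10}, \frac{1}{10}, \frac{2}{10}$.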
Algorithm 3 Transmission Scheme 1
1: procedure SCHEME 1
2: for each $i = 1, \ldots, K-r$ do
3: Node 1 broadcasts $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K+1-i-j(K-r)}^{\{K-i-j(K-r)\}}$
4: Node $K-1$ broadcasts $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K-r+i+j(K-r)}^{\{i+j(K-r)\}}$
5: end for
6: Node 1 broadcasts all subsegments of $W_K$ except $W_K^{\{K-1\}}$
7: Node $K-1$ broadcasts all subsegments of $W_{K-r+1}$ except $W_{K-r+1}^{\{1\}}$
8: end procedure
Algorithm 4 Transmission Scheme 2
1: procedure SCHEME 2
2: for each $i = 2, \ldots, r-1$ do
3: Node 1 broadcasts $W_{K-r+i}^{\{i\}} \oplus W_{K-r+i+1}^{\{K-r+i\}}$
4: end for
5: Node $K-1$ broadcasts $W_{K-r+1}^{\{1\}} \oplus W_{K-r+2}^{\{K-r+1\}}$
6: Node 1 broadcasts all subsegments of $W_K$ except $W_K^{\{K-1\}}$
7: Node $K-1$ broadcasts all subsegments of $W_{K-r+1}$ except $W_{K-r+1}^{\{1\}}$
8: end procedure
Algorithm 5 Merging and Relabelling
1: procedure MERGE
2: for each $i = 1, \ldots, r-1$ do
3: Each node in $\{(K-r+i) \boxplus_{K-1} \langle r \rangle\}$ performs the concatenation $\tilde{W}_{K-r+i} = W_{K-r+i}^{\{i\}} \,|\, W_{K-r+i+1}^{\{K-r+i\}}$
4: end for
5: if $K-r$ is even then
6: for each $i = 1, \ldots, \frac{K-r}{2}$ do
7: Each node in $\{i \boxplus_{K-1} \langle r \rangle\}$ performs the concatenation $\tilde{W}_i = W_i \,|\, W_K^{\{(r-1+i) \boxminus_{K-1} \langle \min(r,\,i) \rangle\}}$
8: end for
9: for each $i = \frac{K-r}{2}+1, \ldots, K-r$ do
10: Each node in $\{i \boxplus_{K-1} \langle r \rangle\}$ performs the concatenation $\tilde{W}_i = W_i \,|\, W_{K-r+1}^{\{i \boxplus_{K-1} \langle \min(r,\,K-r-i+1) \rangle\}}$
11: end for
12: else
13: for each $i = 1, \ldots, \frac{K-r-1}{2}$ do
14: Each node in $\{i \boxplus_{K-1} \langle r \rangle\}$ performs the concatenation $\tilde{W}_i = W_i \,|\, W_K^{\{(r-1+i) \boxminus_{K-1} \langle \min(r,\,i) \rangle\}}$
15: end for
16: for each $i = \frac{K-r+1}{2}+1, \ldots, K-r$ do
17: Each node in $\{i \boxplus_{K-1} \langle r \rangle\}$ performs the concatenation $\tilde{W}_i = W_i \,|\, W_{K-r+1}^{\{i \boxplus_{K-1} \langle \min(r,\,K-r-i+1) \rangle\}}$
18: end for
19: Each node in $\{\frac{K-r+1}{2} \boxplus_{K-1} \langle r \rangle\}$ performs the concatenation $\tilde{W}_{\frac{K-r+1}{2}} = W_{\frac{K-r+1}{2}} \,|\, W_K^{\{(r+p) \boxminus_{K-1} \langle \min(r,\,p+1) \rangle\}} \,|\, W_{K-r+1}^{\{(K-r-p) \boxplus_{K-1} \langle \min(r,\,p+1) \rangle\}}$, where $p = \lfloor \frac{K-r}{2} \rfloor$
20: end if
21: end procedure
Note: Once the target segments $\tilde{W}_1, \ldots, \tilde{W}_{K-1}$ are recovered at the required nodes, any extra bits present at the node are discarded.
D. Correctness
To check the correctness of the scheme, we have to check the correctness of the encoding, the decoding, and the merging. It is straightforward to check that the nodes that broadcast any transmission, whether of coded or uncoded subsegments, contain all respective subsegments according to the design of the initial storage. Thus, the XOR-coding and broadcasts given in the transmission schemes are correct. For checking the decoding, we must check that each subsegment can be decoded at the corresponding 'superscript' nodes where it is meant to be delivered. We must also check that the merging scheme is successful, i.e., at any node, all the subsegments to be merged into a target segment are available at that node. Finally, we check that the target database is the cyclic database on $K-1$ nodes.

Now, we focus on checking the decoding of the transmissions in both Scheme 1 and Scheme 2. Clearly, all uncoded transmissions are directly received. Thus, we now check only the decoding involved for XOR-coded transmissions, for the two schemes.
- Decoding for Scheme 1: For each $i \in [K-r]$, two broadcasts $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K+1-i-j(K-r)}^{\{K-i-j(K-r)\}}$ and $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K-r+i+j(K-r)}^{\{i+j(K-r)\}}$ are made. Consider the first broadcast. Let $J = \{0, \ldots, \lfloor \frac{r-1-i}{K-r} \rfloor\}$. For some $j \in J$, consider the segment $W_{K+1-i-j(K-r)}$. For any $j' \in J \setminus \{j\}$, we claim that node $K-i-j'(K-r)$ contains the segment $W_{K+1-i-j(K-r)}$. Going through all possible $j, j'$ would then mean that all the segments in this first XOR-coded broadcast can be decoded at the respective superscript-nodes. Now, for node $K-i-j'(K-r)$ to contain the segment $W_{K+1-i-j(K-r)}$, the following condition must be satisfied:
– Condition (A): $K-i-j'(K-r) \in S_{K+1-i-j(K-r)} = \{(K+1-i-j(K-r)) \boxplus_K \langle r \rangle\}$.
To remove the wrap-around, we simplify Condition (A) into two cases based on the relation between $j$ and $j'$. For Condition (A) to hold, it is easy to check that one of the following pairs of inequalities must hold.
1) if $j < j'$: $K+1-i-j(K-r) \leq 2K-i-j'(K-r) \leq K+1-i-j(K-r)+r-1$
2) if $j > j'$: $K+1-i-j(K-r) \leq K-i-j'(K-r) \leq K+1-i-j(K-r)+r-1$.
Consider the first pair of inequalities. First we prove that when $j < j'$, $K+1-i-j(K-r) \leq 2K-i-j'(K-r)$. To show this, we consider the following sequence of equations:
$$(2K-i-j'(K-r)) - (K+1-i-j(K-r)) = K-1-(j'-j)(K-r) \overset{(a)}{\geq} K-1-\left\lfloor \tfrac{r-1-i}{K-r} \right\rfloor (K-r) \geq K-1-(r-1-i) = K-r+i \overset{(b)}{\geq} K-r+1 \overset{(c)}{\geq} K-(K-1)+1 \geq 0,$$
where (a) holds as the maximum value of $j'-j$ is equal to $\lfloor \frac{r-1-i}{K-r} \rfloor$, (b) holds as the minimum value of $i$ is 1, and (c) holds as the maximum value of $r$ is $K-1$.
Similarly,
$$(K+1-i-j(K-r)+r-1) - (2K-i-j'(K-r)) = r-K+(j'-j)(K-r) \overset{(a)}{\geq} r-K+(K-r) = 0,$$
where (a) holds as the minimum value of $j'-j$ is equal to 1.
Now, for the second pair of inequalities, we first prove that when $j > j'$, $K+1-i-j(K-r) \leq K-i-j'(K-r)$. To show this, we consider the following sequence of equations:
$$(K-i-j'(K-r)) - (K+1-i-j(K-r)) = (j-j')(K-r)-1 \overset{(a)}{\geq} K-r-1 \overset{(b)}{\geq} K-(K-1)-1 \geq 0,$$
where (a) holds as the minimum value of $j-j'$ is equal to 1 and (b) holds as the maximum value of $r$ is $K-1$. Similarly,
$$(K+1-i-j(K-r)+r-1) - (K-i-j'(K-r)) = r-(j-j')(K-r) \overset{(a)}{\geq} r-\left\lfloor \tfrac{r-1-i}{K-r} \right\rfloor (K-r) \geq r-(r-1-i) = 1+i \overset{(b)}{\geq} 0,$$
where (a) holds as the maximum value of $j-j'$ is equal to $\lfloor \frac{r-1-i}{K-r} \rfloor$ and (b) holds as the minimum value of $i$ is 1.
Hence, all the inequalities for both the cases are true. Similar arguments hold for the second broadcast as well.
- Decoding for Scheme 2: For each $i \in [r-1]$, a broadcast $W_{K-r+i}^{\{i\}} \oplus W_{K-r+i+1}^{\{K-r+i\}}$ is made (c.f. Lines 3, 5 in Algorithm 4). Now, node $i$ contains the subsegment $W_{K-r+i+1}$, since $(K-r+i+1) \boxplus_K (r-1) = i$. Similarly, node $K-r+i$ clearly contains the subsegment $W_{K-r+i}$. Thus, node $i$ can decode $W_{K-r+i}^{\{i\}}$ and node $K-r+i$ can decode $W_{K-r+i+1}^{\{K-r+i\}}$. Thus, we have verified the correctness of the transmission schemes.
- Checking the merging phase: Initially, for each $i \in [K]$, $W_i$ is stored at the nodes $\{i \boxplus_K \langle r \rangle\}$. After the transmissions are done, each target segment $\tilde{W}_i$, for $i \in [K-1]$, is obtained by merging some subsegments (possibly with $W_i$). Now, to verify that the merging can be done correctly, we need to show that all these subsegments are present at the nodes $\{i \boxplus_{K-1} \langle r \rangle\}$ after the transmissions are done.
– For each $i \in [r-1]$, consider the segment $\tilde{W}_{K-r+i}$, which is obtained by merging $W_{K-r+i}^{\{i\}}$ with $W_{K-r+i+1}^{\{K-r+i\}}$ (c.f. Algorithm 5, Lines 2 and 3). We observe that $W_{K-r+i}^{\{i\}}$ was present in $\{(K-r+i) \boxplus_{K-1} \langle r \rangle\} \setminus \{i\}$ before rebalancing and was decoded by node $i$ during the rebalancing process. Similarly, $W_{K-r+i+1}^{\{K-r+i\}}$ was present in $\{(K-r+i) \boxplus_{K-1} \langle r \rangle\} \setminus \{K-r+i\}$ before rebalancing and was decoded by node $K-r+i$ during the rebalancing process.
– For each $i \in \{1, \ldots, \lfloor \frac{K-r}{2} \rfloor\}$, $\tilde{W}_i$ is obtained by merging $W_i$ and $W_K^{\{(r-1+i) \boxminus_{K-1} \langle \min(r,\,i) \rangle\}}$ (c.f. Algorithm 5, Lines 7 and 14). Each node in $\{i \boxplus_{K-1} \langle r \rangle\}$ that does not contain $W_K$ can obtain $W_K^{\{(r-1+i) \boxminus_{K-1} \langle \min(r,\,i) \rangle\}}$ using the broadcasts made in Lines 6-7 of Algorithms 3 and 4. Thus, $\tilde{W}_i$ can be obtained at the nodes $\{i \boxplus_{K-1} \langle r \rangle\}$.
– We use similar arguments for the target segments $\tilde{W}_i$ for each $i \in \{\lceil \frac{K-r}{2} \rceil + 1, \ldots, K-r\}$ and for $\tilde{W}_{\frac{K-r+1}{2}}$, if $K-r$ is odd.
- Checking the target database structure: We first show that the sizes of all the segments after rebalancing are equal. For this, we look at how the new segments are formed by merging some subsegments with the older segment.
– The size of the target segment $\tilde{W}_{K-r+i}$, for each $i \in \{1, \ldots, r-1\}$, is $\frac{K+r-2(i-1)-2}{2(K-1)} + \frac{K-r+2i}{2(K-1)} = \frac{K}{K-1}$ (c.f. Algorithm 5, Lines 2 and 3).
– The size of the target segment $\tilde{W}_i$, for each $i \in [\lfloor \frac{K-r}{2} \rfloor]$, is $1 + \frac{2}{2(K-1)} = \frac{K}{K-1}$ (c.f. Algorithm 5, Lines 6, 7, 13, and 14).
– For even $K-r$, the size of the target segment $\tilde{W}_i$, for each $i \in \{\frac{K-r}{2}+1, \ldots, K-r\}$, is $1 + \frac{2}{2(K-1)} = \frac{K}{K-1}$ (c.f. Algorithm 5, Lines 9 and 10).
– For odd $K-r$, the size of the target segment $\tilde{W}_i$, for each $i \in \{\frac{K-r+1}{2}+1, \ldots, K-r\}$, is $1 + \frac{2}{2(K-1)} = \frac{K}{K-1}$ (c.f. Algorithm 5, Lines 16 and 17).
– The size of the target segment $\tilde{W}_{\frac{K-r+1}{2}}$ is $1 + \frac{1}{2(K-1)} + \frac{1}{2(K-1)} = \frac{K}{K-1}$ (c.f. Algorithm 5, Line 19).
We can see that the sizes of all the segments after rebalancing are the same, i.e., $\frac{K}{K-1}$. This completes the verification of the correctness of our rebalancing algorithm, which assures that the target database is an $r$-balanced cyclic database on $K-1$ nodes.
E. Communication Load
We now calculate the communication loads of the two schemes. For the uncoded broadcasts in both schemes, corresponding to Lines 6-7 of both Algorithms 3 and 4, the communication load incurred is $2\left(\frac{K-r-1}{2} \cdot \frac{2}{2(K-1)} + \frac{1}{2(K-1)}\right) = \frac{K-r}{K-1}$ when $K-r$ is odd, and $2\left(\frac{K-r}{2} \cdot \frac{2}{2(K-1)}\right) = \frac{K-r}{K-1}$ when $K-r$ is even, respectively. Now, we analyse the remainder of the communication loads of the two schemes.
1) Scheme 1: The coded broadcasts made in Scheme 1 are $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K+1-i-j(K-r)}^{\{K-i-j(K-r)\}}$ and $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K-r+i+j(K-r)}^{\{i+j(K-r)\}}$ for each $i \in [K-r]$ (c.f. Algorithm 3, Lines 2-5). Once again, the subsegments involved in a broadcast are padded so that they are of the same size. Consider $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K-r+i+j(K-r)}^{\{i+j(K-r)\}}$. The size of each subsegment involved is $\frac{K+r-2(i+j(K-r)-1)-2}{2(K-1)}$. Thus the subsegment having the maximum size is the one corresponding to $j = 0$, having size $\frac{K+r-2i}{2(K-1)}$. Similarly, each subsegment involved in $\bigoplus_{j=0}^{\lfloor \frac{r-1-i}{K-r} \rfloor} W_{K+1-i-j(K-r)}^{\{K-i-j(K-r)\}}$ is of size $\frac{K-r+2(r-i-j(K-r))}{2(K-1)}$. Thus, the maximum size is again the one corresponding to $j = 0$, having size $\frac{K+r-2i}{2(K-1)}$. Thus, the communication load is
$$L_1(r) = 2 \sum_{i=1}^{K-r} \frac{K+r-2i}{2(K-1)} = \frac{1}{K-1}\big((K-r)(K+r) - (K-r)(K-r+1)\big) = \frac{(K-r)(2r-1)}{K-1}.$$
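The closed form for $L_1(r)$ can be cross-checked numerically; the sketch below (our own helper names) compares the direct summation of the padded broadcast sizes against the closed form:

```python
from fractions import Fraction

def L1_sum(K, r):
    """Direct summation of the padded broadcast sizes in Scheme 1."""
    return 2 * sum(Fraction(K + r - 2 * i, 2 * (K - 1))
                   for i in range(1, K - r + 1))

def L1_closed(K, r):
    """Closed form L1(r) = (K - r)(2r - 1) / (K - 1)."""
    return Fraction((K - r) * (2 * r - 1), K - 1)

for K in range(4, 15):
    for r in range(3, K):
        assert L1_sum(K, r) == L1_closed(K, r)
```

For instance, for the running example $K = 6$, $r = 3$ both expressions evaluate to $3$.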
2) Scheme 2: The coded broadcasts made in Scheme 2 involve $W_{K-r+i}^{\{i\}} \oplus W_{K-r+i+1}^{\{K-r+i\}}$ for each $i \in [r-1]$ (c.f. Algorithm 4, Lines 2-5). Since the smaller subsegment of each pair is padded to match the size of the larger, the cost involved in making the broadcast depends on the larger subsegment. Now, indexing the broadcasts by $j = i-1 \in \{0, \ldots, r-2\}$, the sizes of these subsegments are $\frac{K+r-2j-2}{2(K-1)}$ and $\frac{K-r+2(j+1)}{2(K-1)}$ respectively (c.f. Figs. 6-8). For the first subsegment to be larger, we need $\frac{K+r-2j-2}{2(K-1)} \geq \frac{K-r+2(j+1)}{2(K-1)}$. It is easy to verify that this occurs when $j \leq \frac{r}{2}-1$. We separate our analysis into cases based on the parity of $r$.
For odd $r$, we have
$$L_2(r) = \sum_{j=0}^{\frac{r-1}{2}-1} \frac{K+r-2j-2}{2(K-1)} + \sum_{j=\frac{r-1}{2}}^{r-2} \frac{K-r+2(j+1)}{2(K-1)} \overset{(a)}{=} \sum_{j=0}^{\frac{r-1}{2}-1} \frac{K+r-2j-2}{2(K-1)} + \sum_{j'=0}^{\frac{r-1}{2}-1} \frac{K-r+2(r-2-j'+1)}{2(K-1)}$$
$$= 2\sum_{j=0}^{\frac{r-1}{2}-1} \frac{K+r-2j-2}{2(K-1)} = \frac{2}{2(K-1)}\left(\frac{r-1}{2}(K+r-2) - \left(\frac{r-1}{2}-1\right)\frac{r-1}{2}\right) = \frac{r-1}{2(K-1)}\left(K+\frac{r-1}{2}\right),$$
where (a) is obtained by changing the variable $j' = (r-2)-j$.
Similarly, for even $r$, we have
$$L_2(r) = \sum_{j=0}^{\frac{r}{2}-1} \frac{K+r-2j-2}{2(K-1)} + \sum_{j=\frac{r}{2}}^{r-2} \frac{K-r+2(j+1)}{2(K-1)} \overset{(a)}{=} \sum_{j=0}^{\frac{r}{2}-1} \frac{K+r-2j-2}{2(K-1)} + \sum_{j'=0}^{\frac{r}{2}-2} \frac{K-r+2(r-2-j'+1)}{2(K-1)}$$
$$= \sum_{j=0}^{\frac{r}{2}-2} \frac{K+r-2j-2}{2(K-1)} + \frac{K+r-2\left(\frac{r}{2}-1\right)-2}{2(K-1)} + \sum_{j'=0}^{\frac{r}{2}-2} \frac{K+r-2j'-2}{2(K-1)} = 2\sum_{j=0}^{\frac{r}{2}-2} \frac{K+r-2j-2}{2(K-1)} + \frac{K}{2(K-1)}$$
$$= \frac{2}{2(K-1)}\left(\left(\frac{r}{2}-1\right)(K+r-2) - \left(\frac{r}{2}-2\right)\left(\frac{r}{2}-1\right)\right) + \frac{K}{2(K-1)} = \frac{r-2}{2(K-1)}\left(K+\frac{r}{2}\right) + \frac{K}{2(K-1)},$$
where (a) is obtained by changing the variable $j' = (r-2)-j$.
The total communication load is therefore $L_{rem}(r) = \frac{K-r}{K-1} + \min(L_1(r), L_2(r))$.
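The parity-dependent closed forms for $L_2(r)$ can likewise be cross-checked against the direct summation of the per-broadcast (padded) costs; the sketch below uses our own helper names:

```python
from fractions import Fraction

def L2_sum(K, r):
    """Each XOR broadcast costs the larger of the two padded subsegments."""
    d = 2 * (K - 1)
    return sum(max(Fraction(K + r - 2 * j - 2, d),
                   Fraction(K - r + 2 * (j + 1), d))
               for j in range(r - 1))

def L2_closed(K, r):
    """Closed forms for odd and even r derived above."""
    if r % 2 == 1:
        return Fraction(r - 1, 2 * (K - 1)) * (K + Fraction(r - 1, 2))
    return Fraction(r - 2, 2 * (K - 1)) * (K + Fraction(r, 2)) \
        + Fraction(K, 2 * (K - 1))

for K in range(4, 15):
    for r in range(3, K):
        assert L2_sum(K, r) == L2_closed(K, r)
```

For $K = 6$, $r = 3$ both expressions give $\frac{7}{5}$, matching the coded portion of the load in the running example.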
F. Advantage over the uncoded scheme
In this subsection, we bound the advantage of the rebalancing schemes presented in this work over the uncoded scheme, in which the nodes simply exchange all the data that was available at the removed node via uncoded transmissions. The load of the uncoded scheme is thus $L_u(r) = r$. Consider the ratio of the communication load of Scheme 1 to that of the uncoded scheme. We then have the following sequence of equations.
$$\frac{\frac{K-r}{K-1} + L_1(r)}{L_u(r)} = \frac{1}{r}\left(\frac{K-r}{K-1} + \frac{(K-r)(2r-1)}{K-1}\right) = \frac{K-r}{r(K-1)}\,(1+2r-1) = \frac{2(K-r)}{K-1} = 2\left(\frac{(K-1)-(r-1)}{K-1}\right) = 2\left(1 - \frac{r-1}{K-1}\right).$$
Now we do the same for Scheme 2.
$$\frac{\frac{K-r}{K-1} + L_2(r)}{L_u(r)} = \frac{1}{r}\left(\frac{K-r}{K-1} + \frac{r-1}{2(K-1)}\left(K+\frac{r-1}{2}\right)\right) = \frac{1}{2r(K-1)}\left(2K-2r+rK-K+\frac{(r-1)^2}{2}\right)$$
$$= \frac{1}{2r(K-1)}\left(K(r+1)+\frac{(r-1)^2-4r}{2}\right) < \frac{1}{2r(K-1)}\left(K(r+1)+\frac{r^2-4r}{2}\right)$$
$$\leq \frac{(K-1+1)(r+1)}{2r(K-1)} + \frac{r-4}{4(K-1)} \leq \frac{1}{2} + \frac{r+K}{2r(K-1)} + \frac{r-4}{4(K-1)}$$
$$\leq \frac{1}{2} + \frac{1}{2(K-1)}\left(\frac{K-r}{r} + \frac{r}{2}\right) \leq \frac{1}{2} + \frac{1}{2r} + \frac{r}{4(K-1)},$$
where the last inequality follows by using the fact that $K-r \leq K-1$. Observe that $\frac{1}{2} + \frac{1}{2r} + \frac{r}{4(K-1)} < 1$, as $r \geq 3$ and $K-1 \geq r$. As the choice of the scheme selected for transmissions is based on the minimum load of Scheme 1 and Scheme 2, we have the result in the theorem. This completes the proof of the node-removal part of Theorem 1.
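To illustrate the claimed advantage, the following sketch (our own helper names) evaluates the ratio $L_{rem}(r)/L_u(r)$ exactly and confirms that it stays strictly below 1 over a range of parameters:

```python
from fractions import Fraction

def L1(K, r):
    # Coded load of Scheme 1.
    return Fraction((K - r) * (2 * r - 1), K - 1)

def L2(K, r):
    # Coded load of Scheme 2 (parity-dependent closed form).
    if r % 2 == 1:
        return Fraction(r - 1, 2 * (K - 1)) * (K + Fraction(r - 1, 2))
    return Fraction(r - 2, 2 * (K - 1)) * (K + Fraction(r, 2)) \
        + Fraction(K, 2 * (K - 1))

def ratio(K, r):
    """L_rem(r) / L_u(r), with L_u(r) = r."""
    uncoded_part = Fraction(K - r, K - 1)
    return (uncoded_part + min(L1(K, r), L2(K, r))) / r

for K in range(4, 30):
    for r in range(3, K):
        assert ratio(K, r) < 1  # coded rebalancing always beats uncoded
```

Note that neither scheme alone dominates: e.g. for $K = 5$, $r = 3$ the Scheme-1 ratio alone equals 1, but taking the minimum with Scheme 2 brings the ratio down to $\frac{2}{3}$.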
IV. REBALANCING SCHEME FOR SINGLE NODE ADDITION IN CYCLIC DATABASES
We consider the case of an $r$-balanced cyclic database system when a new node is added. Let this new empty node be indexed by $K+1$. For this imbalance situation, we present a rebalancing algorithm (Algorithm 6) in which nodes split the existing data segments and broadcast appropriate subsegments, so that the target database, which is an $r$-balanced cyclic database on $K+1$ nodes, can be achieved. Each subsegment's size is assumed to be an integral multiple of $\frac{1}{K+1}$. This is without loss of generality, by the condition on the size $T$ of each segment as in Theorem 1. Since node $K+1$ starts empty, there are no coding opportunities; hence, the rebalancing scheme uses only uncoded transmissions. We show that this rebalancing scheme achieves a normalized communication load of $\frac{r}{K+1}$, which is known to be optimal from the results of [1]. This proves the node-addition part of Theorem 1.
Algorithm 6 Rebalancing Scheme for Single Node Addition
1: procedure DELIVERY SCHEME
2: for each $i \in [K]$ do
3: Split $W_i$ into two subsegments, labelled $\tilde{W}_i$ of size $\frac{K}{K+1}$ and $W_i^{\{K+1\} \cup [\min(r-1,\,i-1)]}$ of size $\frac{1}{K+1}$
4: Node $i$ broadcasts $W_i^{\{K+1\} \cup [\min(r-1,\,i-1)]}$
5: end for
6: Each node in $\{(K+1) \boxplus_{K+1} \langle r \rangle\}$ initializes the segment $\tilde{W}_{K+1}$ as an empty vector
7: for each $i \in [K]$ do
8: Each node in $\{(K+1) \boxplus_{K+1} \langle r \rangle\}$ performs the concatenation $\tilde{W}_{K+1} = \tilde{W}_{K+1} \,|\, W_i^{\{K+1\} \cup [\min(r-1,\,i-1)]}$
9: end for
10: for each $i = (K-r+2)$ to $K$ do
11: Node $i$ transmits $\tilde{W}_i$ to node $K+1$
12: end for
13: for each $i = 1$ to $r-1$ do
14: Node $i$ discards $\tilde{W}_{K-r+1+i}$
15: end for
16: end procedure
Note: Once the target segments $\tilde{W}_1, \ldots, \tilde{W}_{K+1}$ are recovered at the required nodes, any extra bits present at the node are discarded.
A. Correctness
We verify the correctness of the rebalancing algorithm, i.e., we check that the target $r$-balanced cyclic database on $K+1$ nodes is achieved post-rebalancing. To achieve the target database, each target segment $\tilde{W}_i$, for $i \in [K+1]$, must be of size $\frac{K}{K+1}$ and stored exactly in $\{i \boxplus_{K+1} \langle r \rangle\}$. Consider the segments $\tilde{W}_i$ for each $i \in [K-r+1]$. Since these were a part of $W_i$, they are already present exactly at nodes $\{i \boxplus_{K+1} \langle r \rangle\}$.

Now, consider the segments $\tilde{W}_i : i \in \{(K-r+1) \boxplus_K \langle r \rangle\}$. We recall $S_i$ as the set of nodes where $W_i$ is present in the initial database. Let $\tilde{S}_i$ be the set of nodes where it must be present in the final database. Now, the nodes where $\tilde{W}_i$ is not present and must be delivered to are given by $\tilde{S}_i \setminus S_i = \{K+1\}$. This is performed in Lines 10-12 of Algorithm 6. Also, the nodes where $W_i$ is present but $\tilde{W}_i$ must not be present are given by $S_i \setminus \tilde{S}_i = \{i-K+r-1\}$. This node discards $\tilde{W}_i$ in Lines 13-15 of Algorithm 6. Finally, $\tilde{W}_{K+1}$ must be present in nodes $\{(K+1) \boxplus_{K+1} \langle r \rangle\}$. This can be obtained by those nodes from the broadcasts made in Line 4 of Algorithm 6 and the concatenation performed in Lines 7-9.

Finally, it is easy to calculate that each target segment $\tilde{W}_i : i \in [K+1]$ is of size $\frac{K}{K+1}$. Thus, the final database satisfies the cyclic storage condition.
B. Communication Load
For broadcasting each $W_i^{\{K+1\} \cup [\min(r-1,\,i-1)]}$ of size $\frac{1}{K+1}$, $\forall i \in [K]$, the communication load incurred is $K \cdot \frac{1}{K+1} = \frac{K}{K+1}$. Now, to transmit $\tilde{W}_{K-r+2}, \ldots, \tilde{W}_K$ to node $K+1$, the communication load incurred is $(r-1) \cdot \frac{K}{K+1}$. Hence, the total communication load is $L_{add}(r) = \frac{rK}{K+1}$.
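The two-term accounting above can be sketched directly (our own helper, segment size normalized to 1):

```python
from fractions import Fraction

def addition_load(K, r):
    """Total communication load of Algorithm 6."""
    broadcast = K * Fraction(1, K + 1)          # Line 4: K broadcasts of size 1/(K+1)
    to_new_node = (r - 1) * Fraction(K, K + 1)  # Lines 10-12: r-1 segments of size K/(K+1)
    return broadcast + to_new_node

# The two terms always sum to the stated closed form rK/(K+1).
for K in range(4, 20):
    for r in range(3, K):
        assert addition_load(K, r) == Fraction(r * K, K + 1)
```

For example, for $K = 6$, $r = 3$ the load is $\frac{18}{7}$, below the uncoded-per-segment count of $r = 3$ full segments.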
V. DISCUSSION
We have presented an XOR-based coded rebalancing scheme for the cases of node removal and node addition in cyclic databases. Our scheme requires the file size to be only cubic in the number of nodes in the system. For the node-removal case, we present a coded rebalancing algorithm that chooses the better of two coded transmission schemes in order to reduce the communication load incurred in rebalancing. We showed that the communication load of this rebalancing algorithm is always smaller than that of the uncoded scheme. For the node-addition case, we present a simple uncoded scheme which achieves the optimal load.
We give a few comments regarding future directions. Constructing good converse arguments in the cyclic database setting for the minimum communication load required for rebalancing from node removal seems to be a challenging problem, due to the freedom involved in choosing the target database, the necessity for the target database to be balanced, and also because of the low file size requirement. Fig. 2 shows a numerical comparison of the loads of our schemes with the converse from [1]. However, the conditions of having a constrained file size or a balanced target database are not used to show this converse; thus this bound is possibly quite loose for our cyclic placement setting. It would certainly be worthwhile to construct a tight converse for our specific setting.
As we do not have a tight converse, it is also quite possible that rebalancing schemes for node removal exist
with lower communication load for the cyclic placement setting. Designing such schemes would be an interesting
future direction.
ACKNOWLEDGEMENTS
The first two authors would like to acknowledge Shubhransh Singhvi for fruitful discussions on the problem.
REFERENCES
[1] P. Krishnan, V. Lalitha, and L. Natarajan, “Coded data rebalancing: Fundamental limits and constructions,” in 2020 IEEE International
Symposium on Information Theory (ISIT). IEEE, 2020, pp. 640–645.
[2] “Data rebalancing in Apache Ignite (Apache Ignite documentation),” (Last accessed in 2019). [Online]. Available: https://apacheignite.readme.io/docs/rebalancing
[3] “Data rebalancing in Apache Hadoop (Apache Hadoop documentation),” (Last accessed in 2019). [Online]. Available: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html#Balancer
[4] “No shard left behind: dynamic work rebalancing in Google Cloud Dataflow,” (Last accessed in 2019). [Online]. Available: https://cloud.google.com/blog/products/gcp/no-shard-left-behind-dynamic-work-rebalancing-in-google-cloud-dataflow
[5] “Rebalancing in Ceph (Ceph architecture),” (Last accessed in 2019). [Online]. Available: http://docs.ceph.com/docs/mimic/architecture/#rebalancing
[6] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Transactions on information theory, vol. 60, no. 5, pp. 2856–2867,
2014.
[7] K. S. Sree and P. Krishnan, “Coded data rebalancing for decentralized distributed databases,” in 2020 IEEE Information Theory Workshop
(ITW). IEEE, 2021, pp. 1–5.
[8] K. Renuga, S. Tan, Y. Zhu, T. Low, and Y. Wang, “Balanced and efficient data placement and replication strategy for distributed backup
storage systems,” in 2009 International Conference on Computational Science and Engineering, vol. 1, 2009, pp. 87–94.
[9] R. Marcelin-Jimenez, S. Rajsbaum, and B. Stevens, “Cyclic storage for fault-tolerant distributed executions,” IEEE Transactions on Parallel
and Distributed Systems, vol. 17, no. 9, pp. 1028–1036, 2006.
[10] N. Woolsey, R.-R. Chen, and M. Ji, “Uncoded placement with linear sub-messages for private information retrieval from storage constrained
databases,” IEEE Transactions on Communications, vol. 68, no. 10, pp. 6039–6053, 2020.
[11] M. Ji, X. Zhang, and K. Wan, “A new design framework for heterogeneous uncoded storage elastic computing,” CoRR, vol. abs/2107.09657, 2021. [Online]. Available: https://arxiv.org/abs/2107.09657
[12] Q. Yan, M. Cheng, X. Tang, and Q. Chen, “On the placement delivery array design for centralized coded caching scheme,” IEEE
Transactions on Information Theory, vol. 63, no. 9, pp. 5821–5833, 2017.
APPENDIX A
PROOF OF CLAIM 1
First, we can assume $K \geq 4$ without loss of generality, as our replication factor $r$ lies in $\{3, \ldots, K-1\}$. Consider the expressions in Theorem 1 for $L_1(r)$ and $L_2(r)$ as continuous functions of $r$. (Here, with a slight abuse of notation, we write $L_1(r)$ for the total Scheme-1 load $\frac{K-r}{K-1} + \frac{(K-r)(2r-1)}{K-1}$.) Also, let $L_{2,o}(r) = \frac{K-r}{K-1} + \frac{r-1}{2(K-1)}\left(K+\frac{r-1}{2}\right)$ be the continuous function of $r$ which matches with $\frac{K-r}{K-1} + L_2(r)$ in Theorem 1 for odd values of $r \in \{3, \ldots, K-1\}$. Similarly, let $L_{2,e}(r) = \frac{K-r}{K-1} + \frac{r-2}{2(K-1)}\left(K+\frac{r}{2}\right) + \frac{K}{2(K-1)}$ be the continuous function of $r$ which matches with $\frac{K-r}{K-1} + L_2(r)$ in Theorem 1 for even values of $r \in \{3, \ldots, K-1\}$.

Let $r_o$ be a real number in the interval $[3 : K-1]$ such that $L_{2,o}(r_o) = L_1(r_o)$. Similarly, let $r_e$ be a real number in the interval $[3 : K-1]$ such that $L_{2,e}(r_e) = L_1(r_e)$.
With these quantities set up, the proof then proceeds according to the following steps.
1) Firstly, we show that $L_{2,o}(r) > L_{2,e}(r)$ for $K \geq 4$.
2) We then find the values of $r_o$ and $r_e$, which turn out to be unique. We also show that $r_e > r_o$ and that $\lceil r_e \rceil = \lceil \frac{2K+2}{3} \rceil = \lfloor r_o \rfloor + 1$.
3) Then, we shall show that for any integer $r > r_e$, we have $L_1(r) < L_{2,e}(r)$. Further, we will also show that for any integer $r < r_o$, we have $L_{2,o}(r) < L_1(r)$.
It then follows from the above steps that the threshold value is precisely $r_{th} = \lceil r_e \rceil = \lceil \frac{2K+2}{3} \rceil$. We now show the above steps one by one.
1) Proof of $L_{2,o}(r) > L_{2,e}(r)$ for $K \geq 4$: We have that
$$L_{2,o}(r) - L_{2,e}(r) = \frac{r-1}{2(K-1)}\left(K+\frac{r-1}{2}\right) - \frac{r-2}{2(K-1)}\left(K+\frac{r}{2}\right) - \frac{K}{2(K-1)}$$
$$= \frac{(r-1)(2K+r-1) - (r-2)(2K+r) - 2K}{4(K-1)} = \frac{2rK - 2K + r^2 - 2r + 1 - 2rK + 4K - r^2 + 2r - 2K}{4(K-1)} = \frac{1}{4(K-1)} > 0,$$
which holds as $K \geq 4$. Hence, $L_{2,o}(r) > L_{2,e}(r)$ for $K \geq 4$.
2) Finding $r_o$ and $r_e$ and their relationship: Calculating $L_{2,o}(r) - L_1(r)$, we get the following:
$$L_{2,o}(r) - L_1(r) = \frac{r-1}{2(K-1)}\left(K+\frac{r-1}{2}\right) - \frac{(K-r)(2r-1)}{K-1} = \frac{(r-1)(2K+r-1) - 4(K-r)(2r-1)}{4(K-1)} = \frac{9r^2 - 6r(K+1) + 2K + 1}{4(K-1)}.$$
Solving for $r$ from $L_{2,o}(r) - L_1(r) = 0$ with the condition that $r \geq 3$, we get
$$r = \frac{2K+1}{3}.$$
It is easy to see that $\frac{2K+1}{3} > 2$ for $K \geq 4$. Also, we can check that $\frac{2K+1}{3} \leq K-1$ when $K \geq 4$. Thus, we have that $r_o = \frac{2K+1}{3}$. By similar calculations for $L_{2,e}(r) - L_1(r)$, we see that $r_e = \frac{K+1+\sqrt{K^2+1}}{3}$. Further, we observe that $r_e - r_o = \frac{\sqrt{K^2+1}-K}{3} > 0$ for $K \geq 4$. Hence, $r_e > r_o$ for $K \geq 4$. Also observe that $\lceil r_e \rceil \leq \lceil \frac{2K+2}{3} \rceil = \lfloor \frac{2K+1}{3} \rfloor + 1 = \lfloor r_o \rfloor + 1$, where the first equality holds as $K \geq 4$. Now, as $r_e > r_o$, we have that $\lceil r_e \rceil > \lfloor r_o \rfloor$. Thus, we see that $\lceil r_e \rceil = \lceil \frac{2K+2}{3} \rceil = \lfloor r_o \rfloor + 1$.
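The relationship between $r_o$, $r_e$ and the threshold $r_{th}$ can be checked numerically with the sketch below (our own helper names):

```python
import math

def r_o(K):
    # Root of L_{2,o}(r) - L_1(r) = 0 with r >= 3.
    return (2 * K + 1) / 3

def r_e(K):
    # Root of L_{2,e}(r) - L_1(r) = 0 with r >= 3.
    return (K + 1 + math.sqrt(K * K + 1)) / 3

for K in range(4, 200):
    r_th = (2 * K + 2 + 2) // 3  # integer ceiling of (2K+2)/3
    assert r_e(K) > r_o(K)
    assert math.ceil(r_e(K)) == r_th == math.floor(r_o(K)) + 1
```

For instance, for $K = 7$ we get $r_o = 5$ and $r_e \approx 5.02$, so the threshold is $r_{th} = 6$.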
Proof of 3): We now prove that for any integer $r$, if $r > r_e$, then $L_1(r) < L_{2,e}(r)$. Let $r = r_e + a$, for some real number $a > 0$ such that $r_e + a$ is an integer. We know that $L_1(r_e) = L_{2,e}(r_e)$. Consider the following sequence of equations.
$$L_1(r) = L_1(r_e+a) = \frac{K-(r_e+a)}{K-1} + \frac{(K-(r_e+a))(2(r_e+a)-1)}{K-1}$$
$$= \frac{K-r_e}{K-1} - \frac{a}{K-1} + \frac{(K-r_e-a)(2r_e-1+2a)}{K-1}$$
$$= \frac{K-r_e}{K-1} + \frac{(K-r_e)(2r_e-1)}{K-1} + \frac{-a - a(2r_e-1+2a) + 2a(K-r_e)}{K-1}$$
$$= L_1(r_e) + \frac{2aK - 2a^2 - 4ar_e}{K-1} = L_{2,e}(r_e) + \frac{2aK - 2a^2 - 4ar_e}{K-1}.$$
Similarly, consider the following.
L2,e(r) = L2,e(re + a)
�
K + [r][e][ +][ a]
2
�
� �
= [K][ −] [(][r][e][ +][ a][)] + [(][r][e][ +][ a][)][ −] [2] K + [r][e][ +][ a]
(K 1) 2(K 1) 2
− −
a � �
= [K][ −] [r][e] K + [(][e][+][a]
(K 1) K 1 [+ (][r]2([e][ −]K [2) +]1)[ a] 2
− [−] − −
= [K][ −] [r][e] �K + [r]2[e] � + [−][6][a][ +][ a][2][ + 2][aK][ + 2][ar][e]
(K 1) [+ (][r][e][ −]2([2)]K 1) 4(K 1)
− − −
= L2,e(re) + [−][6][a][ +][ a][2][ + 2][aK][ + 2][ar][e] .
4(K 1)
−
Now, L2,e(r) − L1(r) = [18][ar][e][ −] [6][aK][ + 9][a][2][ −] [6][a]
4(K 1)
−
-----
= [a][(18][r][e][ −] [6][K][ + 9][a][ −] [6)]
4(K 1)
−
(a) √K [2] + 1 + 9a)
- [a][(6]
4(K 1)
−
(b)
- 0,
√
where (a) holds on substituting re = [K][+1+]3 K [2]+1, (b) holds as K, a > 0. Therefore, when r > re, L1(r) < L2,e(r).
We now prove that for any integer $r$, if $r < r_o$, then $L_{2,o}(r) < L_1(r)$. Let $r = r_o - a$, for some real number $0 < a < r_o$ such that $r_o - a$ is an integer. We know that $L_1(r_o) = L_{2,o}(r_o)$. Consider the following sequence of equations.

$$
\begin{aligned}
L_1(r) &= L_1(r_o - a) \\
&= \frac{K - (r_o - a)}{K-1} + \frac{(K - (r_o - a))(2(r_o - a) - 1)}{K-1} \\
&= \frac{K - r_o}{K-1} + \frac{a}{K-1} + \frac{(K - r_o + a)(2r_o - 1 - 2a)}{K-1} \\
&= \frac{K - r_o}{K-1} + \frac{(K - r_o)(2r_o - 1)}{K-1} + \frac{a + a(2r_o - 1 - 2a) - 2a(K - r_o)}{K-1} \\
&= L_1(r_o) + \frac{4ar_o - 2aK - 2a^2}{K-1} \\
&= L_{2,o}(r_o) + \frac{4ar_o - 2aK - 2a^2}{K-1}.
\end{aligned}
$$

Similarly, consider the following.

$$
\begin{aligned}
L_{2,o}(r) &= L_{2,o}(r_o - a) \\
&= \frac{K - (r_o - a)}{K-1} + \frac{(r_o - a) - 1}{2(K-1)}\left(K + \frac{(r_o - a) - 1}{2}\right) \\
&= \frac{K - r_o}{K-1} + \frac{a}{K-1} + \frac{(r_o - 1) - a}{2(K-1)}\left(K + \frac{(r_o - 1) - a}{2}\right) \\
&= \frac{K - r_o}{K-1} + \frac{r_o - 1}{2(K-1)}\left(K + \frac{r_o - 1}{2}\right) + \frac{6a - 2ar_o + a^2 - 2aK}{4(K-1)} \\
&= L_{2,o}(r_o) + \frac{6a - 2ar_o + a^2 - 2aK}{4(K-1)}.
\end{aligned}
$$

Now,

$$
\begin{aligned}
L_1(r) - L_{2,o}(r) &= \frac{18ar_o - 6aK - 9a^2 - 6a}{4(K-1)} \\
&= \frac{a(18r_o - 6K - 9a - 6)}{4(K-1)} \\
&= \frac{3a(3r_o - 3a + 3r_o - 2K - 2)}{4(K-1)} \\
&\overset{(a)}{\geq} \frac{3a(3 + 3r_o - 2K - 2)}{4(K-1)} \\
&= \frac{3a(3r_o - 2K + 1)}{4(K-1)} \\
&\overset{(b)}{=} \frac{3a(1 + 1)}{4(K-1)} \\
&> 0,
\end{aligned}
$$

where (a) holds because $r_o - a \geq 1$ and (b) holds on substituting $r_o = \frac{2K+1}{3}$. Therefore, when $r < r_o$, $L_{2,o}(r) < L_1(r)$. The proof is now complete.
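Putting the steps together, the claimed threshold behaviour around $r_{th} = \lceil \frac{2K+2}{3} \rceil$ can be checked exhaustively for small $K$. As before, the closed forms of $L_1$, $L_{2,o}$ and $L_{2,e}$ are the ones inferred from the expansions above, not a restatement of the paper's definitions:

```python
from fractions import Fraction as F
import math

def L1(K, r):
    # L_1(r) = (K-r)/(K-1) + (K-r)(2r-1)/(K-1)
    return F(K - r, K - 1) + F((K - r) * (2 * r - 1), K - 1)

def L2o(K, r):
    return F(K - r, K - 1) + F(r - 1, 2 * (K - 1)) * (K + F(r - 1, 2))

def L2e(K, r):
    return F(K - r, K - 1) + F(r - 2, 2 * (K - 1)) * (K + F(r, 2)) + F(K, 2 * (K - 1))

for K in range(4, 60):
    r_o = (2 * K + 1) / 3
    r_e = (K + 1 + math.sqrt(K * K + 1)) / 3
    for r in range(3, K):
        if r > r_e:                 # above the even root: L_1 is strictly smaller
            assert L1(K, r) < L2e(K, r)
        if r < r_o:                 # below the odd root: L_{2,o} is strictly smaller
            assert L2o(K, r) < L1(K, r)
```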
-----

Source: https://arxiv.org/abs/2205.06257 (2022).
Source: Applied Sciences (2021), https://www.semanticscholar.org/paper/033426666ace00ac25b4b45eaf99aff4fccebf59
# applied sciences
_Article_
## Balancing Loads among MEC Servers by Task Redirection to Enhance the Resource Efficiency of MEC Systems
**Jaesung Park** **[1]** **and Yujin Lim** **[2,]***
1 School of Information Convergence, Kwangwoon University, Seoul 01897, Korea; jaesungpark@kw.ac.kr
2 Department of IT Engineering, Sookmyung Women’s University, Seoul 04310, Korea
***** Correspondence: yujin91@sookmyung.ac.kr; Tel.: +82-2-2077-7305
**Citation: Park, J.; Lim, Y. Balancing**
Loads among MEC Servers by Task
Redirection to Enhance the Resource
Efficiency of MEC Systems. Appl. Sci.
**[2021, 11, 7589. https://doi.org/](https://doi.org/10.3390/app11167589)**
[10.3390/app11167589](https://doi.org/10.3390/app11167589)
Academic Editor: Eui-Nam Huh
Received: 12 July 2021
Accepted: 17 August 2021
Published: 18 August 2021
**Abstract: To improve the resource efficiency of multi-access edge computing (MEC) systems, it is**
important to distribute the imposed workload evenly among MEC servers (MECSs). To address
this issue, we propose a task redirection method to balance loads among MECSs in a distributed
manner. In conventional methods, a congested MECS selects only one MECS to which it redirects
tasks. By contrast, the proposed method enables a congested MECS to distribute its tasks to a set of
MECSs, the loads of which are lower than that of the congested MECS by determining the number
of tasks that it redirects to each selected MECS. We prove that our task redirection method drives a
MEC system to a state where the resulting MECS load vector is lexicographically minimal. Through
extensive simulation studies, we show that compared with the conventional methods, the proposed
method can achieve the smallest load difference between the load of the MECS, the load of which is
the highest, and that of the MECS, the load of which is the smallest. By lexicographically minimizing
the MECS load vector, the proposed method decreases the average task blocking rate when the task
offload rate is high. In addition, we show that the proposed method outperforms the conventional
methods in terms of the number of tasks, the delay requirements of which are not satisfied.
**Keywords: task redirection; load balancing; lexicographically minimum; resource efficiency;**
distributed consensus
**1. Introduction**
In a multi-access edge computing (MEC) system, a device offloads tasks to nearby
MEC servers (MECSs). By serving offloaded tasks at the edge of a network, a MEC system
can facilitate delay-sensitive and computing-intensive applications in devices that are
limited with respect to energy, storage, and computing power [1,2]. However, devices
are not distributed uniformly in MEC systems, and each device may have a different task
offload rate. In addition, the service capacities of MECSs differ, and there is no central
entity controlling the mapping between offloaded tasks and MECSs. Therefore, the number of tasks offloaded to a MEC system is likely not evenly distributed among MECSs,
which results in the situation where some MECSs are heavily congested while other MECSs
are lightly loaded. When tasks are offloaded to a congested MECS, it is highly probable that
they will be blocked or their delay requirements will be violated. Furthermore, the capacity
of a MEC system, in terms of the number of acceptable tasks, will be reduced because
lightly loaded MECSs cannot serve the tasks blocked by congested MECSs. Therefore,
to enhance the resource efficiency of MEC systems, tasks from overloaded MECSs must be
redistributed to underloaded MECSs so that the workload imposed on the MEC system is
evenly distributed among MECSs.
Various load balancing methods that allocate a task to the least-loaded MECS have
been proposed. In [3], whenever a task is offloaded, a central controller assigns it to the
least-loaded MECS sequentially. We note that a role transfer solution can be used to design
a centralized method for solving the load balancing problem [4–6]. However, as the size
of the MEC system, in terms of the number of offloaded tasks and MECSs, increases, it
_Appl. Sci. 2021, 11, 7589_ 2 of 15
becomes challenging for a central controller to assign tasks to the least-loaded MECS every
time a task is offloaded. Distributed approaches were proposed in [7,8]. The method
proposed in [7] transfers tasks from highly loaded MECSs to the least-loaded MECS.
Similarly, to transfer the tasks offloaded from devices to a MECS, the authors in [8] forced
an overloaded MECS to select two MECSs randomly from a set of neighboring MECSs and
choose the least-loaded one for offloading. In the case of distributed methods, since all
the overloaded MECSs redistribute their tasks to the least-loaded MECS simultaneously,
the least-loaded MECS can easily become heavily loaded, which results in a further load
unbalance among MECSs. In addition, most approaches do not explicitly consider the
delay requirement of a task, nor the amount loads to be redirected to the least-loaded
MECS. Load balancing methods based on machine learning were proposed in [9–11].
After predicting the state of the MEC system in terms of the MECSs loads, the authors
proposed methods to balance loads among MECSs. However, the signaling cost of these
methods is very high because a central controller must collect an enormous amount of
data to analyze the state of the MEC system and transfer the learned model parameters
to each MECS. In addition, a relatively long time is required for a controller to learn by
analyzing big data, so these methods are not easily applied to short timescales. In [12],
game theory was used to resolve the load balancing problem. The cost minimization
problem was formulated as a transportation problem, and Vogel’s approximation method
was used to calculate the optimal solution. However, global information is needed to solve
the optimization problem, and the static threshold required to determine the load state of a
MECS was not systematically configured.
To address these issues, we propose a task redirection method to balance loads among
MECSs using a decentralized consensus method [13,14]. We explicitly consider the delay
requirement of a task when estimating the MECS load. In our task redirection method,
each MECS determines whether to redirect tasks by considering its load state relative to
that of the other MECSs. Once MECS i decides to redirect its tasks; instead of transferring
tasks to the least-loaded MECS, i redistributes its tasks to a set of MECSs whose loads
are smaller than its own load. In addition, MECS i determines the number of tasks to be
redirected to each MECS in the set according to the difference between the load of each
MECS in the set and its own load.
This paper is organized as follows. In Section 2, we introduce a system model and
define the load balancing problem in a MEC system. In Section 3, we present our task redirection method and discuss its properties. In Section 4, we validate the proposed method
by comparing its performance with those of the conventional methods. We conclude the
paper and provide future works in Section 5. Before we proceed, in Table 1, we present the
notations used in this paper for the readers’ convenience.
**Table 1. Notations used.**
**Notations** **Meaning**
_N_ A set of MECSs in the system.
_Ci_ CPU frequency of MECS i.
_Oi(t)_ A set of tasks in the waiting queue of MECS i at the end of time slot t.
_Si(t)_ A set of tasks in the service queue of MECS i at the end of time slot t.
_dx_ Data size of a task x.
_wx_ Workload imposed by a task x in terms of the number of CPU cycles.
_Tx_ Maximum delay allowed to finish a task x.
_ri,x(t)_ Uplink transmission rate to send a task x to MECS i during a time slot t.
_ρi,w(t)_ Load of MECS i imposed by the tasks in Oi(t).
_ρi,s(t)_ Load of MECS i imposed by the tasks in Si(t).
_ρi(t)_ Load of MECS i at the end of time slot t.
**Table 1. Cont.**

**Notations** **Meaning**

_ρ̄i(t)_ Avg. load of a MEC system at the end of time slot t (i.e., $\frac{1}{|N|}\sum_{j \in N} \rho_j(t)$).
∆i(t) A set of MECSs whose loads are lower than ρi(t).
_δi,j(t)_ Load difference between MECS i and j (specifically, $\frac{\rho_i(t) - \rho_j(t)}{|N|}$, j ∈ ∆i(t)).
_αi,j(t)_ The amount of workload that MECS i redirects to j.
Φi,j(t) A set of tasks that MECS i redirects to MECS j.
**2. System Model and Problem Formulation**
We considered a MEC system composed of a set N of MEC servers. We denote the
capacity of MECS i in terms of the number of CPU cycles per second as Ci. We assumed
that a MECS is installed in a base station (BS) and ignored the information transfer delay
between the MECS and BS. Following [9], we assumed that each device offloads its tasks
to the MECS installed in the BS, giving it the highest signal strength. Time is divided into
slots whose length is assumed to be the frame time between a device and a BS.
A MECS maintains two queues: a waiting queue and a service queue. The set of
tasks in the waiting queue of MECS i at the end of time slot t is denoted as Oi(t), and the
set of tasks in the service queue of MECS i at the end of the time slot t is denoted as
_Si(t). The waiting queue is used to temporarily buffer tasks offloaded from devices to_
a MECS during a time slot. At the end of a time slot, a MECS makes a task redirection
decision to balance the loads among MECSs. Depending on the decision made by a MECS,
the tasks in its waiting queue are either moved to its service queue or redirected to another
MECS. A MECS uses its service queue to accommodate tasks until they are served in a
FIFO manner. Thus, the tasks in the service queue of a MECS can be classified into two
groups: a group composed of tasks moved from its waiting queue and a group composed
of tasks redirected from other MECSs. Once a task is located in a service queue, it cannot
be redirected to other MECSs to avoid unnecessary increases in the delay involved in
transferring a task from one MECS to another.
During each time slot, MECS i receives tasks offloaded from devices and places them
in its waiting queue. A task x is composed of three tuples (dx, wx, Tx), where dx is the
data size of the task, wx is the workload of the task in terms of the number of CPU cycles
required to process the task, and Tx is the maximum delay allowed to finish the task.
We denote the uplink transmission rate to send a task x from a device to its serving MECS
during a time slot t as $r_{i,x}(t)$. If we assume that $r_{i,x}(t)$ does not change during a time slot (even though it can change across time slots), then to satisfy the delay requirement of task $x$, MECS $i$ has to complete the task within:

$$ b_{i,x} = T_x - \frac{d_x}{r_{i,x}(t)}. \qquad (1) $$

Since the MECS is installed in a BS, it can be synchronized in time with the associated devices. Thus, $d_x / r_{i,x}$ can be obtained by subtracting the time when a device sends task $x$ from the time when the MECS receives the task. Therefore, the CPU frequency (i.e., the number of CPU cycles per second) required to finish task $x$ becomes:

$$ f_{i,x} = \frac{w_x}{b_{i,x}}. \qquad (2) $$

Thus, at the end of time slot $t$, the load of MECS $i$ imposed by the tasks in $O_i(t)$ is:

$$ \rho_{i,w}(t) = \frac{\sum_{x \in O_i(t)} f_{i,x}}{C_i}. \qquad (3) $$
If a task $x$ resides in the service queue of MECS $i$ for $d_{i,x}$, MECS $i$ has to finish $x$ in $c_{i,x} = b_{i,x} - d_{i,x}$ to meet the deadline requested by $x$. MECS $i$ removes tasks whose $c_{i,x} \leq 0$ from $S_i(t)$ because the delay requirement of the task is not satisfied. If we denote the set of tasks in $S_i(t)$ whose $c_{i,x} > 0$ as $S'_i(t)$, the load imposed by a task $x \in S'_i(t)$ in terms of the number of CPU cycles per second is given as:

$$ f'_{i,x} = \frac{w_x}{c_{i,x}}. \qquad (4) $$

Therefore, the load imposed by the tasks in $S_i(t)$ is obtained as follows:

$$ \rho_{i,s}(t) = \frac{\sum_{x \in S'_i(t)} f'_{i,x}}{C_i}. \qquad (5) $$
The first step to balance the loads among MECSs is to determine whether a MECS
will have a higher load than the other MECSs. At the end of each time slot t, the set of
tasks received by MECS i is Oi(t) ∪ _Si(t). Thus, if ρi,w(t) + ρi,s(t) is larger than ρj,w(t) +_
_ρj,s(t), MECS i will have a higher load than MECS j if no other action is taken. Therefore,_
using Equations (3) and (5), we define the load of MECS i at the end of time slot t as:
$$ \rho_i(t) = \rho_{i,w}(t) + \rho_{i,s}(t). \qquad (6) $$
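A minimal sketch of the load bookkeeping in Eqs. (1)–(6); the task tuples, rates, and capacity below are illustrative example values, not the paper's simulation settings:

```python
def waiting_load(tasks, C_i):
    """rho_{i,w}(t): tasks are (d_x, w_x, T_x, r_ix) tuples in the waiting queue O_i(t)."""
    total = 0.0
    for d_x, w_x, T_x, r_ix in tasks:
        b_ix = T_x - d_x / r_ix      # Eq. (1): time budget left after the uplink transfer
        total += w_x / b_ix          # Eq. (2): required CPU frequency f_{i,x}
    return total / C_i               # Eq. (3)

def service_load(tasks, C_i):
    """rho_{i,s}(t): tasks are (w_x, c_ix) pairs, with residual deadline c_ix = b_ix - d_ix."""
    # Eqs. (4)-(5): tasks whose deadline is already violated (c_ix <= 0) are dropped
    return sum(w_x / c_ix for w_x, c_ix in tasks if c_ix > 0) / C_i

C_i = 10e9                                    # a 10 GHz MECS
waiting = [(4000, 0.5e9, 0.15, 1e6)]          # d=4 kbit, w=0.5 Gcycles, T=0.15 s, r=1 Mbit/s
service = [(0.3e9, 0.05), (0.2e9, -0.01)]     # the second task's deadline has expired
rho_i = waiting_load(waiting, C_i) + service_load(service, C_i)   # Eq. (6)
```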
We can observe that ρi(t) depends on many factors such as the attribute of a task
(dx, wx, Tx), the uplink transmission rate (ri,x(t)), the computing power of a MECS (Ci),
and the number of tasks in a MECS (|Oi(t)|, |Si(t)|). The attribute of a task depends on the
application services. Thus, ρi(t) ̸= ρj(t) even when all the other variables affecting ρi(t)
and ρj(t) are the same. As we can see in Equation (1), ri,x(t) influences the heterogeneity
of the task attribute by changing the delay requirement of a task when it arrives at a MECS.
The number of tasks in a MECS i depends on the task input rate and the task service
rate. The task input rate to a MECS i during a time slot t, which we denote by ai(t), is
mainly affected by the number of devices offloading their tasks to a MECS i. Generally,
_ai(t) ̸= aj(t), (i, j ∈_ _N) because devices are not evenly distributed over the region that_
a MEC system serves. In addition, the association method that a device selects a MECS
to which the device offloads its tasks also influences ai(t). Usually, it is difficult for a
device to know the load situation of each MECS. Thus, following [9], we assumed that a
device offloads tasks to the MECS colocated with the BS, which gives it the highest signal
strength. By selecting the BS giving the highest signal strength, a task may obtain high
_ri,x(t). However, it was shown in [15,16] that the number of devices associated with each_
BS is not evenly distributed when a device determines its association according to the
signal strength from a BS, which results in ai(t) ̸= aj(t), (i, j ∈ _N). The task service rate of_
a MECS is determined by Ci and the workload imposed by a task (wx), which are not the
same for all tasks.
Therefore, ρi(t) ̸= ρj(t), (i, j ∈ _N) in general, which means the ρi(t)s of some MECSs_
are high, while those of the other MECSs are low, as shown in Figure 1a. If a task is
offloaded to a MECS whose load is high, it is highly probable that the delay requirement of
the task is violated. However, we can avoid the situation if we redirect the tasks from a
highly loaded MECS to a lightly loaded MECS, as we depict in Figure 1. Thus, our goal is
to balance loads among MECSs by redirecting tasks so that:
$$ \rho_i = \rho_j = \bar\rho = \frac{1}{|N|}\sum_{k \in N} \rho_k, \quad \forall i, j \in N, \qquad (7) $$
where |N| is the cardinality of set N.
To balance the loads between a highly loaded MECS i and a lightly loaded MECS j,
MECS i must determine the amount of workload to redirect to MECS j. Generally, to make
such a decision, MECS i needs to know the local information of MECS j such as Oj(t) and
_Sj(t). However, by including ρi,w(t) in ρi(t), we enable MECS i to determine the amount_
of workload to redirect to MECS j without the local information of MECS j. We detail our
task redirection method in Section 3.
(a) Load balancing problem and task redirection
(b) Load balancing after task redirection
**Figure 1. Load balancing problem and task redirection approach.**
**3. Task Redirection Method**
In this section, we first present our task redirection algorithm that drives a MEC system
to the state where loads among MECSs are balanced; then, we discuss the properties of
the algorithm.
_3.1. Task Redirection Algorithm_
To balance the loads among MECSs in a distributed manner, each MECS i must be able
to decide autonomously whether to redirect tasks in Oi(t). If MECS i decides to redirect
tasks, i must select MECS j to which it transfers tasks. In addition, MECS i has to determine
the amount of workload to redirect to the selected target MECS j. To make such decisions,
each MECS i exchanges ρi(t) with other MECSs at the end of a time slot and calculates the
average load.
$$ \bar\rho(t) = \frac{1}{|N|}\sum_{j \in N} \rho_j(t). \qquad (8) $$
If ρi(t) ≤ _ρ¯(t), i considers that it is relatively underloaded compared with the other_
MECSs. Thus, i does not redirect a task in Oi(t) and moves all the tasks in Oi(t) to Si(t).
By contrast, if ρi(t) > ¯ρ(t), the load of MECS i is higher than that of some other MECSs;
therefore, i decides to redirect tasks in Oi(t) to MECS j. We adopt the distributed consensus
method in [14] for MECS i to determine not only a target MECS, but also the amount
of workload that i redistributes to the selected target MECS. The procedure is shown
in Algorithm 1. MECS i collects a set ∆i(t) of MECSs whose loads are lower than ρi(t).
For each MECS j ∈ ∆i(t), a MECS i calculates the load difference:
$$ \delta_{i,j}(t) = \frac{\rho_i(t) - \rho_j(t)}{|N|}, \quad j \in \Delta_i(t). \qquad (9) $$
Then, MECS i searches for the MECS j[∗] _∈_ ∆i(t) that gives the highest δi,j(t). Since the
capacity of MECS i is Ci, the amount of workload corresponding to δi,j∗ (t) becomes
_θi,j∗_ (t) = δi,j∗ (t)Ci. Since the total workload imposed by the tasks in Oi(t) is oi(t) =
∑x∈Oi(t) fi,x, the amount of workload that i redirects to j[∗] is αi,j[∗] (t) = min(oi(t), θi,j[∗] (t)).
MECS i constructs a set Φi,j∗ (t) of tasks to be transferred to j[∗] by randomly selecting
tasks from Oi(t). Specifically, MECS i randomly selects tasks from Oi(t) as long as
∑x∈Φi,j∗(t) fi,x < αi,j[∗] (t). Then, i redirects all the tasks in Φi,j[∗] (t) to j[∗].
After removing j[∗] from ∆i(t), MECS i repeats the procedure until ∆i(t) is empty or all
the tasks in Oi(t) are redirected, whichever comes first. If ∆i(t) becomes empty before all
the tasks in Oi(t) are redirected, MECS i moves the remaining tasks to its service queue.
**Algorithm 1 Task redirection algorithm.**

1: Ti(t) = ∆i(t)
2: while Ti(t) ≠ ∅ do
3:   find j* = arg max_{j∈Ti(t)} δi,j(t)
4:   θi,j*(t) = δi,j*(t) Ci
5:   αi,j*(t) = min(oi(t), θi,j*(t))
6:   construct Φi,j*(t)
7:   redistribute all the tasks in Φi,j*(t) to j*
8:   oi(t) = oi(t) − ∑_{x∈Φi,j*(t)} fi,x
9:   if oi(t) ≤ 0 then
10:    break
11:  else
12:    Ti(t) = Ti(t) − {j*}
13: end while
14: move remaining tasks in Oi(t) to its service queue.
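A sketch of Algorithm 1 in Python. The data structures (dicts keyed by MECS id, a list of task ids, per-task frequencies `f[x]`) are illustrative choices, and the task-selection step is simplified to "add random tasks until the redirected workload reaches α"; the paper does not prescribe these details:

```python
import random

def redirect_tasks(i, loads, C, waiting, f, N):
    """One task-redirection round of Algorithm 1 run by MECS i.

    loads: rho_j(t) per MECS; C: capacity per MECS; waiting: task ids in O_i(t);
    f[x]: required CPU frequency of task x. Returns {target MECS: [task ids]};
    tasks not in the returned plan stay in the service queue of i.
    """
    avg = sum(loads[j] for j in N) / len(N)
    plan = {}
    if loads[i] <= avg:                        # relatively underloaded: redirect nothing
        return plan
    T = {j for j in N if loads[j] < loads[i]}  # Delta_i(t)
    o = sum(f[x] for x in waiting)             # o_i(t): workload waiting at i
    pool = list(waiting)
    random.shuffle(pool)                       # tasks are chosen at random
    while T and o > 0 and pool:
        j = max(T, key=lambda m: loads[i] - loads[m])   # largest delta_{i,j}
        theta = (loads[i] - loads[j]) / len(N) * C[i]   # workload matching the load gap
        alpha = min(o, theta)
        phi, moved = [], 0.0
        while pool and moved < alpha:
            x = pool.pop()
            phi.append(x)
            moved += f[x]
        plan[j] = phi
        o -= moved
        T.discard(j)
    return plan
```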
_3.2. Properties of the Task Redirection Method_
In [17], the load of each MECS was shown to converge to ¯ρ = ∑i∈N ρi/|N| in polynomial time if each MECS repeats Algorithm 1.
To state how the proposed method distributes tasks to MECSs, we first introduce the
following definitions.
**Definition 1.** A vector $\vec{a} \in X \subseteq R^n$ is said to be a min-max fair vector on $X$ if and only if we cannot decrease a component $a_i \in \vec{a}$ without increasing another component $a_j \geq a_i$. Formally, for all $\vec{b} \in X$, if there exists $i \in \{1, \ldots, n\}$ such that $b_i < a_i$, then there exists $j \in \{1, \ldots, n\}$ such that $b_j > a_j \geq a_i$.

**Definition 2.** Given a set of vectors $A \subset R^n$, a vector $\vec{a} \in A$ is said to be leximax minimal if $\langle \vec{a} \rangle$ is lexicographically less than or equal to $\langle \vec{b} \rangle$ for any vector $\vec{b} \in A$, where $\langle \vec{a} \rangle$ represents the vector obtained from vector $\vec{a}$ by rearranging its elements in nonincreasing order.

We say that a MECS load vector $\vec{\rho} = (\rho_1, \ldots, \rho_n)$ is feasible if $\rho_i < 1$ for all $i \in N$ and denote the set of feasible $\vec{\rho}$s as $\Psi \subseteq R^n$. Then, we state the fairness of our task redirection method as Proposition 1.
**Proposition 1. The MECS load vector ⃗ρ = (ρ1, . . ., ρn) = ( ¯ρ, . . ., ¯ρ) ∈** Ψ obtained by our task
_redirection method is min-max fair._
**Proof.** Let us suppose that there is a min-max fair load vector $\vec{a} = (a_1, \ldots, a_n) \in \Psi$ such that $a_i \neq a_j$ for some $i, j$ ($i \neq j$). Without loss of generality, we assume that $a_i$ is the smallest and $a_j$ is the second smallest element in $\vec{a}$. If we choose $\epsilon$ such that $0 < \epsilon < a_j - a_i$, we have $a_i + \epsilon < a_j$. Let us consider another load vector $\vec{b} = (b_1, \ldots, b_n) \in \Psi$. If the load of some MECS decreases, the decreased load has to be accommodated by other MECSs. Specifically, for $0 < \delta_j < \epsilon$ and $0 \leq \delta_i < \epsilon$, let the decrease in the load of MECS $j$ from $a_j$ to $a_j - \delta_j$ be absorbed by MECS $i$, whose load varies from $a_i$ to $a_i + \delta_i$ with $\delta_i = \delta_j$. When $b_j = a_j - \delta_j$ and $b_i = a_i + \delta_i$, we have $b_j < a_j$ while the only increased component satisfies $b_i = a_i + \delta_i < a_i + \epsilon < a_j$, so no component increases beyond $a_j$, which contradicts that $\vec{a}$ is a min-max fair vector.
**Proposition 2. The min-max fair MECS load vector ⃗ρ = (ρ1, . . ., ρn) = ( ¯ρ, . . ., ¯ρ) is a unique**
_leximax minimal load vector._
**Proof.** It was shown in [18,19] that if a max-min fair vector exists on a set $X \subseteq R^n$, then it is the unique lexicographically maximal vector on $X$. Since $\vec{\rho}$ is min-max fair on $\Psi$, $-\vec{\rho}$ is a max-min fair load vector on $-\Psi$, which makes $-\vec{\rho}$ the unique lexicographically maximal vector on $-\Psi$. Therefore, $\vec{\rho}$ is the unique leximax minimal vector on $\Psi$.
According to Proposition 2, by redirecting tasks among MECSs in a distributed manner, our task redirection method makes ⃗ρ lexicographically smallest given a set of tasks
accommodated in a MEC system.
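The ordering in Definition 2 amounts to a lexicographic comparison of load vectors sorted in nonincreasing order; a toy illustration with made-up loads:

```python
def leximax_key(v):
    # <v>: the components of v rearranged in nonincreasing order (Definition 2)
    return sorted(v, reverse=True)

balanced = [0.5, 0.5, 0.5, 0.5]     # the equalized vector the method converges to
skewed = [0.9, 0.5, 0.4, 0.2]       # the same total load, unevenly spread
assert abs(sum(balanced) - sum(skewed)) < 1e-9
# the balanced vector is leximax smaller: its sorted form compares lexicographically lower
assert leximax_key(balanced) < leximax_key(skewed)
```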
**4. Performance Evaluation**
In this section, we verify the proposed method by comparing its performance with that
of conventional schemes under the same environment. Henceforth, we call the proposed
method, leximax minimal load balancing redirection (LMLBR). We selected the following
three representative schemes for performance comparison: (1) Random redirection (RR).
In RR, the overloaded MECS redirects its tasks to a randomly selected MECS; (2) Nearest
**redirection (NR). In NR, the overloaded MECS redirects its tasks to the MECS that is**
geographically closest; (3) Least-loaded redirection (LR). In LR, the overloaded MECS
redirects its tasks to the least-loaded MECS.
We distributed 50 MECSs in an area of 1000 × 1000 m. According to [20–22], we set the simulation parameters as shown in Table 2. The capacity of each MECS (Ci) was
randomly selected in [9 GHz, 11 GHz], according to a uniform distribution. We set the task
arrival process at a MECS to follow a Poisson point process (PPP) with rate λ. When task
_x is generated, its dx, wx, and Tx are randomly chosen from the given ranges in Table 2_
according to a uniform distribution. For example, the data size of a task is randomly
selected in [3 kbits, 6 kbits]. We set the length of a time slot to 100 ms, which is the frame
time between a device and a BS in an LTE system. We set the size of waiting queue and
service queue of a MECS so that the number of offloaded tasks does not exceed the queue
capacity of each MECS, when the number of devices associated with each MECS is evenly
distributed and the number of offloaded tasks for each time slot is distributed uniformly.
We first investigate the load variance of a MECS in Figure 2. The figure presents a
box plot for the load changes of the 32nd MECS during the simulation time when the task
offload rate is 0.5, 0.7, and 0.9. In NR and LR, all the overloaded MECSs select the best
MECS in terms of the distance (NR) or the load (LR) as their target MECS and redirect
their tasks to the best MECS. Therefore, the load of the best MECS increases sharply after it
accepts the tasks redirected to it within a time slot. Therefore, the load of a MECS varies
substantially depending on whether the MECS is chosen as the best MECS. However,
in the proposed method, the variance is relatively small because tasks are redirected from a
congested MECS to more than one target MECS according to the load difference between
the congested MECS and the target MECS.
**Table 2. Simulation parameters.**

| System Parameters | Values |
| --- | --- |
| Subcarrier bandwidth | 15 kHz |
| Background noise | 10⁻¹³ W |
| Data size of task x (dx) | 3000–6000 bits |
| Workload of task x (wx) | 0.1–1.0 GHz |
| Maximum delay allowed for task x (Tx) | 0.1–0.2 s |
| CPU frequency of MECS i (Ci) | 9–11 GHz |
| Waiting queue size of a MECS | 5 tasks |
| Service queue size of a MECS | 5 tasks |
**Figure 2. Load per time slot of four methods: (a) task offload rate, λ = 0.5; (b) task offload rate, λ = 0.7; (c) task offload rate, λ = 0.9.**
To scrutinize the load variance among MECSs, in Figure 3, we present the loads of
all 50 MECSs at a given time slot when the task offload rate is 0.5, 0.7, and 0.9. LMLBR
minimizes the MECS load vector lexicographically. In NR and LR, since the redirected
tasks are absorbed into the best MECS at once, the loads of some MECSs are very high,
while the loads of the other MECSs are low. However, in LMLBR, tasks are assigned to
multiple MECSs, which makes the load difference among MECSs the smallest.
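The leximax (lexicographic-minimax) order referred to here can be made concrete with a small helper: sort each load vector in non-increasing order and compare the results lexicographically. The sketch below is our own illustration, not code from the paper:

```python
def leximax_key(loads):
    """Sort loads in non-increasing order; comparing these tuples
    lexicographically realises the leximax order on load vectors."""
    return tuple(sorted(loads, reverse=True))

def leximax_less(a, b):
    """True if load vector a is leximax-smaller than load vector b."""
    return leximax_key(a) < leximax_key(b)

# A balanced vector is leximax-smaller than a skewed one of equal sum,
# which is why minimising the leximax order equalises MECS loads.
assert leximax_less([30, 30, 30], [60, 20, 10])
assert not leximax_less([60, 20, 10], [30, 30, 30])
```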
To compare the four methods in terms of the quality of service provided by the MEC
system, we inspected the average task blocking rate and the rate of tasks completed within
their delay constraints. When a task is offloaded to a MECS whose queue is full, the task
is blocked by the MECS. The average task blocking rate is defined as the fraction of tasks
that are blocked by the MECSs in the system. In Figure 4, the average task blocking rate
is depicted with varying λ. In NR and LR, some MECSs are highly loaded; thus, if a task
is offloaded to a highly loaded MECS, the task is likely to be blocked. Since the two lines
representing RR and LMLBR look similar in Figure 4, we show and scrutinize the average
blocking rate obtained by these two methods for λ = 0.5, 0.7, 0.9, separately, in Table 3.
In our method, an overloaded MECS i considers the loads of other MECSs when it selects a
MECS j to which it redirects its tasks. In addition, our method determines the amount of
tasks to redirect to each j according to the load difference between i and j. On the contrary,
in RR, an overloaded MECS redirects its tasks to one randomly selected MECS until it
becomes underloaded, regardless of the load of the selected MECS. Therefore, three adverse cases can happen when RR is used. Firstly, an overloaded MECS may redirect its tasks
to another overloaded MECS. Secondly, an overloaded MECS may redirect an excessive
amount of tasks to the selected MECS, which results in overloading the selected MECS.
Thirdly, multiple overloaded MECSs may select the same MECS i to which they redirect
their tasks. In this case, it is very likely that the selected MECS i becomes overloaded
even though it was underloaded before accommodating the redirected tasks. If the load
imposed on a MEC system is not so high, the number of MECSs overloaded by the tasks
offloaded from devices is small. Accordingly, the adverse cases occur rarely. Therefore, we
can observe in Table 3 that the average blocking rate obtained by RR is smaller than that
achieved by LMLBR when λ ≤ 0.7. However, as the task offload rate increases, the adverse cases caused by RR occur more frequently, which increases the average task blocking rate. Since our method avoids these adverse cases, it can achieve a lower task blocking rate
than RR when λ = 0.9. To further compare RR and LMLBR in terms of the blocking rate,
we show not only the average blocking rate, but also the standard deviation of the blocking
rate obtained by RR and our method in Figure 5. When we investigated the standard
deviation of the blocking rate (denoted by σb) in Figure 5, our method achieved smaller σb than RR for all λ > 0.1. This is attributed to the behavior of the proposed method.
Unlike RR, our method redirects tasks from highly loaded MECSs to lightly loaded MECSs
by considering the load difference between MECSs. Therefore, compared with RR, the
proposed method achieves smaller load difference between the load of the MECS whose
load is the highest and that of the MECS whose load is the smallest.
**Figure 3. Load per server for a given time slot (i.e., ρi(t), ∀i ∈ N): (a) task offload rate, λ = 0.5; (b) task offload rate, λ = 0.7; (c) task offload rate, λ = 0.9.**
**Figure 4. Average task blocking rate for various task offload rates.**
**Table 3. Comparison of RR and LMLBR in terms of the average task blocking rate (Difference (δbr) represents the average blocking rate obtained by LMLBR minus the average blocking rate obtained by RR).**

| λ | 0.5 | 0.7 | 0.9 |
| --- | --- | --- | --- |
| RR | 0.0247 | 0.0491 | 0.0792 |
| LMLBR | 0.0521 | 0.0605 | 0.0666 |
| Difference (δbr) | 0.0274 | 0.0114 | −0.0126 |
**Figure 5. Comparison of RR and LMLBR in terms of the average task blocking rate. Each bar represents the average blocking rate at a given task offload rate. If we denote the average blocking rate as b and the standard deviation of the blocking rate as σb, each line in each bar indicates the range of blocking rates from b − σb to b + σb.**
The rate of tasks completed within their delay constraints is affected by the total
delay from the beginning of the task offloading process to the end of task computing.
The total delay is composed of three parts. The first component is the transmission delay
between a device and the MECS to which the device offloads a task. The second part is
the queuing and service delay experienced by a task in the MECS that serves the task.
The third component is a redirection delay that is included in the total delay only when a
task is redirected from one MECS to another MECS. According to Little’s law, the queuing
delay is proportional to the queue length, which is proportional to the MECS load. Figure 6
shows that the rate of tasks completed within their delay constraints is the highest for
all λs when LMLBR is used. Since the MECS load vector obtained by LMLBR is leximax
minimal, the queue length vector whose element is the queue length of each MECS is also
leximax minimal, which results in the smallest total delay for a given λ. Overall, the rate
of tasks completed within their delay constraint for LMLBR is 10–42% better than that
achieved by other methods.
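The Little's-law argument above (mean queue length L = λ·W, so the queuing delay scales with queue length and hence with MECS load) can be illustrated numerically; the arrival rates and queue lengths below are hypothetical values, not simulation results:

```python
def littles_law_wait(arrival_rate, avg_queue_len):
    """Little's law rearranged: W = L / lambda, the mean time a task
    spends queued, given the mean queue length and arrival rate."""
    return avg_queue_len / arrival_rate

# At the same arrival rate, a MECS holding twice the queue length
# imposes twice the queuing delay on each task it serves.
w_light = littles_law_wait(arrival_rate=5.0, avg_queue_len=2.0)  # 0.4 s
w_heavy = littles_law_wait(arrival_rate=5.0, avg_queue_len=4.0)  # 0.8 s
assert w_heavy == 2 * w_light
```

This is why a leximax-minimal load vector, which keeps the longest queue as short as possible, also keeps the worst-case queuing delay small.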
To inspect the influence of the number of redirections per task on the distribution of MECS load, we show in Figure 7 the loads of all 50 MECSs at the same time slot when λ = 0.9. As the number of task redirections increases, the load imposed on a MEC system is distributed among MECSs more fairly. However, the congestion level of the network connecting MECSs also increases with the number of task redirections. In addition, the time from when a task is first offloaded from a device to a MEC system to when it is finally served by a MECS also increases with the number of times that the task is redirected. Since the problem of determining the optimal number of task redirections deserves to be investigated thoroughly in its own right, we leave it as future work.
**Figure 6. Rate of tasks completed within the delay constraints versus the task offload rate.**
**Figure 7. Influence of the number of redirections per task on the load distribution among MECSs when λ = 0.9.**
To investigate the sensitivity of the performance with respect to the simulation parameters,
we evaluated the performance by varying the queue size. In each MECS, we set the
size of the service queue to be the same as that of the waiting queue. We show the
results in Tables 4 and 5. In Table 4, as the queue size increases, the task blocking rate
decreases, but the difference is too small to mean much in practice. However, in Table 5,
the performance comparison is meaningful in terms of the rate of tasks completed within
the delay constraints. This is attributed to the following facts. In LMLBR, a MECS i
redirects tasks to the MECSs whose loads are lower than ρi. In addition, the amount of
tasks redirected from MECS i to MECS j is not excessive because LMLBR considers the load
difference between i and j. However, in RR, an overloaded MECS redirects the tasks to the
randomly selected MECS until it becomes underloaded. If tasks are redirected to one MECS
excessively, the service queue of that MECS grows sharply, which results in a large waiting time and a decrease in the rate of tasks completed within the delay constraints.
**Table 4. Task blocking rate versus queue size.**

| λ | RR (Q = 20) | LMLBR (Q = 20) | RR (Q = 10) | LMLBR (Q = 10) | RR (Q = 5) | LMLBR (Q = 5) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.5 | 0 | 0 | 0 | 0 | 0.0247 | 0.0521 |
| 0.7 | 0 | 0 | 0.0021 | 0.0024 | 0.0491 | 0.0605 |
| 0.9 | 0.0004 | 0.0001 | 0.0051 | 0.0035 | 0.0792 | 0.0666 |
**Table 5. Rate of tasks completed within the delay constraints versus queue size.**

| λ | RR (Q = 20) | LMLBR (Q = 20) | RR (Q = 10) | LMLBR (Q = 10) | RR (Q = 5) | LMLBR (Q = 5) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.5 | 0.7641 | 0.8528 | 0.7892 | 0.8532 | 0.7745 | 0.8437 |
| 0.7 | 0.6652 | 0.7936 | 0.7213 | 0.8225 | 0.7154 | 0.8097 |
| 0.9 | 0.5596 | 0.7423 | 0.6113 | 0.7612 | 0.614 | 0.7301 |
Since the proposed method requires the exchange of information between MECSs,
it incurs communication overhead. If a broadcast channel is used to exchange the load
information among |N| MECSs, the communication cost is O(|N|). If a unicast channel is used to exchange the information, the overhead is O(|N|²). The factor determining
whether the exchange of information between MECSs is required or not is the way that
a MECS decides whether it is overloaded. If a MECS decides to redirect its tasks when
its load is larger than a local threshold value, the communication cost of the method is
zero. However, it is difficult to configure an optimal threshold value according to the
dynamic network condition. If a MECS decides to redirect its tasks when its load is higher than the average load ρ̄, it must exchange its load information with neighboring MECSs. All four methods that are used for performance comparison in our experiments use ρ̄ for the threshold value.
Thus, each MECS exchanges its load information with other MECSs at each time slot and
calculates the average load to decide whether to redirect its tasks. Therefore, they have the
same communication overhead.
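The two message-count regimes can be checked directly, assuming each MECS reports its load once per time slot:

```python
def broadcast_messages(n):
    """One broadcast of its load per MECS per slot: O(n) messages."""
    return n

def unicast_messages(n):
    """Each MECS unicasts its load to every other MECS: O(n^2) messages."""
    return n * (n - 1)

# With |N| = 50 MECSs, as in the simulations:
assert broadcast_messages(50) == 50
assert unicast_messages(50) == 2450
```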
The computational complexity of our method can be derived as follows. In our
method, when the load of a MECS is higher than the average load, the MECS selects a set of
target MECSs whose loads are lower than its load. Then, the MECS redistributes its tasks to
the target MECSs in order in proportion to the load differences between itself and the target
MECSs. Thus, the computational complexity of LMLBR is O(|N| log |N|) for sorting the
target MECSs in order. In RR, when the load of a MECS is higher than the average load, the
MECS randomly selects a target MECS to redistribute the tasks. Thus, the computational
complexity of RR is O(1). However, considering the results in terms of the quality of the
service provided by a MEC system, the proposed method outperforms RR in terms of the
rate of tasks completed within the delay constraints, while being comparable to RR with
respect to the task blocking rate.
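Putting these pieces together, the per-slot decision of an overloaded MECS under LMLBR combines target selection, the O(|N| log |N|) sort, and load-difference-proportional splitting. The function below is our own simplification of the text's description; it ignores queue limits and per-task delay bounds:

```python
def lmlbr_redirect(loads, i):
    """For MECS i: if overloaded (load above the average), pick target
    MECSs with lower load, sort them (the O(|N| log |N|) step), and
    split the excess load in proportion to the load difference to each
    target. Returns a dict: target index -> amount of load to redirect."""
    avg = sum(loads) / len(loads)
    if loads[i] <= avg:
        return {}  # not overloaded: nothing to redirect
    targets = sorted(
        (j for j in range(len(loads)) if j != i and loads[j] < loads[i]),
        key=lambda j: loads[j],
    )
    diffs = {j: loads[i] - loads[j] for j in targets}
    total = sum(diffs.values())
    excess = loads[i] - avg  # shed load down to the system average
    return {j: excess * d / total for j, d in diffs.items()}

shares = lmlbr_redirect([80, 20, 40, 60], i=0)
assert abs(sum(shares.values()) - 30) < 1e-9  # average load is 50
assert shares[1] > shares[2] > shares[3]      # proportional to load gap
```

Because the shed amount is split across several targets in proportion to their load gaps, no single target absorbs the whole excess, which is the behavior the RR, NR, and LR baselines lack.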
**5. Conclusions and Future Works**
In this paper, we presented a distributed task redirection method among MECSs to
improve the resource efficiency of MEC systems. We proved that our method drives the
MEC system to the state where the loads of the MECSs are lexicographically minimal.
Through simulation studies, we showed that compared with other conventional load
balancing methods, the proposed method achieves the smallest load difference between
the load of the MECS whose load is the highest and that of the MECS whose load is the
smallest. By obtaining the leximax minimal load vector, the proposed method improves the
rate of tasks completed within their delay constraints. In addition, our method decreases
the average task blocking rate when the task offload rate is high.
We plan the following future work. Firstly, we will devise a task selection
method that selects tasks to redirect from the waiting queue of a highly loaded MECS.
By considering the delay requirements of the tasks in Oi(t) when constructing the set
Φi,j∗, we expect that the rate of tasks completed within the delay constraints will improve.
Secondly, we will inspect the influence of the delay required to redirect a task from one
MECS to another MECS. We will also scrutinize the optimal number of redirection per task.
Finally, we will extend the proposed method so that it can be used in the environment
where only a subset of MECSs in the system can exchange their load information.
**Author Contributions: Conceptualization, J.P. and Y.L.; methodology, J.P.; software, Y.L. Both authors**
have read and agreed to the published version of the manuscript.
**Funding: This research was supported by stage 4 BK21 project in Sookmyung Women’s Univ of the**
National Research Foundation of Korea Grant. This work was supported by the National Research
Foundation of Korea(NRF) grant funded by the Korea government(MSIT) (No. 2021R1F1A1047113).
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Mach, P.; Becvar, Z. Mobile Edge Computing: A Survey on Architecture and Computation Offloading. IEEE Commun. Surv. Tutor.
**[2017, 19, 1628–1656. [CrossRef]](http://doi.org/10.1109/COMST.2017.2682318)**
2. Filali, A.; Abouaomar, A.; Cherkaoui, S.; Kobbane, A.; Guizani, M. Multi-Access Edge Computing: A Survey. IEEE Access 2020, 8,
[197017–197046. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.3034136)
3. Lai, S.; Fan, X.; Ye, Q.; Tan, Z.; Zhang, Y.; He, X.; Nanda, P. FairEdge: A Fairness-Oriented Task Offloading Scheme for Iot
[Applications in Mobile Cloudlet Networks. IEEE Access 2020, 8, 13516–13526. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.2965562)
4. Zhu, H.; Zhou, M. Efficient Role Transfer Based on Kuhn–Munkres Algorithm. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum.
**[2012, 42, 491–496. [CrossRef]](http://dx.doi.org/10.1109/TSMCA.2011.2159587)**
5. Zhu, H.; Zhou, M. M-M Role-Transfer Problems and Their Solutions. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2009, 39,
448–459.
6. Zhu, H.; Zhou, M. Roles in Information Systems: A Survey. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev. 2008, 38, 377–396.
7. Sharmin, Z.; Malik, A.W.; Rahman, A.U.; Noor, R.M. Toward Sustainable Micro-Level Fog-Federated Load Sharing in Internet of
[Vehicles. IEEE Internet Things 2020, 7, 3614–3622. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2020.2973420)
8. Liu, L.; Chan, S.; Han, G.; Guizani, M.; Bandai, M. Performance Modeling of Representative Load Sharing Schemes for Clustered
[Servers in Multiaccess Edge Computing. IEEE Internet Things J. 2019, 6, 4880–4888. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2018.2879513)
9. Wang, Z.; Xue, G.; Qian, S.; Li, M. CampEdge: Distributed Computation Offloading Strategy under Large-Scale AP-based Edge
[Computing System for IoT Applications. IEEE Internet Things J. 2021, 8, 6733–6745. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2020.3026862)
10. Li, J.; Luo, G.; Cheng, N.; Yuan, Q.; Wu, Z.; Gao, S.; Liu, Z. An End-to-End Load Balancer Based on Deep Learning for Vehicular
[Network Traffic Control. IEEE Internet Things J. 2019, 6, 953–966. [CrossRef]](http://dx.doi.org/10.1109/JIOT.2018.2866435)
11. Filali, A.; Mlika, Z.; Cherkaoui, S.; Kobbane, A. Preemptive SDN Load Balancing with Machine Learning for Delay Sensitive
[Applications. IEEE Trans. Veh. Technol. 2020, 69, 15947–15963. [CrossRef]](http://dx.doi.org/10.1109/TVT.2020.3038918)
12. Abedin, S.F.; Bairagi, A.K.; Munir, M.S.; Tran, N.H.; Hong, C.S. Fog Load Balancing for Massive Machine Type Communications:
[A Game and Transport Theoretic Approach. IEEE Access 2018, 7, 4204–4218. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2018.2888869)
13. Cybenko, G. Dynamic Load Balancing for Distributed Memory Multiprocessors. J. Parallel Distrib. Comput. 1989, 7, 279–301.
[[CrossRef]](http://dx.doi.org/10.1016/0743-7315(89)90021-X)
14. Xiao, L.; Boyd, S.; Kim, S.-J. Distributed Average Consensus with Least-Mean-Square Deviation. J. Parallel Distrib. Comput. 2007,
_[67, 33–46. [CrossRef]](http://dx.doi.org/10.1016/j.jpdc.2006.08.010)_
15. Andrews, J.G.; Singh, S.; Ye, Q.; Lin, X.; Dhillon, H.S. An Overview of Load Balancing in HetNets: Old Myths and Open Problems.
_[IEEE Wirel. Commun. 2014, 21, 18–25. [CrossRef]](http://dx.doi.org/10.1109/MWC.2014.6812287)_
16. Park, J. Association Game for Conflict Resolution between UEs and Small Cells. Hindawi Wirel. Commun. Mob. Comput. 2020,
_[2020, 5801217. [CrossRef]](http://dx.doi.org/10.1155/2020/5801217)_
17. Olshevsky, A.; Tsitsiklis, J.N. Convergence Speed in Distributed Consensus and Averaging. SIAM J. Control Optim. 2009, 48, 33–55.
18. Sarkar, S.; Tassiulas, L. Fair Allocation of Discrete Bandwidth Layers in Multicast Networks. In Proceedings of the 2000 IEEE
Conference on Computer Communications (INFOCOM’00), Tel Aviv, Israel, 26–30 March 2000; pp. 1491–1500.
19. Sarkar, S.; Tassiulas, L. Fair Bandwidth Allocation for Multicasting in Networks with Discrete Feasible Set. IEEE Trans. Comput.
**[2004, 53, 785–797. [CrossRef]](http://dx.doi.org/10.1109/TC.2004.30)**
20. Yang, X.; Yu, W.; Huang, H.; Zhu, H. Energy Efficiency based Joint Computation Offloading and Resource Allocation in
[Multi-access MEC Systems. IEEE Access 2019, 7, 117054–117062. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2019.2936435)
21. Zhang, K.; Mao, Y.; Leng, S.; Zhao, Q.; Li, L.; Peng, X.; Pan, L.; Maharjan, S.; Zhang, Y. Energy-Efficient Offloading for Mobile
[Edge Computing in 5G Heterogeneous Networks. IEEE Access 2016, 4, 5896–5907. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2016.2597169)
22. 3GPP TR 38.912 Version 15.0.0 Release 15. 5G Study on New Radio (NR) Access Technology. Available online: [https://www.etsi.org/deliver/etsi_tr/138900_138999/138912/15.00.00_60/tr_138912v150000p.pdf](https://www.etsi.org/deliver/etsi_tr/138900_138999/138912/15.00.00_60/tr_138912v150000p.pdf) (accessed on 15 August 2021).
-----
Paradigm Academic Press
Art and Society
ISSN 2709-9830
JUN. 2023 VOL.2, NO.3
# Exploring the Digital Creative Product Design of Luoshan Shadow Based on Non-Fungible Tokens
Taotao Xu[1,2] & Musdi bin Hj. Shanat[1]
1 Faculty of Applied and Creative Arts, Universiti Malaysia Sarawak, Sarawak, Malaysia
2 School of Art and Design, Xinyang University, Xinyang, China
Correspondence: Taotao Xu, Faculty of Applied and Creative Arts, Universiti Malaysia Sarawak, Sarawak,
Malaysia; School of Art and Design, Xinyang University, Xinyang, China.
doi:10.56397/AS.2023.06.02
**Abstract**
The popularity of Non-fungible Tokens is reshaping the Internet, digital assets and content. Many cultural sectors
worldwide have switched and proposed using NFT technology to enhance the liquidity of financialised art assets.
Numerous companies are trying to take advantage of the rapid growth of the NFT markets to increase their
competitiveness. Although digital artwork provides numerous technical advantages, few design practices are
available for Luoshan Shadow’s NFT virtual creative products. This study aimed to investigate the factors
influencing the digital design of the Luoshan shadow by NFT technology. We conducted a thematic analysis of
the comments and suggestions given by designers and intangible cultural heritage (ICH) inheritors. Data from 185 designers, consumers, and ICH inheritors indicated four main themes of concern: (i) copyright and security issues; (ii) innovation of art form and content; (iii) new forms of presentation; and (iv) benefits from Non-fungible Tokens technology. These findings may
provide experience for Luoshan shadow art to start the layout of digital creative design in the context of the
much-needed NFT market and to organically combine its digital derivatives with physical derivatives to achieve
joint development.
**Keywords: non-fungible tokens, digital creative product, Luoshan shadow, innovation**
**1. Introduction**
The continued growth of the Non-fungible Tokens segment is driving the fast-reading development of the digital
market. According to Coin Gecko data, the overall market capitalization of NFT reached USD 12.7 billion in the
first half of 2021, an increase of nearly 310 times compared to 2018. According to Non-Fungible data, NFT
transaction size reached USD 754 million in the second quarter of 2021, up 3453 percent year-over-year and 39
percent sequentially, with explosive growth in transaction volume. However, the NFT market has seen some
degree of reduced heat since 2022, but the overall trend is still forward. Artists from several countries believed
that NFT technology was an unparalleled innovation that could become a new chapter in art history.
Meanwhile, NFT plays a crucial role in protecting rights and in economic terms. Therefore, under the trend of NFT technology, the digital creative design of Luoshan shadow will move beyond the situation in which most art creations mainly express the author’s emotion; instead, creations will pay more attention to the connection between the works and their actual audience, a shift most directly characterized by the commerciality and uniqueness of artworks. In addition, designers need to develop innovative digital products to pass on the Luoshan shadow and adapt it to the current environment, better integrating it with modern technology.
**2. Literature Review**
Several studies have reviewed and investigated the creative influence of Non-fungible Tokens on digital
designers (Logan Kugler, 2021; Lawrence J. Trautman, 2021; Sakib Shahriar & Kadhim Hayawi, 2021; Dan
Weijers & H. Joseph Turton, 2021; Florian Horky, Carolina Rachel & Jarko Fidrmuc, 2022). The factors
associated with Non-Fungible Token among digital arts include environment, property, management, copyright,
and also prices (Joshua Fairfield, 2021; Andres Guadamuz, 2021; Usman W. Chohan, 2021).
Simultaneously, the application of NFT is expanding to various fields, from artwork and collectibles to avatars
and pictures to games and traditional culture. The cultural assets are transformed into unique digital items using
the Non-Fungible Token technology (Emre Ertürk; Murat Doğan; Ümit Kadiroğlu; Enis Karaarslan, 2021). The
impact of blockchain technologies on cultural heritage preservation is made better with digital evolution support
(Denis Trček, 2022).
Moreover, the Non-Fungible Token has also greatly affected intangible cultural heritage (Erica Del Vacchio &
Francesco Bifulco, 2022). Although this research has shown the critical signs of blockchain’s roles in cultural
heritage, its application may have several limitations. These indicators include empirical research, guideline,
legal regulation, and consumers (Erica Del Vacchio & Francesco Bifulco, 2022).
Empirical research also facilitates the effective transmission of intangible cultural heritage, such as breaking
through the conflict between the economic value of NFT art and the inherent value of traditional art (Yangyang
Zhang & Xiaotian Wang, 2021). It is considered a part of the development of the influence of NFT on cultural
heritage. Denis Trček (2022) claimed that cultural heritage should be conserved as faithfully as possible, and blockchains
were perfect for this purpose. The impacts of these technologies in the cultural heritage domain are still
comparatively minor. With such empirical research in place, cultural heritage can exploit the advantages and opportunities of new technologies and thus be better inherited in the face of social change (Bosone, M.; Nocca, F.; Fusco Girard, L., 2021). Regarding relevant design practices, the
national intangible cultural heritage of Huaxian shadow puppets released four Beijing Winter Olympic
Games-themed shadow digital collections on the digital collection platform in February 2022.
Guidelines of NFT applications such as market access standards, the responsibility of the trading platform,
database construction, and digital product regulation are vital indicators of management for the digital art market
(Shuangzhou Liu & Zhiwei Guo, 2022). They found that NFT digital artwork value identification was difficult
due to opaque transaction information, and payment through virtual currency and other transaction
characteristics poses a difficult regulatory challenge. Digital product regulation, on the other hand, affects how to
achieve the standardization and legalization of NFT market development. The value of various non-fungible assets can be realized by promoting the compliant circulation of on-chain assets under a regulatory framework (Chen Jiang, 2022). Moreover, a legal form of ownership of NFT is both sorely needed and has not
yet been established online (Joshua Fairfield, 2021).
Many customers have different needs for NFTs, which significantly impact the online market (Joshua
Fairfield, 2021). The decision environment has different moderating effects on consumers with NFT of cultural
heritage. These environments are equally crucial for high NFT consumers and low NFT consumers (Lingli Dong,
2017). The demand of NFT consumers is critical to digital market development. It is not easy to sustain a buyer’s
market for NFT in the artwork category with low-frequency trading. Consumers and suppliers should be closely
involved in connected and intelligent cultural heritage (Johan Oomen & Lora Aroyo, 2011). Hence, a larger
consumer group is needed to expand sustainable consumers of entertainment, social, and interest in NFT (Jian
Song & Lin Liu, 2022).
Even though we know, from global experience, that NFT plays a significant role in the development of cultural
heritage, current studies on the impact of NFT on the digital creative design of Intangible Cultural Heritage
(ICH) have yet to describe it in depth. Furthermore, there is sparse information on NFT-based digital creative
design of ICH in China to inform an appropriate approach for Luoshan shadow. Most current studies and
practice draw on the experience of others elsewhere, with cultural backgrounds different from that of Luoshan
shadow. The opinions and experiences of digital designers who have direct contact with NFT are therefore
invaluable for the current creative design of Luoshan shadow. Hence, this study aimed to explore how NFT
influences the factors that enhance the digital creative design of Luoshan shadow.
**3. Methodology**
This section discusses the overall methodology, including the dataset and analysis, the interviews with digital
designers, and the impacts identified. A hybrid approach is employed to ensure the results and findings are
dynamic and relevant.
_3.1 Data Collection_
This study is conducted in China, where many ICHs are protected and utilized to ensure the transmission of
traditional culture. A publicly available dataset on Yuxiaoshu (https://www.yuxshu.cn/Questionnaire) was
utilized for the research. The dataset contains demographic studies such as occupation, age, income, price of
digital products, and attitudes towards NFT products. In this current work, we also analyzed the qualitative data
of designers’ and folk artists’ opinions and suggestions. The questions provided for the digital designers and folk
artists were as follows:
- How does NFT influence the digital creative design of Luoshan shadow?
- What is the end user’s primary concern regarding these products?
- Do Non-Fungible Tokens have an impact on creative design?
- How can NFT help them financially?
The survey questionnaires were distributed to designers, inheritors of Luoshan shadow play, and ordinary
consumers. Responses were collected in Henan, Hubei, Shandong, Jiangxi, Guizhou, Hebei, and Sichuan, and
respondents had to be 18 years old or above. All data were retrieved within one month, from 28 April 2022 to
29 May 2022. We performed a thematic analysis of the dataset across questionnaires and interviews. Data
retrieved from Yuxiaoshu, including the respondents' opinions and suggestions, were stored on that platform.
The analysis involved the researchers double-checking the data to confirm the topics, ideas, and impact of NFTs
on Luoshan shadow digital design. We also performed a fundamental correlation analysis on the future
development of Luoshan shadow digital products.
_3.2 Data Analysis_
To understand the relationship between Non-Fungible Tokens and Luoshan shadow digital design, we present
the study's results alongside interviews with digital designers and Luoshan shadow puppet inheritors. The
gathered publicly available information about NFT provides significant insights into the development of
Luoshan shadow digital design. Apart from the information extracted from Yuxiaoshu, the designers and
Luoshan shadow puppet inheritors also provided many valuable opinions about NFTs in Luoshan shadow digital
design.
We measure the NFT popularity of Luoshan shadow digital design by aggregating information from all
respondents mentioned above. The Yuxiaoshu dataset contains 185 questionnaires, the majority (95%) of which
were completed in May 2022. Over 65% of all respondents were interested in NFT creative products of Luoshan
shadow, compared to only 19% who had no feelings about them. We also found that more than 72% of
respondents regarded people around them holding NFT creative products of Luoshan shadow as a novelty. Only
48% of respondents are willing to spend money on NFT digital creative products of Luoshan shadow in the
future, which we attribute to the inflated prices of NFT products in the current market.
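The percentages above are simple shares of respondents per answer category. The aggregation can be sketched as follows (the response labels and counts here are hypothetical and scaled down for illustration, not the actual Yuxiaoshu records):

```python
from collections import Counter

def attitude_shares(responses):
    """Return each answer's share of respondents as a percentage."""
    counts = Counter(responses)
    total = len(responses)
    return {label: round(100 * n / total, 1) for label, n in counts.items()}

# Hypothetical answers, scaled down from the survey for illustration.
sample = ["interested"] * 13 + ["no feeling"] * 4 + ["unsure"] * 3
shares = attitude_shares(sample)
print(shares)  # {'interested': 65.0, 'no feeling': 20.0, 'unsure': 15.0}
```

The same aggregation, applied per demographic variable, yields the kind of breakdown reported in Table 1.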
_3.3 Results_
Refer to Table 1 for a detailed background of the respondents. A total of 262 inputs by 185 participants were
retrieved from the Yuxiaoshu during the study period. The interviews were conducted in person and online with
eight professionals. Based on our systematic analysis of the dataset, we found that the creative design quality and
capability of Luoshan shadow can be improved by NFT. Almost 57% of respondents in our sample were
concerned about the price of NFT virtual products of Luoshan shadow; they expected the price of these
products to be under $200. These data demonstrate how people with different occupations and incomes reflect
their attitudes to the NFT creative products of Luoshan shadow.
Table 1. Background of the Participants
| Variables | | n | Percentage (%) |
| --- | --- | --- | --- |
| Gender | Male | 66 | 35.7 |
| | Female | 119 | 64.3 |
| Province | Henan | 60 | 32.4 |
| | Shandong | 38 | 20.5 |
| | Hubei | 24 | 13.0 |
| | Jiangxi | 12 | 6.50 |
| | Sichuan | 6 | 3.20 |
| | Others | 45 | 24.4 |
| Age | <18 | 0 | 0 |
| | 18-25 | 64 | 34.6 |
| | 26-35 | 84 | 45.4 |
| | 36-50 | 35 | 18.9 |
| | >51 | 2 | 1.10 |
| Major | Art | 126 | 68.1 |
| | Engineering | 13 | 7.00 |
| | Literature | 10 | 5.40 |
| | Management | 10 | 5.40 |
| | Others | 26 | 14.1 |
| Income | <2000 | 2 | 1.10 |
| | 2500~5500 | 96 | 51.9 |
| | 5500~10000 | 67 | 36.2 |
| | 10000~20000 | 18 | 9.70 |
| | >20000 | 5 | 2.70 |
| Exposure to NFT | <200 | 105 | 56.7 |
| | 200~1000 | 73 | 39.5 |
| | 1000~5000 | 7 | 3.80 |
| | >5000 | 0 | 0 |
**Function 1: Enhancement of Design Capabilities**
Digital product designers and intangible cultural heritage inheritors shared their opinions on several functions
related to NFT virtual creative products of Luoshan shadow. They emphasized design matters related to NFT,
including complementing the NFT product category of Luoshan shadow, combining modern and popular
elements in terms of shape and theme, and stressing creativity in Luoshan shadow digital creative products. As
one of the digital designers who studied the Luoshan shadow put it:
_“The creative digital design of the Luoshan shadow requires a certain degree of topicality. For example, the_
_scenic digital collection of the Datang City of Night has chosen the shape of the Netflix Miss Invincible, which_
_would increase the sales of the collection with a higher audience. Although NFT is a recent trendy topic, and_
_there are many cases on the Internet where random doodles sell for high prices, as a special status of national_
_NFT, its collection should not be the purpose of the creator but a critical carrier to publicize and promote the_
_national culture, representing the cultural image of China. Therefore, a high degree of originality and attentive_
_design supports the digital creative design of Luoshan shadow”._
The digital product designers shared their opinions on how to improve the quality of product design, and they
included the factors that clarify supply and demand, optimization of development and design process, and
attention to detail in sensory response. For the “design methods”, most designers’ suggestions included virtual
cultural crossing, flexible use of typical symbols, and enhanced immersion and interaction of the product.
**Function 2: Identification and Copyright**
“Identification” was an essential function that emerged from the NFT virtual creative products of Luoshan
shadow. Digital product designers and inheritors expect that more artists and developers will enter the field of NFT.
They claimed that NFT can protect the copyright of authors and that it is impossible to publish photos on the
Internet without the copyright owner’s permission unless approved by a blockchain transaction. For example,
one of the designers was very excited because many NFTs offer new or more convenient ways to protect and sell
their creative works.
_“In terms of intellectual property and related laws, the NFT allows artists to increase their financial income_
_through the sale of digital works. On the other hand, we know that it is very easy to reproduce paintings,_
_photography, music, games, and other digital media in the digital art world. However, NFT is no substitute for_
_crypto art. It aims to guarantee the provenance of digital artworks through blockchain technology”._
At the same time, most participants wanted the original digital products of Luoshan shadow to be recognized
through NFT. Purchasing an NFT can be regarded as purchasing a unique art print signed by the creator. NFTs
can also better manage the creation and trading of works in a virtual environment through smart contracts.
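The "unique art print" analogy and the smart-contract-managed trading described above can be illustrated with a toy, ERC-721-style ownership registry. This is a hedged sketch in plain Python (the class, token ID, and parties are invented for illustration), not an actual on-chain contract:

```python
class ShadowTokenRegistry:
    """Toy ERC-721-style registry: each token ID maps to exactly one owner.

    Illustrative only; real NFTs live in a smart contract on-chain.
    """

    def __init__(self):
        self._owner = {}    # token_id -> current owner
        self._creator = {}  # token_id -> original creator (the "signature")

    def mint(self, token_id, creator):
        if token_id in self._owner:
            raise ValueError("token already exists")  # uniqueness guarantee
        self._owner[token_id] = creator
        self._creator[token_id] = creator

    def transfer(self, token_id, sender, recipient):
        # Only the current owner may move the token, mirroring on-chain checks.
        if self._owner.get(token_id) != sender:
            raise PermissionError("sender does not own this token")
        self._owner[token_id] = recipient

    def owner_of(self, token_id):
        return self._owner[token_id]

    def creator_of(self, token_id):
        return self._creator[token_id]

registry = ShadowTokenRegistry()
registry.mint("luoshan-001", "artist")                   # creator "signs" the print
registry.transfer("luoshan-001", "artist", "collector")  # a sale
print(registry.owner_of("luoshan-001"), registry.creator_of("luoshan-001"))
# prints: collector artist
```

Even as ownership changes hands, the creator field is permanent, which is what makes the "signed print" reading possible.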
**Function 3: Relationship Medium**
The combination of virtual and realistic Luoshan shadow innovative products will break through the traditional
physical space limitation in terms of display. Traditional exhibitions are limited by physical space and can only
display part of the exhibits. Nevertheless, virtual worlds have no limits in this regard. The immersive virtual
environments will provide us with new ways of looking at art and create new forms of interaction with Luoshan
shadow art, including physical, auditory, olfactory, and other forms.
Many participants said that NFT platforms would discover new artists, address the scarcity of Luoshan shadow
digital art, and explore the possibility of interaction between Luoshan shadow and digital media. For
example, one of the inheritors said, “More people will own and participate in the NFT products of Luoshan
shadow with blockchain technology.” Another inheritor also shared her opinion on this question,
_“Intangible cultural heritage will gain a more durable life through NFT technology because it can break the_
_limits of time and space. The history and culture behind Luoshan shadow can be conveyed by innovative digital_
_products in a more youthful experience, thus becoming an essential breakthrough in promoting its innovative_
_development”._
Most inheritors claimed that NFT is a robust platform for showcasing the charm and elegance of Luoshan
shadow digital artwork to the world.
**Function 4: Economic System**
Some respondents regarded NFT creative products of Luoshan shadow as a sustainable model that does not
create overcapacity and can raise the income of ordinary individuals. To reach a younger consumer group and
achieve better sales results, many well-known brand companies have launched their own digital NFTs. Shanniu,
one of the inheritors of Luoshan shadow puppets, shared her views on NFT. She believed that the profound
combination of NFT and marketing activities could promote the development of the traditional Luoshan shadow
industry. The combination of NFT and Luoshan shadow allows everyone on the network to view and buy their
favorite shadow products. Most importantly, they reminded the public that NFT products could generate a steady
stream of royalty income for creative designers and inheritors of the Luoshan shadow and boost the local
regional economy.
**4. Discussion**
Through our discussion of the subjective opinions and suggestions on crucial issues related to the NFTs of
Luoshan shadow, we identified that blockchain technology enhances the vitality of digital creative design.
Furthermore, the advantages of Luoshan shadow NFT virtual creative products, as found by some researchers,
centre on copyright, innovation in form and content, and economic benefits.
_4.1 Security and Copyright_
Many designers believe the blockchain provides a traceable ownership guarantee for Luoshan shadow digital
artwork. Each NFT reflects a unique serial number on a specific blockchain. The transactions associated with it
are recorded on a decentralized blockchain ledger, which is tradable, verifiable, tamper-proof, and traceable.
Leqing Wang, the product leader of Tencent Cloud’s blockchain “to the chain,” pointed out that NFT is unique
and has properties such as resistance to tampering, authenticity, and scarcity (Chen Jiang, 2022). NFT is
equivalent to a certificate of artwork, proving that digital artwork of Luoshan shadow is original, and others are
just copies for preservation. The blockchain will record the work’s creator and when it was created, so works
freely copied and distributed on the Internet can also be distinguished from forgeries. It will solve the problem of
opaque information about the origin and authenticity of Luoshan shadow digital artworks. The blockchain also
enhances the efficiency of transactions between buyers and sellers of Luoshan shadow digital artworks, because
authentication no longer needs to be completed by professional institutions, saving time and removing
geographical limitations.
Compared to homogenized tokens such as Bitcoin and Ether, NFT provides a way to note or mark ownership of
natively digital assets, making the commodity a unique digital asset on the blockchain. Therefore, through
technological innovations such as timestamps and smart contracts, it is possible to register the copyright of each
digital creative work of Luoshan shadow, which protects copyright better. For example, if a digital work of
Luoshan shadow is sold on a trading platform, the designer can mint the work as an NFT and set rules for the
authorization or transfer of the work's copyright. Meanwhile, the blockchain system will keep a permanent record
of copyright flow.
In terms of the security of Luoshan shadow NFT products, some scholars argue that NFT enables the audience to
share revenue with art creators, and they emphasize the importance of the circulation of copyright transactions
and the appreciation of copyright value.
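The tamper-proof, traceable record-keeping attributed to the blockchain above rests on hash chaining: each record commits to the hash of the previous one, so altering any earlier entry breaks the chain. A minimal sketch of this principle (an illustration, not a real blockchain; the events and dates are hypothetical):

```python
import hashlib
import json

def record_hash(record):
    """Deterministic SHA-256 of a record body (everything except its own hash)."""
    body = {k: v for k, v in record.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_event(chain, event, timestamp):
    """Append a provenance event, linking it to the previous record's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "timestamp": timestamp, "prev": prev}
    record["hash"] = record_hash(record)
    chain.append(record)

def verify(chain):
    """True iff no record was altered and all links are intact."""
    prev = "0" * 64
    for record in chain:
        if record["prev"] != prev or record["hash"] != record_hash(record):
            return False
        prev = record["hash"]
    return True

chain = []
append_event(chain, "created by artist", "2022-05-01")
append_event(chain, "sold to first collector", "2022-05-10")
print(verify(chain))                     # True
chain[0]["event"] = "created by forger"  # any tampering breaks the chain
print(verify(chain))                     # False
```

On a real chain the links are additionally secured by consensus among many nodes, which is why the record is described as tamper-proof rather than merely tamper-evident.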
_4.2 Innovation Issues_
For consumers, the creativity of products is the main factor in attracting consumption. Therefore,
some designers claimed that the uniqueness of Luoshan shadow digital creative products should be an
independent creative process from nothing to something new, requiring that the creative results originate from
oneself and cannot be copied by others. Its creativity is closely related to the quality and value of the result,
emphasizing that it can be objectively identified. Many experts believe it is more important to set a higher
standard for the originality of work and emphasize its spiritual and cultural connotations.
A recent study supported our findings and emphasized that digital derivatives can leverage user-produced
content to compensate for creative shortcomings and reinforce each other with physical derivatives. NFT trading
platform provides ideas and creativity for digital derivatives of Luoshan shadow by guiding users to deep
participation. The whole process combines co-creation and market research to pool ideas in different aspects of
creation, such as digital hand-painting of the production process, conceptual design, character styling design, et
cetera. A series of digital assets of Luoshan shadow can be designed accordingly and offered to the market using
blockchain technology. On the other hand, NFT digital artworks of Luoshan shadow break the absolute
boundaries between designers, audiences, and investors and expand the depth and breadth of its artistic
expression.
Furthermore, some researchers proposed to strengthen the construction of resource service platforms, drive
product innovation, model innovation and industrial innovation with technological innovation, promote the
cross-fertilization of traditional culture with literature, games, film, music, and other content forms, and
continuously enrich product forms and service models (Hong Yin & Man Zhao, 2021). For example, the design
of Luoshan shadow cartoon image will not be limited by language and national boundaries and will prepare
design solutions with the help of different cultural elements. Designers also need to break through the limitations
of traditional thinking, develop design ideas in the established scenes, and conduct a comprehensive analysis and
research to complete the design work successfully.
In conclusion, NFT technology provides space for creating virtual artworks of Luoshan shadow. Technology and
traditional culture strike sparks off each other, stimulating the vitality of creators and attracting the public's
attention to Luoshan shadow.
_4.3 Differences in Presentation_
Blockchain technology will enable a change in the form of the digital creative design of the Luoshan shadow.
Some designers believed that NFT technology could make the digital creative design of Luoshan shadow
produce a new form of artistic expression and bring a new visual impact to the audience. An inheritor of the
shadow mentioned that with data support, people would interpret the Luoshan shadow with a long history from a
new dimension and perspective, making it a living content. Based on the research of similar products on sale, we
found that digital creations are no longer limited to images, short videos, audio, souvenir cards, skins, avatars,
and other forms. In other words, blockchain technology will effectively extend the traditional presentation and
give audiences a more intuitive feel for the product.
Blockchain technology solves the problems of hollow and indiscriminate issuance that plagued digital artworks
in the past. Meanwhile, NFT technology also extends the dissemination channels of Luoshan shadow digital
creative works, transmitting them in real time at any time and place and giving people quick access to product
information. It allows the creative works of Luoshan shadow to be refined and modified at a later stage,
increasing the authenticity of artistic creation and enhancing the interaction between the audience and the
creative work.
In essence, compared with the physical, cultural, and creative products, blockchain technology expands the
boundary of the information conveyed by innovative digital products of Luoshan shadow. Designers can explain
their work better with the help of digital technology and communicate the background, design process, and
experience related to their work. Consumers can also purchase, view, and collect innovative digital products of
Luoshan shadow at lower prices through more timely and convenient access.
_4.4 Other Issues_
As we mentioned earlier, NFT protects the copyright of the digital creative works of Luoshan shadow. At the
same time, it generates a steady stream of royalty income for the artists. After executing the on-chain contract,
the artist can get the corresponding royalty after each on-chain transaction and permanently own the right of
income from their artworks. Under the traditional royalty mechanism, it is challenging to monitor the secondary
trading and circulation of artworks, whereas the NFT art of Luoshan shadow has solved the problems of
distinguishing genuine from fake copyright and of orderly inheritance.
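The royalty stream described above is, in practice, a fixed fraction of each resale price; contracts following the EIP-2981 convention express the fee in basis points. A small arithmetic sketch (the 5% rate and the resale prices are hypothetical):

```python
def royalty_payment(sale_price, royalty_bps):
    """Royalty owed on one resale, with the fee in basis points (1 bps = 0.01%)."""
    return sale_price * royalty_bps / 10_000

# Hypothetical resale history of one Luoshan shadow NFT at a 5% (500 bps) royalty.
resales = [200, 350, 800]
total_royalties = sum(royalty_payment(price, 500) for price in resales)
print(total_royalties)  # 67.5
```

Because the payment is computed and transferred by the contract on every on-chain sale, the creator's income follows the work through each change of hands without manual monitoring.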
In addition, although many designers have carried out the digital creative design of Luoshan shadow, the overall
development is slow. Very few professionals and designers are involved in it, making it difficult to effectively
support the rapid development of innovative digital products of Luoshan shadow. Compared with Shanxi shadow,
Tangshan shadow, Hebei shadow, and Sichuan shadow, which cover the frontier fields of cloud games, animation
comics, e-sports, music, digital visual arts, and immersive script killing, Luoshan shadow lacks multi-scene and
multi-ecological digital creative products. Therefore, making full use of NFT technology to promote the
digitalization and intelligent upgrading of traditional Luoshan shadow puppets and related arts can accelerate the
formation of products that are compatible with the development needs of the digital economy and intelligent
society.
**5. Conclusion**
This study emphasizes that the key to designing NFT virtual creative products for Luoshan shadow lies in
innovation so that they can be more readily accepted, purchased, and identified by the audience. It not only
extends the scope of digital works of Luoshan shadow from images to non-homogeneous contents but also
ensures the uniqueness, authenticity, and permanence of Luoshan shadow digital assets and solves its ownership
and storage problems. The essential factors of NFT technology affecting the digital creative design of Luoshan
shadow include copyright application and registration of works, expansion of financial and social attributes of
digital artworks, and the innovation of means and contents of artistic creation.
However, many professionals have also raised a series of problems arising from the NFT digital creative design
of Luoshan shadow, such as artificially created scarcity. For example, without the supervision of traditional
supervisory authorities, lowering the threshold for artistic creation may produce a surplus of digital artworks of
Luoshan shadow, with many unfiltered artworks flowing into the market. In addition, the digital art of Luoshan
shadow is prone to price manipulation, and the cost of minting may prevent more people from participating.
In the future, NFT technology can advance the artistic language and concepts of the digital creative design of
Luoshan shadow and produce more diverse artistic forms, such as audio, games, and apps.
Blockchain technology will facilitate the integration of the Luoshan shadow with existing cultures. Designers
and inheritors creatively recombine the various elements in Luoshan shadow to create modern works with
traditional cultural connotations. Not surprisingly, it can provide designers and inheritors with scientific
guidance to support the sustainable design of digital creative works of Luoshan shadow and give these products
a competitive edge in a rapidly changing market.
**Acknowledgements**
The authors would like to thank the Henan Province Philosophy and Social Science Planning, and those staff
members who were directly or indirectly involved in completing this study.
**Fund Project**
This research was supported by Special Project Funding for Henan Province Philosophy and Social Science
Planning (2022XWH237).
**References**
A Fowler, J Pirker, (2021). Tokenfication - The potential of non-fungible tokens (NFT) for game development.
_Extended Abstracts of the 2021 Annual Symposium on Computer-Human Interaction in Play, 152-157,_
https://doi.org/10.1145/3450337.3483501.
Andres Guadamuz, (2021). The treachery of images: non-fungible tokens and copyright. Journal of Intellectual
_Property Law & Practice, 16(12), 1367-1385, https://doi.org/10.1093/jiplp/jpab152._
Baifan F & Ziyue T, (2022). Limitations and Prospects: A Staged Look at Digital Art and the NFT Art Market.
_Contemporary Artists, 2, 30-33._
Chen J, (2022). NFT and the Future of Art, Colloquium Overview. Shanghai Culture, 2, 123-126.
Dan Weijers, H. Joseph Turton, (2021). Environmentally smart contracts for artists using non-fungible tokens.
_2021 IEEE International Symposium on Technology and Society (ISTAS), 1-4,_
doi:10.1109/ISTAS52410.2021.9629203.
Denis Trček, (2022). Cultural heritage preservation by using blockchain technologies. Heritage Science, 10(6),
2-11.
Erica Del Vacchio & Francesco Bifulco, (2022). Blockchain in Cultural Heritage: Insights from Literature
Review. _Sustainability, 14(4), 2324, https://doi.org/10.3390/su14042324._
Florian Horky, Carolina Rachel, Jarko Fidrmuc, (2022). Price Determinants of Non-fungible Tokens in the
Digital Art Market. Finance Research Letters, 31, 3-10.
Gongming W, (2021). Analysis of the value of NFT artworks and discussion of issues. Art in China, 4, 38-43.
Jian S & Lin L, (2022). China digital collections (NFT) market analysis.
https://zhuanlan.zhihu.com/p/474444565.
Johan Oomen & Lora Aroyo, (2011). Crowdsourcing in the cultural heritage domain: opportunities and
challenges. C&T ‘11: Communities and Technologies, 138–149, https://doi.org/10.1145/2103354.2103373.
Joshua Fairfield, (2021). Tokenized: The Law of Non-Fungible Tokens and Unique Digital Property. Indiana
_Law Journal, Forthcoming, 7, 76-82._
Jun G, (2022). Exploring the innovation of film-derived digital collections based on blockchain technology.
_Modern Audio-Video Arts, 3, 80-83._
Lawrence J. Trautman, (2022). Virtual Art and Non-fungible Tokens. Hofstra Law Review,12, 365-381.
Lin G & Tingchen S, (2022). Research on the Application of blind box cultural and creative product
development based on the element of Luoshan shadow play head stubble. Tomorrow Fashion, 6, 147-150.
Lina Y., (2021). Image inheritance and protection of Luoshan shadow play in the new media era. New Chronicle,
11, 88-90.
Lingli D, (2017). The influence of haptic experience on consumers’ purchase decision judgment: the moderating
role of NFT and decision environment. Economic Survey, 34(05), 123-127.
Logan Kugler, (2021). Non-fungible tokens and the future of art. Communications of the ACM, 64(9), 19-20.
Michael Dowling, (2021). Fertile LAND: Pricing non-fungible tokens. Finance Research Letters, 44,
https://doi.org/10.1016/j.frl.2021.102096.
Sakib Shahriar, Kadhim Hayawi, (2021). NFTGAN: Non-Fungible Token Art Generation Using Generative
Adversarial Networks. Computer Science, 27, 1-4.
Sasha Shilina, (2021). Blockchain and non-fungible tokens (NFTs): A new mediator standard for creative
industries communication. https://www.researchgate.net/publication/356493275.
Sen W, (2018). The Development Situation and Countermeasures of Luoshan Shadow Play in Henan. Journal of
_Kaifeng Vocational College of Culture, 38(08), 254-255._
Shuangzhou Liu & Zhiwei Guo, (2022). Risk and prevention applications of NFT in the digital art market.
_Journal of Arts Management, 1, 113-119._
Tao J, (2021). The Future of the NFT Crypto Art Market. Art Market, 7, 130-131.
Wei H, (2021). Product Design and Development of Luoshan Shadow Culture and Creativity Based on Plastic
Art. Light and Textile Industry and Technology, 50(5), 97-99.
Weifeng L & Yongqing J, (2022). Blockchain NFT: Unlocking the Era of Originality in Contemporary Art.
_Ethnic Art Studies, 35(02), 96-101._
Xuan W, (2021). New Developments in Blockchain Technology — NFT Digital Artwork and its Innovations.
_Investment And Entrepreneurship, 32(21), 9-11._
Yangyang Z & Xiaotian W, (2021). Reflections on the artistic value of NFT. Ginseng flowers, 11, 92-93.
Yantong G & Tianyi K & Jiazhen L & Xinyu T, (2021). The Impact and Implication of New Media on the
Pattern Development of Art Exhibitions and Art Appreciation, Advances in Social Science, Education and
_Humanities Research, 631, 788-793._
Yingxue Y, (2019). The Living Communication of Shadow Culture in the Media Culture Perspective: The Case
of Luoshan Shadow Play. News Research, 3, 80-82.
**Copyrights**
Copyright for this article is retained by the author(s), with first publication rights granted to the journal.
This is an open-access article distributed under the terms and conditions of the Creative Commons Attribution
license (http://creativecommons.org/licenses/by/4.0/).
_Published 1 June 2023. https://doi.org/10.56397/as.2023.06.02_

_Source of the following excerpt: Qiang Ji, Elie Bouri, Chi Keung Marco Lau, & David Roubaud, "Dynamic connectedness and integration in cryptocurrency markets," International Review of Financial Analysis._
#### Dynamic connectedness and integration among large cryptocurrencies
**Qiang Ji**
Center for Energy and Environmental Policy Research, Institutes of Science and
Development, Chinese Academy of Sciences, Beijing 100190, China
School of Public Policy and Management, University of Chinese Academy of Sciences,
Beijing 100049, China. Email: jqwxnjq@163.com
**Elie Bouri**
USEK Business School, Holy Spirit University of Kaslik, Jounieh, Lebanon, Email:
eliebouri@usek.edu.lb
**Chi Keung Marco Lau**
Department of Accountancy, Finance and Economics, Huddersfield Business School,
University of Huddersfield, Queensgate, Huddersfield, UK. Email: c.lau@hud.ac.uk
**David Roubaud**
Energy and Sustainable Development (ESD), Montpellier Business School, Montpellier,
France. Email: d.roubaud@montpellier-bs.com
**Abstract**
This study applies a set of measures developed by Diebold and Yilmaz (2012, 2016) to
examine connectedness via return and volatility spillovers across six large cryptocurrencies
from August 7, 2015 to February 22, 2018. Regardless of the sign of returns, the results
show that Litecoin is at the centre of the connected network of returns, followed by the
largest cryptocurrency, Bitcoin. This finding implies that return shocks arising from these
two cryptocurrencies have the most effect on other cryptocurrencies. Further analysis shows
that connectedness via negative returns is largely stronger than via positive ones. Ripple and
Ethereum are the top recipients of negative-return shocks, whereas Ethereum and Dash
exhibit very weak connectedness via positive returns. Regarding volatility spillovers,
Bitcoin is the most influential, followed by Litecoin; Dash exhibits a very weak
connectedness, suggesting its utility for hedging and diversification opportunities in the
cryptocurrency market. Taken together, results imply that the importance of each
cryptocurrency in return and volatility connectedness is not necessarily related to its market
size. Further analyses reveal that trading volume and global financial and uncertainty effects
as well as the investment-substitution effect are determinants of net directional spillovers.
Interestingly, higher gold prices and US uncertainty increase the net directional
negative-return spillovers, whereas they do the opposite for net directional positive-return
spillovers. Furthermore, gold prices exhibit a negative sign for net directional-volatility
spillovers, whereas US uncertainty shows a positive sign. Economic actors interested in the
cryptocurrency market can build on our findings when weighing their decisions.
**Keywords: Cryptocurrencies; market integration; return and volatility connectedness**
networks; asymmetric spillover.
**JEL classification: C52, G11, G17.**
**1. Introduction**
The cryptocurrency market has quickly become an important element of the global
financial market (Gajardo et al., 2018) and a new asset class (Corbet et al., 2018). It has
seen exponential growth in both market value and number of digital coins, growing from
around $17.7 billion in market value at the start of 2017 to more than $700 billion in early
2018[1]. Importantly, newly introduced cryptocurrencies such as Ethereum, Ripple, Litecoin,
Stellar and Dash are gradually cutting into Bitcoin’s historically dominant market-value
share, [2] suggesting that investors are taking a breather from Bitcoin and looking at
alternative cryptocurrencies. The latter, which have generally borrowed some concepts and
technological elements (e.g., blockchain technology) from Bitcoin, have recently attracted
much attention and created tremendous opportunities for cryptocurrency investors to
maximize returns. This is not surprising, given that each of these alternative
cryptocurrencies outperformed Bitcoin in 2017, delivering astonishing returns ranging from
5000% (Litecoin) to 36 000% (Ripple) as compared to the 1300% price appreciation in
Bitcoin. In addition to the growing group of individual investors considering
cryptocurrency-related investments, fund managers have been viewing cryptocurrencies as
an investable asset class capable of generating high returns despite their extreme volatility.
Surprisingly, the growing interest in alternative cryptocurrencies for investment
purposes is still accompanied by a limited understanding of how leading cryptocurrencies –
with a market value exceeding 10 billion USD and relatively high liquidity – interact with
one another in terms of return and volatility. In fact, the short history of the cryptocurrency
market has shown some relative heterogeneity among leading cryptocurrencies in terms of
returns, volatility and market value. [3] Extending the limited literature on dynamic
connectedness and integration in cryptocurrency markets would help crypto-investors in
devising investment and trading strategies that may involve combining leading
cryptocurrencies within the same portfolio. Accordingly, the aim of this study is to examine
1 Notably, since that peak, the cryptocurrency market lost most of its upside momentum and its value tumbled
by more than 70% by mid-2018.
2 Bitcoin’s market value accounted for more than 85% of the total cryptocurrency market in the first quarter of
2015. Since then, it has seen a significant drop in its market share, falling to 39% at the end of 2017. In contrast,
Ethereum has become the second-largest cryptocurrency, accounting for 15% of the total cryptocurrency
market. At the end of 2017, the combined market value of Ethereum, Ripple, Litecoin, Stellar and Dash was
slightly shy of Bitcoin's market value.
3 It is intuitive that Litecoin, a fork of Bitcoin launched in 2011, should have a close relationship with Bitcoin.
connectedness via return and volatility spillovers across large cryptocurrencies using a set
of measures developed by Diebold and Yilmaz (2012, 2016). In doing so, we differentiate
between positive and negative returns. We also consider the determinants of net directional
return and volatility spillovers.
Generally, building network connectedness among price returns and volatility is
hardly new in conventional assets such as equities (e.g., Fowowe and Shuaibu, 2016;
Shahzad et al., 2018) and bonds (Louzis, 2015; Ahmad et al., 2018). Interestingly, it helps
in understanding stress periods (i.e. financial and economic crises) and their propagation
mechanisms as well as in identifying systemic risk (Louzis, 2015). In terms of implications,
the construction of network connectedness helps policy-makers formulate policies aimed at
preserving financial stability. Investors and risk managers can also benefit from building
networks of connectedness across asset classes to adjust their investment and
hedging decisions. Prior studies have uncovered the network of connectedness among and
within different assets/markets that include equities (Fowowe and Shuaibu, 2016; Shahzad
et al., 2018; Zhang et al., 2018), bonds (Louzis, 2015; Ahmad et al., 2018), currencies
(Baruník, et al., 2017; Singh et al., 2018), commodities (Ji et al., 2018a & b; Zhang and
Broadstock, 2018), and interest rates (Louzis, 2015). Generally, empirical evidence suggests
that connectedness in both return and volatility is significant, time-varying, and is shaped by
crisis periods (Shahzad et al., 2018; Zhang and Broadstock, 2018). Importantly, the related
literature often finds that the largest stock market such as the US is the largest transmitter of
shocks to the stock markets of developed and emerging markets (e.g., Candelon et al., 2018).
Quite similar results are reported for the case of bonds (Ahmad et al., 2018). Furthermore,
connectedness among price returns and volatility intensifies during crises periods, leading to
contagion that jeopardizes the stability of the financial system and to less possibilities for
portfolio diversification.
However, the network of connectedness remains extremely understudied in the
cryptocurrency market, which has become an appealing investment ground for investors.
Surprisingly, there is still a limited understanding of the return and volatility spillovers
among leading cryptocurrencies, despite its relevance for risk management and portfolio
diversification. Specifically, understanding the spillovers among cryptocurrencies provides
useful information regarding investment and hedging decisions. For example, investors can
exploit evidence of weak connectedness across cryptocurrencies to maximize diversification
opportunities or hedging strategies. An investigation by Corbet et al. (2018) is among the
rare studies that examine network connectedness involving the Bitcoin market.[4] Our study
differs in several ways. Most notably, we not only study aggregate returns but are interested
in asymmetric connectedness between positive- and negative-return spillovers. This allows
us to highlight the relative importance of negative and positive shocks to each of the
cryptocurrencies under study. Further on, we compute daily volatility and then investigate
volatility connectedness among cryptocurrency markets, which makes our analysis the first
to provide findings on the dynamic volatility spillover of the six leading cryptocurrencies,
which account for more than 72% of the cryptocurrency market’s value. Accordingly, our
larger dataset and a refined methodology that differentiates between the connectedness of
positive and negative returns make our analysis highly informative to market participants
interested in the diversification potential among the largest cryptocurrencies, which are also
the most liquid. Finally, we explore several factors as determinants of total and net
directional spillovers by considering various market conditions and market-development
characteristics in order to paint a comprehensive picture of the integration of the
cryptocurrency market.
The main results provide evidence that Bitcoin and Litecoin are at the centre of the
connected network of returns and that shocks arising from these two cryptocurrencies have
the greatest effect on other cryptocurrencies. Connectedness via negative returns is stronger
than via positive ones and that as far as the volatility spillovers are concerned, Bitcoin is the
most influential cryptocurrency. Further analyses show that trading volumes, global
financial and uncertainty effects, as well as the investment-substitution effect, are
determinants of net directional spillovers.
The paper proceeds as follows: Section 2 describes the econometric methodology;
Section 3 presents the data and empirical results; the final section concludes.
**2. Methodology**
The methodological framework of this study for constructing connectedness
measures follows the lines of Diebold and Yilmaz (2014). Specifically, positive/negative
return and volatility connectedness networks are built. Furthermore, regression models are
used to identify the drivers of the degree of integration of the various cryptocurrencies.
4 Corbet et al. (2018) focus on dynamic relationships between three cryptocurrencies and several financial
assets.
Assume a covariance-stationary six-variable VAR($p$):

$$R_t = \sum_{i=1}^{p} \Phi_i R_{t-i} + \varepsilon_t, \qquad (1)$$

where $R_t$ is the $6 \times 1$ vector of cryptocurrency returns, the $\Phi_i$ are $6 \times 6$ autoregressive
coefficient matrices and $\varepsilon_t$ is the vector of error terms, assumed to be serially
uncorrelated. If the VAR system above is covariance stationary, then its moving-average
representation is written as $R_t = \sum_{j=0}^{\infty} A_j \varepsilon_{t-j}$, where the $6 \times 6$ coefficient matrices $A_j$ obey a
recursion of the form $A_j = \Phi_1 A_{j-1} + \Phi_2 A_{j-2} + \cdots + \Phi_p A_{j-p}$, with $A_0$ the identity
matrix and $A_j = 0$ for $j < 0$. Using the moving-average framework, we can measure
pairwise connectedness, directional connectedness and total connectedness based on the
generalized forecast-error variance decomposition (FEVD) approach. The advantage of the
FEVD method is that it eliminates any dependence of the results on the ordering of
the variables.
Koop et al. (1996) and Pesaran and Shin (1998) proposed the following
$H$-step-ahead generalized forecast-error variance decomposition:

$$\theta_{ij}(H) = \frac{\sigma_{jj}^{-1} \sum_{h=0}^{H-1} \left( e_i' A_h \Sigma\, e_j \right)^2}{\sum_{h=0}^{H-1} e_i' A_h \Sigma A_h' e_i}, \qquad (2)$$

where $\theta_{ij}(H)$ is the variance contribution of variable $j$ to variable $i$, $\Sigma$ is the variance matrix
of the vector of errors and $\sigma_{jj}$ is the standard deviation of the error term of the $j$-th
equation. Finally, $e_i$ is a selection vector with a value of 1 for the $i$-th element and 0 otherwise.
The decomposition yields an $n \times n$ matrix $\theta(H) = [\theta_{ij}(H)]$, where each entry gives the
contribution of variable $j$ to the forecast-error variance of variable $i$. Own-variable and
cross-variable contributions are contained in the main diagonal and off-diagonal elements,
respectively, of $\theta(H)$. Each entry of $\theta(H)$ is normalized by its row sum
so that each row sums to 1. We then construct several measures to investigate the
information spillover of the whole cryptocurrency-market system.
**2.1 Connectedness measures**
**(1) Net pairwise connectedness**
In general, $\theta_{ij} \neq \theta_{ji}$, according to the definition of the FEVD. Consequently, the
difference between $\theta_{ij}$ and $\theta_{ji}$ can be measured as the pairwise net connectedness: the
net spillover effect from variable $j$ to variable $i$ is measured by $\theta_{ij} - \theta_{ji}$.
Subsequently, a directional connectedness network can be built based on pairwise net
connectedness. In this network, each market is set as a node, and a
directed edge from $i$ to $j$ exists in the network when $\theta_{ji} - \theta_{ij} > 0$.
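A minimal sketch of extracting the directed edges of this network from a FEVD matrix (the function name and labels are illustrative assumptions):

```python
import numpy as np

def net_pairwise_edges(theta, labels):
    """Directed edges of the net pairwise connectedness network: an edge
    from market j to market i is drawn when theta[i, j] - theta[j, i] > 0,
    i.e. j is a net transmitter of spillovers to i."""
    n = theta.shape[0]
    edges = []
    for i in range(n):
        for j in range(n):
            if i != j and theta[i, j] - theta[j, i] > 0:
                edges.append((labels[j], labels[i], theta[i, j] - theta[j, i]))
    return edges
```

Each tuple gives the transmitter, the receiver and the net pairwise spillover, which can then be drawn as an arrow whose thickness reflects the third element.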
**(2) Total directional connectedness “From” and “To”**
We use total directional connectedness "From" and "To" to measure the total
information spillover from and to each market. Total directional connectedness "From" is
defined as the information inflow from all other markets to market $i$, calculated as
$C^{g}_{i \leftarrow \cdot} = \sum_{j=1,\, j \neq i}^{N} \theta_{ij}$. Similarly, total directional connectedness "To" is defined as the
information outflow from market $i$ to all other markets, calculated as
$C^{g}_{\cdot \leftarrow i} = \sum_{j=1,\, j \neq i}^{N} \theta_{ji}$.
**(3) Total net connectedness**
Total net connectedness measures the net information-spillover contribution of one
node as the difference between total directional connectedness "To" and "From", defined as
$C_i = C^{g}_{\cdot \leftarrow i} - C^{g}_{i \leftarrow \cdot}$.
**(4) Total connectedness for the system**
Finally, $TSI = \frac{1}{N} \sum_{i,j=1,\, i \neq j}^{N} \theta_{ij}$ is defined as the total spillover index, measuring
the integration or systemic risk of the cryptocurrency-market system.
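Under the same assumptions, all four measures follow directly from a row-normalized FEVD matrix; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def connectedness_measures(theta):
    """'From', 'To', net connectedness and the total spillover index (TSI)
    from a row-normalized FEVD matrix theta."""
    n = theta.shape[0]
    off = theta - np.diag(np.diag(theta))  # drop own-variable shares
    c_from = off.sum(axis=1)               # inflow: off-diagonal row sums
    c_to = off.sum(axis=0)                 # outflow: off-diagonal column sums
    return c_from, c_to, c_to - c_from, off.sum() / n
```

Applied column by column to the matrices of Tables 3-5, this reproduces the "From others", "To others", "Net" and TSI entries.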
**2.2 Various connectedness network measures**
In addition to returns connectedness, we investigate asymmetry in the connectedness
of cryptocurrency markets. Empirical findings broadly show that asset markets respond
asymmetrically to good news and bad news (e.g., Apergis et al., 2017;
Barunik et al., 2016). However, there is thus far no clear evidence confirming this pattern in the
cryptocurrency market. In addition, cryptocurrencies are newly developed financial
products made possible by advances in blockchain technology, and the future of the
cryptocurrency market remains uncertain given its evolving applications and policy
regulations and the question of whether its traders are sensitive to volatility. Therefore, it is useful to
analyse the asymmetric return spillovers among cryptocurrency markets in order to better
understand the systemic risk of this system. For simplicity, we build positive- and
negative-return connectedness networks, respectively. The positive and negative return
series are defined as follows:
$$R_t^{(+)} = \begin{cases} R_t, & \text{if } R_t > 0 \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$

$$R_t^{(-)} = \begin{cases} R_t, & \text{if } R_t < 0 \\ 0, & \text{otherwise} \end{cases} \qquad (4)$$

$$R_t = R_t^{(+)} + R_t^{(-)} \qquad (5)$$
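Equations (3)-(5) amount to a simple elementwise split of the return series; an illustrative sketch (not the authors' code):

```python
import numpy as np

def split_returns(r):
    """Positive/negative decomposition of equations (3)-(5):
    r equals r_plus + r_minus elementwise."""
    r = np.asarray(r, dtype=float)
    return np.where(r > 0, r, 0.0), np.where(r < 0, r, 0.0)
```

The two series can then be fed separately into the same FEVD machinery to produce the positive- and negative-return connectedness networks.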
We also consider volatility connectedness. Referring to Diebold and Yilmaz (2016) and
Garman and Klass (1980), we use daily range-based volatility to estimate volatility
connectedness. The detailed estimation equation is as follows:
$$V = 0.511\,(h - l)^2 - 0.019\left[ (c - o)(h + l - 2o) - 2(h - o)(l - o) \right] - 0.383\,(c - o)^2, \qquad (6)$$

where $h$ and $l$ are the log daily high and low prices and $o$ and $c$ are the log opening
and closing prices, respectively.
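Equation (6) translates directly into code; a sketch assuming raw daily high, low, open and close prices as inputs (the function name is illustrative):

```python
import numpy as np

def garman_klass(high, low, open_, close):
    """Range-based daily variance of Garman and Klass (1980) as in
    equation (6); h, l, o, c are logs of the raw prices."""
    h, l = np.log(high), np.log(low)
    o, c = np.log(open_), np.log(close)
    return (0.511 * (h - l) ** 2
            - 0.019 * ((c - o) * (h + l - 2 * o) - 2 * (h - o) * (l - o))
            - 0.383 * (c - o) ** 2)
```

A flat day (high = low = open = close) yields zero variance, while any intraday range yields a positive estimate.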
**2.3 Determinant modelling for total connectedness index**
We build regression models to identify the determinants that can influence the
integration degree of the cryptocurrency-market system. Referring to the existing literature,
trading volume (Balcilar et al., 2017), global financial factors (Ji et al., 2018c; Bouri et al.,
2017a, b & c), US uncertainties (Bouri et al., 2017a & b; Demir et al., 2018) and major
commodity markets (Ji et al., 2018c; Bouri et al., 2017c; Bouri et al., 2018a) are chosen in
the following regression[5]:
5 Previous studies have verified the influence of internet attention on asset prices and their comovement
(Guo and Ji, 2013; Ji and Guo, 2015a & b). Due to the limited search data on cryptocurrencies during our
sample period, we do not consider Google Trends as a determinant in this paper. However, the influence of
internet attention on the integration of the cryptocurrency market is an interesting path for future
research.
$$TSI_{t,l} = \alpha + \sum_{i=1}^{p} \beta_i\, Volume_{i,t} + \sum_{j=1}^{q} \gamma_j\, FF_{j,t} + \sum_{h=1}^{m} \delta_h\, ISF_{h,t} + \sum_{k=1}^{n} \lambda_k\, UF_{k,t} + \varepsilon_t, \qquad (7)$$
where $TSI_{t,l}$ denotes the dynamic total connectedness of the cryptocurrency-market system
for returns, positive returns, negative returns and volatility. $Volume_i$ represents the trading
volume of each of the six cryptocurrencies in this paper. $FF_j$ denotes global financial
factors, represented by the Global Financial Stress Index (GFSI) and the MSCI World stock
index. $ISF_h$ indicates investment-substitution factors that measure the influence of capital
inflow to and outflow from major commodities, represented by the GSCI Energy index
and the Gold Bullion index. $UF_k$ denotes the influence of uncertainty factors, represented by
US economic policy uncertainty (EPU) and the US VIX.
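Equation (7) is a standard multiple regression; a minimal OLS sketch with NumPy (the function name and the synthetic check are assumptions, not the paper's estimation code):

```python
import numpy as np

def ols_determinants(tsi, regressors):
    """OLS estimate of a regression like equation (7): the dynamic total
    spillover index on candidate determinants (volumes, financial,
    investment-substitution and uncertainty factors). Returns the
    coefficient vector, intercept first."""
    y = np.asarray(tsi, dtype=float)
    X = np.column_stack([np.ones(len(y))] +
                        [np.asarray(x, dtype=float) for x in regressors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

In practice one would also report standard errors, here omitted for brevity; the least-squares solution itself is what the sketch verifies.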
**3. Empirical analysis**
**3.1 Data and sample analysis**
Out of the 10 largest cryptocurrencies by market capitalization from
https://coinmarketcap.com, we collected daily price data on six cryptocurrencies (Bitcoin,
Ethereum, Ripple, Litecoin, Stellar and Dash) because the length of their price data is the
longest; in fact, it covers almost a two-and-a-half-year period. Accordingly, we
excluded other leading cryptocurrencies such as Bitcoin Cash, Cardano, Neo and EOS,
whose price data are available only for a shorter period not exceeding one year. In doing so,
we ensured a relatively wide time span that allows us to make the most of our empirical
analysis; had Bitcoin Cash, Cardano, Neo and EOS been kept, the common sample
period would have been reduced significantly. Our sample period thus spans from August
7, 2015 to February 22, 2018 (931 observations), as dictated by the availability of price
data on some cryptocurrencies. Each of the six selected cryptocurrencies has a market value
above 5 billion USD, and the combined market value of these six cryptocurrencies
represents 72.06% of the total cryptocurrency market.[6] The empirical analyses are based on
daily returns, calculated as the difference in the log of prices, and a daily range-based
volatility, referring to Diebold and Yilmaz (2016).
6 Bitcoin ranks first, accounting for 39.01% of the total cryptocurrency market, followed by Ethereum
(18.99%), Ripple (8.73%), Litecoin (2.61%), Stellar (1.58%) and Dash (1.13%).
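The daily log returns underlying the analysis can be computed as, for example (illustrative sketch; the percentage scaling matches the magnitudes in Table 1):

```python
import numpy as np

def log_returns(prices, pct=True):
    """Daily returns as the first difference of log prices; multiplied
    by 100 to express them in percent."""
    r = np.diff(np.log(np.asarray(prices, dtype=float)))
    return 100.0 * r if pct else r
```
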
**Figure 1. Historical trend of cryptocurrency prices**
Figure 1 shows that the price trends of the six cryptocurrencies follow almost the
same path, with substantial price appreciations experienced mostly during 2017. Notably,
the prices of Bitcoin, Litecoin and Dash reached their peaks in late 2017, whereas Ethereum,
Ripple and Stellar reached their highest prices during January 2018.
**Table 1. Summary statistics for returns and volatility of cryptocurrencies**
**Panel A: Returns**
Variables Mean Max. Min Std. Dev Skewness Kurtosis Jarque-Bera
Bitcoin 0.385 22.512 -20.753 4.114 -0.277 8.268 1087.184***
Ethereum 0.611 41.234 -130.211 8.485 -3.575 64.964 150762.400***
Ripple 0.511 102.736 -61.627 8.102 3.118 41.477 58875.780***
Litecoin 0.413 51.035 -39.515 6.022 1.453 16.493 7381.983***
Stellar 0.539 72.306 -36.636 9.075 2.081 17.345 8645.028***
Dash 0.567 43.775 -24.323 6.156 0.964 9.309 1686.675***
**Panel B: Positive returns**
Bitcoin 1.510 22.512 0.000 2.599 3.047 16.268 8259.925***
Ethereum 2.825 41.234 0.000 5.144 2.845 13.780 5757.992***
Ripple 2.292 102.736 0.000 6.484 7.101 80.685 241673.500***
Litecoin 1.906 51.035 0.000 4.503 4.720 34.190 41148.470***
Stellar 2.990 72.306 0.000 6.923 5.043 38.535 52871.400***
Dash 2.335 43.775 0.000 4.399 3.534 21.361 14999.010***
**Panel C: Negative returns**
Bitcoin -1.126 0.000 -20.753 2.600 -3.581 18.643 11469.970***
Ethereum -2.215 0.000 -130.211 5.745 -12.954 269.423 2776522.000***
Ripple -1.781 0.000 -61.627 3.928 -6.432 73.235 197562.900***
Litecoin -1.494 0.000 -39.515 3.207 -4.310 32.686 37027.750***
Stellar -2.450 0.000 -36.636 4.446 -3.202 16.508 8660.302***
Dash -1.769 0.000 -24.323 3.204 -3.030 14.831 6847.119***
**Panel D: Volatility**
Bitcoin 0.288E-3 0.007 4.69E-07 6.89E-04 5.530 41.594 62457.59***
Ethereum 1.058E-3 0.005 3.76E-06 2.56E-03 9.320 147.601 823709.9***
Ripple 0.918E-3 0.006 1.01E-06 3.41E-03 9.028 111.929 472420.3***
Litecoin 0.556E-3 0.025 5.61E-07 1.57E-03 7.927 93.087 324219.7***
Stellar 1.700E-3 0.010 1.47E-05 5.14E-03 10.570 164.758 1031232***
Dash 1.121E-3 0.241 1.46E-05 8.36E-03 26.101 736.627 20961190***
Note: *** denotes the significance at the 1% level.
The summary statistics of returns, including positive and negative returns as well as
volatility, are given in Table 1. Results from Panel A indicate that the highest mean of
returns is for Ethereum, followed by Dash. Stellar has the highest standard deviation,
followed by Ethereum. Interestingly, Bitcoin has both the lowest mean returns and lowest
standard deviation. This observation is not surprising, given the fact that, although Bitcoin
increased by around 1300% in 2017, each of the other five cryptocurrencies under study
increased in value by at least 5000%. All cryptocurrencies have excess levels of kurtosis,
especially Ethereum. Bitcoin and Ethereum have a negative skewness, whereas the rest have
a positive one. As for the summary statistics of positive returns (Panel B), Stellar has the
highest average returns and standard deviation, whereas Bitcoin has the lowest ones. All
series have excess kurtosis, especially Ripple, which also exhibits the highest skewness.
Moving to the statistics of negative returns (Panel C), Stellar also has the highest negative
returns, whereas Ethereum has the highest levels of standard deviation, kurtosis and
negative skewness. In contrast, Bitcoin exhibits the lowest negative average returns and
lowest standard deviation. Regarding the realized volatility of the six cryptocurrencies
(Panel D), Stellar is the most volatile, while Bitcoin is the least; the volatility of volatility is
highest for Dash, followed by Bitcoin, whereas Litecoin has the lowest volatility of
volatility.
The correlation matrices among the returns and the volatility of the six
cryptocurrencies are given in Table 2. Overall, weak to moderate positive correlations exist
among the six cryptocurrencies’ returns. Specifically, the correlation coefficients are highest
for the pairs Bitcoin/Litecoin (0.551) and Ripple/Stellar (0.517), whereas Ethereum/Ripple
and Ripple/Dash have the lowest correlation coefficients (0.133 and 0.147, respectively).
Expectedly, the correlation among negative returns is generally stronger than among
positive returns. Considering negative returns, the Bitcoin/Litecoin pair has the highest
correlation (0.760), followed by the pair Ripple/Stellar (0.618), whereas the lowest
correlations are for the pairs Ethereum/Ripple (0.195) and Ethereum/Stellar (0.221).
As for the correlation between positive returns, Ripple and Stellar exhibit the highest
positive correlation (0.453), followed by Bitcoin/Litecoin (0.367), while Ethereum and
Ripple are uncorrelated. Moving to the correlation of price volatility, it is highest for the
pair Bitcoin/Litecoin (0.706), while the weakest correlation is found between Dash and the
other cryptocurrencies, which does not exceed the 0.098 mark in any instance. Overall, the
correlation between the returns of Bitcoin and its fork Litecoin is unsurprisingly much
stronger compared to the others, and that is also the case for positive/negative returns and
for volatility.
**Table 2. Correlations among cryptocurrency markets**
**Returns correlations** **Positive returns correlations**
Bitcoin Ethereum Ripple Litecoin Stellar Dash Bitcoin Ethereum Ripple Litecoin Stellar Dash
Bitcoin 1 Bitcoin 1
Ethereum 0.288*** 1 Ethereum 0.207*** 1
Ripple 0.219*** 0.133*** 1 Ripple 0.116*** 0.059 1
Litecoin 0.551*** 0.271*** 0.279*** 1 Litecoin 0.367*** 0.164*** 0.247*** 1
Stellar 0.288*** 0.177*** 0.517*** 0.319*** 1 Stellar 0.165*** 0.088*** 0.453*** 0.211*** 1
Dash 0.375*** 0.273*** 0.147*** 0.350*** 0.209*** 1 Dash 0.261*** 0.222*** 0.084** 0.240*** 0.111*** 1
**Negative returns correlations** **Volatility correlations**
Bitcoin Ethereum Ripple Litecoin Stellar Dash Bitcoin Ethereum Ripple Litecoin Stellar Dash
Bitcoin 1 Bitcoin 1
Ethereum 0.321*** 1 Ethereum 0.302*** 1
Ripple 0.381*** 0.195*** 1 Ripple 0.397*** 0.202*** 1
Litecoin 0.760*** 0.323*** 0.412*** 1 Litecoin 0.706*** 0.283*** 0.567*** 1
Stellar 0.429*** 0.221*** 0.618*** 0.472*** 1 Stellar 0.323*** 0.158*** 0.478*** 0.427*** 1
Dash 0.547*** 0.287*** 0.391*** 0.537*** 0.398*** 1 Dash 0.093*** 0.049 0.085*** 0.098*** 0.047 1
Note: *** denotes the significance at the 1% level.
**3.2 Static connectedness-network analysis**
**3.2.1 Returns connectedness network over the full sample**
Table 3 presents the matrix of directional spillovers among cryptocurrencies,
directional spillovers from each cryptocurrency to all other cryptocurrencies (“To
others”) and directional spillovers from all other cryptocurrencies to each
cryptocurrency (“From others”). Table 3 also reports the net directional spillover
(“Net”), where a positive (negative) value indicates that the corresponding
cryptocurrency is a net transmitter (receiver) of spillover effects.
**Table 3. Full-sample connectedness matrix for cryptocurrency returns**
**Returns**
Bitcoin Ethereum Ripple Litecoin Stellar Dash From others
Bitcoin 0.592 0.058 0.032 0.183 0.050 0.086 0.408
Ethereum 0.072 0.744 0.020 0.061 0.033 0.071 0.256
Ripple 0.037 0.019 0.683 0.061 0.184 0.016 0.317
Litecoin 0.180 0.048 0.049 0.583 0.064 0.075 0.417
Stellar 0.054 0.028 0.172 0.071 0.649 0.026 0.351
Dash 0.101 0.063 0.021 0.090 0.029 0.697 0.303
To others 0.443 0.215 0.294 0.466 0.360 0.275 **TSI=0.342**
**Net** **0.035** **-0.041** **-0.023** **0.049** **0.008** **-0.028**
Notes: This table presents the net directional spillover amongst the returns of the six cryptocurrencies over the
period August 7, 2015–February 22, 2018. Net: spillover transmitted by each cryptocurrency to all other
cryptocurrencies, where positive (negative) values indicate that the currency in question is a net transmitter
(receiver) of spillovers to all other cryptocurrencies. TSI: total spillover index.
Litecoin is the largest net transmitter of spillover, followed by Bitcoin;
interestingly, these two cryptocurrencies are also the two largest transmitters and
receivers of spillover effects from other cryptocurrencies. The two largest net
receivers of spillovers are Ethereum and Dash; again, these two cryptocurrencies are
the smallest transmitters and receivers of spillover effects from other
cryptocurrencies. The spillover index (TSI) reaches 34.20%, indicating a sizable
degree of connectedness among the six cryptocurrencies during the sample period,
which exhibits substantial increases in the prices of all cryptocurrencies. This result
indicates that these cryptocurrencies are linked with each other, adding to the results
from the correlation matrix in Table 2.
**Figure 2.** **Directional-returns connectedness network over the full sample**
Notes: This figure shows the net directional connectedness among the six cryptocurrencies’ returns. The size of
each node indicates the overall magnitude of spillover transmission for each cryptocurrency, which is measured
by net connectedness in Table 3. The thickness of the arrows reflects the strength of the spillover between a pair
of variables, with thicker arrows indicating stronger net directional pairwise connectedness.
To better visualize the structure of connectedness, the direction and the
strength of spillovers between the six cryptocurrencies, Figure 2 provides the
network of pairwise return connectedness.[7] Litecoin and Bitcoin are at the centre of
the connected network. They are both strongly connected with Ethereum and Dash,
while Litecoin is more connected with Ripple than is Bitcoin.
However, Litecoin and Bitcoin are the least connected to each other, with the
former surprisingly transmitting its return spillovers to the largest cryptocurrency,
Bitcoin. Interestingly, the importance of Stellar in the network is also clear,
especially through its strong connection with Ripple. Litecoin is the largest
transmitter, followed by Bitcoin; whereas Ethereum is the largest receiver, followed
by Dash and Ripple. It is worthy of note that no direct connection exists between
Ethereum and Ripple, suggesting potential diversification benefits.
7 The size of the node captures the importance of each cryptocurrency within the network structure,
whereas the thickness of the arrows indicates the magnitude of the spillover for each cryptocurrency.
As for the node colours, dark (light) colours indicate a large (small) influence on network
connectedness.
**3.2.2 Asymmetric-connectedness analysis over the full sample**
The previous analysis considered the return connectedness among
cryptocurrencies. However, it is possible that positive returns and negative returns
are perceived differently by market participants and that connectedness may exhibit
asymmetries. To address this potential asymmetry, we decompose returns into
positive and negative returns and present the resulting connectedness matrix in
Table 4.
**Table 4. Full-sample connectedness matrix for positive returns and**
**negative returns of cryptocurrencies**
**Positive Returns**
Bitcoin Ethereum Ripple Litecoin Stellar Dash From others
Bitcoin 0.773 0.037 0.009 0.102 0.022 0.056 0.227
Ethereum 0.041 0.875 0.003 0.025 0.009 0.047 0.125
Ripple 0.009 0.004 0.736 0.071 0.169 0.010 0.264
Litecoin 0.105 0.020 0.043 0.752 0.035 0.045 0.248
Stellar 0.017 0.008 0.140 0.036 0.789 0.009 0.211
Dash 0.060 0.045 0.005 0.051 0.011 0.827 0.173
To others 0.233 0.114 0.199 0.286 0.246 0.167 **TSI=0.208**
**Net** **0.007** **-0.011** **-0.064** **0.038** **0.035** **-0.005**
**Negative Returns**
Bitcoin Ethereum Ripple Litecoin Stellar Dash From others
Bitcoin 0.424 0.064 0.064 0.244 0.078 0.126 0.576
Ethereum 0.094 0.601 0.053 0.093 0.068 0.091 0.399
Ripple 0.076 0.046 0.510 0.087 0.191 0.090 0.490
Litecoin 0.240 0.064 0.072 0.412 0.092 0.120 0.588
Stellar 0.090 0.056 0.182 0.110 0.483 0.079 0.517
Dash 0.148 0.071 0.074 0.142 0.077 0.488 0.512
To others 0.649 0.300 0.444 0.676 0.507 0.506 **TSI=0.514**
**Net** **0.073** **-0.099** **-0.046** **0.088** **-0.010** **-0.006**
Note: See notes to Table 3.
Litecoin and Stellar are the two largest net transmitters of positive-return
spillovers, whereas Ripple is the largest net receiver of positive-return spillovers.
The two largest net transmitters of negative-return spillovers are Litecoin and
Bitcoin, whereas Ethereum and Ripple are the two largest net receivers of
negative-return spillovers. Importantly, the TSI of negative returns is almost 2.5
times stronger than that of positive returns, highlighting an intensified
connectedness during the downturn state of cryptocurrencies.
**Figure 3.** **Directional positive-returns connectedness network over the full**
**sample**
Note: See Figure 2.
Moving to the structure of connectedness between positive returns (Figure 3),
it appears that a weaker connectedness network emerges between positive returns.
Litecoin is firmly at the centre of the network, and Stellar surprisingly exhibits a
more important spillover role than Bitcoin. Specifically, Litecoin and Stellar are the
two largest transmitters of spillovers, whereas Ripple is the largest receiver.
Interestingly, Ethereum is the least connected to the other cryptocurrencies,
especially with the lack of direct connectedness between Ethereum and Ripple and
Ethereum and Stellar, which suggests diversification and hedging possibilities.
**Figure 4.** **Directional negative-returns connectedness network over the full**
**sample**
Note: See Figure 2.
The network diagram of pairwise connectedness using negative returns of
cryptocurrencies is shown in Figure 4. Litecoin and Bitcoin are the greatest
transmitters of negative shocks, whereas Ethereum and Ripple are the greatest
receivers of negative shocks. The pair Bitcoin/Ethereum has the strongest
connectedness, followed by Litecoin/Ethereum. The lowest connectedness is
reported for the pairs Bitcoin/Litecoin and Ripple/Ethereum. Although Ethereum is
second in market value, it has almost no influence on other, smaller cryptocurrencies
(Litecoin, Ripple, Stellar and Dash). In contrast, smaller cryptocurrencies (Stellar
and Dash) are found to transmit negative shocks to larger cryptocurrencies
(Ethereum and Ripple).
To summarize, the overall connectedness, including the strength of spillovers,
among negative returns is stronger than across positive ones, suggesting that return
spillovers due to negative shocks materialize more frequently. Therefore, in terms of
return spillovers, cryptocurrency investors are not attuned to positive signals only.
**3.2.3 Volatility-connectedness network analysis over the full sample**
The connectedness matrix of volatility spillover is reported in Table 5.
Contrary to its position in the case of return spillovers, Bitcoin is the largest net
transmitter of volatility spillovers, followed by Litecoin. Interestingly, these two
cryptocurrencies are also the largest gross transmitters and receivers of spillover
effects from the other cryptocurrencies. The two largest net receivers of spillovers
are Ethereum and Stellar; again, these two cryptocurrencies are the smallest gross
transmitters and receivers of spillover effects. The total volatility spillover across the
six cryptocurrencies is 32.90%, quite similar to that of returns in Table 3.
Intuitively, the spillover index indicates a sizable degree of connectedness among
the six cryptocurrencies in the period under study, during which all of them
experienced substantial price volatility.
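Table 5 below is based on range-based volatility, i.e. a daily variance estimate built from high and low prices. The exact estimator is defined earlier in the paper; one standard range-based choice, shown here purely as an illustration with made-up prices, is the Parkinson (1980) estimator.

```python
import numpy as np

def parkinson_volatility(high, low):
    """Parkinson (1980) range-based daily variance estimate:
    sigma^2_t = (1 / (4 * ln 2)) * (ln(H_t / L_t))^2."""
    high = np.asarray(high, dtype=float)
    low = np.asarray(low, dtype=float)
    return (np.log(high / low) ** 2) / (4.0 * np.log(2.0))

# Illustrative daily high/low prices (not real data):
high = [105.0, 110.0, 102.0]
low = [98.0, 101.0, 97.0]
print(parkinson_volatility(high, low))
```

The resulting daily variance series can then enter the VAR underlying the volatility connectedness matrix in exactly the same way as the return series.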
**Table 5. Full-sample connectedness matrix for range-based volatility of**
**cryptocurrencies**
**Volatility**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From others |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.564 | 0.056 | 0.086 | 0.241 | 0.050 | 0.004 | 0.436 |
| Ethereum | 0.083 | 0.776 | 0.043 | 0.074 | 0.022 | 0.002 | 0.224 |
| Ripple | 0.100 | 0.031 | 0.585 | 0.158 | 0.124 | 0.003 | 0.415 |
| Litecoin | 0.251 | 0.044 | 0.157 | 0.467 | 0.077 | 0.003 | 0.533 |
| Stellar | 0.072 | 0.021 | 0.137 | 0.106 | 0.662 | 0.001 | 0.338 |
| Dash | 0.009 | 0.003 | 0.007 | 0.009 | 0.002 | 0.971 | 0.029 |
| To others | 0.515 | 0.154 | 0.431 | 0.588 | 0.275 | 0.013 | **TSI=0.329** |
| **Net** | **0.078** | **-0.070** | **0.016** | **0.056** | **-0.063** | **-0.016** | |
Note: See notes to Table 3.
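The summary rows and columns of Table 5 are simple transformations of the off-diagonal elements of the connectedness matrix. A numpy sketch reproducing them from the Table 5 entries (small rounding differences aside):

```python
import numpy as np

# Connectedness matrix from Table 5 (rows: receivers, columns: transmitters).
names = ["Bitcoin", "Ethereum", "Ripple", "Litecoin", "Stellar", "Dash"]
C = np.array([
    [0.564, 0.056, 0.086, 0.241, 0.050, 0.004],
    [0.083, 0.776, 0.043, 0.074, 0.022, 0.002],
    [0.100, 0.031, 0.585, 0.158, 0.124, 0.003],
    [0.251, 0.044, 0.157, 0.467, 0.077, 0.003],
    [0.072, 0.021, 0.137, 0.106, 0.662, 0.001],
    [0.009, 0.003, 0.007, 0.009, 0.002, 0.971],
])

off_diag = C - np.diag(np.diag(C))
from_others = off_diag.sum(axis=1)   # row sums of off-diagonal entries
to_others = off_diag.sum(axis=0)     # column sums of off-diagonal entries
net = to_others - from_others        # positive => net transmitter
tsi = from_others.mean()             # total spillover index

print(f"TSI = {tsi:.3f}")            # ~0.329, matching Table 5
for n, f, t, v in zip(names, from_others, to_others, net):
    print(f"{n:9s} from={f:.3f} to={t:.3f} net={v:+.3f}")
```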
Interestingly, Dash (Litecoin) depends more (less) on its own volatility than
the others, suggesting a weak (strong) volatility connectedness with the other
cryptocurrencies under study; this finding points to the ability of Dash to reduce the
overall risk of a portfolio of leading cryptocurrencies. Specifically, Litecoin appears
to have a strong influence on the other cryptocurrencies, which cannot be explained
by its relatively small size[8] but can be better explained by the fact that Litecoin is a
fork of the largest and most popular cryptocurrency, Bitcoin.
8 Among the six cryptocurrencies under study, Litecoin’s market value is ranked fourth.
**Figure 5.** **Directional-volatility connectedness network over the full sample**
Note: See Figure 2.
The structure of volatility connectedness is shown in Figure 5. Interestingly,
Bitcoin is at the centre of volatility connectedness, as it is the most influential
cryptocurrency and transmits volatility spillovers to each of the five
cryptocurrencies, including Litecoin. Also important is the role of Litecoin as a large
volatility transmitter, especially to Ethereum and Stellar. All of the six
cryptocurrencies are interconnected, with substantial differences in the degree and
magnitude of the volatility spillovers. Stellar is the largest receiver of volatility
spillovers, followed by Ethereum. Dash is the least influential in the network of
connectedness, offering potential diversification benefits if combined in a portfolio
with each of the other cryptocurrencies. On a one-to-one basis, there is noticeably a
very weak connectedness across the pairs Ethereum/Stellar, Ethereum/Dash and
Litecoin/Ripple.
**3.2.4 Robustness test based on subsample data**
Our full sample period includes the 2017 bull market for cryptocurrencies,
which may have an increasing effect on the connectedness because of the strong
market interest towards all the cryptocurrencies. To test the robustness of our
full-sample results, two subsample periods are considered for further
investigation: subsample I (07/08/2015-31/12/2016) and subsample II
(01/01/2017-22/02/2018). The first covers a relatively stable market in which
cryptocurrencies tended to move sideways, while the second includes the 2017
bull market. The connectedness matrices for original returns, positive returns,
negative returns and volatility for the two subsamples are presented in Tables 6 and 7.
The results show both similarities and differences relative to the
full-sample results. First, Bitcoin and Litecoin are the largest
transmitters in the returns and volatility cryptocurrency connectedness system, while
Ripple and Ethereum always tend to be the top recipients in response to shocks from
other cryptocurrencies in both subsamples. This finding is consistent with
our full-sample results, which show the stability of interdependence among
cryptocurrencies. Another common finding is that connectedness via negative returns
is considerably stronger than via positive ones in both subsamples. For example, in
subsample II, the TSI of the positive-return connectedness network is only 0.228,
while the TSI of the negative-return network reaches 0.618. The largest difference
between the two subsamples lies in the intensity of connectedness within the
cryptocurrency system. In subsample I, the TSIs of the return and volatility
connectedness networks are around 0.2, while in subsample II they are considerably
higher, exceeding 0.4. This indicates that connectedness among cryptocurrencies has
strengthened markedly since 2017, when the cryptocurrency market entered a bull
phase: sharp price rises and active trading have increased the comovement of
cryptocurrency returns.
**Table 6. Connectedness matrix for cryptocurrencies based on subsample I (07/08/2015-31/12/2016)**
**Returns**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.578 | 0.007 | 0.021 | 0.336 | 0.025 | 0.033 | 0.422 |
| Ethereum | 0.012 | 0.964 | 0.003 | 0.009 | 0.003 | 0.009 | 0.036 |
| Ripple | 0.030 | 0.004 | 0.831 | 0.022 | 0.101 | 0.012 | 0.169 |
| Litecoin | 0.346 | 0.002 | 0.016 | 0.596 | 0.017 | 0.022 | 0.404 |
| Stellar | 0.035 | 0.003 | 0.095 | 0.028 | 0.829 | 0.010 | 0.171 |
| Dash | 0.045 | 0.008 | 0.013 | 0.031 | 0.010 | 0.893 | 0.107 |
| To | 0.469 | 0.024 | 0.148 | 0.427 | 0.155 | 0.086 | **TSI=0.218** |
| Net | **0.047** | **-0.012** | **-0.021** | **0.023** | **-0.016** | **-0.021** | |

**Positive returns**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.691 | 0.008 | 0.004 | 0.275 | 0.006 | 0.018 | 0.309 |
| Ethereum | 0.012 | 0.967 | 0.000 | 0.005 | 0.001 | 0.015 | 0.033 |
| Ripple | 0.008 | 0.001 | 0.880 | 0.004 | 0.103 | 0.005 | 0.120 |
| Litecoin | 0.280 | 0.004 | 0.004 | 0.697 | 0.001 | 0.013 | 0.303 |
| Stellar | 0.007 | 0.003 | 0.087 | 0.001 | 0.893 | 0.008 | 0.107 |
| Dash | 0.026 | 0.019 | 0.007 | 0.017 | 0.009 | 0.922 | 0.078 |
| To | 0.333 | 0.035 | 0.103 | 0.303 | 0.119 | 0.059 | **TSI=0.159** |
| Net | **0.024** | **0.001** | **-0.018** | **0.000** | **0.011** | **-0.019** | |

**Negative returns**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.518 | 0.008 | 0.043 | 0.350 | 0.043 | 0.037 | 0.482 |
| Ethereum | 0.016 | 0.937 | 0.009 | 0.007 | 0.016 | 0.016 | 0.063 |
| Ripple | 0.062 | 0.011 | 0.813 | 0.043 | 0.059 | 0.013 | 0.187 |
| Litecoin | 0.362 | 0.004 | 0.036 | 0.536 | 0.033 | 0.029 | 0.464 |
| Stellar | 0.067 | 0.011 | 0.060 | 0.051 | 0.809 | 0.003 | 0.191 |
| Dash | 0.062 | 0.013 | 0.014 | 0.047 | 0.003 | 0.862 | 0.138 |
| To | 0.569 | 0.047 | 0.163 | 0.498 | 0.154 | 0.096 | **TSI=0.254** |
| Net | **0.087** | **-0.016** | **-0.024** | **0.034** | **-0.038** | **-0.042** | |

**Volatility**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.557 | 0.017 | 0.033 | 0.390 | 0.003 | 0.000 | 0.443 |
| Ethereum | 0.029 | 0.934 | 0.002 | 0.033 | 0.003 | 0.000 | 0.066 |
| Ripple | 0.065 | 0.002 | 0.862 | 0.044 | 0.027 | 0.000 | 0.138 |
| Litecoin | 0.396 | 0.015 | 0.021 | 0.567 | 0.000 | 0.000 | 0.433 |
| Stellar | 0.002 | 0.005 | 0.026 | 0.014 | 0.953 | 0.001 | 0.047 |
| Dash | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.999 | 0.001 |
| To | 0.493 | 0.039 | 0.082 | 0.480 | 0.035 | 0.001 | **TSI=0.188** |
| Net | **0.050** | **-0.028** | **-0.056** | **0.047** | **-0.013** | **0.000** | |
Note: See notes to Table 3.
**Table 7. Connectedness matrix for cryptocurrencies based on subsample II (01/01/2017-22/02/2018)**
**Returns**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.542 | 0.125 | 0.033 | 0.139 | 0.053 | 0.107 | 0.458 |
| Ethereum | 0.123 | 0.543 | 0.031 | 0.108 | 0.059 | 0.135 | 0.457 |
| Ripple | 0.038 | 0.036 | 0.641 | 0.066 | 0.203 | 0.015 | 0.359 |
| Litecoin | 0.138 | 0.106 | 0.051 | 0.537 | 0.076 | 0.092 | 0.463 |
| Stellar | 0.056 | 0.062 | 0.182 | 0.081 | 0.586 | 0.033 | 0.414 |
| Dash | 0.117 | 0.136 | 0.021 | 0.100 | 0.036 | 0.590 | 0.410 |
| To | 0.472 | 0.465 | 0.319 | 0.494 | 0.427 | 0.383 | **TSI=0.427** |
| Net | **0.014** | **0.008** | **-0.040** | **0.032** | **0.013** | **-0.027** | |

**Positive returns**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.781 | 0.079 | 0.005 | 0.058 | 0.014 | 0.061 | 0.219 |
| Ethereum | 0.075 | 0.765 | 0.006 | 0.045 | 0.020 | 0.088 | 0.235 |
| Ripple | 0.005 | 0.009 | 0.736 | 0.068 | 0.177 | 0.005 | 0.264 |
| Litecoin | 0.059 | 0.041 | 0.042 | 0.773 | 0.041 | 0.045 | 0.227 |
| Stellar | 0.010 | 0.017 | 0.147 | 0.039 | 0.782 | 0.006 | 0.218 |
| Dash | 0.067 | 0.081 | 0.005 | 0.046 | 0.009 | 0.793 | 0.207 |
| To | 0.216 | 0.227 | 0.205 | 0.256 | 0.261 | 0.204 | **TSI=0.228** |
| Net | **-0.003** | **-0.008** | **-0.059** | **0.029** | **0.043** | **-0.003** | |

**Negative returns**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.364 | 0.155 | 0.059 | 0.194 | 0.077 | 0.152 | 0.636 |
| Ethereum | 0.157 | 0.356 | 0.081 | 0.159 | 0.107 | 0.141 | 0.644 |
| Ripple | 0.072 | 0.100 | 0.440 | 0.082 | 0.203 | 0.103 | 0.560 |
| Litecoin | 0.190 | 0.156 | 0.065 | 0.352 | 0.095 | 0.142 | 0.648 |
| Stellar | 0.086 | 0.121 | 0.185 | 0.108 | 0.401 | 0.100 | 0.599 |
| Dash | 0.160 | 0.144 | 0.076 | 0.151 | 0.092 | 0.378 | 0.622 |
| To | 0.664 | 0.675 | 0.466 | 0.693 | 0.574 | 0.638 | **TSI=0.618** |
| Net | **0.028** | **0.031** | **-0.094** | **0.045** | **-0.026** | **0.016** | |

**Volatility**

| | Bitcoin | Ethereum | Ripple | Litecoin | Stellar | Dash | From |
|---|---|---|---|---|---|---|---|
| Bitcoin | 0.425 | 0.160 | 0.066 | 0.175 | 0.038 | 0.136 | 0.575 |
| Ethereum | 0.159 | 0.401 | 0.102 | 0.150 | 0.046 | 0.142 | 0.599 |
| Ripple | 0.080 | 0.116 | 0.484 | 0.122 | 0.094 | 0.104 | 0.516 |
| Litecoin | 0.189 | 0.142 | 0.115 | 0.370 | 0.056 | 0.127 | 0.630 |
| Stellar | 0.064 | 0.076 | 0.112 | 0.088 | 0.608 | 0.052 | 0.392 |
| Dash | 0.148 | 0.154 | 0.118 | 0.138 | 0.037 | 0.406 | 0.594 |
| To | 0.641 | 0.647 | 0.514 | 0.672 | 0.270 | 0.561 | **TSI=0.551** |
| Net | **0.066** | **0.048** | **-0.002** | **0.043** | **-0.121** | **-0.033** | |
Note: See notes to Table 3.
**3.3 Dynamic-connectedness network analysis**
**3.3.1 Dynamic-return connectedness network**
The results presented in Table 3 summarize the net connectedness of
cryptocurrencies, yet they overlook any time variation in the spillover effect.
Therefore, we report in Figure 6 the time evolution of the total connectedness for
cryptocurrency returns.
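The dynamic TSI underlying Figure 6 is obtained by re-estimating the variance-decomposition-based connectedness measure over rolling windows. The sketch below is a minimal numpy illustration of that procedure, assuming a VAR(1), a 10-step generalized FEVD and a 200-day window (the paper's actual lag order, horizon and window length are set in its methods section); the data here are simulated, not the paper's.

```python
import numpy as np

def gfevd_tsi(x, horizon=10):
    """Total spillover index (Diebold-Yilmaz, 2012) from a VAR(1) fitted by
    OLS, using the generalized forecast-error variance decomposition."""
    x = np.asarray(x, dtype=float)
    t, k = x.shape
    y, z = x[1:], x[:-1]
    zc = np.hstack([np.ones((t - 1, 1)), z])        # VAR(1) with intercept
    beta, *_ = np.linalg.lstsq(zc, y, rcond=None)
    a = beta[1:].T                                  # (k, k) coefficient matrix
    resid = y - zc @ beta
    sigma = resid.T @ resid / (t - 1 - (k + 1))     # residual covariance

    phi = [np.eye(k)]                               # MA coefficients Phi_h = A^h
    for _ in range(horizon - 1):
        phi.append(a @ phi[-1])

    num, den = np.zeros((k, k)), np.zeros(k)
    for p in phi:
        ps = p @ sigma
        num += ps ** 2 / np.diag(sigma)             # (e_i' Phi_h Sigma e_j)^2 / sigma_jj
        den += np.diag(ps @ p.T)                    # e_i' Phi_h Sigma Phi_h' e_i
    theta = num / den[:, None]
    theta /= theta.sum(axis=1, keepdims=True)       # row-normalized generalized FEVD
    return 1.0 - np.trace(theta) / k                # average off-diagonal share

# Rolling-window dynamic TSI on simulated, cross-correlated "returns".
rng = np.random.default_rng(0)
mix = np.eye(4) + 0.3 * rng.random((4, 4))          # induces comovement
returns = rng.standard_normal((600, 4)) @ mix
window = 200
tsi_series = [gfevd_tsi(returns[s:s + window]) for s in range(600 - window)]
print(f"mean rolling TSI: {np.mean(tsi_series):.3f}")
```

Plotting `tsi_series` against the window end-dates gives a Figure 6-style trajectory; applying the same routine to positive-return, negative-return or volatility series gives the asymmetric and volatility counterparts discussed below.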
**Figure 6. Dynamic total connectedness for cryptocurrency returns**
The TSI varies substantially over time. In particular, it declines during 2016
from over 40% to around 20% and then oscillates between 40% and 20% before
peaking at around 70% in October 2017. After that, it retraces most of the upward
movement before experiencing a sharp upward movement to around 60% at the end
of the period under study. The time-varying nature of the TSI confirms the spike in
spillover levels during 2016 and 2017, possibly due to the hack of the Bitfinex
exchange, which created uncertainty in the cryptocurrency market. The introduction
of Ripple, acting as bridge currency for real-time settlement and allowing for the
efficient exchange of value across borders (Corbet et al., 2018), and the subsequent
introduction of the Ripple/Bitcoin trading pair may also have increased the
connectedness of the cryptocurrency market.
**Figure 7. Dynamic total net connectedness for cryptocurrency returns**
The time variation of net directional return spillovers from each
cryptocurrency to all other cryptocurrencies is shown in Figure 7. In most of the
cases, the net spillover effects switch between negative and positive territories,
suggesting that each cryptocurrency can act as a net transmitter or a net receiver at
given points of time.[9] Specifically, Bitcoin is a net transmitter from the beginning of
the sample period until April 2017, whereas it behaves more as a net receiver
afterwards, especially toward the end of the period. Ethereum oscillates between
positive and negative territories until the end of the period, when it acts as a net
transmitter. Litecoin is more a net transmitter, especially toward the end of the
sample period. Stellar, Ripple and Dash exhibit no particular pattern, although the
latter clearly acts as a net receiver.
9 Positive (negative) values indicate that the cryptocurrency is a net transmitter (receiver) of spillover
effects.
**3.3.2 Dynamic-asymmetric connectedness analysis**
The above analyses do not consider potential asymmetries in the return
spillovers but merely provide evidence that the net directional return spillovers from
each cryptocurrency to all other cryptocurrencies vary over time. Accordingly, we
differentiate between positive and negative returns in order to uncover the
asymmetries in return connectedness within the framework of Diebold and Yilmaz
(2016). The results of the dynamics of connectedness for positive returns and
negative returns are reported in Figures 8 and Figure 9, respectively. It appears from
Figure 8 that Bitcoin and Litecoin may be aptly described as positive-return
transmitters. As for the dynamics of connectedness for negative returns (Figure 9),
the picture is different from the one associated with positive returns in Figure 8.
**Figure 8. Dynamic total net connectedness for cryptocurrency positive returns**
**Figure 9. Dynamic total net connectedness for cryptocurrency negative returns**
To provide evidence for the intuition that return movements among
cryptocurrencies respond asymmetrically to positive and negative
information shocks, the dynamic total connectedness and asymmetry indicators for
cryptocurrency positive and negative returns are presented in Figure 10. The
asymmetry indicator is measured by TSI(-)/TSI(+): a ratio greater than 1
indicates that bad news contributes more to system risk than good news. Figure 10
clearly shows the overall presence of an asymmetric effect.
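The asymmetry indicator is simply the ratio of the negative-return TSI to the positive-return TSI. Using the subsample II values cited earlier (TSI(+) = 0.228, TSI(-) = 0.618) as an example:

```python
def asymmetry_indicator(tsi_neg: float, tsi_pos: float) -> float:
    """TSI(-)/TSI(+): values above 1 mean negative-return (bad-news)
    connectedness dominates positive-return (good-news) connectedness."""
    return tsi_neg / tsi_pos

# Subsample II values reported in the text:
ratio = asymmetry_indicator(0.618, 0.228)
print(f"{ratio:.2f}")  # ~2.71: bad news contributes far more to system risk
```

Computing this ratio at each rolling-window date traces out the asymmetry series plotted in Figure 10.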
**Figure 10. Dynamic total connectedness and asymmetry indicators for**
**cryptocurrency positive and negative returns**
**3.3.3 Dynamic-volatility connectedness network analysis**
The TSI of cryptocurrency volatilities (Figure 11) fluctuates sharply between
10% and over 80%, confirming a considerable time-varying feature. Specifically,
the peaks correspond to the introduction of Ripple on major exchanges, such as
Bitstamp, and to new trading-pair arrangements in 2017, whereas most of the
troughs coincide with rising uncertainty about economic policy and blockchain
security. These periods coincide with several structural events.[10]
**Figure 11. Dynamic total connectedness for cryptocurrency volatilities**
Turning to the time variation of net directional-volatility spillovers from each
cryptocurrency to all other cryptocurrencies, Figure 12 shows evidence of large
fluctuations in the cases of Litecoin and, to a lesser extent, Bitcoin, especially
around the beginning and middle of the sample period. In contrast, Ripple, Stellar
and Ethereum appear to be the calmest cryptocurrencies, as the net
directional-volatility spillovers are quite low.
10 For example, Bitstamp brought in the first Ripple/Bitcoin trading pair on 16 February 2017,
providing digital assets and a bridge currency to the market for real-time settlement. This allows for
the efficient exchange of value across borders.
**Figure 12. Dynamic net connectedness for cryptocurrency volatilities**
**3.4 Determinants of cryptocurrency integration**
We examine the determinants of the cryptocurrency market’s return and
volatility connectedness by considering a set of financial, economic and other
variables.[11] As indicated in the methods section, our choice of these explanatory
variables depends on prior studies.
Results from the OLS regressions are reported in Tables 8–11.[12] Tables 8-10
report the regression coefficients for returns, positive returns and negative returns,
respectively, while Table 11 reports the results for volatility. The results reveal that
the coefficient of trading volume of most of the cryptocurrencies is significant in
many cases, but its sign is mixed. Specifically, it is positive for Bitcoin, Litecoin and
Stellar and negative for the others. However, the trading volume for Litecoin
exhibits a negatively significant impact on the net pairwise directional
negative-return spillovers, whereas Stellar exhibits a negatively significant impact
on the net pairwise directional positive-return spillovers.
11 Table A in the Appendix describes the set of these explanatory variables.
12 The adjusted R-squared for the regression models varies between 35.40% and 52.50% (see the last
row in Tables 8-11).
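The determinant regressions in Tables 8-11 are OLS fits of a dynamic connectedness index on the explanatory variables. The sketch below uses synthetic data and hypothetical regressor names standing in for the paper's variables (volumes, GFSI, MSCI World, GSCI Energy, gold, US EPU, US VIX); the paper's exact specification and variable construction are given in its methods section.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical regressors standing in for the paper's explanatory variables.
labels = ["const", "vol_btc", "gfsi", "msci_world",
          "gsci_energy", "gold", "us_epu", "us_vix"]
X = np.column_stack([np.ones(n)] + [rng.standard_normal(n) for _ in labels[1:]])

# Synthetic dependent variable: a dynamic TSI series with an arbitrary
# "true" coefficient vector (illustrative only, not the paper's estimates).
beta_true = np.array([0.4, 0.05, 0.04, 0.08, -0.03, -0.04, -0.01, 0.02])
tsi = X @ beta_true + 0.05 * rng.standard_normal(n)

# OLS: beta_hat = (X'X)^{-1} X'y, with conventional standard errors.
beta_hat, *_ = np.linalg.lstsq(X, tsi, rcond=None)
resid = tsi - X @ beta_hat
s2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))

for name, b, s in zip(labels, beta_hat, se):
    print(f"{name:12s} {b:+.3f} (se {s:.3f})")
```

In practice the dependent variable would be the rolling TSI (or net pairwise spillover) series, and serial correlation in that series would call for HAC (Newey-West) standard errors rather than the conventional ones shown here.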
For the empirical findings of “Total Connectedness”, “Positive Returns
Connectedness”, and “Negative returns Connectedness”, the results among the four
specifications are consistent in the sense that new additional variables/controls do
not affect the role played by the volumes of the cryptocurrencies[13]. Although the
direct linkage between trading volume and “return connectedness” for the
cryptocurrency markets remains unexplored, one may expect a significant linkage
between “return connectedness” and “trading volumes” given that there is a strong
relationship between “return” and “trading volumes”. Our finding is therefore in line
with Balcilar et al. (2017) and Bouri et al. (2018c) who find evidence of Granger
causality from trading volume to the returns in the cryptocurrency market.
Interestingly, we observe that for some cryptocurrencies (depending on the
statistical significance level), trading volume does not significantly affect volatility[14].
Statistical significance also varies with whether certain additional variables are
included in the model specification, which may reflect an omitted-variable
bias when important regressors are excluded[15]. The last model includes all the
important variables: the magnitude effect, the global financial effect, the
investment-substitution effect and the uncertainty effect. Only three
cryptocurrencies have magnitude coefficients significant at the 5% level, namely
Bitcoin, Ripple and Dash. We find a significantly positive coefficient on trading
activity for the volatility connectedness of the Bitcoin market. This result is not
surprising, as Bitcoin has the highest market capitalization, accounting for 39% of
the cryptocurrency market at the end of 2017; it is the dominant contributor to
volatility spillovers in the cryptocurrency market and enjoys more influence over
other cryptocurrencies (Koutmos, 2018). The negative coefficients attached to
Ripple and Dash may be attributed to the fact that they are net volatility recipients
and therefore have less influence over other cryptocurrencies. Furthermore, the
transaction costs of cryptocurrencies may play a role in volatility connectedness:
the transaction cost of Bitcoin is lower than that of retail foreign exchange markets,
which may encourage algorithmic trading and thus become a dominant force in
Bitcoin trading volume (and hence
13 We would like to thank the anonymous reviewer for raising this interesting point.
14 For example, the volumes of Ethereum and Litecoin are not significant at the 5% level in any
model specification.
15 The adjusted R[2] of this specification is the highest among all four model specifications.
increase the cryptocurrency market’s stability). As reported by Garcia and Schweitzer
(2015), very high profits can be earned in less than a year by using an algorithmic
trading strategy that takes social media sentiment into account. It is also interesting
to note that Yang (2018) finds evidence that speculators carry considerable weight in
the Bitcoin market, while Yeh and Yang (2011) emphasize the role of speculators’
overconfidence, which can increase market volatility. In addition, new information can
cause price volatility to rise because of differences in its interpretation among traders
in different markets (Gębka, 2012).
**Table 8. Determinants of dynamic total connectedness for returns**
Coefficient Model 1 Model 2 Model 3 Model 4
Constant -0.733***
(0.088)
Magnitude effect Volume (Bitcoin) 0.087***
(0.008)
Volume (Ethereum) -0.015***
(0.005)
Volume (Ripple) -0.023***
(0.004)
-13.671***
(1.450)
0.031***
(0.010)
-0.030***
(0.004)
-0.020***
(0.003)
-17.173***
(1.402)
0.029***
(0.009)
-0.023***
(0.004)
-0.013***
(0.005)
-16.904***
(1.432)
0.025***
(0.008)
-0.028***
(0.004)
--
Volume (Litecoin) --- 0.016***
(2.940)
--- --
Volume (Stellar) 0.010**
(0.005)
Volume (Dash) -0.009**
(0.004)
--- --- 0.010**
(0.005)
--- -0.014***
(0.004)
Global financial effect GFSI 0.048***
(0.008)
MSCI World 1.877***
(0.208)
0.071***
(0.008)
3.132***
(0.234)
-0.012***
(0.004)
0.052***
(0.010)
3.092***
(0.237)
-0.388***
(0.049)
-0.471***
(0.081)
Investment substitution
effect
GSCI Energy -0.405***
(0.050)
Gold Bullion -0.448***
(0.090)
Uncertainty effect US EPU -0.014*
(0.007)
US VIX 0.070***
(0.026)
Adj. R[2] 0.354 0.437 0.525 0.522
Notes: The standard errors are reported in parentheses. *, **, *** denote the significance at the 10%, 5% and 1%
levels.
**Table 9. Determinants of dynamic total connectedness for positive returns**
Coefficient Model 1 Model 2 Model 3 Model 4
Constant -0.155**
(0.062)
Magnitude effect Volume (Bitcoin) 0.037***
(0.007)
Volume (Ethereum) -0.010***
(0.003)
Volume (Ripple) -0.021***
(0.003)
Volume (Litecoin) 0.022***
(0.004)
Volume (Stellar) -0.009***
(0.003)
Volume (Dash) --
-4.315***
(1.140)
0.021***
(0.008)
-0.013***
(0.003)
-0.020***
(0.003)
0.027***
(0.004)
-0.010***
(0.004)
--- -0.007***
(0.003)
-8.551***
(0.987)
-0.010***
(0.003)
-0.009***
(0.003)
0.016***
(0.004)
-0.011***
(0.003)
-8.485***
(0.995)
--- --
Global financial effect GFSI 0.020***
(0.006)
MSCI World 0.596***
(0.163)
0.050***
(0.005)
2.031***
(0.154)
-0.008***
(0.003)
-0.013***
(0.004)
0.017***
(0.004)
-0.007*
(0.004)
-0.007***
(0.003)
0.061***
(0.007)
2.063***
(0.157)
-0.354***
(0.038)
-0.570***
(0.064)
Investment substitution
effect
GSCI Energy -0.365***
(0.037)
Gold Bullion -0.538***
(0.062)
Uncertainty effect US EPU -0.006*
(0.005)
US VIX -0.049***
(0.021)
Adj. R[2] 0.208 0.224 0.404 0.410
Notes: See notes to Table 8.
**Table 10. Determinants of dynamic total connectedness for negative returns**
Coefficient Model 1 Model 2 Model 3 Model 4
Constant -0.920**
(0.095)
Magnitude effect Volume (Bitcoin) 0.074***
(0.010)
Volume (Ethereum)
--
Volume (Ripple) -0.010**
(0.005)
Volume (Litecoin) -0.014**
(0.006)
Volume (Stellar) 0.034***
(0.005)
Volume (Dash) -0.013***
(0.004)
-18.720***
(1.446)
0.012
(0.009)
-0.021***
(0.005)
--
0.012**
(0.005)
-0.006
(0.004)
-22.870***
(1.495)
0.031***
(0.010)
-0.025***
(0.004)
-21.858***
(1.485)
0.022***
(0.008)
-0.033***
(0.005)
--
--- -0.012**
(0.005)
--- --
--- --
-0.008**
(0.004)
0.029***
(0.010)
3.010***
(0.249)
-0.354***
(0.050)
0.255***
(0.080)
Global financial effect GFSI 0.058***
(0.008)
MSCI World 2.566***
(0.207)
-0.010**
(0.004)
0.057***
(0.008)
3.200***
(0.250)
Investment substitution
effect
GSCI Energy -0.384***
(0.054)
Gold Bullion 0.257***
(0.096)
Uncertainty effect US EPU -0.021***
(0.008)
US VIX 0.129***
(0.027)
Adj. R[2] 0.589 0.680 0.704 0.714
Notes: See notes to Table 8.
**Table 11. Determinants of dynamic total connectedness for volatility**
Coefficient Model 1 Model 2 Model 3 Model 4
Constant -1.201**
(0.092)
Magnitude effect Volume (Bitcoin) 0.136***
(0.010)
Volume (Ethereum) 0.006
(0.005)
Volume (Ripple) -0.063**
(0.004)
Volume (Litecoin) -0.006
(0.005)
Volume (Stellar) 0.010**
(0.005)
Volume (Dash) -0.007*
(0.004)
-11.924***
(1.534)
0.093***
(0.011)
-0.002
(0.005)
-0.062***
(0.004)
0.006
(0.006)
0.011*
(0.006)
-0.005
(0.004)
-13.358***
(1.526)
0.064***
(0.011)
0.002
(0.005)
-0.040***
(0.005)
-0.005
(0.006)
0.012**
(0.005)
-0.009**
(0.004)
0.085***
(0.008)
2.647***
(0.251)
-13.091***
(1.463)
0.061***
(0.011)
-0.028***
(0.005)
-0.010*
(0.005)
--
-0.010***
(0.003)
0.046***
(0.010)
2.495***
(0.247)
-0.159***
(0.053)
-0.713***
(0.094)
Global financial effect GFSI 0.054***
(0.220)
MSCI World 1.536***
(0.220)
Investment substitution
effect
GSCI Energy -0.132***
(0.054)
Gold Bullion -0.801***
(0.097)
Uncertainty effect US EPU 0.009
(0.007)
US VIX 0.156***
(0.027)
Adj. R[2] 0.625 0.655 0.698 0.714
Notes: See notes to Table 8.
Regarding the global financial effect, which represents global financial stress and
world equities, it has a significantly positive effect on the cryptocurrency market’s
connectedness for both returns and volatility. This finding is consistent with the
existing literature, as the cryptocurrency market still lacks transparency and its major
traders are young and inexperienced individual investors[16]. There is dispersion of
information and uncertainty among crypto traders (Bouri et al., 2018b). Indeed, the
extremely speculative nature of Bitcoin makes the cryptocurrency markets highly
16 Generally, individual investors rely on social media and online chat forums for information content
about the cryptocurrencies.
volatile, which may encourage herding behaviour in the Bitcoin market (Baur et al.,
2018)[17]. There is also evidence that herding behaviour tends to occur and intensify
during periods of financial stress (Demirer and Kutan, 2006).
U.S. EPU and energy prices have a negative effect, regardless of the type of
returns considered. This finding is in line with the existing literature: Demir et al.
(2018) find that the U.S. EPU index has predictive power for Bitcoin returns and that
Bitcoin returns are negatively correlated with U.S. EPU. Therefore, Bitcoin can serve
as a hedging tool against EPU.
However, the picture is different for the explanatory roles of gold prices, energy
prices and the US VIX. Gold and energy prices have a significantly negative effect
on aggregate- and positive-return spillovers, whereas the US VIX exhibits a positive
effect on aggregate- and negative-return spillovers. This finding is not surprising, as
Bitcoin possesses some of the same hedging ability as gold (Dyhrberg, 2016). As a
substitute for Bitcoin, an increase in the gold price will decrease demand for
cryptocurrencies and therefore weaken return connectedness in the cryptocurrency
market. Furthermore, it has been documented in the literature that an inverse
relationship exists between US stock market uncertainty (as measured by the VIX)
and Bitcoin volatility, implying that, in an environment of high uncertainty in the
stock market, market participants can move into Bitcoin to hedge any possible stock
market losses (Bouri et al., 2017a, b). In our case, this hedge effect occurs in the
cryptocurrency market, making its return connectedness stronger for
aggregate- and negative-return spillovers. In conclusion, the magnitude of the effect,
as measured by the level of the coefficient associated with the explanatory variables,
indicates that world equities, energy and gold prices are the most influential on
cryptocurrency integration, with some nuanced differences between negative- and
positive-return spillovers.
As for the determinants of the net pairwise directional-volatility spillovers
(Table 11), it is interesting to see that not every cryptocurrency’s volume is
17 It has been found in the literature that herding could intensify the volatility of asset class and make
the financial system unstable (Demirer and Kutan, 2006).
significant, and that US EPU has no effect. In contrast, global financial stress, world
equities and US VIX have a positively significant effect, whereas energy and gold
prices exhibit a negative effect. These results are quite similar to that reported for
the determinants of dynamic total connectedness for returns (Table 8), suggesting
that the same factors (global financial effect, investment-substitution effect and US
VIX) drive both return and volatility spillovers in the cryptocurrency market.
**4. Conclusions**
This study contributes to the growing empirical literature on the
cryptocurrency market by quantifying for the first time spillover effects across six
large cryptocurrencies in order to better understand the spillover nature of each
cryptocurrency. By applying the connectedness framework of Diebold and Yilmaz
(2012, 2016) on daily data, we built positive and negative returns-connectedness and
volatility-connectedness networks. The empirical results show that, in addition to the
largest cryptocurrency (Bitcoin), a relatively smaller one (Litecoin) is at the centre
of return and volatility connectedness, sharing with Bitcoin the dominant
transmitting role in total return and volatility spillovers.
Interestingly, the second-largest cryptocurrency (Ethereum) is a recipient of
spillovers and is thus quite dominated by both larger and smaller cryptocurrencies.
Although these results confirm the findings of Corbet et al. (2018) that leading
cryptocurrencies are interconnected, they differ in finding that Litecoin has
significant influence on Bitcoin as well as on other leading cryptocurrencies. This
finding suggests that Bitcoin is losing its dominant role in the evolving
cryptocurrency market. All cryptocurrencies are found to alternate between being
transmitters and receivers, depending on the time. Asymmetries in negative-return
spillovers are significant and have a more substantial magnitude than in
positive-return spillovers, implying that negative returns materialize quite frequently
and that their magnitudes are not lessened by positive-return spillovers.
Regression analyses show that the degree of integration of the
cryptocurrency-market system is driven by a diverse set of variables. Overall, the
results point to the importance of trading volume, the global financial and
investment-substitution effects and the uncertainty effect in determining the net
directional spillovers among cryptocurrency returns. This finding is not
surprising and partially accords with prior studies that highlight the importance of
trading volume (Balcilar et al., 2017), US stock-market volatility (Bouri et al., 2017a
& b) and economic policy uncertainty (Demir et al., 2018).
The interdependency across the largest cryptocurrencies and its determinants
affect the decision-making of investors, policy-makers and scholars. It is interesting
to know that, overall, large cryptocurrencies exhibit relatively diverse levels of
integration and that, consequently, shocks to one cryptocurrency do not generally
induce large spillovers to the other segments in a way that would reduce
diversification possibilities. In fact, crypto-investors may benefit from some
evidence of weak integration in some cases (e.g., Dash) to improve their portfolio
diversification by exploiting the findings on how cryptocurrencies’ returns influence
one another, while differentiating between positive and negative returns. As for the
results of volatility connectedness, they can assist crypto-investors in building
volatility-hedging strategies and consistently managing risk via measures such as
value-at-risk.
As the cryptocurrency market evolves and matures, it is of particular interest
to policy-makers and investors to extend our analysis by constructing a diversified
cryptocurrency portfolio that maximizes return and balances risk while accounting
for the risk preferences of crypto-investors.
**Acknowledgements**
Support from the National Natural Science Foundation of China under Grants
No. 71774152 and No. 91546109 and from the Youth Innovation Promotion
Association of the Chinese Academy of Sciences (Grant Y7X0231505) is acknowledged.
**References**
Ahmad, W., Mishra, A. V., Daly, K. J. (2018). Financial connectedness of BRICS and
global sovereign bond markets. Emerging Markets Review.
https://doi.org/10.1016/j.ememar.2018.02.006
Apergis, N., Baruník, J., & Lau, M. C. K. (2017). Good volatility, bad volatility: What
drives the asymmetric connectedness of Australian electricity markets?. Energy
Economics, 66, 108-115.
Balcilar, M., Bouri, E., Gupta, R., Roubaud, D. (2017). Can volume predict Bitcoin
returns and volatility? A quantiles-based approach. Economic Modelling, 64, 74-81.
Baruník, J., Kočenda, E., & Vácha, L. (2016). Asymmetric connectedness on the US
stock market: Bad and good volatility spillovers. Journal of Financial Markets, 27,
55-78.
Baruník, J., Kočenda, E., & Vácha, L. (2017). Asymmetric volatility connectedness
on the forex market. Journal of International Money and Finance, 77, 39-56.
Baur, D. G., Hong, K., & Lee, A. D. (2018). Bitcoin: Medium of exchange or
speculative assets?. Journal of International Financial Markets, Institutions and
Money, 54, 177-189.
Bouri E., Azzi G., Dyhrberg, A.H. (2017a). On the Return-volatility Relationship in
the Bitcoin Market around the Price Crash of 2013. Economics: The Open-Access,
Open-Assessment E-Journal, 11 (2017-2), 1-16.
http://dx.doi.org/10.5018/economics-ejournal.ja.2017-2
Bouri E., Gupta R., Tiwari A., Roubaud, D. (2017b). Does Bitcoin Hedge Global
Uncertainty? Evidence from Wavelet-Based Quantile-in-Quantile Regressions.
Finance Research Letters, 23, 87-95.
Bouri E., Jalkh N., Molnár P., Roubaud D. (2017c). Bitcoin for energy commodities
before and after the December 2013 crash: Diversifier, hedge or more?. Applied
Economics, 49(50), 5063-5073.
Bouri, E., Gupta, R., Lahiani, A., Shahbaz, M. (2018a). Testing for Asymmetric
Nonlinear Short-and Long-Run Relationships between Bitcoin, Aggregate
Commodity and Gold Prices. Resources Policy.
https://doi.org/10.1016/j.resourpol.2018.03.008
Bouri, E., Gupta, R., & Roubaud, D. (2018b). Herding behaviour in
cryptocurrencies. Finance Research Letters. https://doi.org/10.1016/j.frl.2018.07.008
Bouri, E., Lau, C. K. M., Lucey, B., & Roubaud, D. (2018c). Trading volume and
the predictability of return and volatility in the cryptocurrency market. Finance
Research Letters. https://doi.org/10.1016/j.frl.2018.08.015
Candelon, B., Ferrara, L., Joets, M. (2018). Global Financial Interconnectedness: A
Non-Linear Assessment of the Uncertainty Channel. Banque de France Working
Paper No. 661. http://dx.doi.org/10.2139/ssrn.3123077.
Corbet, S., Meegan, A., Larkin, C., Lucey, B., Yarovaya, L. (2017). Exploring the
dynamic relationships between cryptocurrencies and other financial assets.
Economics Letters. https://doi.org/10.1016/j.econlet.2018.01.004
38
-----
Demir, E., Gozgor, G., Lau, C. K. M., Vigne, S. A. (2018). Does economic policy
uncertainty predict the Bitcoin returns? An empirical investigation. Finance
Research Letters. https://doi.org/10.1016/j.frl.2018.01.005
Demirer, R., & Kutan, A. M. (2006). Does herding behavior exist in Chinese stock
markets?. Journal of international Financial markets, institutions and money, 16(2),
123-142.
Diebold, F.X., Yilmaz, K. (2014). On the network topology of variance
decompositions: Measuring the connectedness of financial firms. Journal of
Econometrics 182, 119-134.
Diebold, F.X., Yilmaz, K. (2016). Trans-Atlantic equity volatility connectedness:
U.S. and European financial institutions, 2004–2014. Journal of Financial
Econometrics, 14, 81-127.
Dyhrberg, A. H. (2016). Hedging capabilities of bitcoin. Is it the virtual
gold?. Finance Research Letters, 16, 139-144.
Fowowe, B., Shuaibu, M. (2016). Dynamic spillovers between Nigerian, South
African and international equity markets. International economics, 148, 59-80.
Gajardo, G., Kristjanpoller, W. D., Minutolo, M. (2018). Does Bitcoin exhibit the
same asymmetric multifractal cross-correlations with crude oil, gold and DJIA as the
Euro, Great British Pound and Yen? Chaos, Solitons & Fractals, 109, 195-205.
Garcia, D., & Schweitzer, F. (2015). Social signals and algorithmic trading of
Bitcoin. Royal Society open science, 2(9), 150288.
Garman, M.B., Klass, M.J. (1980). On the estimation of security price volatilities
from historical data. Journal of Business 53, 67–78.
Gębka, B. (2012). The dynamic relation between returns, trading volume, and
volatility: Lessons from spillovers between Asia and the United States. Bulletin of
Economic Research, 64(1), 65-90.
Guo, J., Ji, Q. (2013). How does market concern derived from the Internet affect oil
prices? Applied Energy, 112, 1536-1543.
Ji, Q., Guo, J. (2015a). Market interdependence among commodity prices based on
information transmission on the Internet. Physica A: Statistical Mechanics and its
Applications, 426, 35-44.
Ji, Q., Guo, J. (2015b). Oil price volatility and oil-related events: An Internet concern
study perspective. Applied Energy, 137, 256-264.
Ji, Q., Geng, J., Tiwari, A.K. (2018a). Information spillovers and connectedness
networks in the oil and gas markets. Energy Economics, 75, 71-84.
39
-----
Ji, Q., Zhang, D., Geng, J. (2018b). Information linkage, dynamic spillovers in prices
and volatility between the carbon and energy markets. Journal of Cleaner Production,
198, 972-978.
Ji, Q., Bouri, E., Gupta, R., Roubaud, D. (2018c). Network causality structures among
Bitcoin and other financial assets: A directed acyclic graph approach. The Quarterly
Review of Economics and Finance. https://doi.org/10.1016/j.qref.2018.05.016.
Kim, T. (2017). On the transaction cost of Bitcoin. Finance Research Letters, 23,
300-305.
Koutmos, D. (2018). Return and volatility spillovers among cryptocurrencies.
Economics Letters, 173, 122-127.
Koop, G., Pesaran, M.H., Potter, S.M. (1996). Impulse response analysis in
nonlinear multivariate models. Journal of Econometrics 74, 119–147.
Louzis, D. P. (2015). Measuring spillover effects in Euro area financial markets: a
disaggregate approach. Empirical Economics, 49(4), 1367-1400.
Pesaran, H.H., Shin, Y. (1998). Generalized impulse response analysis in linear
multivariate models. Economics Letters 58, 17–29.
Shahzad, S. J. H., Hernandez, J. A., Rehman, M. U., Al-Yahyaee, K. H., Zakaria, M.
(2018). A global network topology of stock markets: Transmitters and receivers of
spillover effects. Physica A: Statistical Mechanics and its Applications, 492,
2136-2153.
Singh, V. K., Nishant, S., Kumar, P. (2018). Dynamic and Directional Network
Connectedness of Crude Oil and Currencies: Evidence from Implied Volatility.
Energy Economics. https://doi.org/10.1016/j.eneco.2018.09.018
Yang, H. (2018). Behavioral Anomalies in Cryptocurrency Markets.
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3174421
Yeh, C. H., & Yang, C. Y. (2011). Examining the effects of traders’ overconfidence
on market behavior. In Agent-Based Approaches in Economic and Social Complex
Systems VI (pp. 19-31). Springer, Tokyo.
Zhang, D., Broadstock, D. C. (2018). Global financial crisis and rising connectedness
in the international commodity markets. International Review of Financial Analysis.
https://doi.org/10.1016/j.irfa.2018.08.003
Zhang, D., Lei, L., Ji, Q., Kutan, A. (2018). Economic policy uncertainty in the US
and China and their impact on the global markets. Economic Modelling.
https://doi.org/10.1016/j.econmod.2018.09.028.
40
-----
**APPENDIX**

**Table A.1. Explanatory variables of spillovers**

|Variable|Description|
|---|---|
|Trading volume|Trading volume on each of the leading cryptocurrencies under study|
|GFSI|Bank of America Merrill Lynch's Global Financial Stress Index|
|MSCI World|Morgan Stanley Capital International World index, representing large- and mid-cap equity performance across 23 developed-market countries|
|GSCI Energy|The S&P Goldman Sachs Commodity Energy Index Spot|
|Gold Bullion|The spot price of one ounce of gold|
|US EPU|The news-based US Economic Policy Uncertainty Index|
|US VIX|The CBOE US Implied Volatility Index, which measures 30-day expected volatility conveyed by S&P 500 Index option prices|
##### This is an electronic reprint of the original article. This reprint may differ from the original in pagination and typographic detail.
#### Abo-Elyousr, Farag K.; Sharaf, Adel M.; Darwish, Mohamed M.F.; Lehtonen, Matti; Mahmoud, Karar Optimal scheduling of DG and EV parking lots simultaneously with demand response based on self-adjusted PSO and K-means clustering
_Published in:_
Energy Science and Engineering
_DOI:_
[10.1002/ese3.1264](https://doi.org/10.1002/ese3.1264)
Published: 01/10/2022
_Document Version_
Publisher's PDF, also known as Version of record
_Published under the following license:_
CC BY
_Please cite the original version:_
Abo-Elyousr, F. K., Sharaf, A. M., Darwish, M. M. F., Lehtonen, M., & Mahmoud, K. (2022). Optimal scheduling
of DG and EV parking lots simultaneously with demand response based on self-adjusted PSO and K-means
clustering. Energy Science and Engineering, 10(10), 4025-4043. Advance online publication.
[https://doi.org/10.1002/ese3.1264](https://doi.org/10.1002/ese3.1264)
This material is protected by copyright and other intellectual property rights, and duplication or sale of all or
part of any of the repository collections is not permitted, except that material may be duplicated by you for
your research use or educational purposes in electronic or print form. You must obtain permission for any
other use. Electronic or print copies may not be offered, whether for sale or otherwise to anyone who is not
an authorised user.
-----
ORIGINAL ARTICLE
# Optimal scheduling of DG and EV parking lots simultaneously with demand response based on self‐adjusted PSO and K‐means clustering
#### Farag K. Abo‐Elyousr[1] | Adel M. Sharaf [2,3] | Mohamed M. F. Darwish[4,5] | Matti Lehtonen[4] | Karar Mahmoud[4,6]
1Department of Electrical Engineering,
Faculty of Engineering, Assiut University,
Assiut, Egypt
2Sharaf Energy Systems, Inc., New
Maryland, New Brunswick, Canada
3Intelligent Environmental Energy
Systems, Canada Inc. of Fredericton,
New Brunswick, Canada
4Department of Electrical Engineering
and Automation, School of Electrical
Engineering, Aalto University, Espoo,
Finland
5Department of Electrical Engineering,
Faculty of Engineering at Shoubra, Benha
University, Egypt
6Department of Electrical Engineering,
Faculty of Engineering, Aswan
University, Aswan, Egypt
Correspondence
Mohamed M. F. Darwish, Department of
Electrical Engineering and Automation,
School of Electrical Engineering, Aalto
University, Espoo 02150, Finland.
[Email: mohamed.m.darwish@aalto.fi and](mailto:mohamed.m.darwish@aalto.fi)
[mohamed.darwish@feng.bu.edu.eg](mailto:mohamed.darwish@feng.bu.edu.eg)
Farag K. Abo‐Elyousr, Department of
Electrical Engineering, Faculty of
Engineering, Assiut University, Assiut
71516, Egypt.
[Email: farag@aun.edu.eg](mailto:farag@aun.edu.eg)
Karar Mahmoud, Department of
Electrical Engineering and Automation,
School of Electrical Engineering, Aalto
University, Espoo 02150, Finland.
[Email: karar.mostafa@aalto.fi](mailto:karar.mostafa@aalto.fi)
Abstract
Recently, the proliferation of distributed generation (DG) has been intensively
increased in distribution systems worldwide. In distributed systems, DGs and
utility‐owned electric vehicle (EV) to grid aggregators have to be efficiently
scaled for cost‐effective network operation. Accordingly, with the penetration
of power systems, demand response (DR) is considered an advanced step
towards a smart grid. To cope with these advancements, this study aims to
develop an innovative solution for the day‐ahead sizing approach of energy
storage systems of EVs parking lots and DGs in smart distribution systems
complying with DR and minimizing the pertinent costs. The unique feature of
the proposed approach is to allow interactive customers to participate
effectively in power systems. To accurately solve this optimization model,
two probabilistic self‐adjusted modified particle swarm optimization (SAPSO)
algorithms are developed and compared for minimizing the total operational
costs addressing all constraints of the distribution system, DG units, and
energy storage systems of EV parking lots. The K‐means clustering and the
Naive Bayes approach are utilized to determine the EVs that are ready to
participate efficiently in the DR program. The obtained results on the IEEE‐24
reliability test system are compared to the genetic algorithm and the
conventional PSO to verify the effectiveness of the developed algorithms.
The results show that the first SAPSO algorithm outperforms the algorithms in
terms of minimizing the total running costs. The finding demonstrates that the
proposed near‐optimal day‐ahead scheduling approach of DG units and EV
energy storage systems in a simultaneous manner can effectively minimize the
total operational costs subjected to generation constraints complying with DR.
K E Y W O R D S
demand response, electrical vehicles, K‐means clustering, Naive Bayes approach, objective
function, optimal scheduling, particle swarm optimization
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided
the original work is properly cited.
© 2022 The Authors Energy Science & Engineering published by the Society of Chemical Industry and John Wiley & Sons Ltd
-----
#### 1 | INTRODUCTION
Recently, the use of resilient distributed generations
(DG) aggregators in distribution systems has been rapidly
increasing. With the penetration of vehicle to utility‐
owned grid (V2G) aggregators to the grid, demand
response (DR) has a significant role in effectively
utilizing the demand side resources due to the constraints related to conventional distributed generators.[1]
Besides this, the significant improvements in smart grid
communication enable system designers and developers
to develop DR in an optimal format. By definition, DR refers to significant changes in final customers' electricity consumption relative to their ordinary usage patterns.[2] Under this definition, electric
vehicle (EV) owners and DR primary agents are
considered customers. For these customers, an efficient
impact on the power system is expected. They can
improve the day‐ahead system reliability and decrease
total operational costs by voluntary management of load
demands and DR.[2,3] On the other hand, there are a
number of aspects of the contemporary power systems
that make the V2G optimization inadequate for the
charging/discharging of EVs. These characteristics include: (i) the widespread utilization of renewable energy sources, which are uncertain and intermittent; (ii) the underlying optimization problems are typically nonlinear, so discovering a global minimum in a multi-energy-resource system cannot be assured with conventional techniques; (iii) at the parking lot, V2Gs establish random day-ahead schedules with time-varying energy costs, charger capacities, and charging/discharging regulations; and (iv) future proposals and validations for smart grids are required due to the rising complexity of energy demand, which is worsened by the advent of particular load fluctuations such as V2G and DR impacts. Consequently,
the authors of this study are motivated to develop a novel
modified effective day‐ahead scheduling technique to
handle all such issues for the purpose of minimizing
overall expenses while maintaining a degree of customer
satisfaction that is acceptable.
DR can be modeled using the elasticity matrix of the
electricity prices of the load demands.[4,5] Using a constant
elasticity matrix over a pre‐determined period of time, it
was concluded by various studies[6][–][11] that DR has a
significant positive impact on the electricity market
prices, reliability, and spinning reserve issues. However, this assumption of a fixed elasticity over a specific time interval undermines the credibility of the proposed methods when EVs are considered.[1] With the modern penetration of EVs into the grid, the scheduling of DR has become more complicated due to a lack of information on demand characteristic patterns. For this reason, DR program operators request information from the final consumer for better credibility.[12] Demand resources require initial information from the customer to participate effectively in the
DR program. In the study by Asadinejad et al.,[13] DR is evaluated using the elasticity and fabrication matrices. Regression models for DR were
introduced by Srivastava et al.[14] An optimal scheduling
approach for DR within smart grids was introduced by
Nan and Zhou.[15] In the study by Viana et al.,[16] DR with
renewable photovoltaic generation was discussed. Similar work with energy hub optimization was presented by
Huo et al.[17]
Several metaheuristic algorithms were used to solve the
economic dispatch with DR aspects.[18] Yet, particle swarm
optimization (PSO) has been reported to have a remarkable
exploitation feature.[19] Elnozahy et al.[19] utilized this feature
to enhance other metaheuristic algorithms. In the study by
Goudarzi et al.,[20] it was combined with an artificial bee
colony for vertical handover in wireless networks. It was
combined with a genetic algorithm (GA) to optimize total
costs for a hybrid wind‐PV battery system in the study by
Ghorbani et al.[21] It was developed by Sharaf et al. to
improve wind energy conversion dynamics, permanent
magnet synchronous motor performance, and other power
system issues.[22][–][24] PSO was employed successfully for
solving DR issues in various studies.[25,26] The exploration capability of PSO, on the other hand, does not have the same repute as its exploitation feature. This motivates the authors of this
study to develop two probabilistic self‐adjusting metaheuristic algorithms based on PSO optimizer to solve the
generation and DR with V2G impact. In this manner, the
exploration feature would benefit from the self‐adjustment
and converge fast towards the near‐optimal solution.
Over time, EV penetration into the utility grid has attracted more attention. In the study by Gough et al.,[27] an economic
feasibility study was done. The authors concluded that V2G
could provide a significant income if V2G was coordinated
properly. A study done[28] was achieved by using V2G
impact to minimize total emissions with microgrid energy
scheduling. Optimal scheduling of EVs was carried out in
the study by Mortaz and Valenzuela[29] at the microgrid
level, where various control strategies for enhancing the
operation of microgrids connected to energy storage
systems were introduced in various studies.[30][–][34] The
optimal charging management was investigated by Mkahl
et al.[35] Similar work was achieved by Bin‐Humayd and
Bhattacharya.[36] The parking coordination of EVs was
investigated in the study by Faddel et al.[37] The investigation of the above research[2–13,15–17] reveals that a constant elasticity matrix for a specific interval of time was used, which undermines credibility. In the study by
Srivastava et al.,[14] regression methods generally need
training and the accuracy of the regression models depends
on the number of available data. Furthermore, the impact
of V2G was not addressed.[1,17] The investigation of various studies[27–37] shows that they did not include DR in V2G
research studies. This encourages the authors of the current
study to develop optimal day‐ahead scheduling of utility‐
owned V2G combined with DR, which is rare in the
literature.
To cover the above‐mentioned research gaps, the goal
of this study is to provide an optimal simultaneous
hourly scheduling strategy for energy storage systems of
EV parking lots and distributed generators in smart
distribution networks that conform with DR. The
suggested solution is unique in that it allows interactive
consumers to engage successfully in power systems. Two
self‐adjusted particle swarm optimization (SAPSO)
methods are devised and compared to minimize overall
operational costs while addressing all restrictions of the
distribution system, DG units, and energy storage
systems of EV parking lots. In particular, this study
contributes to the literature as follows: (i) two modified
probabilistic metaheuristic algorithms integrated with
the K‐means clustering approach based on conventional
PSO are developed so that both the exploration and
exploitation features of the conventional PSO are
enhanced. In both optimizers, the Naive Bayes classifier
is employed to investigate the day‐ahead EVs to
participate efficiently in the DR program. Furthermore,
the DR and V2G demand is converted into a virtual
generation whose marginal cost function is that of the
load reduction. The K‐means clustering, which is an
unsupervised machine learning approach, is used to find
the EVs that are ready to engage in the DR program
effectively. (ii) Optimal scheduling subject to the constraints of generation and DR with V2G is developed by minimizing the system's total operational costs. In turn, the
validation of the developed optimizers is demonstrated
through an impartial comparison with the conventional
PSO and GA optimizers. The SAPSO optimization
techniques created to solve the model's nonlinearity
and non‐convexity are based on dynamic error adjustments of the weightings, speed deviations, and position
equations. In contrast to the traditional PSO, corrective
action is developed in terms of errors and rate of change.
To validate the efficacy of the created algorithms, the acquired results on the IEEE‐24 reliability test system are compared with those of the conventional PSO and GA. According to the results, the first SAPSO algorithm beats the
other algorithms in terms of lowering total running
expenses. It is revealed that the suggested optimum
scheduling methodology for DG units and EV energy
storage systems may successfully decrease total operating
costs while complying with DR generation limits.
The remainder of this manuscript is organized as follows. The problem description and formulation are
introduced in Sections 2 and 3, respectively. The
simulated results are given in Section 4. Finally, the
discussions and conclusions are presented in Sections 5
and 6, respectively.
#### 2 | PROBLEM DESCRIPTION
Figure 1 shows the structure of modern distribution
systems in which various distributed energy sources and
EVs are distributed along with smart meters that are
utilized for DR. Accordingly, this study concerns the DGs
and DR with V2G optimal scheduling. Current electric microgrids are undergoing a change as distributed energy resources, including intermittent renewable production resources on the distribution side, become more and more integrated. Therefore, if effective day‐
ahead scheduling of DGs is adequately coordinated, there
would be real benefits for both utilities and their
consumers.
#### 2.1 | System description
The study brings together the formulation of optimal
scheduling of generation and DR. The considered
IEEE 24‐bus system includes DGs, DR with V2G
to minimize total operational costs. DR is transformed
into virtual generation units. The modeling of
the costs of the individual components is presented
in the following sections. The objective function is to
reduce DG, DR, and V2G costs. In the subsections that
follow, each of these costs is mathematically
expressed.
#### 2.1.1 | Cost modeling of distributed generating units
Considering the DG unit status, the total costs of a
generating unit are given in (1) in terms of its output
power.[2,38][–][40] Table A1 gives the parameters at the
corresponding buses for estimating the total operational
costs.[1,41]
$$C_{gi}(P_{gi}(t)) = \alpha_i \left(P_{gi}(t)\right)^2 + \beta_i P_{gi}(t) + \gamma_i s_i(t) + STC_i\, b_i(t), \quad t \in T \text{ and } i \in G, \tag{1}$$

where $T$ is the set of day-ahead hourly periods, $G$ is the set of generating units, $C_{gi}$ is the operational cost of generating unit $i$, $P_{gi}$ is its output power, $s_i$ is the unit commitment flag $\{0,1\}$, $STC_i$ is the startup cost of unit $i$, and $b_i$ is the starting flag of the unit.

FIGURE 1 Modern distribution system. DG, distributed generation; EV, electric vehicle.
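For concreteness, the quadratic cost of Equation (1) can be evaluated as in the sketch below; the coefficient values are illustrative placeholders, not the Table A1 data.

```python
def generation_cost(p_gi, alpha, beta, gamma, stc, s_i, b_i):
    """Hourly operating cost of one DG unit per Equation (1).

    p_gi  : output power of unit i at hour t (MW)
    alpha, beta, gamma : quadratic, linear, and commitment cost coefficients
    stc   : startup cost of unit i
    s_i   : unit commitment flag (0 or 1)
    b_i   : startup flag (1 only in the hour the unit is switched on)
    """
    return alpha * p_gi**2 + beta * p_gi + gamma * s_i + stc * b_i

# Illustrative coefficients (placeholders, not the paper's Table A1 values)
cost = generation_cost(p_gi=50.0, alpha=0.01, beta=20.0, gamma=5.0,
                       stc=100.0, s_i=1, b_i=0)
```

In a day-ahead schedule this cost would be summed over all hours $t \in T$ and all committed units $i \in G$.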
#### 2.1.2 | Cost modeling of DR units
Customers were asked to supply basic information to
participate successfully in the DR program. Besides, the
DR program collects the required historical data based on
the responsive nature of a customer. However, there is
some information necessary to express the DR costs. The
first is the maximum reduction power M _[j]_ in megawatts
that a customer j can bear. The second is the duration at
which a customer j is available. The third is the number
of yearly participation or frequency of a customer.
With the above information and the relevant parameters in the Appendix, the DR costs of a customer j are expressed as in (2).[1,2,42] The parameters α _[j]_ and β _[j]_ depend on the DR marginal cost, as will be explained in the problem formulation section. The key task is to find the virtual resource DR(t) so that an objective function is satisfied.
$$C_{DRj}(DR_j(t)) = \alpha_j \left(DR_j(t)\right)^2 + \beta_j DR_j(t), \quad t \in T \text{ and } j \in DRG, \tag{2}$$

where $C_{DRj}$ is the operational cost of demand resource unit $j$ (treated as a virtual generating unit), $DR_j$ is the output power of the demand resource unit, and $DRG$ is the set of demand resource units.

#### 2.1.3 | Cost modeling of V2G units

V2G energy storage batteries are probabilistic in nature. Whether an EV acts as a load or as a demand resource depends on its state of charge (SOC). The DR cost of a V2G unit is basically related to the on-grid battery operational costs and is expressed as follows[43]:

$$C_{V2Gk}(V2G_k(t)) = \beta_k V2G_k(t), \quad t \in T \text{ and } k \in V2GR, \tag{3}$$

where $C_{V2Gk}$ is the operational cost of a V2G unit in \$/kWh, $V2G_k$ is the output power of a V2G demand resource unit, and $V2GR$ is the set of V2G demand resource units.
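Equations (2) and (3) can likewise be sketched numerically; the coefficients and the hourly schedules below are illustrative assumptions only.

```python
def dr_cost(dr_j, alpha_j, beta_j):
    """Quadratic cost of DR virtual resource j per Equation (2)."""
    return alpha_j * dr_j**2 + beta_j * dr_j

def v2g_cost(v2g_k, beta_k):
    """Linear operational cost of V2G unit k per Equation (3)."""
    return beta_k * v2g_k

# Illustrative three-hour schedules (placeholder data)
hourly_dr = [0.0, 2.0, 5.0]    # demand reduction per hour (MW)
hourly_v2g = [1.0, 0.0, 3.0]   # energy discharged to the grid per hour (kWh)
total = (sum(dr_cost(d, alpha_j=0.5, beta_j=10.0) for d in hourly_dr)
         + sum(v2g_cost(v, beta_k=4.0) for v in hourly_v2g))
```

The linear form of (3) reflects that the V2G cost here scales directly with the energy exchanged with the grid.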
#### 2.2 | Overview of self‐adjusting PSO

Conventional PSO relies on stochastic solutions and does not use derivative information, which can make convergence slow. SAPSO utilizes different approaches to speed up the convergence process. In the following, a brief overview of
the conventional PSO is presented. Then, the required
improvements for the developed SAPSO are investigated.
Conventional PSO requires only a few parameters to be adjusted.[44] It was inspired by the movement behavior of animals and birds.[19,20,45] The
particles are distributed randomly. The particle positions
contain the decision variables. Besides, each particle
represents a possible solution. The particle decision variables and the corresponding fitness value are defined by a position and a fitness function. The particles proceed recursively to calculate the optimal decision variables according to the fitness function.
The position of a particle (Pk) is modeled by a location
in the XY plane as shown in Figure 2. The particle
velocity is represented by Vx and Vy in the x‐axis and
y‐axis, respectively. The particle collection in the XY
plane is explored by a personal best value (Pbest). Among the group of Pbest values, a global best (gbest) is required. The particle's velocity is updated according to (4).[46]
$$V_{new} = W V_{old} + c_1 r_1 (GB - CP) + c_2 r_2 (PB - CP), \tag{4}$$
where $V_{new}$ is the particle's new velocity, $V_{old}$ is its current velocity, $W$ is the inertia weight, $GB$ is the global best, $PB$ is the personal best, $CP$ is the current position, $r_1$ and $r_2$ are two random variables, $c_1$ is the global learning coefficient, and $c_2$ is the personal learning coefficient.
Hence, the procedures for PSO are as follows:
1‐ The system is started by defining a population of random solutions, each initialized with a random velocity. Each potential solution with a velocity is recognized as a particle.
2‐ Within the population, the fitness function is evaluated.
3‐ For every iteration, Pbest is recorded.
4‐ The particle's best solution (Pbest) within the population is compared with the others, and the swarm global best (gbest) is determined.
5‐ The velocity of each particle is updated.
6‐ Steps 2–5 are repeated until a maximum number of iterations (Nmax) is reached.

FIGURE 2 Particle position concept by particle swarm optimization.
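The steps above can be sketched as a compact loop. This is a generic PSO illustration on a toy sphere function, not the paper's implementation; the bounds, swarm size, and coefficient values are placeholder assumptions.

```python
import random

def pso(fitness, dim=2, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize `fitness` with a basic PSO using the velocity update of Eq. (4)."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]             # step 3: personal bests
    gbest = min(pbest, key=fitness)[:]      # step 4: swarm global best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):            # step 5: velocity update, Eq. (4)
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (gbest[d] - pos[i][d])
                             + c2 * r2 * (pbest[i][d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=fitness)[:]
    return gbest

# Toy fitness: sphere function, minimized at the origin
best = pso(lambda x: sum(v * v for v in x))
```

The SAPSO variants below keep this loop but replace the fixed inertia weight `w` with the self-adjusted values of Equations (5)–(13).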
The developed SAPSO algorithms suggest a modification of the inertia weight used in (4). The value of the inertia weight is adjusted based on the error estimation in each iteration. Two algorithms are developed in this study.
#### 2.2.1 | SAPSO#1
In this algorithm, the inertia weight is calculated using
the following equations from (5) to (7). The error
estimation in (5) represents the gbest improvement. Using
the normalized error (ξk ) given in (6), the inertia weight
(Wk ) is updated according to (7).
$$\Delta e_k = GB_{k-1} - GB_{k-2}, \tag{5}$$

$$\xi_k = \frac{\Delta e_k}{\max(GB_k)}, \tag{6}$$

$$W_k = W_0 \times (1 + \xi_k) \times (k - 1)/d. \tag{7}$$
With k being the iteration number index, the particle
position (Xnew) is updated through the following
equations. In (9), d0 is a design parameter between 10
and 100. The variable (Dk ) is used to obtain the new
position of particles in terms of their velocity as in (10).
$$\eta_k = \frac{GB_{k-1}}{\max(GB_k)}, \tag{8}$$

$$D_k = d_0 \times (1 + \eta_k), \tag{9}$$

$$X_{new} = X_{old} \times D_k + V_{old}. \tag{10}$$
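One iteration of the SAPSO#1 adjustments in Equations (5)–(10) can be sketched as below; the global-best history, the interpretation of max(GB_k) as the maximum over the history so far, and the parameter values are all illustrative assumptions.

```python
def sapso1_update(gbest_hist, w0, d, d0, k):
    """Inertia weight and position-scale adjustment of SAPSO#1 at iteration k.

    gbest_hist : global-best fitness history, gbest_hist[k] = GB_k
    w0         : base inertia weight
    d, d0      : design parameters (d0 assumed in [10, 100])
    """
    delta_e = gbest_hist[k - 1] - gbest_hist[k - 2]       # Eq. (5)
    xi = delta_e / max(gbest_hist[:k + 1])                # Eq. (6), normalized error
    w_k = w0 * (1 + xi) * (k - 1) / d                     # Eq. (7)
    eta = gbest_hist[k - 1] / max(gbest_hist[:k + 1])     # Eq. (8)
    d_k = d0 * (1 + eta)                                  # Eq. (9)
    return w_k, d_k

# Illustrative decreasing global-best history for a minimization run
w_k, d_k = sapso1_update([10.0, 8.0, 7.0, 6.5], w0=0.9, d=50, d0=20, k=3)
```

With an improving (decreasing) global best, the normalized error is negative, so the inertia weight shrinks and the swarm exploits the current region more aggressively.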
#### 2.2.2 | SAPSO#2

In this algorithm, two parameters (α_k and β_k) are introduced according to (11) and (12). The inertia weight factor is then evaluated according to (13), based on α_k and β_k in terms of the global best during each iteration:

α_k = GB_k − GB_{k−1}, (11)

β_k = α_k − α_{k−1}, (12)

W_k = W_0 × (1 + α_k + β_k)/max(GB_k). (13)

#### 2.3 | K‐means clustering

K‐means clustering is an unsupervised machine learning approach that groups comparable observations into clusters, which here helps determine whether the V2G status is a load or a power resource. It partitions the data into groups around several centroids by minimizing the Euclidean distance to the centroids.[47] The algorithm starts by defining the number of clusters (k), which is a hyperparameter, as demonstrated in Figure 3. The centroids are initialized at random positions, and the approach assigns the data (i.e., observations) to groups based on the shortest distance to the centroids. New centroid locations are then inferred from the average of the data values within each group. The algorithm iterates until the cluster assignments no longer change.

#### 2.4 | Probabilistic Naive Bayes algorithm

For multi‐classification tasks, the Naive Bayes technique is a well‐known supervised machine learning algorithm.[48] Complementing the K‐means clustering, it is utilized in this study to decide whether to charge or discharge an EV inside a cluster. In particular, it determines the likelihood of an EV to charge and stay connected to the utility grid based on the historical data of all vehicles inside the cluster.[49] The generic form of the Naive Bayes algorithm is given in Equation (14):

P(C_k | x) = P(x | C_k) × P(C_k) / P(x), (14)

where C_k is the output of class k, x denotes the dataset attributes (x_1, x_2, …, x_n), P(x | C_k) is the likelihood to charge the vehicle, P(C_k) is the prior, P(x) is the evidence, and P(C_k | x) is the posterior. The posterior is the target to estimate and, fortunately, its value is binary, which matches the EV battery status.

#### 3 | PROBLEM FORMULATION

The DR-with-V2G problem comprises an objective function subject to constraints. It is assumed that the V2Gs in DR operate at unity power factor and that whatever energy (in kWh) is available from them is directly supplied to, or absorbed from, the grid.

#### 3.1 | Objective function

The proposed optimal scheduling minimizes the fitness function J, that is, the sum of the generation, DR, and V2G total operational costs:

J = min [ Σ_{i=1}^{Ng} C_gi(P_gi(t)) + Σ_{j=1}^{Nd} C_DRj(DR_j(t)) + Σ_{k=1}^{Nv} C_V2Gk(V2G_k(t)) ], (15)

where Ng, Nd, and Nv are the total numbers of DGs, DR virtual resources, and V2G units, respectively.
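As a sketch, evaluating the fitness J of (15) for one hour reduces to summing per-resource cost curves; the quadratic coefficients below are illustrative placeholders, not the paper's data:

```python
def total_cost(p_g, dr, v2g, c_g, c_dr, c_v2g):
    """Evaluate J of eq. (15) for one hour: the sum of generation, DR,
    and V2G operating costs. Each c_* list maps a power value to a cost;
    the cost models themselves are supplied by the caller."""
    return (sum(c_g[i](p) for i, p in enumerate(p_g))
            + sum(c_dr[j](d) for j, d in enumerate(dr))
            + sum(c_v2g[k](v) for k, v in enumerate(v2g)))

# Illustrative cost curves (coefficients are made-up placeholders):
c_g = [lambda p: 0.002 * p**2 + 10 * p,   # DG 1, quadratic fuel cost
       lambda p: 0.003 * p**2 + 8 * p]    # DG 2
c_dr = [lambda d: 12 * d]                 # one DR virtual generator
c_v2g = [lambda v: 9 * abs(v)]            # V2G, charging or discharging

j = total_cost([100.0, 50.0], [20.0], [-10.0], c_g, c_dr, c_v2g)
```

The optimizer would call `total_cost` once per candidate schedule and per hour, summing over the 24-h horizon.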
#### 3.2 | DR as virtual generating units
In this study, DR is transformed into a virtual generating
unit, in which each demand reduction is handled as an
equivalent generating resource. The DR price is treated
in terms of its marginal cost (mc), as in (16)[1,2]:

mc_j(t) = [2(P_H − P_L,j^s(t)) / DR_j(t)] dr_j(t) + P_L,j^s(t) = α_j(t) dr_j(t) + β_j(t), (16)

where P_H and P_L are the high and low electricity prices when customer j participates in the DR program, dr_j(t) represents the hourly DR contribution of customer j, and DR_j(t) is the average DR of customer j during its contribution to DR. One can refer to Kwang and colleagues[1,2] for the exact determination of the DR parameters. The DR resource commands are given in Figure 4. The power demand peaks from 8:00 to 21:00, implying that the costs are passed on to the end‐user. The DR pattern is established such that, during the peak hours, the developed optimizers specify the optimal virtual generation or demand reduction, thereby reducing the total costs.

FIGURE 3 K‐means clustering concepts.

FIGURE 4 Demand response (DR) command pattern.

#### 3.3 | On‐grid energy storage EV batteries

V2G energy storage batteries are represented by a parking lot at the relevant bus. When connected to the utility grid, plug‐in electric vehicles (PEVs) can be a load demand or a resource. Based on the SOC, the parking lot acts as a large battery whose capacity is defined by the sizes of the individual parked vehicles' batteries that are shared in the DR program. This big battery is highly stochastic in terms of the number of EVs and their SOC.[50–53] Furthermore, the number itself depends on the incoming and exiting cars. For this reason, the parking lot battery is simplified by a probability density function (PDF), as will be demonstrated in the next section. The main target is to find the optimal sharing capacity according to the PDF so that the total operational costs are minimized. Accordingly, a near‐optimal portion of the big battery at the parking lot is treated as virtual inertia DR.

#### 3.4 | K‐means optimal number of clusters

The K‐means algorithm is utilized to reduce the distances between points within a cluster while, on the other hand, maximizing the distances between clusters. The goal of the current research is to establish whether the parking lot battery is a load demand or a distributed energy resource. As a result, clusters whose centroids have an SOC greater than 50% are considered resources and are likely to be part of the DR program, while the remainder are considered load demands. However, determining the optimal number of clusters is a challenge. The within‐cluster Euclidean distance is based on the sum of squares, sometimes called inertia. As a result, inertia can be a useful way to choose a cluster number that is close to optimal. Furthermore, the Silhouette score (Si) concept can be utilized to measure the quality of the K‐means clustering fit.[47] The Si score is calculated for each data point in each cluster based on the following data observation distances:

1‐ The average distance (a) between a single observation (i.e., data point) and all other data points in its cluster.

2‐ The average distance (b) between the observation and the data points of the next closest cluster.

In turn, the Silhouette score is estimated as:

Si = (b − a) / max(a, b), (17)

where max refers to selecting the maximum value between a and b. The cluster is adequately split if Si is close to unity. A score approaching zero indicates overlapping clusters, with samples extremely close to the surrounding clusters' border. A negative score between −1 and 0 demonstrates that the data were incorrectly allocated to the clusters.

#### 3.5 | Implementation of the probabilistic Naive Bayes

The K‐means clustering divides the EVs into clusters according to their SOC. Determining the status of the incoming cars is a sophisticated task. Since Naive Bayes is a training‐based approach that uses the EVs' historical data, it is expected to provide a robust decision on the EVs that are likely to be part of the DR program or considered load demands, provided that the centroids have the same SOC. The factors that influence EV drivers were investigated previously in the literature,[49,54] on the basis of which an Excel sheet was established, as given in the Appendix. The "OneHotEncoder" technique is employed to convert the information of Table A2 into binary data. The accuracy of the Naive Bayes classifier is estimated according to (18), in which ŷ is the predicted value of y and nEV is the net number of EVs within a cluster:

accuracy(y, ŷ) = (1/nEV) Σ_{i=0}^{nEV−1} 1(ŷ_i = y_i). (18)
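Equations (17) and (18) are small enough to sketch directly; this is a minimal numpy version for illustration, not the paper's implementation:

```python
import numpy as np

def silhouette_point(x, own_cluster, nearest_cluster):
    """Silhouette score of one observation, eq. (17):
    a = mean distance to the OTHER points in its own cluster,
    b = mean distance to the points of the nearest other cluster."""
    a = np.mean([np.linalg.norm(x - p) for p in own_cluster])
    b = np.mean([np.linalg.norm(x - p) for p in nearest_cluster])
    return (b - a) / max(a, b)

def nb_accuracy(y, y_hat):
    """Classifier accuracy, eq. (18): fraction of EVs whose predicted
    charge/discharge status matches the observed one."""
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    return float(np.mean(y_hat == y))
```

Averaging `silhouette_point` over all observations gives the cluster-quality score plotted in Figure 12B.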
#### 3.6 | Constraints

The power balance constraint is given in (19) to ensure the electrical load/generation balance at each time interval t, in which Nc represents the total number of customers' loads. The DG, DR resource, and V2G limitation constraints are given in Table A1.

Σ_{i=1}^{Ng} P_gi(t) + Σ_{j=1}^{Nd} DR_j(t) + Σ_{k=1}^{Nv} V2G_k(t) = Σ_{j=1}^{Nc} P_lj(t), t ∈ T. (19)

For a group of transmission branches (Nbr), the power losses are estimated as in (20), in which R_br is the branch resistance and I_br is the corresponding current:

P_LOSS(t) = Σ_{i=1}^{Nbr} I_br² R_br. (20)

For N buses, the voltage and current constraints are given in (21) and (22), respectively:

V_min ≤ V_i ≤ V_max, i ∈ N, (21)

I_i ≤ I_max, i ∈ Nbr. (22)

For S, the subset of transmission line branches connecting bus i and bus k, the following equations estimate the active and reactive powers:

P_j = g_ik V_i² − V_i V_k (g_ik cos(θ_i − θ_k) + b_ik sin(θ_i − θ_k)), i, k ∈ S, (23)

Q_j = −b_ik V_i² − V_i V_k (g_ik sin(θ_i − θ_k) − b_ik cos(θ_i − θ_k)), i, k ∈ S, (24)

where g_ik is the branch conductance and b_ik is the branch susceptance. For DGs, the power generation constraints are specified as (25), in which G is the total number of DGs:

P_gi,min ≤ P_gi ≤ P_gi,max, i ∈ G. (25)

For customer j, the DR constraints are assumed as in (26), in which M_j is the maximum allowable reduction in the DR program. Furthermore, the DR price limits are given in (27), in which P_d(t) is the energy price at hour t:

0 ≤ DR_j ≤ M_j, (26)

P_L ≤ P_d(t) ≤ P_H. (27)

For a parking lot facility, the DR participation is governed by (28), in which V2G_R is the DR participation:

0 ≤ V2G_R ≤ PDF(t) × V2G_max. (28)

#### 3.7 | Degree of satisfaction

In this study, an index is defined in (29) as a measure of the degree of satisfaction of a customer in the DR program. The higher η is, the higher the customer's degree of satisfaction:

η = 1 − (P_a − P_b)/P_b, (29)

where P_a is the total electricity cost after considering DR, and P_b is the total electricity cost before considering DR.

#### 3.8 | DR participation ratio

The customer participation rate represents a customer's contribution to the DR program, as given in (30), in which M_j represents the maximum DR magnitude customer j can allow; it depends on the customer's performance:

PR_j(t) = DR_j(t)/M_j(t), t ∈ T. (30)

#### 3.9 | Implementation of self‐adjusted PSO

Optimal operation of generation and DR resources with V2G offers a way to decrease total operational costs. Figure 5A shows a flow chart of the developed probabilistic SAPSO algorithms. Herein, the developed SAPSO algorithms manage the optimal day‐ahead scheduling of generation, demand resources, and the V2G PDF operation. To verify the economic viewpoint, a load flow analysis must be run. The algorithms start by defining the hyperparameters and then generate a PDF for the possible V2G scenarios.
FIGURE 5 Mathematical formulation description.
(A) Probabilistic modified particle swarm optimization‐based day‐
ahead optimization; and (B) overview of the mathematical
formulation. DR, demand response; V2G, vehicle to grid.
Provided that each cluster has the same SOC centroid, the Naive Bayes classifier is trained for each cluster. The training factors include battery SOC, driver yearly income, education level, and wind speed, and the main goal is to determine the driver's proclivity to charge the battery. To cover the EVs' uncertainty, the K‐means clustering demonstrated in Figure 3 is checked to identify the electric cars that would engage in the DR program. In each cluster, the Naive Bayes classifier is used to predict whether the EVs will charge or discharge.
to find a global best solution that reduces overall costs while
meeting the objective function adequately. Eventually, the
optimal solution is identified throughout the predetermined
number of iterations. The problem formulation is to put
forward the near‐optimal solution for the distributed
system in terms of the economic issues as demonstrated
in Figure 5B. The fundamental concept is to transform such
a stochastic problem into a deterministic concept. To
forecast EV charge/discharge status, the Naïve Bayes
algorithm is employed. The SOC is thus produced at
random. To find the essential clusters, K‐means clustering
is used. To meet the objective function, deterministic
optimizers search for the best or nearly best decision
variables involved in (15). The remaining issues are
related to the DGs; however, they are deterministic in their
nature. Following the activation of the hyper‐parameters,
each of the deterministic optimizers seeks to minimize the
objective function provided in (15), which contains the
decision variables. The decision variables comprise the size
of the DGs and DR participation as well as the parking lot
battery. In turn, both the day‐ahead total expenses, the
load, and the resources are scheduled in a manner to
increase the customer's degree of satisfaction.
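The flow just described can be condensed into a skeleton in which K‐means, Naive Bayes, and the SAPSO optimizer are injected as stand‐in stubs (none of this is the paper's actual Matlab/Python code; the SOC distribution parameters are illustrative):

```python
import random

def day_ahead_schedule(n_evs, cluster, classify, optimize, seed=42):
    """Skeleton of the probabilistic flow in Figure 5A. `cluster`,
    `classify`, and `optimize` stand in for K-means, Naive Bayes,
    and the SAPSO optimizer, respectively."""
    rng = random.Random(seed)
    # 1) Draw SOC scenarios for the parking-lot battery from its PDF.
    socs = [min(max(rng.gauss(0.55, 0.15), 0.0), 1.0) for _ in range(n_evs)]
    # 2) Group the EVs by SOC.
    clusters = cluster(socs)
    # 3) Per cluster, predict charge/discharge; discharging EVs become DR resources.
    resources = [c for c in clusters if classify(c) == "discharge"]
    # 4) The deterministic optimizer sizes the resources to minimize eq. (15).
    return optimize(resources)

# Trivial stubs just to exercise the skeleton:
split = lambda s: [[x for x in s if x > 0.5], [x for x in s if x <= 0.5]]
label = lambda c: "discharge" if c and sum(c) / len(c) > 0.5 else "charge"
size = lambda rs: sum(len(r) for r in rs)

n_dischargeable = day_ahead_schedule(100, split, label, size)
```

Replacing the stubs with real K‐means, a trained classifier, and a SAPSO run reproduces the structure of Pseudocode 1 in Section 4.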
#### 4 | SIMULATED RESULTS
For optimal network functioning in distributed systems,
DGs and utility‐owned V2G aggregators have to be
efficiently rescaled. To develop the day‐ahead sizing
strategy of energy storage systems of EVs parking lots
and DGs in smart distribution systems, compliant DR,
which is regarded as an advanced step towards a smart
grid, is used as a result of their penetration into the
power systems. Applying the developed problem formulation algorithm in the previous section to the modified
IEEE 24 RTS bus network will help find the near‐optimal
times to schedule DG generating units, DR, and V2G
resources in accordance with the objective function in
(15). Metaheuristics are useful methods for finding the
near‐optimal size within the scheduling problem, which
would both assist to lower total expenses and provide an
acceptable degree of satisfaction. Thus, the obtained
results are compared with two algorithms, namely the
mature GA and the traditional exploitive PSO, to
demonstrate the efficacy of the two developed SAPSO
optimizers and address the aforementioned constraints.
In the day‐ahead scheduling, the EVs' states at the parking lots are adjusted in the DR program utilizing
both the developed K‐means clustering and the Naïve
Bayes probabilistic techniques. The GA, PSO, SAPSO#1,
and SAPSO#2 models are constructed using Matlab
2017a with a 1‐h time step. Python is used to investigate
the Naïve Bayes classifier and K‐means clustering. The
Appendix section includes the developed SAPSO#1 and
SAPSO#2's parameters.
The following investigation scenarios attempt to
reduce total operating expenses by including day‐ahead
scheduling with an acceptable level of satisfaction.
Initially, the case study is presented followed by
investigation of the effectiveness of the developed
optimizers. Overall, the decision variables are the
DGs, DR resources, and parking lots sizing in kW.
Applying the four optimizers results in a reduction of
overall costs with an acceptable degree of satisfaction,
scheduling both DGs and V2G cars, and comparing the
execution times of the individual optimizers. The single
optimizers' hourly energy production shares are demonstrated to provide a fair comparison. Pseudocode 1
illustrates the general approach to determine the DGs,
DR, and V2G status depending on the load profile and
energy costs. However, the charging or discharging
mode of the EVs is determined by K‐means clustering
and the Naive Bayes techniques. In addition, deterministic optimizers are used to estimate the objective
function. In turn, they are then given the results of the
charging and discharging modes, and they go on to
obtain the sizing decision of the variables in accordance
with the varied profitability prices.
#### 4.1 | Case study
Figure 6 shows a modified one‐line diagram of the IEEE
24 reliability test system. It consists of 11 generation units
at Buses 1, 2, 7, 13, 14, 15, 16, 18, 21, 22, and 23. Bus#13 is
considered the slack bus. The DR buses are located at 3,
4, 5, 8, 9, 10, 19, and 20. Among the DR buses, V2G
charging stations are assumed at Buses 3 and 19 as
shown in Figure 6 with red circles. The developed DR
scheduling has been applied to the IEEE 24‐bus RTS bus
network. The electrical demand for a day‐ahead is given
in Figure 7 where the maximum loading is 2650.5 MW
and peaks at Hour 18. The locations of the DGs, DR
resources, and V2G resources are given in Figure 6.
#### 4.2 | Effectiveness of the developed algorithms
Figure 8 demonstrates a convergence behavior test of the
objective function for the developed optimizers. The GA takes longer to settle, while the other developed algorithms show satisfactory performance.
Yet, it is obvious that the developed SAPSO#1 is a strong
competitor to the other algorithms. It converges fast
towards the near‐optimal point.
Each optimizer tries to identify the best amount of DR
at a specific time in response to DR command signals. The
size of the demand profile before and after the DR program
remains unchanged, but the DR reaction adjusts the flexible
load timespan, resulting in a more cost‐effective solution.
FIGURE 6 Modified IEEE 24 reliability test
system.
FIGURE 7 Typical system hourly demand profile.
FIGURE 8 Objective function variation. GA, genetic algorithm; PSO, particle swarm optimization.
FIGURE 9 Demand profile before and after demand response
(DR) program via SAPSO#1.
The load profile before and after the DR program via
SAPSO#1 is demonstrated in Figure 9, in which the
algorithm moves the load during peak hours into low
demand requirements times. Furthermore, the total costs of
the developed algorithms are shown in Figure 10.
The developed optimizers are based on the PSO, which
has a robust exploitation feature. Besides, the major target of
this section is to verify the effectiveness of the developed
optimizers compared to the GA and the exploitive PSO.
Accordingly, the developed PSO‐based optimizers are
expected to demonstrate a satisfactory exploitation feature
to meet the objective function which is verified in Figure 8
FIGURE 10 Total costs before and after
considering the demand response (DR)
program.
TABLE 1 Comparison results of the individual optimizers.

| Algorithm | Maximum voltage (pu) | Minimum voltage (pu) | Degree of satisfaction | Time of simulation (s) | No. of iterations |
|---|---|---|---|---|---|
| GA | 1.0000 | 0.9336 | 0.7021 | 92 | 100 |
| PSO | 1.0000 | 0.9337 | 0.9211 | 86 | 100 |
| SAPSO#1 | 1.0000 | 0.9335 | 1.0157 | 89 | 100 |
| SAPSO#2 | 1.0000 | 0.9336 | 0.9731 | 85 | 100 |

Abbreviations: GA, genetic algorithm; PSO, particle swarm optimization.
and Table 1, respectively. Nonetheless, both the total cost
reduction and the execution time are considered in the
current study to judge which optimizer is more effective. The
comparison cost reduction results are given in Table 1.
SAPSO#1 shows a reduction of $7.8k (i.e., 506.3–498.5),
which represents a reduction of 1.65% with respect to the
“After the DR” case. Meanwhile, SAPSO#2 shows a cost
decrease of $13.6k (2.7%), which represents 29.8% when
compared to the GA considering the “before the DR”
example. Consequently, the lowest day‐ahead costs are
provided by SAPSO#1, which represents 24% compared to
the GA considering the “before the DR” case. In addition, it
offers the greatest degree of satisfaction. However, the
shortest simulation time is provided by SAPSO#2, which records 85 s on a PC with an Intel Core i5‐7200U CPU at 2.5 GHz.
#### 4.3 | Optimal PEV charging with DR
The main target of optimally scheduling the PEVs to the
utility grid is to determine the optimal charging profile
and the corresponding daily benefit cost, which comes
back to the customer through the participation in the DR
issue according to Section 3 and the constraints in
Table A1. With the assumption that the parking lot is a
FIGURE 11 Parking lot battery state of charge (SOC).
large battery, a PDF with a bell‐shaped curve for V2G is
assumed for the battery SOC as in Figure 11.
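For illustration only (the mean and spread below are assumptions, not values read from Figure 11), such a bell‐shaped SOC PDF, together with the ±20% hourly variability discussed below, can be sampled as:

```python
import numpy as np

rng = np.random.default_rng(7)

# Bell-shaped PDF for the parking-lot battery SOC; the mean/std are
# illustrative guesses, clipped to the 20%-95% per-vehicle operating
# range mentioned later in this section.
soc = np.clip(rng.normal(loc=0.55, scale=0.15, size=1000), 0.20, 0.95)

# +/-20% hourly variability around each SOC observation:
soc_hourly = np.clip(soc * rng.uniform(0.8, 1.2, size=soc.size), 0.0, 1.0)
```

Each sampled vector would then be fed to the clustering step to decide which portion of the virtual battery joins the DR program.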
The scheduling method in the current study is utilized in
the distributed systems primarily to focus on the electric
automobile operations in which the EVs battery status is the
point of interest. The main purpose is to minimize the total
energy costs. On the other hand, the PSO‐based techniques
in the current study are dependent on training to identify the
V2G state, which is based on the charging and discharging
rates of the EVs that fluctuate arbitrarily over time. For this
reason, Naive Bayes is trained on randomized data, as given in Table A2 in the Appendix. Thus, to consider the
uncertainty within the parking lot battery hourly status,
a ±20% variability around each SOC observation of the PDF
is assumed as shown in Figure 12A. The parking lot battery
is assumed to be a big virtual battery. Since it is virtual, the
parking lot battery is therefore made simpler by a PDF. The
number of vehicles entering and exiting the parking lots,
which essentially meets the V2G typical operating SOC
requirements, influences this virtual battery capacity.
Ultimately, the individual automobiles operate realistically
(between 20% and 95%), but the virtual battery capacity is
dependent on the incoming and departing vehicles. Herein,
the goal is to determine whether the parking lot is a load
demand or is considered a distributed energy resource. The
investigation of the Silhouette score and the inertia in
Figure 12B,C, respectively, illustrate that the Silhouette score
decreases fairly from 0.54 to approximately 0.4 at 9 clusters.
Since the inertia curve has an elbow shape, it might help
figure out how many clusters are satisfactory. Fortunately,
the investigation reveals that 10 clusters slightly alter the
inertia curve compared to 9 clusters; therefore, 9 clusters are
chosen in this study.
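The elbow test behind this choice can be reproduced in miniature with a plain 1‐D K‐means on toy SOC data (a library such as scikit‐learn would normally be used; this self‐contained version is only a sketch):

```python
import numpy as np

def kmeans_inertia(x, k, iters=50, seed=0):
    """Plain Lloyd k-means on 1-D data; returns the within-cluster sum
    of squares ('inertia') used for the elbow test in Figure 12C."""
    rng = np.random.default_rng(seed)
    centroids = rng.choice(x, size=k, replace=False)  # random init
    for _ in range(iters):
        # Assign each point to its nearest centroid, then recompute means.
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        centroids = np.array([x[labels == j].mean() if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
    return float(((x - centroids[labels]) ** 2).sum())

# Two well-separated SOC blobs: inertia should drop sharply at k = 2.
soc = np.concatenate([0.30 + 0.001 * np.arange(50),
                      0.80 + 0.001 * np.arange(50)])
inertias = [kmeans_inertia(soc, k) for k in (1, 2, 3)]
```

Plotting `inertias` against k and looking for the elbow mirrors the procedure used to settle on 9 clusters.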
Figure 13 depicts the near‐optimal clusters, with each
centroid represented by the symbol ×. The Naive Bayes
classifier is used to predict the status of an EV battery either
to charge or discharge according to pretrained historical data
as in Appendix section. In their participation in the DR
program, the drivers with “YES” admit selling energy to the
utility grid, and the others are considered as hourly loads.
The battery SOC range is divided into three main statuses:
low, medium, and high. Such statuses are converted to zeros
and ones using the “OneHotEncoder” technique accessible
in Python packages. Likewise, additional factors that
influence the battery charging are transformed into binary
statuses. With the Gaussian fitting option, Naive Bayes
prediction achieves an accuracy of 80%. Assuming the EVs'
battery capacity is in the range of 50 kWh and assuming
1000 total number of EVs at the relevant buses, the
aggregation of the predicted only cars might sell energy to
the utility grid according to the PDF perspective. Other cars
are considered as load demands. Based on the SOC,
Figure 14 shows the optimal charging profile of V2G at
Bus#3. The hourly benefit‐cost for Buses#3 and #19 are
given in Figures 15 and 16, respectively. Based on the
training data in Table A2, the near‐optimal V2G charging is
demonstrated in Figure 14 at Bus#3 wherein a parking lot
exists. The act of charging indicates that the owners of EVs
pay money to acquire electricity from the grid. However,
Figures 15 and 16 show how the V2G may contribute to the
DR program at different times, which lowers the total
FIGURE 12 K‐means indices: (A) parking lot variability; (B) Silhouette score; and (C) inertia.

FIGURE 13 Near‐optimal vehicle to grid clustering.
expenses by the amounts therein shown. Summing such
day‐ahead benefit costs is shown in Table 2, which are
saved from the customers' side. The findings show that
SAPSO#1 and SAPSO#2 save around $117.7 (i.e., $61.05 + $56.62) and $112, respectively. In turn, SAPSO#1 exhibits a revenue improvement of about 5% over SAPSO#2. It is clear that SAPSO#1 strengthens the customer benefit‐cost relative to SAPSO#2.
FIGURE 14 Optimal vehicle to grid
charging at Bus#3.
FIGURE 15 Benefit‐cost of vehicle to grid
charging at Bus#3 (blue: SAPSO#1, red:
SAPSO#2).
FIGURE 16 Benefit‐cost of vehicle to grid
charging at Bus#19 (blue: SAPSO#1, red:
SAPSO#2).
#### 4.4 | Impact of scheduling DR and V2G resources upon running cost
Figures 17–19 show the system performances involving
DR and V2G resources via SAPSO#1 from 8 to 21 h.
Once, the DR signal is received, the algorithm seeks the
optimal DR value to minimize the total costs. It can be
observed that the demand resources decrease the overall
running costs The optimal scheduling enables demand
TABLE 2 Net daily benefit‐cost of V2G.

| Algorithm | Bus#3 ($) | Bus#19 ($) |
|---|---|---|
| SAPSO#1 | 61.05189 | 56.62496 |
| SAPSO#2 | 58.18773 | 53.84188 |
FIGURE 17 Demand response (DR) resources scheduling with
SAPSO#1.
FIGURE 18 Vehicle to grid resources scheduling with
SAPSO#1. EV, electric vehicle.
FIGURE 19 Running cost with demand response (DR).
FIGURE 20 Running cost versus participation rate at Hour 18.
resources to become competitive to the DGs with the
highest costs. Thus, the effectiveness of the developed
SAPSO is verified.
#### 4.5 | Impact of DR participation ratio
The customer participation rate represents a customer
contribution to the DR program as given in Equation (30).
Figure 20 shows the echelon of the daily running costs
versus a customer participation rate via SAPSO#1 at Hour
18, which meets the maximum loading conditions. Below
PR = 0.5, DR has a slight impact on the running costs. It can be concluded that, above PR = 0.5, DR scheduling improves the net hourly cost.
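The two customer‐side indices behind this comparison, eqs. (29) and (30), reduce to one‐liners (sample values below are illustrative only):

```python
def satisfaction(p_after, p_before):
    """Degree-of-satisfaction index, eq. (29); values above 1 indicate
    a net cost reduction from DR (cf. Table 1)."""
    return 1 - (p_after - p_before) / p_before

def participation_ratio(dr_jt, m_jt):
    """Customer participation rate PR_j(t), eq. (30): scheduled DR over
    the maximum reduction M_j(t) the customer allows at hour t."""
    return dr_jt / m_jt
```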
#### 5 | DISCUSSION
Table 3 compares the outcomes obtained by a few
schedulers that produced adequate performance in the
literature. In each study, a fundamental situation is
taken into account, from which the data in the second
column was derived. Despite the fact that the figures in
Table 3 vary based on the current energy prices, the
characteristics of the distributed systems under consideration, or at the level of microgrids, RESs, and
DGs, it is clear that the developed optimizer performs
satisfactorily. Yet, the findings in Table 3 show that to
achieve greater cost savings, a more sophisticated
stochastic approach, such as reinforcement learning,
is required. From the above case studies, the following
notices can be drawn:
1‐ Both of the developed controllers show better performance compared with GA and the conventional PSO.
2‐ SAPSO#2 outperforms the other algorithms in terms of the time consumed for the optimal scheduling processing.
TABLE 3 Optimizers' effectiveness.

| Method | Cost saving (%) |
|---|---|
| Greedy[55] | 85 |
| GA[56] | 5.9 |
| Mixed integer linear programming[57] | 29 |
| Markov chain[29] | 24 |
| First‐in‐first‐out[58] | 3.4 |
| SAPSO#1 | 24 |
| SAPSO#2 | 29.8 |

Abbreviation: GA, genetic algorithm.
3‐ SAPSO#1 has the highest degree in terms of
convergence processing.
4‐ Optimal scheduling of DR and PEV resources enables
them to be competitors to DGs with the operational
highest costs.
#### 6 | CONCLUSION
The day‐ahead sizing of the flexible distributed generators
and resilient EV aggregators in distributed networks is
investigated in this study. Two modified probabilistic SAPSO algorithms, integrating K‐means clustering and Naive Bayes classifiers, were utilized to evaluate the optimal day‐ahead scheduling of generation and demand response with V2G participation. The optimal scheduling was conducted to minimize the total operational costs of generation, DR, and V2G resources. The results show that the running costs decrease as the customer participation rate increases. The K‐means clustering technique was utilized to divide the EVs into clusters according to their batteries' SOC on arrival. The Naive Bayes classifier was employed to predict the EVs that participate in the day‐ahead scheduling. From the above development and discussions, the following conclusions can be drawn: (1) the developed algorithms allow the optimal scheduling of generation and demand response with V2G participation in an economic manner; (2) the effectiveness of the developed SAPSO#1 in minimizing the total running costs was demonstrated and compared with other algorithms; (3) the algorithm is effective and can be incorporated into optimal scheduling problems under different operating conditions to minimize total operating costs and maximize net savings.
For the purpose of future work, this study could be
extended within resilient interconnected microgrids with
more advanced machine learning techniques such as
reinforcement learning‐based algorithms. A future
study might also look at the impact of façade thermal
photovoltaic systems for storing green hydrogen and the
versatile V2G energy storage batteries.
ACKNOWLEDGMENT
This study was supported by the Department of Electrical
Engineering and Automation, School of Electrical
Engineering, Aalto University, Espoo, Finland.
ORCID
Farag K. Abo‐Elyousr [https://orcid.org/0000-0002-](https://orcid.org/0000-0002-1692-5003)
[1692-5003](https://orcid.org/0000-0002-1692-5003)
Adel M. Sharaf [https://orcid.org/0000-0002-4147-0901](https://orcid.org/0000-0002-4147-0901)
Mohamed M. F. Darwish [http://orcid.org/0000-0001-](http://orcid.org/0000-0001-9782-8813)
[9782-8813](http://orcid.org/0000-0001-9782-8813)
Matti Lehtonen [https://orcid.org/0000-0002-9979-7333](https://orcid.org/0000-0002-9979-7333)
Karar Mahmoud [http://orcid.org/0000-0002-6729-6809](http://orcid.org/0000-0002-6729-6809)
REFERENCES
1. Kwang HG, Kim J. Optimal combined scheduling of generation
and demand resource constraints. Appl Energy. 2012;96:161‐170.
2. Alazemi FZ, Hatata AY. Ant lion optimizer for optimum
economic dispatch considering demand response as a visual
power plant. Electr Power Compon Syst. 2019;47:629‐643.
3. Jalili H, Siano P. Modeling of unforced demand response
programs. Int J Emerg Electr Power Syst. 2021;22:233‐241.
4. Sharifi R, Fathi SH, Moghaddam A, Guerrero JM,
Vahidinasab V. An economic customer‐oriented demand
response model in electricity market. Paper presented at:
International Conference on Industrial Technology (ICIT),
IEEE conference; 2018:1149‐1153.
5. Bie Z, Xie H, Hu G, Li G. Optimal scheduling of power
systems considering demand response. J Mod Power Syst
Clean Energy. 2016;4(2):180‐187.
6. Zheng W, Wu W, Zhang B, Lin C. Distributed optimal
residential demand response considering constraints of
unbalanced distribution networks. IET Gener Transm Distr.
2018;12(9):1970‐1979.
7. Aalami HA, Moghaddam P, Yousefi GR. Modeling and
prioritizing demand response programs in power markets.
Electr Power Syst Res. 2010;80:426‐435.
8. Campos J, Csontos C, Harmat A, Csüllög G, Munkácsy B. Heat
consumption scenarios in the rural residential sector: the
potential of heat pump‐based demand‐side management for
sustainable heating. Energy Sustain Soc. 2020;10:40.
9. Chau S, Kirschen D. Quantifying the effect of demand
response on electricity markets. IEEE Trans Power Syst.
2009;24:1199‐1207.
10. Shayesteh E, Elias M, Kohan N, Moghaddam MP. Security‐
based demand response allocation. Paper presented at: IEEE
Power & Energy Society General Meeting. 2009.
11. Shareef H, Maytham SA, Mohammad A, Al‐Hassan E. Review
on home energy management system considering demand
response, smart technologies, and intelligent controllers. IEEE
Access. 2018;6:24498‐24509.
12. Muller T, Most D. Demand response potentials: available
when needed? Energy Policy. 2018;115:181‐198.
13. Asadinejad A, Rahimpour A, Tomsovic K, Qi H, Chen C.
Evaluation of residential customer elasticity for incentive based
demand response. Electric Power Syst Res. 2018;158:26‐36.
14. Srivastava A, Passal SV, Laes E. Assessing the success of
electricity demand response programs: a meta‐analysis.
Energy Res Soc Sci. 2018;40:110‐117.
15. Nan S, Zhou M. Optimal residential community demand
response scheduling in smart grid. Appl Energy. 2018;210:
1280‐1289.
16. Viana MS, Junior GM, Udaeta MEM. Analysis of demand
response and photovoltaic distributed generation as resources
for power utility planning. Appl Energy. 2018;217:456‐466.
17. Huo D, Gu C, Yang G, Le Blond S. Combined domestic demand
response and energy hub optimization with renewable generation uncertainty. Energy Procedia. 2017;142:1985‐1990.
18. Basu M. Fast convergence evolutionary programming for
multi‐area economic dispatch. Electr Power Compon Syst.
2017;45:1629‐1637.
19. Elnozahy A, Ramadan HS, Abo‐Elyousr FK. Efficient metaheuristic Utopia‐based multi‐objective solutions of optimal battery‐
mix storage for microgrids. J Clean Prod. 2021;303:127038.
20. Goudarzi S, Hassan W, Anisi MH, et al. ABC‐PSO for vertical
handover in heterogeneous wireless networks. Neurocomputing.
2017;256:63‐81.
21. Ghorbani N, Kasaeian A, Toopshekan A, Bahrami L,
Maghami A. Optimizing of a hybrid wind‐PV‐Battery system
using GA‐PSO and MOPSO for reducing cost and increasing
reliability. Energy. 2018;154:581‐591.
22. Sharaf AM, El‐Gammal AAA. An optimal voltage and energy
utilization for a stand‐alone wind energy conversion scheme
wecs based on particle swarm optimization PSO. Int J Power
Eng Green Technol. 2010;1:51‐64.
23. Sharaf AM, El‐Gammal AAA. A novel error driven dynamic
tri‐loop controller based on multistage particle swarm
optimization‐MSPSO for industrial PMDC motor drives. Paper
presented at: 6th International Conference‐Workshop Compatibility and Power Electronics; 2009.
24. Sharaf AM, El‐Gammal AAA. Particle swarm optimization PSO: a
new tool in power system and electro technology. In:
Panigrahi BK, Abraham A, Das S, eds. Computational Intelligence
in Power Engineering. Vol 302. Springer‐Verlag; 2010:235‐294.
25. Kerdphol T, Qudaih Y, Mitani M. Optimum battery energy
storage system using PSO considering dynamic demand response
for microgrids. Int J Electr Power Energy Syst. 2016;83:58‐66.
26. Hashemi S, Aghamohammadi MR, Sangrody H. Restoring
desired voltage stability security margin based on demand
response using load‐to source impedance ratio index and PSO.
Int J Electr Power Energy Syst. 2018;67:143‐151.
27. Gough R, Dickerson C, Rowley P, Walsh C. Vehicle‐to‐grid
feasibility: a techno‐economic analysis of EV‐based energy
storage. Appl Energy. 2017;192:12‐23.
28. Hoehne CG, Chester MV. Optimizing plug‐in electric vehicle
and vehicle‐to‐grid charge scheduling to minimize carbon
emissions. Energy. 2016;115:646‐657.
29. Mortaz E, Valenzuela J. Microgrid energy scheduling using
storage from electric vehicles. Electr Power Syst Res. 2017;143:
554‐562.
30. Emara D, Ezzat M, Abdelaziz AY, Mahmoud K, Lehtonen M,
Darwish MMF. Novel control strategy for enhancing microgrid
operation connected to photovoltaic generation and energy storage
systems. Electronics. 2021;10(11):1261‐1278.
31. Abubakr H, Guerrero JM, Vasquez JC, et al. Adaptive LFC
incorporating modified virtual rotor to regulate frequency and
tie‐line power flow in multi‐area microgrids. IEEE Access.
2022;10:33248‐33268.
32. Elsisi M, Mahmoud K, Lehtonen M, Darwish MMF.
Effective nonlinear model predictive control scheme tuned
by improved NN for robotic manipulators. IEEE Access.
2021;9:64278‐64290.
33. Abubakr H, Vasquez JC, Mahmoud K, Darwish MMF,
Guerrero JM. Robust PID‐PSS Design for Stability Improvement of Grid‐Tied Hydro Turbine Generator. Paper presented
at: 22nd International Middle East Power Systems Conference;
2021:607‐612.
34. Ali MN, Soliman M, Mahmoud K, Guerrero JM, Lehtonen M,
Darwish MMF. Resilient design of robust multi‐objectives PID
controllers for automatic voltage regulators: D‐decomposition
approach. IEEE Access. 2021;9:106589‐106605.
35. Mkahl R, Nait‐sidi‐Moh A, Gaber J, Wack M. An optimal
solution for charging management of electric vehicles. Electr
Power Syst Res. 2017;146:177‐188.
36. Bin‐Humayd AS, Bhattacharya K. Distribution system planning to accommodate distributed energy resources and PEVs.
Electr Power Syst Res. 2017;145:1‐11.
37. Faddel S, AL‐Awami AT, Abido MA. Fuzzy optimization for
the operation of electric vehicle parking lots. Electr Power Syst
Res. 2017;145:166‐174.
38. Sadat H. Power System Analysis. 2nd ed. McGraw Hill; 2004.
39. Abbas AS, El‐Sehiemy RA, Abou El‐Ela A, et al. Optimal
harmonic mitigation in distribution systems with inverter
based distributed generation. Appl Sci. 2021;11(2):774‐790.
40. Ali ES, El‐Sehiemy RA, Abou El‐Ela AA, Mahmoud K,
Lehtonen M, Darwish MMF. An effective Bi‐Stage method for
renewable energy sources integration into unbalanced distribution systems considering uncertainty. Processes. 2021;9(3):
471‐486.
41. Ordoudis C, Pinson P, Morales G, Juan M, Zugno M. An
Updated Version of the IEEE RTS 24‐Bus System for Electricity
Market and Power System Operation Studies. Technical
University of Denmark; 2017.
42. Yustaa JM, Khodrb HM, Urdanet AJ. Optimal pricing of
default customers in electrical distribution system: effect
behavior performance of demand response models. Electr
Power Syst Res. 2007;77:4468‐4558.
43. Average retail price of electricity to ultimate customers by
end‐use sector. https://www.eia.gov/electricity/monthly/epm_table_grapher.php?t=epmt_5_6_a
44. Kennedy J, Eberhart RC. Particle swarm optimization. Paper
presented at: Proceedings of IEEE International Conference
on Neural Networks; 1995:1942‐1948.
45. Chatterjee A, Siarry P. Nonlinear inertia weight variation for
dynamic adaptation in particle swarm optimization. Comput
Oper Res. 2006;33:859‐871.
46. Sharaf AM, Mavalizadeh H, Ahmadi A, Shayanfar H,
Gandoman FH, Homaee O. Chapter 3 Application of new
fast, efficient‐self adjusting PSO‐search algorithms in power
systems' studies. Classical and Recent Aspects of Power System
Optimization. 1st ed. Academic Press; 2018:33‐61.
47. Shahapure KR, Nicholas C. Cluster quality analysis using
silhouette score. Paper presented at: IEEE 7th International
Conference on Data Science and Advanced Analytics (DSAA);
2020:747‐748.
48. Yudhana A, Sulistyo D, Mufandi I. GIS‐based and Naïve Bayes
for nitrogen soil mapping in Lendah, Indonesia. Sens Bio‐Sens
Res. 2021;33:100435.
49. van Heuveln K, Ghotge R, Annema JA, van Bergen E,
van Wee B, Pesch U. Factors influencing consumer acceptance
of vehicle‐to‐grid by electric vehicle drivers in the Netherlands. Travel Behav Soc. 2021;24:34‐45.
50. Al‐Gabalawy M, Mahmoud K, Darwish MMF, Dawson JA,
Lehtonen M, Hosny NS. Reliable and robust observer for
simultaneously estimating state‐of‐charge and state‐of‐health
of LiFePO4 batteries. Appl Sci. 2021;11(8):3609‐3630.
51. Zhou S, Han Y, Yang P, et al. An optimal network constraint‐
based joint expansion planning model for modern distribution
networks with multi‐types intermittent RERs. Renew Energy.
2022;194:137‐151.
52. Prashant, Sarwar M, Siddiqui AS, Ghoneim SSM,
Mahmoud K, Darwish MMF. Effective transmission congestion management via optimal DG capacity using hybrid
swarm optimization for contemporary power system operations. IEEE Access. 2022;10:71091‐71106.
53. Abubakr H, Vasquez JC, Mahmoud K, Darwish MMF,
Guerrero JM. Comprehensive review on renewable energy
sources in Egypt—current status grid codes and future vision.
IEEE Access. 2022;10:4081‐4101.
54. Wu H, Niu D. Study on influence factors of electric vehicles
charging station location based on ISM and FMICMAC.
Sustainability. 2017;9(4):484.
55. Abdel‐Hakim AE, Abo‐Elyousr FK. Heuristic greedy scheduling of electric vehicles in vehicle‐to‐grid microgrid owned
aggregators. Sensors. 2022;22(6):2408.
56. Moradijoz M, Moghaddam MP, Haghifam MR, Alishahi E.
A multi‐objective optimization problem for allocating parking
lots in a distribution network. Int J Electr Power Energy Syst.
2013;46:115‐122.
57. Mortaz E. Portfolio diversification for an intermediary energy
storage merchant. IEEE Trans Sustain Energy. 2020;11(3):
1539‐1547.
58. Fernandez GS, Krishnasamy V, Kuppusamy S, et al. Optimal
dynamic scheduling of electric vehicles in a parking lot using
particle swarm optimization and shuffled frog leaping algorithm. Energies. 2020;13:6384.
APPENDIX A
1‐ The relevant parameters for generating, DR, and V2G
aggregation units are given in Table A1.[1,2]
2‐ SAPSO#1 parameters: inertia weight (W0) = 0.2,
population = 100, c1 = 7.5, c2 = 7.5, d0 = 0.7, inertia
weight damping ratio = 1, and no. of iterations
(Nmax) = 100.
3‐ SAPSO#2 parameters: inertia weight (W0) = 0.4,
population = 100, c1 = 1.5, c2 = 1.5, inertia weight
damping ratio = 0.99, and no. of iterations
(Nmax) = 100.
4‐ The bell‐shaped probability density function, which is
a Matlab‐based function, and the following parameters are used: a = 7, b = 5, c = 14.
5‐ The following Table A2 gives the historical data of
EVs within clusters. It should be noted that this table
is created randomly.
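The SAPSO parameter sets listed above can be exercised with a minimal particle-swarm loop. The sketch below is plain Python with an illustrative objective and bounds of my own choosing (the SAPSO variants in the paper add self-adjusting behaviour that is not reproduced here); it uses the SAPSO#2-style values W0 = 0.4, c1 = c2 = 1.5, and inertia-weight damping ratio 0.99:

```python
import random

random.seed(1)  # reproducible run for this illustration

def pso(objective, bounds, pop=40, iters=100,
        w0=0.4, c1=1.5, c2=1.5, damping=0.99):
    """Minimal PSO: minimize `objective` over box `bounds`."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    vel = [[0.0] * dim for _ in range(pop)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(pop), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w = w0
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # clamp to the search box
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
        w *= damping  # inertia-weight damping each iteration
    return gbest, gbest_val

# Illustrative objective: a quadratic fuel-cost-style curve a*P^2 + b*P
# using the coefficients of the bus-1 DG unit from Table A1.
best, val = pso(lambda p: 0.008 * p[0]**2 + 18.325 * p[0],
                bounds=[(0.0, 50.0)])
```

The damping ratio shrinks the inertia weight every iteration, shifting the swarm from exploration toward exploitation; this is the role the "inertia weight damping ratio" plays in the parameter lists above.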
TABLE A1 Generating, DR, V2G units characteristics.

Unit type  Bus no.  α_i     β_i     γ_i   STC_i  Pmin (MW)  Pmax (MW)
DG         1        0.008   18.325  30    40     0          50
DG         2        0.0085  25.324  20    20     0          20
V2G        3        0       0.117   0     0      0          50
DR         4        0.02    15.12   0     0      0.5        10
DR         5        0.034   15.12   0     0      0.5        8
DG         7        0.077   30.12   0     0      75         350
DR         8        0.114   17.1    0     0      0.5        2.3
DR         9        0.034   35.2    0     0      0.5        9
DR         10       0.034   18.2    0     0      0.5        9
DG         13       0.075   10.546  30    80     200        590
DG         14       0.0075  8.02    50    150    13         60
DG         15       0.008   6.34    50    140    54         155
DG         16       0.005   4.123   100   300    54         155
DG         18       0.001   1.213   400   800    100        400
V2G        19       0       0.117   0     0      0          50
DR         20       0.074   20.1    0     0      0.5        12
DG         21       0.002   2.678   180   400    100        400
DG         22       0.002   3.231   150   400    200        300
DG         23       0.005   3.451   100   300    108        310

Abbreviations: DG, distributed generation; DR, demand response; V2G, vehicle to grid.
TABLE A2 Status of EVs based on historical data.
No.  SOC     Income  Education level  Wind    Car discharging
1 Medium High High Weak No
2 Medium High High Strong No
3 Medium High High Strong No
4 High High High Weak Yes
5 Low Mild High Weak Yes
6 Low Low Normal Weak Yes
7 Low Low Normal Strong No
8 High Low Normal Strong Yes
9 Medium Mild High Weak No
10 Medium Low Normal Weak Yes
11 Low Mild Normal Weak Yes
12 Medium Mild Normal Strong Yes
13 High Mild High Strong Yes
14 High High Normal Weak Yes
15 Rain Mild High Strong No
16 High High Normal Weak Yes
17 High High Normal Weak Yes
18 High Low Normal Strong Yes
19 High Low Normal Strong Yes
20 Low Mild High Weak Yes
21 High High Normal Weak Yes
22 High High Normal Weak Yes
23 High High Normal Weak Yes
24 Low Mild High Weak Yes
Abbreviations: EV, electric vehicle; SOC, state of charge.
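Table A2 is the kind of categorical history a naïve Bayes classifier (cf. reference 48) can be trained on to predict the car-discharging decision. The following hedged sketch hand-codes a Laplace-smoothed categorical naïve Bayes over a subset of the rows above; the feature ordering (SOC, Income, Education level, Wind) mirrors the table, but the classifier itself is illustrative and not the paper's implementation:

```python
import math
from collections import Counter, defaultdict

# A few rows of Table A2: (SOC, Income, Education level, Wind) -> discharging?
rows = [
    (("Medium", "High", "High", "Weak"), "No"),
    (("Medium", "High", "High", "Strong"), "No"),
    (("High", "High", "High", "Weak"), "Yes"),
    (("Low", "Mild", "High", "Weak"), "Yes"),
    (("Low", "Low", "Normal", "Weak"), "Yes"),
    (("Low", "Low", "Normal", "Strong"), "No"),
    (("High", "Low", "Normal", "Strong"), "Yes"),
    (("Medium", "Mild", "High", "Weak"), "No"),
]

def train(rows):
    prior = Counter(label for _, label in rows)   # class counts
    cond = defaultdict(Counter)                   # (feature index, label) -> value counts
    for feats, label in rows:
        for i, v in enumerate(feats):
            cond[(i, label)][v] += 1
    return prior, cond

def predict(prior, cond, feats, alpha=1.0):
    total = sum(prior.values())
    best_label, best_score = None, float("-inf")
    for label, n in prior.items():
        score = math.log(n / total)
        for i, v in enumerate(feats):
            counts = cond[(i, label)]
            # Laplace smoothing over the values seen for this feature/label
            score += math.log((counts[v] + alpha) /
                              (n + alpha * (len(counts) + 1)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

prior, cond = train(rows)
label = predict(prior, cond, ("High", "High", "Normal", "Weak"))
```

Smoothing matters here: several feature values (e.g. SOC = "High" among the "No" rows) never occur for one class, and without it a single zero count would veto the whole class.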
-----
|
{
"disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1002/ese3.1264?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1002/ese3.1264, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://research.aalto.fi/files/87131222/Energy_Science_Engineering_2022_Abo_u2010Elyousr_Optimal_scheduling_of_DG_and_EV_parking_lots_simultaneously_with_demand.pdf"
}
| 2,022
|
[] | true
| 2022-08-09T00:00:00
|
[] | 17,214
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/03378f6afdfb4535a7df86287ddf7efc44df2fcc
|
[
"Computer Science"
] | 0.912381
|
Extending cryptographic logics of belief to key agreement protocols
|
03378f6afdfb4535a7df86287ddf7efc44df2fcc
|
Conference on Computer and Communications Security
|
[
{
"authorId": "1748222",
"name": "P. V. Oorschot"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Int Workshop Cogn Cell Syst",
"CCS",
"Comput Commun Secur",
"CcS",
"International Symposium on Community-centric Systems",
"International Workshop on Cognitive Cellular Systems",
"Conf Comput Commun Secur",
"Comb Comput Sci",
"Int Symp Community-centric Syst",
"Combinatorics and Computer Science",
"Circuits, Signals, and Systems",
"Computer and Communications Security",
"Circuit Signal Syst"
],
"alternate_urls": null,
"id": "73f7fe95-b68b-468f-b7ba-3013ca879e50",
"issn": null,
"name": "Conference on Computer and Communications Security",
"type": "conference",
"url": "https://dl.acm.org/conference/ccs"
}
| null |
# Extending Cryptographic Logics of Belief to Key Agreement Protocols
#### (Extended Abstract)
Paul C. van Oorschot
Bell-Northern Research P.O. Box 3511, Station C, Ottawa, Canada K2C 1Y7 paulv@bnr.ca
address until July 31 1994:
School of Computer Science, Carleton University, Ottawa, Canada K1S 5B6 (paulv@scs.carleton.ca)
#### Abstract. The authentication logic of Burrows, Abadi and
Needham (BAN) provided an important step towards rigourous
analysis of authentication protocols, and has motivated several
subsequent refinements. We propose extensions to BAN-like logics which facilitate, for the first time, examination of public-key
based authenticated key establishment protocols in which both parties contribute to the derived key (i.e. key agreement protocols).
Attention is focussed on six distinct generic goals for authenticated
key establishment protocols. The extended logic is used to analyze
three Diffie-Hellman based key agreement protocols, facilitating
direct comparison of these protocols with respect to formal goals
reached and formal assumptions required.
#### 1 Introduction
Authentication protocols serve a fundamental role in the cryptographic security of many systems, including the control of access
to restricted areas, computer systems, and wireless telecommunications systems, and authentication in electronic banking transactions. The history of authentication protocols has highlighted the
extreme difficulty of designing efficient authentication protocols
which contain neither redundancies nor security flaws. The literature contains numerous examples of published protocols whose
supposed correctness, as established by ad-hoc techniques and
informal arguments, proved fleeting as subsequent examination
revealed serious security weaknesses (e.g. see [3]). This has suggested the need for more rigourous methods to examine the correctness of authentication and associated key exchange protocols.
To this end, Burrows, Abadi and Needham defined a logic of
authentication (BAN) to allow formal modelling and exploration
of beliefs in such protocols [3], [4]. Gaarder and Snekkenes (GS)
extended the logic to allow further reasoning about public-key
based protocols, and to capture the notion of “duration” related to
timestamps; they then carried out a detailed analysis [11] of the
public-key based X.509 two-way authentication protocol [5].
Related cryptographic logics of belief have been proposed to
address recognized deficiencies [9] of BAN, including those of
Gong, Needham and Yahalom (GNY) [10], and Abadi and Tuttle
(AT) [1]. The X.509 analysis notwithstanding, much of the focus
of research in this area has been on protocols involving on-line
trusted servers and keys generated by a single party (symmetrically). Research to date has encompassed public key techniques for
Version dated August 12, 1993. This paper will appear in the _Proceedings of the 1st ACM Conference on Communications and Computer Security_, November 3-5, 1993, Fairfax, Virginia.
authentication and key transport, but not for key generation. More
specifically, public-key based key establishment protocols in
which both parties contribute to the established key — referred to
as key agreement protocols — have not previously been analyzed
by these methods. We propose extensions of BAN and BAN-like
logics to facilitate more precise identification and examination of
the goals and beliefs arising in authenticated key agreement protocols. We then illustrate the modified logic by examining three quite
distinct such protocols based on Diffie-Hellman key exchange [6],
including one which is identity-based.
The remainder of the paper is organized as follows. Section 2 (with
Appendix A) reviews the essential features of the logic, and introduces new extensions including a refinement of the fundamental
BAN construct “shares the good crypto key”, a new primitive
regarding key confirmation, and new postulates which facilitate for
the first time reasoning about jointly established keys. Section 3
highlights six fundamental candidate formal goals for authenticated key establishment protocols. Section 4 gathers in one place
the various formal assumptions required in our subsequent analyses. Section 5 applies the logic to the Station-to-Station (STS) protocol [7], Section 6 analyzes the Goss protocol [12], and Section 7
analyzes the Günther protocol [13]. Section 8 uses these results to
compare the assumptions and goals of these protocols, and the
aforementioned X.509 protocol [5]. Section 9 provides concluding
remarks.
This work was originally framed entirely on that of BAN and GS.
As this resulted in our work inheriting several of the known deficiencies in BAN [9], we have made selective use of more recent
advancements, primarily from GNY and AT. Familiarity with
GNY and AT is assumed. Where appropriate, we comment how
the new extensions apply to these logics; further such examination
remains to be done.
#### 2 Authentication logic refinements and extensions
The BAN logic, with minor extensions by GS, is reviewed in
Appendix A. In this section we propose new refinements to allow
more precise reasoning for protocols involving jointly established
keys. In what follows, A and B denote principals (or parties); U is
used to denote a principal when we wish to specifically emphasize
that its identity is unknown or uncorroborated. For clarity, we use
the AT nomenclature “said” in place of the more verbose BAN
“once_said”. To denote that A has sent a formula X in the present
epoch (i.e. has recently said X), we use the AT construct “A says
X” (whereas in BAN, “Α |≡ X” is used to denote both this and the
fact that A believes X). This requires use of AT axiom A20
(fresh(X) ∧ P said X ⊃ P says X) in place of the BAN nonce-verification rule (R2 in Appendix A).
We begin with a few new constructs and notation. In order to reason more precisely about cryptographic keys (hereafter called
_keys), the concept shares the good key,[1] denoted by the symbol A ↔K B, requires refinement. This is necessary both to remove ambiguity, and to help avoid confusion about the meaning and (mis)use
of the symbol (e.g. see [16]). For example, whether or not B actually knows K, A |≡ A ↔K B is used in BAN to mean A believes
that K is a good key for use with party B. Here the key is “good” in
the sense that if and when it eventually becomes known to B, it
will be safe to use for secure communications. For our purposes,
more precision is required. One step in this direction is the AT construct “A has K”, meaning that A actually possesses K.[2] It is
important to note that possession is quite distinct from the notion
of holding any beliefs about the quality or properties of K (e.g. K is
a good key, K is shared with B). Without further information (e.g.
whether K is also known by some unidentified or uncorroborated
party B), such a key K which A has is called an “unqualified” key
(from A’s point of reference). Supplementing this refinement we
now replace A ↔K B with the following as appropriate:
A ↔K- B K is A’s unconfirmed secret suitable for B. No one
aside from A and B knows or could deduce K. This
construct emphasizes, however, that while A knows
K, the specific party B may or may not. A may consider K a “qualified” or “secure” key for use with B.
A ↔K+ B K is A’s confirmed secret suitable for B. A knows K,
and has received evidence (key confirmation) from B
indicating that B actually knows K. No other parties
know or can deduce K.
Note that in these new constructs, A and B are not interchangeable.
We reserve the term authenticated key establishment for mechanisms providing keys which are both secure and confirmed, in the
above sense. Our motivation is alignment with the term entity
_authentication, which is not necessarily provided by mechanisms_
establishing (only) unconfirmed secrets. By this terminology, note
that secure key establishment does not require entity authentication
(in Diffie et al. [7], this is discussed in terms of direct authentication and indirect authentication). In what follows, the STS protocol
is seen to provide authenticated key establishment; the Goss and
Günther protocols provide unconfirmed secrets.
We also refine the symbols PK(K, A) and ∏(A) from BAN and
GS. One reason is to distinguish the use of asymmetric key pairs
for signatures, encryption, and key agreement, forcing explicit
acknowledgment when one key pair is used for multiple purposes;
this also precludes the incorrect assumption that signature key
pairs can be used as encryption key pairs in all cryptosystems. A
second reason is to separate the notion of binding a public key to a
principal from the notion of the goodness of that key; we specifically associate “goodness” with the private key of a key pair. The
symbols we use in place of these are:
_PKσ(A, K) K is the public signature verification key associated_
with principal A.
_PK⁻¹σ(A)_ A's private signature key K⁻¹ is good. Here K⁻¹ corresponds to the public key K in PKσ(A, K). The key is
“good” in that it is known only to A, cannot be
derived by others, and does not result in a “weak”
public key susceptible to specialized attacks.
1GNY uses the more generic semantics “K is a suitable secret for
the two parties”, which we find preferable.
2A similar GNY construct, “A possesses K”, means that A either
has, or is capable of computing, K.
_PKδ(A, K) K is the public key-agreement key associated with_
principal A. When the specific value of the key is not
of central focus or is evident by context, we write simply PKδ(A).
_PK⁻¹δ(A)_ A's private key-agreement key K⁻¹ is good. Here K⁻¹
corresponds to the public key K in PKδ(A, K). The
key is “good” in that it is known only to A, cannot be
derived by others, and does not result in a “weak”
public key (e.g. arising from a trivial exponential such
as “1” in a Diffie-Hellman key exchange).
For asymmetric encryption keys, we suggest defining PKψ(A, K)
and PK⁻¹ψ(A) analogously; however, we do not require these symbols in the present work.[3] We also introduce a notational convenience, an _origin-mapping_ construct, and a _knowledge-_
_demonstration construct:_
⊥Y This denotes the key value associated with key symbol
Y. This allows one, for example, to use ⊥PK⁻¹δ(A) to
denote the value of the implied private key, in the
absence of an explicit name (e.g. “K”) for the associated public key.
_G(RA)_ _G(RA) = {principals U: U said RA}. This denotes the_
party U (or set of all parties U) which once conveyed
or sent the nonce RA. It facilitates reasoning about
random numbers serving as challenges in challengeresponse protocols (see Appendix A). This allows
refinement of the time period “current epoch” (see
Appendix A) to the more specific notion of “current
run”, to address “interleaving attacks” as discussed by
Bird et al. [2].
_confirm(K) Current knowledge of K has been demonstrated (with-_
out compromising K). Note demonstration of knowledge of K differs from both actual and claimed
possession of K. In particular, “A |≡ _*confirm(K)”[4]_
differs subtly but significantly from “A |≡ U has K”;
while the latter belief could arise even if U does not
possess K,[5] the former requires “hard evidence”.
The semantics of confirm(K) are best understood in light of the following new Confirmation Axioms:[6]
C1. _fresh(X) ∧φ({X}K)_ ⊃ _confirm(K)_
C2. _fresh(X) ∧φ(MACK(X)) ⊃_ _confirm(K)_
C3. _fresh(K) ∧φ(H(K))_ ⊃ _confirm(K)_
These specify that current knowledge of a key K can be demonstrated by: encrypting a fresh formula X with K; computing a message authentication code (MAC) over a fresh message, with key K;
3The subscript characters sigma and psi were chosen as memoryaids for the words signature and ciphering; delta was chosen for its
association with a “changing” key, as key-agreement keys are often short-term (session) keys.
4Here “*” is the GNY “not-originated-here” symbol, intended to
formalize the implicit BAN assumption that parties can distinguish
messages they generate from those generated by other principals.
5For example, if A granted U jurisdiction on claims of possession,
and U lied; or if beliefs are based on messages sent — called “eager” beliefs in [14] — but not necessarily received (e.g. using
GNY P1, P2, and GNY rationality rule).
6Here “φ” is the GNY “recognizability” construct, which formalizes the implicit BAN assumption of sufficient a priori knowledge,
or redundancy, in encrypted messages to allow recognition of correct decryption keys.
or hashing a fresh key K using a one-way hash function H. Other
similar axioms could be specified, but these suffice for our present
purposes.
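Axiom C2, for instance, corresponds to the familiar challenge-response pattern in which knowledge of K is demonstrated by computing a MAC over a fresh value, without exposing K itself. A minimal sketch using Python's standard hmac module (key and nonce sizes here are illustrative choices, not prescribed by the logic):

```python
import hmac
import hashlib
import os

# Sketch of key confirmation per axiom C2: B demonstrates current
# knowledge of K by MACing a fresh challenge from A.

K = os.urandom(32)         # jointly established key, known to A and B

challenge = os.urandom(16)                               # A's fresh value X
tag = hmac.new(K, challenge, hashlib.sha256).digest()    # B sends MAC_K(X)

# A's side: freshness of X plus a valid tag lets A conclude confirm(K).
confirmed = hmac.compare_digest(
    tag, hmac.new(K, challenge, hashlib.sha256).digest())
```

The constant-time comparison (`compare_digest`) is incidental to the logic but standard practice when checking MACs.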
The original BAN postulates are listed in Appendix A. We now
introduce three new postulates, based on the following model for
public key agreement: each of two parties involved in a joint key
agreement has a public and a private key-agreement key, and can
derive the common (jointly established) key from their own private
key and the other party’s public key, using some agreement function f(private_info, public_info); here each parameter may actually
be composite. One example of f is Diffie-Hellman key agreement
[6]. We assume f results in a joint key K which can be deduced
only with knowledge of the appropriate private information, and
that knowledge of K does not compromise the secrecy of the other
party’s private information. (Recall that logics of belief generally
assume soundness of all underlying cryptographic mechanisms.)
The new postulates are:
R30. (Unqualified key-agreement rule):[1]
A has PK⁻¹δ(A), A has PKδ(U)
A has K
where K = f(PK⁻¹δ(A), PKδ(U))
By R30, A can compute a joint key from a private key-agreement
key of her own and a public key-agreement key of some other
party; this is basically the model for key agreement. A should treat
this joint key as an unqualified key, as the binding between party U
and its public key may be uncorroborated. R30 is a specific
instance of GNY possession rule P2 which defines computability
(A has X ∧ A has Y ⊃ A has F(X,Y)). Using P2 we also require
GNY P1 (A sees X ⊃ A has X).
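Rule R30's agreement function f can be made concrete with a toy Diffie-Hellman instance. The prime, generator, and private exponents below are illustrative only and far too small for real use:

```python
# Toy Diffie-Hellman instance of the agreement function
# f(private_info, public_info): each party combines its own private
# key-agreement key with the other party's public key-agreement key.

p, g = 2_147_483_647, 5   # small Mersenne prime and generator (toy values)

def public_key(private):      # PKδ(A) = g^a mod p
    return pow(g, private, p)

def f(private, other_public):  # K = f(PK⁻¹δ(A), PKδ(U)), as in R30
    return pow(other_public, private, p)

a, b = 123_456_789, 987_654_321   # PK⁻¹δ(A), PK⁻¹δ(B) (illustrative)
K_A = f(a, public_key(b))         # A's computation of the joint key
K_B = f(b, public_key(a))         # B's computation of the same key
```

Until A corroborates the binding of the received public key to B, the K_A computed here is exactly the "unqualified" key R30 speaks of.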
R31. (Qualified key-agreement rule):
A |≡ PK⁻¹δ(A), A |≡ PKδ(B), A |≡ PK⁻¹δ(B)
A |≡ A ↔K- B
where K = f(PK⁻¹δ(A), PKδ(B))
In R31, party A has another party’s public key-agreement key,
believes the binding of that key to the claimed identity, and
believes the corresponding private key, in addition to her own, is
good. This allows A to believe that the resulting key-agreement
key is a qualified or secure (but unconfirmed) key.
R32. (Key confirmation rule):
A |≡ A ↔K- B, A sees *confirm(K)
A |≡ A ↔K+ B
R32 allows A to upgrade her belief in the quality of key K — from
a qualified to a confirmed key — upon observing evidence that a
party other than herself knows the key. Note the definition of
A ↔K- B guarantees there is at most one other such party, namely B. In the original BAN syntax, the pre-condition "A sees *confirm(K)" might be stated as "fresh(X) ∧ A sees {X}K from U, where U ≠ A". R32 is a message interpretation rule (cf. BAN R1, Appendix A). It has much in common with GNY rule I1 [10], although the GNY definition of "B possesses K" results in I1 not distinguishing between B knowing/having K and being capable of computing K, nor does it imply B has demonstrated knowledge of K.

1To be technically precise, we should use ⊥PK⁻¹δ(A) and ⊥PKδ(U) in place of PK⁻¹δ(A) and PKδ(U) in the statement of R30, and as arguments to the key agreement function f, but we write simply the latter for appearance; the meaning should be clear by context. A similar comment applies to the arguments of f in R31.
#### 3 Generic formal goals
The apparently obvious goal of an authentication protocol is the
provision of some degree of assurance of the identity of another
party. In authenticated key establishment, an intended outcome is
the creation and/or distribution of a (possibly new) secret key.
While these goals may appear obvious, as stated they are quite
imprecise, and subtle differences exist among protocols regarding
the exact properties established. Failure to understand the precise
meaning of specific goals has led to misunderstandings about the
differences between various protocols. This motivates us to explicitly identify six distinct candidate positions which may or may not
be intended as formal goals in a specific authenticated key establishment protocol. We state these as beliefs of party A, with party
B the other intended party in the protocol.
**(G1)** _Far-end operative:_ Α |≡ B says Y
**(G2)** _Targeted entity authentication:[2]A |≡ B says (Y,_
_R(G(RA), Y))_
**(G3)** _Secure key establishment:_ Α |≡ A ↔K- B
**(G4)** _Key confirmation:_ Α |≡ A ↔K+ B
**(G5)** _Key freshness:_ A |≡ _fresh(K)_
**(G6)** _Mutual belief in shared secret: Α |≡ (B |≡ B ↔K- A)_
(G1) states that A believes B recently conveyed a message Y. It
implies that B is currently operational (or “alive”), i.e. has taken
action subsequent to the start of the protocol. Inherent is the fact
that the identity of B has been corroborated, but it is unclear who B
intended to convey the message to. Note “aliveness” also follows
from (G2) and (G4), but not (G3).
(G2) states A believes a message Y was recently conveyed by B in
response to the specific challenge RA (RA here is a “random number” — see Appendix A). It provides authentication of B to A in
the sense that the response is from a corroborated operational
entity, and is targeted in response to a (preferably fresh) challenge
from A. That B’s formula is specifically in response to A’s challenge ties B’s reply to the protocol run A is executing. Note while
entity authentication requires parties be operative, key establishment protocols do not, as not all provide entity authentication;
indeed, store-and-forward environments do not support on-line
entity authentication.
(G3) states that A believes that a key K is shared with no party
other than possibly party B. K is suitable for use by A with B if and
when B eventually acquires it. (G3) does not imply that B participated in any manner in the protocol, nor even possesses K.
(G4) states A believes a key K is shared with party B alone, and
that B has provided evidence of knowledge of the key to A. It
implies both that the quality of the key is good, and confirmation
that the far end has knowledge of K; aliveness and corroboration of
identity of the far end party are inherent.
(G5) states A believes a key K is fresh. It addresses the possibility
that a key might be reused or replayed.
2We hesitate to call this “entity authentication”, due to the absence
of a universal definition; our accompanying formal description
clarifies our intended meaning. This particular expression of entity
authentication is specifically based on challenge-response. For
protocols that are not based on challenge-response (e.g. timestamps or sequence numbers), “G(RA)” might be replaced simply
by “A” in this goal, but this changes the implications of the goal
significantly.
(G6) states that A believes that party B believes K is an unconfirmed secret suitable for use with A. Note B’s beliefs are beyond
the control of A, and the beliefs of B of true importance to A are
those which concern A directly. Thus of greater import to A here
than B’s (presumed) belief that K is secure is A’s confidence that B
has correctly identified her as the party with which B shares the
trusted key. Competence on the part of B is assumed here by A.
Among other goals that might be stated, but upon which we do not
dwell, include:
**(G7)** _Key possession:_ A has K
**(G8)** _Belief in far-end possession: Α |≡ B has K_
Since (G3) is a minimum goal for key establishment, and it inherently implies (G7), we view the latter as somewhat redundant. We
feel that (G8), although of possible theoretical interest, is of questionable use in practice, with “physical world” evidence as given
by (G4) being preferable. (See §6 of [10], §3.2 of [1], and §3.3 of
[9] for related discussions.)
We note that (G4), i.e. Α |≡ A ↔K+ B, is independent of B’s beliefs
about K, and is thus distinct from (G6). Previous BAN-like logics
have lacked a suitable construct to express the concept of key confirmation. Also, from the definition of confirm(K), it should be
clear that Α |≡ A ↔K+ B is not equivalent to the conjunction of
(G3) and (G8), i.e. (Α |≡ A ↔K- B) ∧ (Α |≡ B has K).
#### 4 Generic formal assumptions
In this section we collect for reference candidate formal assumptions which the protocols subsequently analyzed require. In a typical BAN-type analysis, preliminary formal assumptions that
appear necessary or “obvious” are recorded, and additional
assumptions become apparent as necessary pre-conditions to use
of logic postulates required in formal proofs. (Assumptions which
appear intuitively necessary at the outset, but are not found to be
required anywhere in the formal proofs, should be carefully reexamined.) Despite the latter, most analyses are presented "top-down", with the appearance that all assumptions are known a priori.[1] The assumptions below are stated as typical candidate
requirements on a party A, with B the other intended participant.[2]
A believes she has³ a valid copy of the public signature key (KT) of the trusted authority T. A also believes that the corresponding private signature key is "good":

A |≡ PKσ(T, KT) (A1)
A |≡ PK⁻¹σ(T) (A2)
A believes that T has jurisdiction over the binding of a public signature key (KB) with a specific party (note T is not given jurisdiction over the quality of the corresponding private key). A similar
assumption concerns a pre-certified public key-agreement key
(RB):
Α |≡ T controls PKσ(B, KB) (A3)
¹ Mao and Boyd have recently suggested formalizing the "bottom-up" approach to proofs, to systematically derive a minimum set of necessary pre-conditions, starting from a fixed set of goals [15].
² To follow GNY strictly, we would add further assumptions about "recognizability" of encrypted and signed values.
³ Again, following GNY strictly, we would record additional assumptions regarding the initial possessions of A, and track subsequent acquisitions in annotated analysis. For example, (A1) implies both possession of a key and belief that the binding of it to T is good. For brevity, we typically treat possessions informally, as in BAN.
Α |≡ T controls PKδ(B, RB) (A4)
A believes any private key-agreement key she herself uses is good:
A |≡ PK⁻¹δ(A) (A5)
While it is necessary for A to guard her own private signature key
so that the beliefs of others that her private key is “good” are valid,
this is not necessary for A to establish her own beliefs; however
(A5) is. A also believes all principals (e.g. B) have the competence
to acquire “good” private keys for themselves, and to safeguard
such keys; this is consistent with the basic assumption that all principals act in their own best interest (including to guard jointly
derived secrets), and does not preclude the existence of attackers:
A |≡ PK⁻¹σ(B) (A6)
A |≡ PK⁻¹δ(B) (A7)
A grants any principal B jurisdiction over his own public key-agreement key RB (cf. (A4)):
Α |≡ Β controls PKδ(B, RB) (A8)
Note this does not grant to B jurisdiction over claims regarding the
corresponding private key; corroborating evidence is required to
back such claims. Thus while (A8) allows the possibility that B
may claim another party’s public key as his own (indeed, this is a
practical possibility which the logic cannot simply "rule out"); note, however, that such a dishonest principal is unable to compute the associated secret key, and so cannot provide key confirmation.
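This point can be illustrated concretely: an imposter who claims B's public value α^y as his own still lacks the exponent y, and so cannot compute the Diffie-Hellman key that confirmation would require. A toy sketch with invented, illustrative parameters:

```python
# Toy illustration (invented parameters): an imposter may claim B's
# public value RB = g^y as his own, but without the exponent y he
# cannot compute the Diffie-Hellman key, so key confirmation fails.
import secrets

p, g = 2**127 - 1, 3          # illustrative prime and base only
y = secrets.randbelow(p - 2) + 1
RB = pow(g, y, p)             # honest B's public key, claimed by an imposter

x = secrets.randbelow(p - 2) + 1
RA = pow(g, x, p)             # A's fresh public value
K = pow(RB, x, p)             # the key A expects to share

# The imposter sees RA and RB but holds neither x nor y; only the true
# holder of y can reproduce K and thus confirm the key:
assert pow(RA, y, p) == K
```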
A believes a random number (RA) she generates herself is fresh,
i.e. was not used in a previous protocol run (note this does not
require that A believe in the freshness of nonces created by other
parties):
Α |≡ _fresh(RA)_ (A9)
A final assumption is related to the (continuing) validity of public
keys. Certificates are typically used to make one party’s public
key(s) available to other parties. A certificate is a block of information containing a party’s credentials (the subject party’s public
key(s), identifying information such as distinguishing name, and
possibly other information) together with the signature of a trusted
_(certification) authority T over the credentials. The validity of a_
certificated public key is verified by using T’s public signature key
to verify the signature on the certificate. The assumption we state
is as follows:

A |≡ (T said PK(B, KB))
────────────────────────  (A10)
A |≡ (T |≡ PK(B, KB))

Note this is not a universal inference rule; it is a novel usage of an inference rule as an assumption. The intent is to use this rule in place of the informal assumption that some procedure is available to verify that statements of T from the past, which bind a public key to a distinguishing name, hold true in the current run. The objective is that this assumption generically handle the large number of ways in which the validity of a public key may be controlled in practice (e.g. validity periods in certificates, revocation of certificates, etc.). Many alternatives to (A10) as given above exist. In a particular application, one might assume that a public key, once valid, is valid for all time; i.e. assume a priori that A |≡ T |≡ PKσ(B, KB), as essentially done in the X.509 analysis of Burrows et al. [3]. A second approach is available if the concrete protocol is such that the validity period is constrained by a timestamp in the certificate, as in GS; this then requires further assumptions regarding timestamps and timestamp verification. A third is to assume that a public key in a certificate is valid as long as the signature key of T used to sign the certificate remains valid; this requires re-verification of the certificate signature each time the public key within it is used, which could be handled formally by the GNY "recognizability" construct applied to signatures. The generic (A10) above conveniently handles these and other possibilities simultaneously.
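The alternatives just listed can be pictured as a single validity-policy predicate. This is a minimal sketch under invented names (Cert, still_valid, the policy strings); it is not drawn from the paper or from any real PKI library:

```python
# Minimal sketch of the validity-policy alternatives behind (A10).
# All names here (Cert, still_valid, the policy strings) are invented
# for illustration.
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    key: bytes
    not_after: float          # used only by the "timestamp" policy

def still_valid(cert: Cert, revoked: set, policy: str, now: float) -> bool:
    if policy == "forever":       # once valid, valid for all time
        return True
    if policy == "timestamp":     # validity period carried in the certificate
        return now <= cert.not_after
    if policy == "revocation":    # valid unless explicitly revoked
        return cert.subject not in revoked
    raise ValueError(f"unknown policy: {policy}")
```

Each branch corresponds to one of the alternatives discussed above; (A10) abstracts over all of them.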
#### 5 Key exchange protocol #1: STS protocol
We first review the authenticated key agreement protocol of Diffie
et al. called the “Station-to-Station” (STS) protocol [7]. A publicly
known appropriate prime p and primitive element α in GF(p) are
fixed for use in Diffie-Hellman key exchange. Parties A and B use
a common signature scheme. sU{•} denotes the signature on the
specified argument using the private signature key of party U.
EK(•) denotes the encryption of the specified argument using algorithm E under key K. Public key certificates are used to make the
public signature keys of A and B available to each other. In a one-time process prior to the exchange between A and B, each party
must present to T their identity and public key (e.g. IDA, KA), have
T verify their true identity by some (typically non-cryptographic)
means, and then obtain from T their own certificate. The protocol
analyzed is as follows.
1. A generates a random positive integer x, computes RA = α^x and sends RA to a second party.
2. Upon receiving RA, B generates a random positive integer y, computes RB = α^y and K = (RA)^y.
3. B computes the authentication signature sB{RB, RA} and sends A the encrypted signature TokenBA = EK(sB{RB, RA}) along with RB and his certificate CertB. Here "," denotes concatenation.
4. A receives these quantities, and from RB computes K = (RB)^x.
5. A verifies the validity of B’s certificate CertB by verifying the
signature thereon using the public signature key of the
authority. If the certificate is valid, A extracts B’s public signature key, KB, from CertB.
6. A verifies the authentication signature of B by decrypting
TokenBA, and using KB to check the signature on the decrypted
token is valid for the known, ordered pair RB, RA.
7. A computes sA{RA, RB} and sends to B her certificate CertA
and TokenAB = EK(sA{RA, RB}).
8. Analogously, B checks CertA. If valid, B extracts A’s public
signature key, KA and proceeds.
9. Analogously, B verifies the authentication signature of A by
decrypting TokenAB, and checking the signature on it using KA
and knowledge of the expected pair of data RA, RB.
The protocol is successful from each party’s point of view if signature verification succeeds on both the received certificate and
authentication signature. In this case, the protocol provides assurance that a shared secret has been jointly established with the party
identified in the received certificate.
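As a concrete illustration of steps 1 through 7, the flow can be sketched in Python. This is a toy walk-through, not the STS protocol as deployed: the prime is small, HMAC tags stand in for the public-key signatures sU{•} (so the "verifier" here also holds the signing key), an XOR keystream stands in for EK(•), certificate checking is omitted, and all key names are invented:

```python
# Toy walk-through of the STS message flow. NOT secure or faithful:
# HMAC tags stand in for sU{.}, an XOR keystream stands in for EK(.).
import hashlib, hmac, secrets

p = 2**127 - 1                # illustrative prime; g stands in for α
g = 3

def sign(key: bytes, data: bytes) -> bytes:     # stand-in for sU{.}
    return hmac.new(key, data, hashlib.sha256).digest()

def xor_crypt(k: bytes, m: bytes) -> bytes:     # stand-in for EK(.); self-inverse
    stream = hashlib.sha256(k + b"stream").digest() * (len(m) // 32 + 1)
    return bytes(a ^ b for a, b in zip(m, stream))

def kdf(shared: int) -> bytes:
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

def pair(r1: int, r2: int) -> bytes:            # ordered concatenation "R1, R2"
    return r1.to_bytes(16, "big") + r2.to_bytes(16, "big")

# Step 1: A sends RA = g^x
x = secrets.randbelow(p - 2) + 1
RA = pow(g, x, p)

# Steps 2-3: B computes RB = g^y, K = RA^y, TokenBA = EK(sB{RB, RA})
y = secrets.randbelow(p - 2) + 1
RB = pow(g, y, p)
kB_sig = b"B-private-signature-key"             # placeholder
K_b = kdf(pow(RA, y, p))
token_ba = xor_crypt(K_b, sign(kB_sig, pair(RB, RA)))

# Steps 4-6: A computes K = RB^x, decrypts TokenBA, checks B's signature
K_a = kdf(pow(RB, x, p))
assert hmac.compare_digest(xor_crypt(K_a, token_ba), sign(kB_sig, pair(RB, RA)))

# Step 7: A replies with TokenAB = EK(sA{RA, RB}); B's checks are symmetric
kA_sig = b"A-private-signature-key"             # placeholder
token_ab = xor_crypt(K_a, sign(kA_sig, pair(RA, RB)))
```

The two assertions correspond to A's checks in steps 4 through 6; B's verification of TokenAB mirrors them.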
TABLE 1 provides a summary of both the messages exchanged,
and the actions taken, by each of the parties in this protocol.
##### TABLE 1 STS protocol (concrete)
#### 5.1 Formal analysis of STS protocol
The protocol is first idealized into a form suitable for logic manipulation. With K = α^xy as above, define

RA = α^x, RB = α^y and (1)
ΜB = (TB, R(G(RA), TB), PKδ(B, RB)) with TB = (RB, RA) (2)
ΜA = (TA, R(G(RB), TA), PKδ(A, RA)) with TA = (RA, RB) (3)
A’s certificate is idealized as {PKσ(A, KA)}sT. Note this idealization contains the signature over, but not the actual data values A,
KA. These latter cleartext data values, omitted in the BAN idealization (“as they do not contribute to the beliefs held, and do not
enter into the analysis”), are nonetheless implicitly assumed available to the recipient for operational reasons; they would be
included explicitly in a GNY idealization. Either approach suffices
for our purposes, and indeed we use both as convenient. The idealized STS protocol is:
A → B: RA (M1)
A ← B: RB, (B, KB, {PKσ(B, KB)}sT), {{MB}sB}K (M2)
A → B: (A, KA, {PKσ(A, KA)}sT), {{MA}sA}K (M3)
The idealization conveys some beliefs implicit, but not directly
represented, in the actual protocol. For example, the binding of
public key to name — PKδ(B, RB) — in MB of (M2) is not explicit
in the actual protocol, nor is the intended recipient of TB in (M2),
which is the party who challenged with RA, i.e. G(RA). However,
B’s signature on TB in the actual protocol implicitly conveys this
information. Note that only the cryptographically protected messages, i.e. steps (M2) and (M3), will contribute directly to the logical beliefs that result.
The formal assumptions from Section 4 required of party A in the
STS protocol are listed in TABLE 4 in Section 8. Analogous
assumptions are required of B. Regarding the security of underlying algorithms in the STS protocol, use of rule R31 relies on an
assumption regarding the particular function f used. The assumption here is the standard Diffie-Hellman assumption: given values
RA and RB which are exponentials based on secret keys x and y,
respectively, it is computationally infeasible to compute the corresponding Diffie-Hellman key K without knowledge of either x or
y.
We now focus on the formal goals related to the final position of A.
While the intended goals of the STS protocol were not stated [7] in
the syntax of BAN-like logics, the goals from Section 3 it attains
include: (G1), (G2), (G3), (G4), (G5) and (G6). We view the last
three as the major end goals; these encompass the first three. We
now outline proofs that these goals are actually attained.
**Lemma 1 The STS protocol establishes that the far-end party is**
operative, i.e. achieves goal (G1).
_Proof: Upon A’s reception of (M2), R10 provides:_
A sees RB (4)
| A | message | B |
|---|---------|---|
| CertA = (KA, IDA, sT{KA, IDA}) | | CertB = (KB, IDB, sT{KB, IDB}) |
| generate x, RA = α^x | → RA | generate y, RB = α^y; K = (RA)^y |
| K = (RB)^x; verify CertB, TokenBA | ← RB, CertB, TokenBA | TokenBA = EK(sB{RB, RA}) |
| TokenAB = EK(sA{RA, RB}) | → CertA, TokenAB | verify CertA and TokenAB |
Α sees {PKσ(B, KB)}sT (5)
Α sees {{ΜB}sB}K (6)
By A’s belief (and possession) of her private key-agreement key
(A5), and (4) along with GNY rule P1 as noted earlier (sees
implies has), R30 provides
A has K (7)
where K = f(⊥PK⁻¹δ(A), ⊥PKδ(U)). K is an unqualified key, potentially shared with an uncorroborated party U. (6) and (7) with R22 yields
Α sees {ΜB}sB. (8)
Assumption (A1), (A2) and (5) with R13 yields Α |≡ T said
_PKσ(B, KB), which (A10) strengthens to_
Α |≡ T |≡ _PKσ(B, KB)_ (9)
Then (9) and (A3) with R3 yields Α |≡ _PKσ(B, KB). This, (8)_
and (A6), with R13 provides
Α |≡ B said ΜB (10)
By assumption (A9), A |≡ fresh(x). Exponentiation of the primitive element α by x induces a bijection, suggesting freshness propagation from x to RA = α^x based on R12 (cf. GNY rule F1).
This provides
Α |≡ _fresh(RA)_ (11)
Then R12 further yields A |≡ fresh(TB), and freshness propagation again (note from (M2) TB and ΜB are cryptographically sealed) allows
Α |≡ _fresh(ΜB)_ (12)
Now (10) and (12) with AT axiom A20 (cf. BAN R2) yields
Α |≡ B says ΜB (13)
Thus A believes that B recently said ΜB. This is (G1) with
Y=MB as defined in (2).
❏
**Lemma 2 The STS protocol provides targeted entity authentica-**
tion, i.e. achieves goal (G2).
_Proof:_ From (13) of Lemma 1 and R7, A |≡ B says (TB,
_R(G(RA), TB)). This is precisely (G2) with Y=TB as given in_
(2); here RA = α^x. Thus upon successful completion, A believes
that B conveyed TB in the current epoch, as an intended
response to the specific challenge RA.
❏
Provided A does not intentionally re-use nonces, and generates a
nonce x (and RA = α^x) in the current epoch using an appropriate
random number generator (producing unpredictable numbers from
a sufficiently large space, with vanishing probability of repetition),
the nonce will be a duplicate of a previous nonce with vanishing
probability. Then whereas G is a one-to-many mapping on an unrestricted domain, A can conclude that with vanishing probability of
error, G(RA) is the singleton set {A}. In this case Lemma 2 allows
A to conclude she was the intended recipient of B’s token, i.e. Α |≡
B says R(A, TB). Both Lemma 1 and Lemma 2 rely directly on
(A9).
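The "vanishing probability" of a repeated nonce is a standard birthday bound; the helper below (an approximation only, with illustrative numbers) makes the order of magnitude concrete:

```python
# Birthday-bound approximation for nonce collisions: the probability
# that n uniform draws from a space of size N repeat is about
# 1 - exp(-n^2 / (2N)). expm1 keeps precision for tiny probabilities.
import math

def collision_prob(n: float, N: float) -> float:
    return -math.expm1(-n * n / (2 * N))
```

For example, 2^32 nonces drawn from a 2^128 space collide with probability on the order of 2^-65, i.e. "vanishing" in the sense used above.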
**Lemma 3 The STS protocol provides secure key establishment,**
i.e. achieves goal (G3).
_Proof: From Lemma 1 and R6 it follows that Α |≡ B says PKδ(B,_
RB). Using this and B’s jurisdiction over his public key (A8),
R3 yields Α |≡ _PKδ(B, RB). By A’s belief in private key-agree-_
ment keys (A5) and (A7), R31 then yields Α |≡ A ↔K- B. That
is, A believes K is shared with no party other than possibly B.
Implicitly, A also now possesses this key.
❏
**Lemma 4** The STS protocol provides key confirmation, i.e.
achieves goal (G4).
_Proof: [Outline only] In the BAN logic, the proof is short: from_
Lemma 3 and (13), R32 (modified for BAN as discussed in
Section 2) yields the result directly. However, this is unsatisfying due to the recognized limitation of the BAN construct {X}K
from U[ (see below). Consequently, we provide the following]
alternate proof outline using GNY constructs. We require one
additional formal assumption:¹ A |≡ φ(ΜB), from which GNY
recognition rules R2 and R3 provide Α |≡φ({{ΜB}sB}K). From
(12), freshness propagation (GNY F1) allows the conclusion
Α |≡ _fresh({ΜB}sB). Confirmation Axiom C1 (Section 2) and_
these two beliefs yield: A sees confirm(K). As A creates no
message of the specific form {{ΜB}sB} in the protocol, ΜB
would be marked with a “not-originated-here” symbol — *ΜB
— following GNY protocol parsing. The confirmation belief is
then: A sees *confirm(K). This, with Lemma 3 and R32, allows
the conclusion Α |≡ A ↔K+ B. That is, upon successful completion, A believes K is shared with B alone, and that B has provided to A evidence of his knowledge of this key.
❏
**Lemma 5 The STS protocol provides key freshness, i.e. achieves**
goal (G5), provided B does not choose y ≡ 0 (mod p-1).
_Proof:_ As in (11), A believes that α^x is a nonce and a random element of the field. For non-zero y, the entropy of K = (α^x)^y is then large (even in the worst case of "smooth" primes p; see
[17]). Therefore freshness propagation (by R12 or GNY F1)
over this exponentiation provides freshness of the key K.
❏
Note A’s belief in key freshness is “pure” in the sense that it is
based only on factors within her own control; it requires (A9), but
no trust or honesty in other parties. This differs, for example, from
a belief that a key from a trusted server is fresh simply because the
key is integrated with a nonce.
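Why the proviso on y in Lemma 5 matters can be checked directly: any y ≡ 0 (mod p-1) collapses K to 1 by Fermat's little theorem, leaving no entropy. A toy check with illustrative values:

```python
# Why Lemma 5 excludes y ≡ 0 (mod p-1): such an exponent collapses the
# key to 1 (Fermat's little theorem), leaving no entropy. Toy values:
p, alpha = 2**127 - 1, 3      # illustrative prime and primitive element
x = 123456789
RA = pow(alpha, x, p)
assert pow(RA, p - 1, p) == 1     # y = p-1 (≡ 0 mod p-1) gives K = 1
assert pow(RA, 0, p) == 1         # y = 0 likewise
```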
We finally consider goal (G6). Upon receipt of (M2), what may A
deduce about B’s beliefs? From Lemma 4 it follows that A may
believe B possesses K, and although we do not provide details, one
may derive Α |≡ (B |≡ B ↔K- U). However, as A does not identify
herself until (M3), it is clear B cannot yet know U=A; indeed, B is
anticipating U=G(RA), but cannot verify this before receiving
(M3). However, as noted following Lemma 2, A may deduce
_G(RA)=A, and may thus arrive at Α |≡_ (B |≡ B ↔K- A) as an “eager
belief” (using the terminology of [14]). This belief is “eager” in
that it anticipates B’s receipt of the final message, but the belief is
not reinforced within the protocol as A receives no further messages. We state this eager belief without further proof.
**Lemma 6 The STS protocol provides mutual belief in a shared**
keying relationship, i.e. achieves goal (G6); this is
however an “eager” belief on A’s part.
❏
Now consider the beliefs of party B after successfully completing
the STS protocol. B is able to arrive at beliefs analogous to those
of party A given above. The proofs are analogous, except that in
¹ This recognizability assumption is implicit in the BAN proof, and follows from the action of successful verification of B's signature in (13). This is the main reason we find our BAN-logic proof unsatisfying.
B’s case, goal (G6) may be attained without qualification. This
results from B having the advantage of being recipient of the final
message. Note the mere presence of message (M3) implies A has
successfully completed her end of the protocol, and arrived at her
final beliefs. Confirmation of A’s successful completion could be
explicitly modelled in the idealized protocol, e.g. by incorporating
appropriate beliefs into the signed token MA in idealized message
(M3).¹ Granting additional assumptions giving A jurisdiction over
her own final beliefs, these beliefs would then be derivable as
mutual beliefs of B. For example, Lemma 3 would lead to B |≡
(Α |≡ A ↔K- B), i.e. (G6) for B.
#### 6 Key exchange protocol #2: Goss protocol
The key agreement protocol of Goss [12] results in the establishment of a shared secret key; two Diffie-Hellman exponentiations
are used, combining fixed and (per-run) variant parameters, allowing the creation of a unique key for each protocol run while reusing certified public key-agreement keys. A publicly known
appropriate prime p and primitive element α in GF(p) are fixed.
The parties A and B and the trusted authority T use a common signature scheme in association with certificates; sU{•} denotes the
signature of party U as before. In a preliminary, one-time process,
A selects a secret random number x, computes RA = α^x, and gives this to T; T verifies A's identity and returns a certificate CertA consisting of RA, a distinguishing identifier IDA for A, and T's signature over their concatenation. RA serves as A's fixed public key-agreement key, which can now be made available to others by certificate. Similarly, B obtains a secret number y, computes RB = α^y,
and obtains CertB. The protocol between A and B then consists of
a single message in each direction, as outlined below and summarized in TABLE 2:
1. A generates a random integer x̄ > 0, computes R̄A = α^x̄ and sends R̄A to B along with certificate CertA.
2. B generates a random integer ȳ > 0, computes R̄B = α^ȳ and sends R̄B to A along with certificate CertB.
3. A and B establish the authenticity of each other's certificates by verifying the signature of T thereon using T's known public key, and establish a common key K by respectively computing K = (RB)^x̄ ⊕ (R̄B)^x and K = (RA)^ȳ ⊕ (R̄A)^y.

(Overbars mark the per-run time-variant values; x, RA, y, RB without bars are the fixed certified parameters.)
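The dual-exponentiation key computation can be checked mechanically. In this sketch the per-run time-variant values carry a `v` suffix; the prime, base, and names are illustrative only, and certificate handling is omitted:

```python
# Toy check of the Goss key computation: a static Diffie-Hellman term
# XOR-combined with an ephemeral one. Per-run time-variant values carry
# a `v` suffix; prime, base, and names are illustrative.
import secrets

p, g = 2**127 - 1, 3

def keypair():
    s = secrets.randbelow(p - 2) + 1
    return s, pow(g, s, p)

x, RA = keypair()        # A's fixed certified pair
y, RB = keypair()        # B's fixed certified pair
xv, RAv = keypair()      # A's per-run pair
yv, RBv = keypair()      # B's per-run pair

# A: B's fixed key powered by A's variant secret, XOR B's variant key
# powered by A's fixed secret; B computes symmetrically.
K_a = pow(RB, xv, p) ^ pow(RBv, x, p)
K_b = pow(RA, yv, p) ^ pow(RAv, y, p)
assert K_a == K_b
```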
##### TABLE 2 Goss protocol (concrete)
We now turn to formal assumptions. The assumptions of Section 4
required by party A in the Goss protocol are listed in TABLE 4 in
Section 8; analogous assumptions are required of B. Regarding the
security of underlying algorithms in the Goss protocol, use of R31 requires the following assumptions about the function f: given exponentials RA, RB, R̄A, and R̄B, it is computationally infeasible to compute the key K without knowledge of (x, x̄) or (y, ȳ); and knowledge of one of these pairs, together with the first four values, does not allow one to recover the other pair.
We now consider the protocol goals. Informally, the Goss protocol
is a technique “in which two users establish a common session key
by exchanging information over an insecure communication channel, and in which each user can authenticate the identity of the
other” [12]. The formal goal which can be proven reachable by
party A² upon protocol completion is (G3), i.e. A |≡ A ↔K- B. Corroboration that B actually knows the key K, i.e. (G4), while not
part of the basic protocol, could be achieved by a subsequent message making use of K. We also note key freshness, i.e. (G5), as a
reachable goal.
**Lemma 7 The Goss protocol provides secure key establishment,**
i.e. achieves goal (G3).
_Proof: Upon receiving (M5), Α sees {PKδ(B, RB)}sT. Using (A1),_
(A2) and this in R13 provides Α |≡ T said PKδ(B, RB), which
(A10) strengthens to Α |≡ T |≡ _PKδ(B, RB). This and (A4) with_
R3 yields Α |≡ _PKδ(B, RB). From here, using A’s belief in the_
goodness of the private key-agreement keys of both A and B —
(A5) and (A7) — in R31 provides Α |≡ A ↔K- B, i.e. A believes
K is shared with no party other than possibly B. Here the fixed
certified key RB = α^y plays the role of B's public key-agreement key in R31, A's fixed secret key x plays the role of A's private key-agreement key, and the uncertified time-variant keys (x̄ and R̄B) are secondary private and public parameters, respectively, for the key agreement function f.
❏
**Lemma 8 The Goss protocol provides key freshness, i.e. achieves**
goal (G5), provided B does not choose y ≡ 0 (mod p-1) or ȳ ≡ 0 (mod p-1).
| A | message | B |
|---|---------|---|
| generate x, RA = α^x | | generate y, RB = α^y |
| CertA = (RA, IDA, sT{RA, IDA}) | | CertB = (RB, IDB, sT{RB, IDB}) |
| generate x̄, R̄A = α^x̄ | → CertA, R̄A | generate ȳ, R̄B = α^ȳ |
| verify CertB; K = (RB)^x̄ ⊕ (R̄B)^x | ← CertB, R̄B | verify CertA; K = (RA)^ȳ ⊕ (R̄A)^y |
#### 6.1 Formal analysis of Goss protocol

The protocol must first be idealized. A's certificate is idealized as {PKδ(A, RA)}sT. Note here the public key bound is a key-agreement key rather than a signature key. The idealized Goss protocol is:

A → B: (A, RA, {PKδ(A, RA)}sT), R̄A (M4)
A ← B: (B, RB, {PKδ(B, RB)}sT), R̄B (M5)

The idealization from the concrete protocol to the above form is straightforward. As in Section 5.1, only cryptographically protected information contributes directly to the establishment of logical beliefs, and thus the cleartexts R̄A and R̄B could be omitted from the idealization.

¹ This is essentially equivalent to "message extension" in GNY.
_Proof: Similar to proof of Lemma 5._
❏
Note freshness assumption (A9) is used by Lemma 8 but not by
Lemma 7.
#### 7 Key exchange protocol #3: Günther protocol
The authenticated key agreement protocol of Günther [13] is an
identity-based key establishment protocol, employing the idea of
identity-based schemes for signatures/authentication, Diffie-Hellman key agreement [6], and ElGamal signatures [8].

² The protocol being essentially identical from either party's perspective, we consider only the goal of the initiator.

An appropriate public prime p and primitive element α in GF(p) are fixed. The
trusted authority T chooses an integer v as its secret key, 1 ≤ v ≤ p-1, and makes (KT =) u = α^v public. In a preliminary, one-time process, it also assigns to each party a unique identifier D, and for each D chooses a random integer kD with gcd(kD, p-1) = 1 and computes rD = α^kD; then with h() a suitable hash function, solves the following equation for sD (re-choosing kD if sD = 0):
h(D) ≡ v· rD + kD· sD (mod p-1) (14)
The pair (rD,sD) is an ElGamal signature on identifier D, which T
gives to party D for safekeeping. rD is publicly available; sD is D’s
secret key allowing subsequent secure key establishment as outlined below. For use in what follows, note (rD,sD) satisfies the
equation
α^h(D) ≡ u^rD · rD^sD (mod p), and hence (15)
rD^sD ≡ α^h(D) · u^(-rD) (mod p) (16)
Note rD^sD can thus be computed entirely from publicly available information.¹ The protocol between A and B, which take the place
of “D” above, consists of steps as follows:
1. A sends to B the pair (A, rA); similarly B sends to A the pair
(B, rB).
2. A generates a random positive integer x, and sends to B the quantity (rB)^x; similarly B generates a random positive integer y, and sends to A the quantity (rA)^y.
3. A computes RB, where RB = α^h(B) · u^(-rB) (= rB^sB from (16)), and K = ((rA)^y)^sA · (RB)^x; similarly B computes RA, where RA = α^h(A) · u^(-rA) (= rA^sA), and K = ((rB)^x)^sB · (RA)^y.

Both parties then share the key K = (rA^sA)^y · (rB^sB)^x. TABLE 3 summarizes.
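The identity-based setup (eqs. (14)-(16)) and the key computation can be exercised end-to-end. A toy sketch, with an illustrative prime and generator and a simplified h(); all names are for the example only:

```python
# End-to-end toy run of the Günther setup (eqs. (14)-(16)) and key
# agreement. Prime, generator, and the simplified h() are illustrative.
import hashlib, math, secrets

p = 2**127 - 1
alpha = 3

def h(ident: bytes) -> int:            # stand-in for the hash h()
    return int.from_bytes(hashlib.sha256(ident).digest(), "big") % (p - 1)

v = secrets.randbelow(p - 2) + 1       # T's secret key
u = pow(alpha, v, p)                   # T's public key KT = u = alpha^v

def issue(ident: bytes):
    """T's ElGamal signature (rD, sD) on h(D): h(D) = v*rD + kD*sD (mod p-1)."""
    while True:
        k = secrets.randbelow(p - 2) + 1
        if math.gcd(k, p - 1) != 1:
            continue
        r = pow(alpha, k, p)
        s = (h(ident) - v * r) * pow(k, -1, p - 1) % (p - 1)
        if s != 0:                     # re-choose kD if sD = 0
            return r, s

rA, sA = issue(b"A")
rB, sB = issue(b"B")

# eq. (16): rD^sD is recoverable from public data alone
RB = pow(alpha, h(b"B"), p) * pow(u, -rB, p) % p
RA = pow(alpha, h(b"A"), p) * pow(u, -rA, p) % p
assert RB == pow(rB, sB, p) and RA == pow(rA, sA, p)

# key agreement: A picks x and sends (rB)^x; B picks y and sends (rA)^y
x = secrets.randbelow(p - 2) + 1
y = secrets.randbelow(p - 2) + 1
K_a = pow(pow(rA, y, p), sA, p) * pow(RB, x, p) % p
K_b = pow(pow(rB, x, p), sB, p) * pow(RA, y, p) % p
assert K_a == K_b
```

Note that `pow(u, -rB, p)` relies on Python's modular-inverse support for negative exponents (3.8+).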
##### TABLE 3 Günther protocol (concrete)
A → B: A, rA (M6)
A ← B: B, rB, R̄B (M7)
A → B: R̄A (M8)
A recovers {PKδ(B, RB)}sT; B recovers {PKδ(A, RA)}sT (M9)
RA and RB are not transmitted by A and B, respectively, to the
other, but rather are computed from unprotected data transmitted
during the protocol and publicly available information (see (16)).
As it is assumed only T can produce a pair (rA, sA) satisfying (16)
for A, RA is viewed as a public key pre-certified (by construction)
by T; this pre-certification is idealized as a certificate. B is able to
reconstruct RA; its authenticity is implicit in the expected equality
of the resulting keys K computed by A and B. Here the idealization
is no longer restricted to data from message exchanges; idealization is extended to apply to the data computed by parties, i.e.
resulting from parties' actions within the protocol.³ The time-invariant parameters (A, rA) and (B, rB) transmitted in the concrete protocol are not protected cryptographically, but their integrity is implicitly verifiable by the identity-based nature of the scheme. The same is true of the exchanged cleartexts R̄A and R̄B — in fact, the protocol contains no messages which are explicitly cryptographically protected.
We next consider formal assumptions of the Günther protocol.
Those of Section 4 required by party A in the protocol are listed in
TABLE 4 in Section 8; analogous assumptions are required of B.
Here, since sA is a secret generated by T, (A5) also implies the
assumption that T is trusted to generate and securely transfer this
secret to A without disclosing it to any other party; the same holds
true for (A7). (A10) here applies to the validity of the secret signature pair (r, s) computed in the past by T, and used in the present as
| A | message | B |
|---|---------|---|
| T's signature (rA, sA) on h(A); sA secret | → A, rA | T's signature (rB, sB) on h(B); sB secret |
| generate random x, compute (rB)^x | ← B, rB, (rA)^y | generate random y, compute (rA)^y |
| compute RB = α^h(B) · u^(-rB) | → (rB)^x | compute RA = α^h(A) · u^(-rA) |
| compute K = ((rA)^y)^sA · (RB)^x | | compute K = ((rB)^x)^sB · (RA)^y |
Since (A, rA) and (B, rB) are constant across protocol runs (as well
as RA, and RB, for fixed parties A and B), if these are known a priori then the protocol may be reduced from three to two message
exchanges. In this case, the protocol more closely resembles the
Goss protocol, and can be made more similar if the multiplication "·" in the computation of K is replaced by an exclusive-or (⊕).
#### 7.1 Formal analysis of Günther protocol
We first idealize the protocol. R̄A = (rB)^x and R̄B = (rA)^y are viewed as uncertified time-variant keys of A and B, respectively. RA = (rA)^sA and RB = (rB)^sB are idealized as fixed public key-agreement keys² of A and B. These four quantities are then analogous to those of the same names in the Goss protocol of Section 6. A certificate housing the key RA is idealized as {PKδ(A, RA)}sT; the idealization, motivated by that of Goss, is given as (M6)-(M9).

¹ The protocol can be made independent of the ElGamal signature scheme, by using any suitable alternate method to generate a pair (r, s), where r is a public key, s a secret key, and r^s publicly recoverable.
² These might alternatively be viewed as fixed "public identity keys" rather than fixed "public key-agreement keys".
the certified public key r^s. Regarding the security of underlying algorithms in the Günther protocol, use of R31 in proofs of beliefs for this protocol requires the following assumptions about the key agreement function f: given values RA, RB, R̄A, and R̄B, as defined in Section 7.1, it is computationally infeasible to compute the key K without knowledge of (x, sA) or (y, sB); knowledge of one of
these pairs, together with the first four values, does not allow one
to recover the other pair; and a solution (r, s) to (14) requires
knowledge of v. Some redundancy is typically embedded in D of
(14) to preclude feasibility of attackers finding a solution by trial
and error.
We finally consider the formal goals of the protocol. Günther [13]
informally states that “the two parties construct keys which agree
if they are both legitimate and do both conform to the protocol.
The actual authentication is established when the decryption of the
message sent by the other party is meaningful”. Any demonstration
³ While related to the AT/GNY idea of "computable" possessions, this differs in that data is not only computed, but the result is idealized; the idealization of data in this protocol might be referred to as implicit signatures or implicit certificates.
of knowledge of the key (without compromising it) would serve
equally well. “Actual authentication” is thus not part of the
Günther key exchange per se. The intended formal goal is the same
as that of Goss, namely secure key establishment (G3): A |≡ A ↔K- B. The Günther protocol "has the advantage to generate a different key at each session" [13]; this is goal (G5). It was noted
that “Proving the security of this scheme seems to be outside the
scope of today’s methods”, and “the security could not be assessed
within the current terminology” (p.32 and 36 resp. in [13]). These
statements remain true, because the conclusions of logic analysis
rely on the robustness of underlying algorithms. Nonetheless,
given this, logic analysis establishes meaningful results about protocol security. We now outline these.
**Lemma 9 The Günther protocol provides secure key establish-**
ment, i.e. achieves goal (G3).
_Proof: [Outline only] Given the idealized form of the protocol, a_
proof analogous to that of Lemma 7 is as follows. After A
receives RB in (M7), as noted above A can compute B’s identity
public key RB: A has RB. The semantics of the protocol lead A
to conclude that this is B’s public key-agreement key. A also has
enough information to compute a joint key, denoted K, by R30:
A has K (see TABLE 3). To this point, our reasoning has established no properties of K; it is unqualified. Liberalizing the
BAN symbol sees to include “computes from available information” (i.e. using it interchangeably with the AT/GNY has), we
derive
Α sees { PKδ(B, RB) }sT (17)
Using (A1) for authenticity of T's public key KT = u = α^v, and (A2) which allows A to trust the public key RB computed from B, rB and u, (17) yields, by R13,
Α |≡ T said PKδ(B, RB) (18)
“Verification” of the signature in (17) may include verifying the
current validity of T’s public key u used in computing RB, and
checking for revoked certificates.[1] These considerations are
taken into account by (A10) which strengthens (18) to Α |≡ T |≡
_PKδ(B, RB). This and (A4) with R3 yields Α |≡_ _PKδ(B, RB)._
Combining this with A’s belief in the quality of the private keyagreement keys of both A and B — (A5) and (A7) — R31 then
provides: Α |≡ A ↔K- B. That is, upon protocol completion, A
believes that K is shared with no party other than possibly party
B. Here the fixed (certified) public key RB = (rB)[sB] plays the
role of B’s public key-agreement key in R31, A’s fixed secret
key sA plays the role of A’s private key-agreement key, and the
uncertified time-variant keys (x and RB) are secondary parameters for the key agreement function f.
❏
**Lemma 10** The Günther protocol provides key freshness, i.e.
achieves goal (G5), provided B does not choose y ≡ 0
(mod p-1).
_Proof: Similar to proof of Lemma 5._
❏
#### 8 Comparison of formal assumptions and goals
The formal analysis of the three protocols above allows a meaningful comparison to be made of their assumptions (TABLE 4) and
¹ However, true signature verification, or recognizability of a correct signature in (17), is not possible in this identity-based scheme. Instead, it is implicit: if the signature is invalid, the parties will not derive the same key K. This subsequent key confirmation is beyond the scope of the protocol as specified.
guarantees (TABLE 5) below. These tables highlight the fact that
the Günther and Goss protocols are identical with respect to formal
goals, and very similar with respect to formal assumptions. The
Günther protocol makes use of an identity-based scheme to
authenticate the Diffie-Hellman public key r[s], whereas Goss uses
explicit certificates to ensure their authenticity. The Günther protocol requires additional trust in the trusted party not to divulge userspecific secret keys. In Günther, with RB and RB as in Section 7,
the key computed by A can be written K = (RB)[x][.](RB)[sA]; i.e. B’s
fixed certified key-agreement key powered by A’s uncertified time
variant secret, times B’s uncertified time-varying exponential powered by A’s fixed certified secret key. Directly analogous in Goss
with RB and RB as in Section 6, the key computed by A can be
written K = (RB)[x] ⊕(RB)[x]; i.e. B’s fixed certified exponential powered by A’s time variant uncertified secret, combined with B’s time
variant exponential powered by A’s fixed certified secret. It is also
interesting to note that a neutral-party view of the Günther key is
as K = (RB)[x][.](RA)[y].
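As a sanity check on the neutral-party form K = (RB)^x · (RA)^y, the following toy computation (ours, not from the paper; tiny insecure parameters, and the symbol names tA, tB for the exchanged time-variant exponentials are our own) verifies that A’s view, B’s symmetric view, and the neutral-party view of a Günther-style key all agree:

```python
import random

# Toy check (illustrative only): RA = rA^sA and RB = rB^sB are the fixed
# certified values; x, y are time-variant secrets; tA = rB^x and tB = rA^y
# are the exchanged time-variant exponentials. All arithmetic mod a small
# prime p -- real use needs large, carefully chosen parameters.
p = 2**13 - 1                                  # 8191, a Mersenne prime
rng = random.Random(1)
rA, rB = rng.randrange(2, p), rng.randrange(2, p)
sA, sB = rng.randrange(2, p - 1), rng.randrange(2, p - 1)
x, y = rng.randrange(2, p - 1), rng.randrange(2, p - 1)

RA, RB = pow(rA, sA, p), pow(rB, sB, p)        # fixed certified keys
tA, tB = pow(rB, x, p), pow(rA, y, p)          # time-variant exchanged values

K_A = (pow(RB, x, p) * pow(tB, sA, p)) % p     # A's computation
K_B = (pow(RA, y, p) * pow(tA, sB, p)) % p     # B's computation
K_N = (pow(RB, x, p) * pow(RA, y, p)) % p      # neutral-party view

assert K_A == K_B == K_N
```

All three expressions expand to rB^(sB·x) · rA^(sA·y) mod p, which is why the views coincide.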
##### TABLE 4 Comparison of formal assumptions

| Assumption description¹ | STS | Goss | Günther |
| --- | --- | --- | --- |
| integrity of T’s public key | (A1) | (A1) | (A1) |
| quality of T’s private key | (A2) | (A2) | (A2) |
| control of binding sig. key | (A3) | — | — |
| quality of own k.a. key | (A5) | (A5)² | (A5)³ |
| quality of other’s sig. key | (A6) | — | — |
| quality of other’s k.a. key | (A7) | (A7)⁴ | (A7)⁵ |
| control of binding k.a. keys | (A8) | (A4) | (A4) |
| freshness of own nonce | (A9) | (A9) | (A9) |
| ability to validate certificates | (A10) | (A10) | (A10) |

¹ T = trusted authority; k.a. = key-agreement key; sig. = signature
² required for both A’s fixed secret x̄ and variant secret x
³ required for both A’s fixed secret sA and variant secret x
⁴ required for both B’s fixed secret ȳ and variant secret y
⁵ required for both B’s fixed secret sB and variant secret y
##### TABLE 5 Comparison of formal goals

| Formal Goal | STS | Goss | Günther |
| --- | --- | --- | --- |
| far-end operative (G1) | yes | — | — |
| entity authentication (G2) | yes | — | — |
| secure key establishment (G3) | yes | yes | yes |
| key confirmation (G4) | yes | — | — |
| key freshness (G5) | yes | yes | yes |
| mutual belief in shared key (G6) | yes | — | — |
The assumptions from Section 4 not required in the Goss and
Günther protocols are (A3) and (A6) — since individual parties do
not have their own signature key pairs — and (A8), replaced by
(A4). However, as noted in the table, Goss and Günther require
(A5) and (A7) twice each.
Consider now the goals of the Goss (and Günther) protocols relative to those of STS. (Comments about Goss apply equally to Günther.) Goss results in the creation of a shared secret key which can be known to no one else aside from the intended party B, but does not provide proof of aliveness (G1); there is no freshness evident from the far-end’s message. Goss does not provide entity authentication in the sense of (G2); it is not evident that B’s message is either targeted to a specific party, or in response to a specific challenge. Finally, the Goss protocol does not set out to provide key confirmation (G4). While this allows an intruder to replay old messages and “complete” a fraudulent protocol run, fooling another principal into believing the run was successful, this is not a serious threat in practice: the intruder gains no real advantage, since it cannot compute the resulting key, as will become evident once key usage commences. These missing goals can be easily provided in the Goss (or Günther) protocol by an additional message employing the established key, e.g. via encryption or a MAC.
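Such an added key-confirmation message might, for instance, be a MAC computed under a key derived from the agreed secret. The sketch below is ours, not the paper’s; the helper names (`derive_keys`, `confirm_tag`) and the SHA-256-based derivation are illustrative assumptions only.

```python
import hashlib
import hmac
import os

def derive_keys(shared_secret: bytes) -> tuple[bytes, bytes]:
    """Split the agreed secret K into a session key and a confirmation key."""
    session = hashlib.sha256(b"session" + shared_secret).digest()
    confirm = hashlib.sha256(b"confirm" + shared_secret).digest()
    return session, confirm

def confirm_tag(confirm_key: bytes, role: bytes, transcript: bytes) -> bytes:
    """MAC over the sender's role and the protocol transcript so far."""
    return hmac.new(confirm_key, role + transcript, hashlib.sha256).digest()

# Both parties hold the same K after the implicitly-authenticated agreement.
K = os.urandom(32)
transcript = b"msg1||msg2"                 # messages exchanged so far
_, ck_A = derive_keys(K)
_, ck_B = derive_keys(K)

tag_B = confirm_tag(ck_B, b"B", transcript)        # extra message, B -> A
# A verifies: only a party knowing K could have produced this tag, so a
# replayed old run would fail confirmation here.
assert hmac.compare_digest(tag_B, confirm_tag(ck_A, b"B", transcript))
```

Binding the sender’s role into the tag prevents the trivial reflection of a party’s own confirmation message back to it.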
The above analysis also allows us to compare these protocols with
the X.509 two-way authentication protocol previously analyzed
using BAN [11]. Due to space limitations here, the reader is
referred to [11] for a description of the protocol, the formal
assumptions it requires (α1 through α7), and the formal goals (Γ1
through Γ12). The X.509 three-way authentication protocol is more
closely related to the above than the two-way protocol; the major
difference is the use of timestamps (two-way) vs. random numbers
(three-way). Both may be modified to accomplish Diffie-Hellman
key agreement, although this was not their original purpose; in the
three-way protocol, this may be done by replacing the X.509-specified “non-repeating numbers” rA and rB with Diffie-Hellman
exponentials as in the protocols of the present paper.
Two assumptions in the X.509 logic analysis (α5 and α6) require
that parties believe in the freshness of their opponent’s timestamps
and are able to check freshness in practice; the latter requires synchronized and secure time clocks. This is more demanding than
(A9) above, which requires belief in the freshness of self-generated nonces. The X.509 analysis in [11] reflects an alternative to
(A10) for handling public key distribution and checking the current validity of certificates in actual systems. “Duration stamps”
and the ensuing requirement of α7 (assumption that the trusted
party will not deliver certificates with invalid duration periods)
were introduced to handle certificates having lifetimes spanning
across protocol runs. The protocols analyzed in the current paper
avoid use of timestamps, and thus certificate analysis necessarily
differs. Aside from these differences, and the handling of certificates in X.509 as discussed above obviating (A4) and (A8), the
X.509 analysis shows formal assumptions analogous to those in
TABLE 4.
Regarding formal goals of the X.509 protocol, (G1) is attained, as
is a goal similar to (G2) regarding entity authentication (namely
Γ5). A goal related to (G3) regarding secure key establishment was
intended (Γ11), but formal analysis revealed a technical problem in
reaching this goal [11] (and thus (G6) also). Key confirmation (G4)
was not intended as an original X.509 goal, nor was key freshness
(G5), although the latter follows from Diffie-Hellman key agreement.
A comparison of the number of message exchanges required in the
various protocols, excluding initial exchanges required for parties
to acquire their own certificate, is given in TABLE 6. As implied
by their names, the X.509 two- and three-way protocols require 2
and 3 messages, respectively; each requires one or more additional
messages if the optional X.509 encryption field is utilized to
exchange encrypted data.
##### TABLE 6 Comparison of number of messages

| STS | Goss | Günther |
| --- | --- | --- |
| 3 | 2 | 3¹ |

¹ Can be reduced to 2 if fixed information is known a priori.
#### 9 Concluding remarks
Several extensions and refinements applicable to BAN-like logics
have been proposed to facilitate examination of beliefs and goals
in authenticated key agreement protocols. Analysis using the
extended logic has allowed direct comparison of the assumptions
and goals of four authentication protocols. This highlighted the
similarities between the Goss and Günther protocols, and qualitative differences between protocols providing key confirmation
(e.g. STS) and those giving secure key establishment with implicit
authentication (e.g. Goss, Günther).
While the most obvious objective of this method of formal analysis
is to establish whether specified goals are achieved, this is only one
of many benefits. The exercise forces one to identify, and express
in precise detail, these goals; this is important in the absence of a
universal definition of authentication. It also forces one to explicitly record the precise assumptions under which a protocol must
operate. Furthermore, the exercise may in some cases result in
more accurate specification of a protocol itself, as it requires
detailed consideration of all protocols steps. For these reasons, and
to allow meaningful comparisons, we feel there should be an onus
on protocol designers to provide the results of such analysis concurrent with the proposal of a new protocol.
While many of these benefits might equally be achieved without
logic techniques, the formality is a useful tool providing a template
to follow, and a vocabulary for precise discussion of assumptions
and goals. However, as noted elsewhere, we emphasize these techniques do not (yet) provide an “automated theorem prover”; while the proofs in the logic themselves follow easily once a protocol is idealized and the requisite assumptions and goals are specified, the critical steps of capturing the assumptions and goals, and of idealizing the protocol, do not appear amenable to automation, and themselves require proof of correctness. Nonetheless, recent advances by others that allow automation, or partial automation, of one or more of these stages are encouraging.
#### Acknowledgments
Conversations with and/or comments from Li Gong, Lynn Marshall, Rainer Rueppel, Paul Syverson, Michael Wiener, and anonymous referees are gratefully acknowledged.
#### References
[1] M. Abadi, M. Tuttle. “A semantics for a logic of authentication”. Proc. 1991 ACM Symp. on Principles of Distributed Computing, 201-216.
[2] R. Bird, I. Gopal, A. Herzberg, P. Janson, S. Kutten, R. Molva, M. Yung. “Systematic design of two-party authentication protocols”. Advances in Cryptology — CRYPTO’91, Lecture Notes in Computer Science 576, J. Feigenbaum (ed.), Springer-Verlag, 1991, 44-61.
[3] M. Burrows, M. Abadi, R. Needham. “A logic of authentication”. ACM Trans. Computer Systems 8 (Feb. 1990), 18-36.
[4] M. Burrows, M. Abadi, R. Needham. “A logic of authentication”. Digital Systems Research Centre, SRC Report #39 (1990 Feb. 22).
[5] CCITT Blue Book, Recommendation X.509: The Directory — Authentication Framework (1988). Also ISO 9594-8.
[6] W. Diffie, M. Hellman. “New directions in cryptography”. IEEE Transactions on Information Theory, vol. IT-22 (1976), 644-654.
[7] W. Diffie, P. van Oorschot, M. Wiener. “Authentication and authenticated key exchanges”. Designs, Codes and Cryptography 2 (Jun. 1992), 107-125.
[8] T. ElGamal. “A public key cryptosystem and a signature scheme based on discrete logarithms”. IEEE Transactions on Information Theory, vol. IT-31 (1985), 469-472.
[9] V. Gligor, R. Kailar, S. Stubblebine, L. Gong. “Logics for cryptographic protocols — virtues and limitations”. Proc. IEEE 1991 Computer Security Foundations Workshop (Franconia, New Hampshire).
[10] L. Gong, R. Needham, R. Yahalom. “Reasoning about belief in cryptographic protocols”. Proc. 1990 IEEE Symp. on Security and Privacy (Oakland, California), 234-248.
[11] K. Gaarder, E. Snekkenes. “Applying a formal analysis technique to the CCITT X.509 strong two-way authentication protocol”. J. Cryptology 3 (Jan. 1991), 81-98.
[12] K.C. Goss. Cryptographic method and apparatus for public key exchange with authentication. U.S. Patent 4,956,863 (granted 1990 Sept. 11).
[13] C. Günther. “An identity-based key-exchange protocol”. Advances in Cryptology — EUROCRYPT’89, Lecture Notes in Computer Science 434, J.-J. Quisquater and J. Vandewalle (eds.), Springer-Verlag, 1990, 29-37.
[14] R. Kailar, V. Gligor. “On belief evolution in authentication protocols”. Proc. IEEE 1991 Computer Security Foundations Workshop (Franconia, New Hampshire), 103-116.
[15] W. Mao, C. Boyd. “Towards formal analysis of security protocols”. Proc. Computer Security Foundations Workshop VI (Franconia, New Hampshire, June 1993), 147-158, IEEE Computer Society.
[16] D.M. Nessett. “A critique of the Burrows, Abadi and Needham logic”. Operating Systems Review 24 (1990), 35-38.
[17] C.P. Waldvogel, J.L. Massey. “The probability distribution of the Diffie-Hellman key”. Presented at AUSCRYPT’92 (Dec. 1992).
#### Appendix A: Authentication logic background
This section reviews the BAN logic [3] including refinements by
GS [11]. Proofs are constructed in the logic by a four-stage process. First the protocol is “idealized” — the actual or concrete protocol is expressed as a sequence of formal steps (A → B: X) where
A and B are the communicating entities and X is a statement in the
syntax of the logic. Second, the assumptions under which the protocol operates are identified and formally expressed. Third, the
goals of the protocol are identified and formally expressed. Finally,
a proof is constructed showing that given the basic assumptions,
upon observing the proper protocol messages the parties involved
are able, through the logical postulates (see below), to arrive at a
state where they believe the formal goals. The nature of the logic
analysis depends heavily on the precise details of the formalization
of initial assumptions, the idealization of the protocol, and the formalization of goals. Unfortunately, transformation of the protocol
into an idealized form cannot itself be automated, nor proven to be
correct. For these reasons, the idealization is recognized as the
most critical step.
We first review the basic notation — the logic symbols and their
informal semantics. A and B are parties (principals) involved in
the protocol, X is a statement, and K is a cryptographic key.
A once_said X A once sent the message X. This could have
been in either the current protocol run or a previous
run. In BAN it is understood that a principal only
says things which he believes.
A |≡ X A believes X (or is entitled to believe X). If X is a
data value rather than a statement, “A believes X” is
best interpreted as “A believes X is true” or “A said
X in the current epoch”.
A controls X A has jurisdiction over X; A can be trusted on
this matter. If you believe that A believes X, and if
you trust A on X, then you can believe X also.
A sees X A observes message X. A sees X if X arrives as a
protocol message.
A ↔K B A shares the good key K with B. The key is suitable
for use as a cryptographic key in that only A and B
know it, it will not be disclosed to others, and it
cannot be deduced by others.
_fresh(X)_ X is recent, and has not been seen prior to its present
use. Time is broken into two periods: the present
(the current epoch, beginning with the start of the
current protocol run) and the past. X is fresh if it is
not the replay of a message from the past. Formulas
generated specifically for the purpose of being fresh
are called nonces.
In the protocols examined in the present paper, so-called “random
numbers” serve as nonces. The critical properties are that they be
unpredictable, and drawn from a sufficiently large space, with vanishing probability of repetition. Provided such numbers are not
intentionally re-used, and are generated using an appropriate number generator, the probability that such a number duplicates a previous such number is vanishingly close to zero, and for practical
purposes can be assumed equal to zero.
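The unpredictability and large-space requirements above are easy to meet with a cryptographic random source; the sketch below is ours, not the paper’s, and simply illustrates such nonce generation in Python.

```python
import secrets

# Illustrative sketch: nonces as unpredictable values from a large space.
# With 128-bit nonces, the probability that n nonces contain a repeat is
# about n*(n-1)/2**129 (birthday bound): negligible for practical n.

def new_nonce(nbytes: int = 16) -> bytes:
    """Return an unpredictable nonce drawn from a space of 2**(8*nbytes) values."""
    return secrets.token_bytes(nbytes)

sample = [new_nonce() for _ in range(1000)]
assert len(set(sample)) == len(sample)   # no repeats in this small sample
```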
PK(K,A) A has associated with it the good public key K. The
key is good in the sense that K is A’s authentic public key, and there exists a unique (public, private)
key pair corresponding to K.
∏(A) A has associated with it some good private key. The
key is good in the sense that it is known by no one
else, nor can it be deduced by anyone else.
R(A, X) A is the intended recipient of X.
The last three constructs are from GS. BAN uses a construct similar to PK(K,A) with semantics “A has K as a public key”, implicitly defining a corresponding private key K^(-1) never discovered by
anyone aside from A. To embrace the idea of a signed message and
an encrypted message, we use the following notation and semantics (notation for the first differs from BAN):
{X}sA The signature of A on X, using A’s private signature
key. Note that X is not in general recoverable from
{X}sA, depending on the type of signature mechanism used and the possible hashing before signing.
{X}K Data resulting from encipherment of X under symmetric key K, with a fixed symmetric encipherment
algorithm assumed. Where relevant (e.g. R1 below),
this is short for {X}K from R, and in BAN it is
assumed a party can distinguish its own enciphered
formulae from those generated by other parties R.
BAN logic establishes beliefs a party is entitled to when all protocol steps are successful. In proofs, B sees {X}sA should be taken to
mean B has received a message containing a data item Y, and has
verified the format of Y to be that of a signature on X using a key
corresponding to the public signature key which B associates with
A. It is implicitly assumed that B possesses this public key, and
that there is sufficient information available, or redundancy in X,
to allow signature verification. Such verification itself appears in
BAN only implicitly (e.g. see R13 below). Similarly, A sees {X}K
is taken to mean A has received a message containing a data item
Y, and has verified the format of Y to be that of the encryption of X
under key K; again it is assumed there is sufficient a priori knowledge or redundancy to allow verification that K is the correct key,
and verification itself appears only implicitly (e.g. R1 below). The
GNY “recognizability” construct (see Section 2) addresses this
explicitly.
For reference, and to put our work in perspective, we now list a
subset of BAN inference rules previously proposed. The rules are
logical postulates which allow proofs to be constructed. Of those
below, all but R13 (which is from [11]) are from the original BAN
logic; R1 through R13 are numbered as in [11] for cross-reference.
The first rule R1 is read as follows: If A believes that A shares a
good key K with B, and if A sees a message X encrypted under key
K (which she herself did not encrypt), then A believes that B once
said X.
R1. (Message meaning rule for shared keys)
    A |≡ A ↔K B, A sees {X}K from U   (where U ≠ A)
    A |≡ (B once_said X)

R2. (Nonce-verification rule)[1]
    A |≡ fresh(X), A |≡ (B once_said X)
    A |≡ (B |≡ X)

R3. (Jurisdiction rule)
    A |≡ (B controls X), A |≡ (B |≡ X)
    A |≡ X

R4. (Belief aggregation)
    A |≡ X, A |≡ Y
    A |≡ (X, Y)

R5. (Belief projection)
    A |≡ (X, Y)
    A |≡ X

R6. (Mutual belief projection)
    A |≡ (B |≡ (X, Y))
    A |≡ (B |≡ X)

R7. (Once-said projection)
    A |≡ (B once_said (X, Y))
    A |≡ B once_said X, A |≡ B once_said Y

R10. (Sight projection)
    A sees (X, Y)
    A sees X, A sees Y

R12. (Freshness propagation rule)[2]
    A |≡ fresh(X)
    A |≡ fresh(X, Y)

R13. (Message meaning rule for public signature keys)[3]
    A |≡ PK(B, K), A |≡ ∏(B), A sees {X}sB
    A |≡ (B once_said X)

R21. (Message decryption rule for symmetric keys)
    A |≡ A ↔K B, A sees {X}K
    A sees X

R22. (Message decryption rule for unqualified keys)[4]
    A has K, A sees {X}K
    A sees X

R23. (Hash function rule)
    A |≡ (B once_said H(X)), A sees X
    A |≡ (B once_said X)
    where H() is an appropriate hash function.

1 X must be fresh in R2 as in BAN a party is bound to its beliefs (only) for the duration of a single protocol run.
2 R12 implies that if part of a formula is fresh, the entire formula is. Note a non-fresh formula cannot be made fresh by concatenating it to a fresh formula; here (X, Y) is a message unit whose integrity is protected, e.g. cryptographically.
3 R13 assumes a message X can be recovered from a signature on it (i.e. signature with message recovery, no hashing) and requires possession of the corresponding public key. If the former is not possible, a pre-condition is A has X.
4 Modified slightly from BAN, to make use of the GNY/AT “has” construct (see Section 2).
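To make the mechanical nature of such postulates concrete, here is a toy forward-chaining closure over a few of the rules (R1, R2, R10). The tuple encoding, function name, and restriction to three rules are our own illustrative choices, not part of the logic as published.

```python
# Toy forward chaining over R1, R2 and R10, from A's point of view.
# Statements are nested tuples; `beliefs` and `sees` grow to a fixed point.

def close(beliefs: set, sees: set) -> set:
    """Apply R1, R2 and R10 until no new facts are derivable."""
    changed = True
    while changed:
        changed = False
        new_b, new_s = set(), set()
        # R10 (sight projection): A sees (X, Y)  =>  A sees X, A sees Y
        for f in sees:
            if f[0] == "pair":
                new_s |= {f[1], f[2]}
        # R1 (message meaning): A |= A<->K B, A sees {X}K  =>  A |= B once_said X
        for f in sees:
            if f[0] == "enc":
                _, key, body = f
                for b in beliefs:
                    if b[0] == "shares" and b[1] == key:
                        new_b.add(("once_said", b[2], body))
        # R2 (nonce verification): A |= fresh(X), A |= B once_said X  =>  A |= B |= X
        for b in beliefs:
            if b[0] == "once_said" and ("fresh", b[2]) in beliefs:
                new_b.add(("believes", b[1], b[2]))
        if not new_b <= beliefs or not new_s <= sees:
            beliefs |= new_b
            sees |= new_s
            changed = True
    return beliefs

# Example: A shares K with B, believes the nonce Nb fresh, and sees {Nb}K.
beliefs = {("shares", "K", "B"), ("fresh", "Nb")}
sees = {("enc", "K", "Nb")}
derived = close(beliefs, sees)
assert ("believes", "B", "Nb") in derived     # A |= B |= Nb
```

This mirrors the point made in the text: once idealization and assumptions are fixed, the derivations themselves are routine; the hard, non-automatable work lies in choosing them.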
Journal of Computer Science 8 (4): 523-527, 2012
ISSN 1549-3636
© 2012 Science Publications
# Computation of Private Key Based on Divide-By-Prime for Luc Cryptosystems
Zulkarnain Md Ali and Nawara Makhzoum Alhassan Makhzoum
School of Computer Science, Faculty of Information Science and Technology,
University Kebangsaan Malaysia, 43600 Bangi, Selangor Darul Ehsan, Malaysia
**Abstract: Problem statement: One of the public key cryptosystem is Luc cryptosystems. This system**
used Lucas Function for encryption and decryption process. Lucas Function is a special form of
second-order linear recurrence relation. An encryption process is used to encrypt an original message to
ciphertext by using public key. A decryption process is the process to decrypt a ciphertext into original
message using private key. The existing algorithm on computing private key computation involved
some redundant computations. Approach: In this study, an efficient algorithm to compute private key
for Luc cryptosystem is developed. The Extended Euclidean Algorithm will be enhanced by
implementing Divide-By-Prime in its computations. The comparison is focused on the computation
time by the existing and new algorithms. The more efficient algorithm means the better computation
time. The shorter computation time the better algorithm. **Results: A new algorithm shows better**
computation time. In all experiments, the computation time by new algorithm is always better than the
existing algorithm. **Conclusion:** The new computation algorithm that based on Divide-By-Prime
provided better efficiency of decryption process compared to the existing algorithm.
**Key words: Luc cryptosystem, decryption process, private key**
**INTRODUCTION**
A public key cryptosystem enables secret communication between a sender and a receiver without requiring a prior secret key exchange, and it can be used to create digital signatures (Diffie and Hellman, 1976).
Public key cryptosystems are widely used around the world, enabling information to be transmitted securely over the Internet. An encryption process obtains the ciphertext C from the original message P using the public key e. In reverse, the decryption process recovers the original message P by decrypting the ciphertext C using the private key d.

In fact, the encryption of P is relatively easy, since the plaintext P and public key e are known publicly. Knowledge of the two primes p and q is not needed, because only their product N = pq is made public.

On the other hand, decryption is not as easy as encryption, because the private key is hidden from the public and is difficult to obtain. The strength of the cryptosystem depends on the length of the public key e and the two primes p and q. Increasing the size of these parameters also increases the time required for the decryption computation.
Diffie and Hellman (1976) introduced the concept
of public key cryptography, which opened up a whole
new research field within the cryptographic community.
One of the first public key cryptosystems, and probably the most widely used, is RSA (Rivest et al., 1978). RSA raises a message block to a very large power and reduces the result modulo N, where N is the product of two large primes p and q.
Smith and Lennon (1993) then introduced a new public key cryptosystem based on the Lucas Function, which is believed to offer a better alternative to RSA. It uses Lucas functions to perform encryption and decryption instead of the exponentiation technique.
Several researchers have since studied the Luc cryptosystem and introduced fast computation algorithms (Horster et al., 1996; Ali et al., 2007; Othman et al., 2008).
As in RSA, there is a difficult underlying mathematical problem in Luc; although Luc uses Lucas functions, its security is still based on a problem analogous to the Discrete Logarithm (DL) problem (Smith and Lennon, 1993).
Sometimes, implementation of Lucas Functions
**Corresponding Author:** Zulkarnain Md Ali, School of Computer Science, Faculty of Information Science and Technology, University Kebangsaan Malaysia, 43600 Bangi, Selangor Darul Ehsan, Malaysia
ciphers involves a large computational time cost.

In this study, the proposed algorithms are compared with an existing algorithm proposed in (Ali et al., 2009). Three sets of data are tested on each algorithm; these data are categorized by different sizes of messages, public keys and the two primes. The proposed algorithms and the existing algorithm are tested on every set of data, and the computation time is recorded for each algorithm. The computation time of each algorithm decides which algorithm is better in terms of speed and efficiency.

**MATERIALS AND METHODS**

**Lucas Functions:** Two functions Vn and Un are defined in Lucas sequences as follows:

V0 = 2, V1 = P; Vn = P·Vn-1 − Q·Vn-2 for n ≥ 2
U0 = 0, U1 = 1; Un = P·Un-1 − Q·Un-2 for n ≥ 2

The computation of Vn needs huge computations in view of the fact that the nature of Lucas Functions is a recurrence relation: computing Vn requires the two previous values, and the primary values have to be V0 and V1.

**Encryption and Decryption processes for Luc cryptosystem:** The ciphertext C is obtained by encrypting the plaintext P by: Enc(P) = Ve(P, 1) (mod N) = C (mod N), where e is a public key and Ve is a Lucas Function. On the other hand, the decryption process is: Dec(C) = Vd(C, 1) = Vd(Ve(P, 1), 1) = Ved(P, 1) = P (mod N), where d is the private key and Vd is a Lucas Function.

**Lucas functions properties:** There are properties of the Lucas Function which are useful for the encryption and decryption processes (Smith and Lennon, 1993; Horster et al., 1996), Eq. 1-4:

Vn = P·Vn-1 − Vn-2,    (1)
V2n = Vn^2 − 2Q^n,    (2)
V2n-1 = Vn·Vn-1 − P·Q^(n-1),    (3)
V2n+1 = P·Vn^2 − Q·Vn·Vn-1 − P·Q^n    (4)

The initial values are V0 = 2 and V1 = P, where D = P^2 − 4Q is the discriminant.

**Important Number Theory Techniques:** The very basic and important number theory techniques required are discussed briefly in the following sections.

**Legendre Symbol (LS):** The Legendre symbol is a multiplicative function with values 1, −1, or 0. If a is an integer and p is an odd prime, the Legendre symbol (a/p) is:

- 0 if p divides a; else
- 1 if a is a quadratic residue modulo p,
- −1 if a is a quadratic non-residue modulo p.

Some properties of the Legendre symbol can speed up its computation:

- Let p be an odd prime; then (ab/p) = (a/p)·(b/p)
- If a ≡ b (mod p), then (a/p) = (b/p)
- For b prime to a, (ab^2/p) = (a/p)
- (a^2/p) = 1
- (−1/p) = (−1)^((p−1)/2)
- If p and q are odd primes, then (p/q) = (q/p)·(−1)^(((p−1)/2)·((q−1)/2))

**Least Common Multiple (LCM):** The Least Common Multiple of two integers x and y is the smallest positive integer which is divisible by both x and y, i.e., which x and y divide without remainder (Knuth, 1981). To find the LCM using division by primes:

- Divide all the numbers by the smallest prime which can divide any of them at the same time
- Continue in the same way with successive primes until no prime divides both remainders
- Finally, multiply all the primes used with the last remainder from each number

Let us find the Least Common Multiple of 1092 and 1170. Refer to Fig. 1, which illustrates the use of Divide-By-Prime. The LCM of 1092 and 1170 is 2·3·13·14·15 = 16380. See the example below.
Fig. 1: The use of division by primes to find the least common multiple

Fig. 2: Extended Euclid Algorithm

Fig. 3: An existing algorithm for computing private key

Fig. 4: Existing algorithm to compute private key d

**Extended Euclidean Algorithm (EEA):** The Extended Euclidean Algorithm is used to find the Greatest Common Divisor (GCD) of two integers a and b; it is an extension of the Euclidean Algorithm (Fig. 2). It can also be used to find the integers x and y in ax + by = GCD(a, b). This is a useful technique when a and b are co-prime, because x is then the modular multiplicative inverse of a modulo b (Silverman, 2006; Knuth, 1981).
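The EEA and the modular inverse it yields can be sketched as follows (our own illustrative implementation, with `extended_gcd` returning the Bézout coefficients x and y described above):

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b != 0:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

def mod_inverse(a: int, m: int) -> int:
    """Inverse of a modulo m; exists only when gcd(a, m) == 1."""
    g, x, _ = extended_gcd(a, m)
    if g != 1:
        raise ValueError("inverse does not exist")
    return x % m
```

For the paper’s running numbers, `mod_inverse(1109, 16380)` gives the private key d = 6809, since 1109 · 6809 ≡ 1 (mod 16380).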
**Current Methods:** A private key d for the Luc cryptosystem can be computed by following these steps:

- de ≡ 1 (mod r), where e is the public key
- r = LCM(x, y)
- x = p − LS(p) and y = q − LS(q)
- LS(p) = (D/p) and LS(q) = (D/q)
- D = C^2 − 4

Note that LCM is the least common multiple, LS is the Legendre symbol, D is the discriminant, C is the ciphertext, e is the public key and d is the private key.

This technique suffers from many computations and requires more computation time: the computation of the private key d uses slower techniques such as the Least Common Multiple (LCM), which uses the Greatest Common Divisor (GCD), and the Legendre Symbol (LS), which uses the computation of powers. An enhanced computation of Legendre Symbols can therefore be used in designing a proposed technique of computing the private key. The new approach to computing Legendre Symbols is shown in detail in Fig. 4 below.

Fig. 5: New approach of computing Legendre Symbols (LS)

**Proposed methods:** The weakness of the existing algorithm is its use of slower algorithms to compute the LCM and LS. By reducing the time consumed in the private key computation of the Luc cryptosystem, the performance of the decryption process can be improved. The results of these proposed algorithms are compared with the existing algorithm (Fig. 3). Once the computation of the Legendre Symbols is done, the computation of the private key continues with the computation of the Least Common Multiple.
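One common way to realise the LS(p) = (D/p) step is Euler’s criterion, (a/p) ≡ a^((p−1)/2) (mod p) for an odd prime p. The sketch below is our own illustration of that criterion, not a transcription of the paper’s figure:

```python
def legendre(a: int, p: int) -> int:
    """Legendre symbol (a/p) for odd prime p, via Euler's criterion."""
    t = pow(a, (p - 1) // 2, p)
    return -1 if t == p - 1 else t     # result is 0, 1 or -1

# 2 is a quadratic residue mod 7 (3*3 = 9 ≡ 2); 3 is not; 14 ≡ 0 (mod 7).
assert legendre(2, 7) == 1
assert legendre(3, 7) == -1
assert legendre(14, 7) == 0
```

With fast modular exponentiation this costs O(log p) multiplications, which is exactly why speeding up the LS and LCM steps dominates the private-key computation time discussed above.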
- Calculate plaintext
- vd(c,1) = v6809(15407,1) = 11111
There are three data set used three different
experiments by changing the size of one variable and
fixing the others. Three set of data are different size of
public key e, different size of primes and different size
of message. All details criteria on each set of data are
explained here.
Set 1: Different sizes of public key e: 99, 159, 199, 339 and 539 digits, while the sizes of p and q are 100 digits and the size of the plaintext P is 5 digits.

Set 2: Different sizes of plaintext P: 20, 80, 100, 160 and 200 digits, while the sizes of p and q are 100 digits and the size of e is 19 digits.

Set 3: Different sizes of primes p and q: 40, 60, 80, 90 and 100 digits, where the size of e is 159 digits and the size of P is 20 digits.
Fig. 6: Proposed algorithm for Least Common Multiple (LCM) using Divide-by-Prime (DbP)

This technique is based on the method of Divide-by-Prime and is called DbP. The details of this technique are shown in Algorithm 5. Recall that in Fig. 5, x = LS and y = e, where LS is found from Fig. 4 and e is the public key. The result of Fig. 5 and 6 is R, and R is the private key.

The following tables give the decryption computation time of both the existing algorithm and the proposed algorithm in different situations. Table 1 shows the decryption computation time for different sizes of the public key. The table clearly shows that increasing the size of the public key increases the computation time for both algorithms.
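Fig. 6 itself is not reproduced in the extracted text; one plausible reading of the divide-by-prime idea is to factor both arguments by trial division and multiply each prime to its maximum exponent (a sketch under that assumption, names ours):

```python
from math import gcd

def lcm_by_primes(a, b):
    """LCM via division by primes: factor both numbers by trial
    division, then multiply each prime to its maximum exponent."""
    def factorize(n):
        f, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                f[d] = f.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            f[n] = f.get(n, 0) + 1
        return f
    fa, fb = factorize(a), factorize(b)
    out = 1
    for p in set(fa) | set(fb):
        out *= p ** max(fa.get(p, 0), fb.get(p, 0))
    return out

# the paper's example: r = LCM(1092, 1170) = 16380
assert lcm_by_primes(1092, 1170) == 16380
assert lcm_by_primes(1092, 1170) == 1092 * 1170 // gcd(1092, 1170)
```

Unlike the GCD route, this variant never touches numbers larger than the inputs' prime factors, which is one way such a method can skip redundant work.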
**RESULTS**

The sender computes the ciphertext C from the original message P. Consider P = 11111, p = 1093, q = 1171 and e = 1109. Computing C means computing V1109(11111,1) = 15407, so C = 15407. To recover the plaintext P, the receiver starts from the ciphertext C. The following steps show how the proposed algorithms work:

- Ciphertext C = 15407
- Discriminant computation: D = C^2 - 4
- Legendre symbol (D/p) = (D/1093) = 1
- Legendre symbol (D/q) = (D/1171) = 1

Calculate r, where:

- r = LCM((1093-1), (1171-1))
- Using Division by Prime to calculate r: r = LCM(1092, 1170) = 2·3·13·14·15 = 16380
- Using the EEA, find the private key d from e·d = 1 (mod 16380)
- That is, 1109·d = 1 (mod 16380)
- Finally, d = 6809

Table 1: Decryption computation time on different public key sizes

e     p & q   P   d     Existing (Seconds)   DbP (Seconds)
99    100     5   199   47.59                35.24
159   100     5   199   47.69                35.67
199   100     5   199   60.76                36.49
339   100     5   199   85.85                67.61
539   100     5   199   91.28                68.28

Table 2: Decryption computation time on different sizes of messages

P     p & q   e    d     Existing (Seconds)   DbP (Seconds)
20    100     19   198   89.71                66.86
80    100     19   198   90.21                67.75
100   100     19   198   90.38                67.87
160   100     19   198   90.56                67.98
200   100     19   199   98.16                68.14

Table 3: Decryption computation time on different sizes of primes

p & q   P    e     d     Existing (Seconds)   DbP (Seconds)
40      20   159   77    12.80                5.51
60      20   159   119   34.73                13.07
80      20   159   159   41.14                20.26
90      20   159   178   45.17                26.20
100     20   159   198   53.01                30.92
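The worked example can be checked end to end with a short sketch (our code; the Lucas sequence V is computed with the standard doubling ladder, and Python's modular inverse stands in for an explicit Extended Euclidean Algorithm):

```python
from math import gcd

def lucas_v(k, P, n):
    """V_k(P, 1) mod n via the doubling ladder:
    V_2m = V_m^2 - 2,  V_2m+1 = V_m * V_(m+1) - P  (since Q = 1)."""
    v0, v1 = 2 % n, P % n                # (V_0, V_1)
    for bit in bin(k)[2:]:
        if bit == "1":
            v0, v1 = (v0 * v1 - P) % n, (v1 * v1 - 2) % n
        else:
            v0, v1 = (v0 * v0 - 2) % n, (v0 * v1 - P) % n
    return v0

p, q, e, msg = 1093, 1171, 1109, 11111
N = p * q                                # 1279903
C = lucas_v(e, msg, N)                   # encryption: C = V_e(P, 1) mod N

D = C * C - 4                            # discriminant of the ciphertext
ls_p = 1 if pow(D % p, (p - 1) // 2, p) == 1 else -1   # Legendre (D/p)
ls_q = 1 if pow(D % q, (q - 1) // 2, q) == 1 else -1   # Legendre (D/q)
r = (p - ls_p) * (q - ls_q) // gcd(p - ls_p, q - ls_q) # LCM
d = pow(e, -1, r)                        # e * d = 1 (mod r); the paper
                                         # reports C = 15407 and d = 6809

assert lucas_v(d, C, N) == msg           # decryption recovers P = 11111
```

The assertion holds because e·d ≡ 1 (mod lcm(p − (D/p), q − (D/q))) gives V_d(V_e(P,1)) ≡ P (mod N), the LUC analogue of RSA's correctness argument.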
From the three tables above, the existing algorithm suffers long computation times for the decryption process, while the proposed algorithm requires much less computation time.

The results in Table 1-3 are based on the running time of each algorithm in the C language in a Windows 7 environment, on an Intel Core 2 Duo Processor P8700 (2.53 GHz) with 3 GB of RAM. All computation times are in seconds.

**DISCUSSION**

Although the new algorithm reduces the time consumed and thereby speeds up decryption, the decryption process still requires considerable computation time. The private key d is calculated by solving the modular equation e·d = 1 (mod r), where e is the public key, known to everyone, and r is the Least Common Multiple determined by the two Legendre Symbols.

The Least Common Multiple is computed by the division-by-prime method. In this study, the private key computation is possible because the values of the primes p and q are known. The product of p and q can then be used to find the Legendre Symbols and the Least Common Multiple of the two Legendre-Symbol terms. The most important fact is that the Extended Euclidean Algorithm needs both the public key and N = p*q; this fact is crucial in finding the private key for the decryption process.

The decryption process then continues with the computation of the private key, and afterwards with the decryption itself to find the plaintext P. The computation time of every step in decryption was recorded, including the time for calculating the Legendre Symbol (LS), the Least Common Multiple (LCM) and the Extended Euclidean Algorithm (EEA).

The proposed algorithm clearly computes faster than the existing algorithm. Tables 1, 2 and 3 show that implementing Divide-by-Prime (DbP) in the Least Common Multiple computation speeds up the computation feeding the EEA. Furthermore, DbP is able to skip some redundant computation found in the existing algorithm.

**CONCLUSION**

A new algorithm can speed up the decryption process. This is achieved by using a different algorithm for computing the private key: the enhancement consists of a new approach to finding the Legendre Symbol and the Least Common Multiple. The comparison between the new and the existing algorithm shows that the new approach is better in computation speed, as clearly shown in Tables 1, 2 and 3. In conclusion, the new algorithm makes the decryption process more efficient by reducing the time needed to compute the Legendre Symbols and the Least Common Multiple.

**ACKNOWLEDGEMENT**

The first author would like to express gratitude to Universiti Kebangsaan Malaysia for the research grant UKM-GUP-2011-244.

**REFERENCES**

Ali, Z.M., M. Othman, M.R.M. Said and M.N. Sulaiman, 2007. Two fast computation algorithms for LUC cryptosystems. Proceedings of the International Conference on Electrical Engineering and Informatics (ICEEI 2007), ITB, 2: 434-437.

Ali, Z.M., M. Othman, M.R.M. Said and M.N. Sulaiman, 2009. Computation of private key for LUC cryptosystem. Proceedings of the International Conference on Electrical Engineering and Informatics, Aug. 5-7, IEEE Xplore Press, Selangor, pp: 418-422. DOI: 10.1109/ICEEI.2009.5254700

Diffie, W. and M. Hellman, 1976. New directions in cryptography. IEEE Trans. Inform. Theory, 22: 644-654. DOI: 10.1109/TIT.1976.1055638

Horster, P., M. Michels and H. Petersen, 1996. Digital signature schemes based on Lucas functions. Proceedings of the 1st International Conference on Communications and Multimedia Security, Chapman and Hall, Graz, pp: 178-190.

Knuth, D.E., 1981. The Art of Computer Programming: Seminumerical Algorithms. 2nd Edn., Addison-Wesley, ISBN-10: 0201038226, pp: 688.

Othman, M., E.M. Abulhirat, Z.M. Ali, M.R.M. Said and R. Johari, 2008. A new computation algorithm for a cryptosystem based on Lucas functions. J. Comput. Sci., 4: 1056-1060. DOI: 10.3844/jcssp.2008.1056.1060

Rivest, R.L., A. Shamir and L. Adleman, 1978. A method for obtaining digital signatures and public key cryptosystems. Commun. ACM, 21: 120-126. DOI: 10.1145/359340.359342

Silverman, J.H., 2006. A Friendly Introduction to Number Theory. 3rd Edn., Pearson Prentice Hall, Upper Saddle River, ISBN-10: 0131861379, pp: 434.

Smith, P.J. and M.J.J. Lennon, 1993. LUC: A new public key system. Proceedings of the 9th IFIP Symposium on Computer Security, pp: 103-117.
# RIPEMD-160:
A Strengthened Version of RIPEMD
Hans Dobbertin 1, Antoon Bosselaers 2, Bart Preneel 2*

1 German Information Security Agency
P.O. Box 20 10 63, D-53133 Bonn, Germany
dobbertin@skom.rhein.de

2 Katholieke Universiteit Leuven, ESAT-COSIC
K. Mercierlaan 94, B-3001 Heverlee, Belgium
{antoon.bosselaers,bart.preneel}@esat.kuleuven.ac.be
Abstract. Cryptographic hash functions are an important tool in cryptography for applications such as digital fingerprinting of messages, message authentication, and key derivation. During the last five years, several fast software hash functions have been proposed; most of them are based on the design principles of Ron Rivest's MD4. One such proposal was RIPEMD, which was developed in the framework of the EU project RIPE (Race Integrity Primitives Evaluation). Because of recent progress in the cryptanalysis of these hash functions, we propose a new version of RIPEMD with a 160-bit result, as well as a plug-in substitute for RIPEMD with a 128-bit result. We also compare the software performance of several MD4-based algorithms, which is of independent interest.
## 1 Introduction and Background
Hash functions are functions that map bitstrings of arbitrary finite length into
strings of fixed length. Given h and an input x, computing h(x) must be easy.
A one-way hash function must satisfy the following properties:
**-** preimage resistance: it is computationally infeasible to find any input
which hashes to any pre-specified output.
**-** second preimage resistance: it is computationally infeasible to find any
second input which has the same output as any specified input.
For an _ideal_ one-way hash function with an m-bit result, finding a preimage
or a second preimage requires about 2^m operations. A _collision resistant hash_
function is a one-way hash function that satisfies an additional condition:
**-** collision resistance: it is computationally infeasible to find a collision, i.e.
two distinct inputs that hash to the same result.
* N.F.W.O. postdoctoral researcher, sponsored by the National Fund for Scientific
Research (Belgium).
-----
For an _ideal_ collision resistant hash function with an m-bit result, the fastest way
to find a collision is a birthday or square root attack which needs approximately
2^(m/2) operations [19].
Almost all hash functions are iterative processes which hash inputs of arbitrary length by processing successive fixed-size blocks of the input. The input X is padded to a multiple of the block length and subsequently divided into t blocks X_1 through X_t. The hash function h can then be described as follows:

H_0 = IV;  H_i = f(H_{i-1}, X_i), 1 <= i <= t;  h(X) = H_t.

Here f is the _compression function_ of h, H_i is the _chaining variable_ between stage i-1 and stage i, and _IV_ denotes the initial value.
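This iterated construction can be sketched with a toy (deliberately non-cryptographic) compression function standing in for f; the padding shown is a simplified length-strengthening pad, not the exact MD4 rule:

```python
import struct

BLOCK = 8  # toy block size in bytes

def f(h, block):
    # toy compression function: structure only, NOT cryptographically secure
    v = int.from_bytes(block, "little")
    return (h * 0x100000001B3 ^ v) & 0xFFFFFFFFFFFFFFFF

def toy_hash(msg: bytes, iv: int = 0xCBF29CE484222325) -> int:
    # length-strengthening pad: 0x80 byte, zeros, then 8-byte message length
    padded = msg + b"\x80"
    padded += b"\x00" * (-(len(padded) + 8) % BLOCK)
    padded += struct.pack("<Q", len(msg))
    h = iv                                # H_0 = IV
    for i in range(0, len(padded), BLOCK):
        h = f(h, padded[i:i + BLOCK])     # H_i = f(H_{i-1}, X_i)
    return h                              # h(X) = H_t
```

Only f changes between MD4, MD5, SHA-1 and RIPEMD; the surrounding iteration (and the security argument that reduces collision resistance of h to that of f) is shared.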
Collision resistant hash functions were first used in the context of practical
digital signature schemes: in order to improve the efficiency (and the security)
of these schemes, messages are hashed, and the (slow) digital signature is only
applied to the short hash-result. Other applications include the protection of
passwords, the construction of message authentication codes or MACs, and the
derivation of key variants.
The first constructions for hash functions were based on block ciphers (such
as DES) [8, 9, 10]. Although some trust has been built up in the security of these
proposals, their software performance is not very good, since they are typically
2... 4 times slower than the corresponding block cipher. Hash functions based
on modular arithmetic axe slow as well, and serious doubt has been raised about
their security.
The most popular hash functions, which are currently used in a wide variety
of applications, are the custom designed hash functions from the MD4-family.
MD4 was proposed in 1990 by R. Rivest [13, 14]; it is a very fast hash function
tuned towards 32-bit processors. Because of unexpected vulnerabilities identified
in [3] (namely collisions for two rounds out of three), R. Rivest designed in 1991
a strengthened version of MD4, called MD5 [15]. An additional argument was
that although MD4 was not a very conservative design, it was being implemented
fast into products. MD5 is probably the most widely used hash function, in spite
of the fact that it was shown in [4] that the compression function of MD5 is not
collision resistant: the collision found changes the chaining variables rather than
the message block. This does not pose a threat for standard applications of MD5,
but still implies a violation of one of the design principles.
The RIPE consortium 3 had as goal to propose a portfolio of recommended
integrity primitives [12]. Based on its independent evaluation of MD4 and MD5
[3, 4] the consortium proposed a strengthened version of MD4, which was called
RIPEMD. RIPEMD consists of essentially two parallel versions of MD4, with
some improvements to the shifts and the order of the message words; the two par-
allel instances differ only in the round constants. At the end of the compression
function, the words of left and right halves are added.
3 C.W.I. (NL) prime contractor, Århus University (DK), KPN (NL), K.U.Leuven (B),
Philips Crypto B.V. (NL), and Siemens AG (D).
-----
A second alternative for MD5 is the Secure Hash Algorithm (SHA-1), which
was designed by NSA and published by NIST (National Institute of Standards
and Technology, US) [7]. The two main improvements are the increased size of
the result (160 bits compared to 128 bits for the other schemes), and the fact
that the message words in the different rounds are not permuted but computed
as the sum of previous message words. This has as main consequence that it is
much harder to make local changes confined to a few bits: individual message
bits influence the calculations at a large number of places. The first version of
SHA, which was published in May 1993, had a weaker form of this property (no
mixing was done between bits at different positions in a word), and apparently
this can be exploited to produce collisions faster than 2^80 operations. However,
no details have been made available. This weakness was removed in the improved
version, published in April '95.
The remainder of this paper is organized as follows. In §2 we discuss in more
detail why a new version of RIPEMD is proposed. In §3 we give a description of
the new schemes, and in §4 we motivate the design decisions. In §5 the performance
of the new versions of RIPEMD is compared to other MD4-based hash
functions. §6 presents the conclusions.
2 Motivation for a New Version of RIPEMD
The main contribution of MD4 is that it is the first cryptographic hash function
which made optimal use of the structure of current 32-bit processors. The use of
serial operations and the favorable treatment of little-endian architectures show
that MD4 is tuned towards software implementations.
However, introducing a new structure in cryptographic algorithms also in-
volves the risk of unexpected weaknesses. It became clear that existing tech-
niques such as differential or linear cryptanalysis were not applicable, and that
any successful cryptanalysis would require the development of new techniques.
The attacks by B. den Boer and A. Bosselaers on two (out of three) rounds of
MD4 [3] and on the compression function of MD5 [4] were the first indications
that some structural properties of the algorithms can be exploited, but did not
seem a serious threat to the overall algorithm. More recently, the attack on MD4
was improved by S. Vaudenay [18] yielding two hash-results that differ only in
a few bits. This was a clear illustration that MD4 did not behave as one could
expect from a random function (e.g., it is not correlation resistant as defined in
[1]).
Early '95 H. Dobbertin found collisions for the last two out of three (and
later for the first two) rounds of RIPEMD [5]. While this is not an immediate
threat to RIPEMD with three rounds, the attack was quite surprising. Moreover,
it introduced a new technique to cryptanalyze this type of functions. In the Fall
of '95, H. Dobbertin was able to extend these techniques to produce collisions
for MD4 [6], and for the compression function of the extended version of MD4
[13] (see also w 3.3). The attack on MD4 requires only a few seconds on a PC,
-----
and still leaves some freedom to the message; it clearly rules out the use of MD4
as a collision resistant function.
It is anticipated that these techniques can be used to produce collisions for
MD5 and perhaps also for RIPEMD. This will probably require an additional
effort, but it no longer seems as far away as it was a year ago.
An independent reason to upgrade RIPEMD is the limited resistance against
a brute force collision search attack. P. van Oorschot and M. Wiener present in
[17] a design for a $10 million collision search machine for MD5 that could find
a collision in 24 days. If only a $1 million budget is available, and the memory
of an existing computer network is used, the computation would require about 6
months. Taking into account the fact that the cost of computation and memory
is divided by four every three years (this observation is known as Moore's law),
one can conclude that a 128-bit hash-result does not offer sufficient protection
for the next ten years. Note that collisions obtained in this way need less than
10 random looking bytes; the rest of the inputs can be chosen arbitrarily.
RIPEMD is in use in several banking applications, and is (together with
SHA-1) currently under consideration as a candidate for standardization within
ISO/IEC JTC1/SC27. However, the current situation brings us to the conclusion
that it would be prudent to upgrade current implementations, and to consider
a more secure scheme for standardization. Therefore the authors designed a
strengthened version, RIPEMD-160, which should be secure for ten years or
more. Also, an improved 128-bit version is proposed, which should only be used
to replace RIPEMD in current applications.
SHA-1 has already a 160-bit result, and because of some of its properties it
is quite likely that SHA-1 is not vulnerable to the known attacks. However, its
design criteria and the attack on the first version are secret.
3 Description of the New RIPEMD
In this section we briefly describe RIPEMD-160, RIPEMD-128, and two variants
which give a longer hash-result. We assume that the reader is familiar with the
structure and notation of MD4 (see for example [13]).
**3.1** **RIPEMD-160**
The bitsize of the hash-result and chaining variable for RIPEMD-160 are in-
creased to 160 bits (five 32-bit words), the number of rounds is increased from
three to five, and the two lines are made more different (not only the constants
are modified, but also the Boolean functions and the order of the message words).
This results in the following parameters (pseudo-code for RIPEMD-160 is
given in Appendix A):
**1. Operations in one step.** A := ((A + f(B, C, D) + X + K) <<< s) + E and
C := C <<< 10. Here <<< s denotes cyclic shift (rotation) over s positions.
2. Ordering of the message words. Take the following permutation ρ:

i    | 0  1  2  3  4  5  6  7  8  9  10 11 12 13 14 15
ρ(i) | 7  4  13 1  10 6  15 3  12 0  9  5  2  14 11 8

Further define the permutation π by setting π(i) = 9i + 5 (mod 16). The
order of the message words is then given by the following table:

Line  | Round 1 | Round 2 | Round 3 | Round 4 | Round 5
left  | id      | ρ       | ρ^2     | ρ^3     | ρ^4
right | π       | ρπ      | ρ^2π    | ρ^3π    | ρ^4π
3. Boolean functions. Define the following Boolean functions:

f1(x, y, z) = x ⊕ y ⊕ z
f2(x, y, z) = (x ∧ y) ∨ (¬x ∧ z)
f3(x, y, z) = (x ∨ ¬y) ⊕ z
f4(x, y, z) = (x ∧ z) ∨ (y ∧ ¬z)
f5(x, y, z) = x ⊕ (y ∨ ¬z)

These Boolean functions are applied as follows:

Line  | Round 1 | Round 2 | Round 3 | Round 4 | Round 5
left  | f1      | f2      | f3      | f4      | f5
right | f5      | f4      | f3      | f2      | f1
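On 32-bit words the five functions can be transcribed directly; a small sketch (names ours, `nt` is 32-bit bitwise NOT) with spot checks of the multiplexer behaviour:

```python
MASK = 0xFFFFFFFF

def nt(v):            # 32-bit bitwise NOT
    return v ^ MASK

def f1(x, y, z): return x ^ y ^ z                  # exor
def f2(x, y, z): return (x & y) | (nt(x) & z)      # mux: x selects y or z
def f3(x, y, z): return (x | nt(y)) ^ z
def f4(x, y, z): return (x & z) | (y & nt(z))      # mux: z selects x or y
def f5(x, y, z): return x ^ (y | nt(z))

# f2 and f4 are bitwise multiplexers, per bit of the selector word
assert f2(MASK, 0x1234, 0x5678) == 0x1234 and f2(0, 0x1234, 0x5678) == 0x5678
assert f4(0x1234, 0x5678, MASK) == 0x1234 and f4(0x1234, 0x5678, 0) == 0x5678
```

Note that the symmetric majority function of MD4 is absent here, consistent with the design discussion in §4.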
4. Shifts. For both lines we take the following shifts:

Round | X0 X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15
1     | 11 14 15 12  5  8  7  9 11 13 14  15   6   7   9   8
2     | 12 13 11 15  6  9  9  7 12 15 11  13   7   8   7   7
3     | 13 15 14 11  7  7  6  8 13 14 13  12   5   5   6   9
4     | 14 11 12 14  8  6  5  5 15 12 15  14   9   9   8   6
5     | 15 12 13 13  9  5  8  6 14 11 12  11   8   6   5   5
5. Constants. Take the integer parts of the following numbers:

Line  | Round 1   | Round 2   | Round 3   | Round 4   | Round 5
left  | 0         | 2^30 · √2 | 2^30 · √3 | 2^30 · √5 | 2^30 · √7
right | 2^30 · ∛2 | 2^30 · ∛3 | 2^30 · ∛5 | 2^30 · ∛7 | 0
-----
#### 3.2 RIPEMD-128

The main difference with RIPEMD-160 is that we keep a hash-result and chaining variable of 128 bits (four 32-bit words); only four rounds are used.

1. Operation in one step. A := (A + f(B, C, D) + X + K) <<< s.

2. Boolean functions. The Boolean functions are applied as follows:

Line  | Round 1 | Round 2 | Round 3 | Round 4
left  | f1      | f2      | f3      | f4
right | f4      | f3      | f2      | f1

3. Constants. Take the integer parts of the following numbers:

Line  | Round 1   | Round 2   | Round 3   | Round 4
left  | 0         | 2^30 · √2 | 2^30 · √3 | 2^30 · √5
right | 2^30 · ∛2 | 2^30 · ∛3 | 2^30 · ∛5 | 0
**3.3 Optional Extensions to 256 and 320-bit Hash-Results**

Some applications of hash functions require a longer hash-result, without needing a larger security level. A straightforward way to achieve this would be to use two parallel instances of the same hash function with different initial values; however, this might result in unwanted dependencies between the two chains (such dependencies have been exploited in the attack on RIPEMD). Therefore it is advisable to have a stronger interaction between the two instances.

In [13] an extension of MD4 was proposed which yields a 256-bit hash-result by running two parallel instances of MD4 which differ only in the initial values and in the constants in the second and third round. After every application of the compression function, the value of the register A is interchanged between the two chains. H. Dobbertin was able to produce collisions for the compression function of this extension; moreover, we anticipate that it is possible to construct collisions for the complete extension as well.

RIPEMD-128 and RIPEMD-160 already have two parallel lines, hence a double-length extension (to 256 and 320 bits, respectively) can be constructed without the need for two parallel instances: it is sufficient to omit the combination of the two lines at the end of every application of the compression function. We propose to introduce interaction between the lines by swapping the contents of registers A and A' after round 1, of registers B and B' after round 2, etc.
### 4 Motivation of the Design Decisions

The main design principle of RIPEMD-160 is to overcome the problems raised in §2, but with as few changes as possible to the original structure, to maximize the confidence previously gained with RIPEMD and its predecessors MD4 and MD5.
Also, it was decided to aim for a rather conservative design which offers a
high security level, rather than to push the limits of performance with the risk
of a redesign a few years from now.
The basic design philosophy of RIPEMD was to have two parallel iterations;
the two main improvements are that the number of rounds is increased from three
to five (four for RIPEMD-128) and that the two parallel rounds are made more
different. From the attack on RIPEMD we conclude that having only different
additive constants in the two lines is not sufficient. In RIPEMD-160, the order of
the message blocks in the two iterations is completely different; in addition, the
order of the Boolean functions is reversed. We envisage that in the next years it
will become possible to attack one of the two lines and up to three rounds of the
two parallel lines, but that the combination of the two parallel lines will resist
attacks.
The operation for RIPEMD-160 on the A register is related to that of MD5
(but five words are involved); the rotate of the C register has been added to
avoid the MD5 attack which focuses on the most significant bit [4]. SHA-1 has
two rotates as well, but in different locations. The value of 10 for the C register
was chosen since it is not used for the other rotations. The step operation for
RIPEMD-128 is identical to that of MD4 (and RIPEMD).
The permutation of the message words of RIPEMD was designed such that
two words that are 'close' in round 1-2 are far apart in round 2-3 (and vice
versa). If this permutation had been applied in RIPEMD-160, this criterion
would not have been satisfied (message blocks 2 and 13 form an undesirable
pattern due to a cycle of length 2 [5]). Therefore, it was decided to exchange the
values for 12 and 13, resulting in the permutation ρ of §3.1. The permutation
π was chosen such that two message words which are close in the left half will
always be at least seven positions apart in the right half. For the Boolean func-
tions, it was decided to eliminate the majority function because of its symmetry
properties and a performance disadvantage. The Boolean functions are now the
same as those used in MD5. As mentioned above, the Boolean functions in the
left and right half are used in a different order.
The shifts in RIPEMD were chosen according to a specific strategy, which
was only documented in an internal report. The same strategy has been extended
to the strengthened algorithms in a straightforward way. The design criteria are
the following:
**-** the shifts are chosen between 5 and 15 (too small/large shifts are considered
not very good, and a choice larger than 16 does not help much);
- every message block should be rotated over different amounts, not all of them
having the same parity;
- the shifts applied to each register should not have a special pattern (for
example, the total should not be divisible by 32);
**-** not too many shift constants should be divisible by four.
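The first two criteria can be checked mechanically against the shift table of §3.1 (table transcribed by us; the check itself is ours):

```python
SHIFTS = [
    [11, 14, 15, 12,  5,  8,  7,  9, 11, 13, 14, 15,  6,  7,  9,  8],
    [12, 13, 11, 15,  6,  9,  9,  7, 12, 15, 11, 13,  7,  8,  7,  7],
    [13, 15, 14, 11,  7,  7,  6,  8, 13, 14, 13, 12,  5,  5,  6,  9],
    [14, 11, 12, 14,  8,  6,  5,  5, 15, 12, 15, 14,  9,  9,  8,  6],
    [15, 12, 13, 13,  9,  5,  8,  6, 14, 11, 12, 11,  8,  6,  5,  5],
]

# criterion 1: every shift lies between 5 and 15
assert all(5 <= s <= 15 for row in SHIFTS for s in row)

# criterion 2: each message word X_i is rotated over five different
# amounts, and not all of them have the same parity
for i in range(16):
    col = [SHIFTS[r][i] for r in range(5)]
    assert len(set(col)) == 5
    assert len({s % 2 for s in col}) == 2
```

Both assertions pass for the published table; the remaining two criteria concern per-register shift totals and would need the full step schedule to verify.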
-----
Note that the design decisions require a compromise: it is not possible to
make a good choice of message ordering and shift constants for five rounds that
is also 'optimal' for three rounds out of five.
### 5 Performance Evaluation
In this section we compare the performance of RIPEMD-160, RIPEMD-128,
RIPEMD, SHA-1, MD5, and MD4. Implementations were written in Assembly
language optimized for the Pentium processor (90 MHz). Note that the numbers
are for realistic inputs, i.e., 256 Megabyte of data are hashed using an 8 K buffer
(this is slower than hashing short blocks from the cache memory). The relative
speeds coincide more or less with predictions based on a simple count of the
number of operations. RIPEMD-160 is about 15% slower than SHA-1, half the
speed of RIPEMD, and four times slower than MD4. On a big-endian RISC
machine, the difference between SHA-1 and RIPEMD-160 will be slightly larger.
RIPEMD-128 is 30% slower than RIPEMD. Optimized C implementations are
a factor of 1.8...2.2 slower; for MD5, our C code is 36% faster than
that of [16].
**Table 1.** Performance of several MD4-based hash functions on a 90 MHz Pentium

algorithm    | Assembly (Mbit/s) | C (Mbit/s)
MD4          | 165.7             | 81.4
MD5          | 113.5             | 59.7
SHA-1        | 46.5              | 21.2
RIPEMD       | 82.1              | 44.0
RIPEMD-128   | 63.8              | 35.6
RIPEMD-160   | 39.8              | 19.3
### 6 Concluding Remarks
We have proposed RIPEMD-160, which is an enhanced version of RIPEMD. The
design is made such that the confidence built up with RIPEMD is transferred
to the new algorithm. The significant increase in security comes at the cost of a
reduced performance (a factor of two), but the resulting speed is still acceptable.
We encourage comments and results on the security of RIPEMD-160.
Acknowledgments We would like to thank Bert den Boer, Markus Dichtl,
Walter Fumy, and Peter Landrock for encouragement and advice.
-----
### References
1. R. Anderson, "The classification of hash functions," _Proe. of the IMA Confer-_
_ence on Cryptography and Coding, Cirencester, December 1993,_ Oxford University
Press, 1995, pp. 83-95,
2. I.B. Damgs "A design principle for hash functions," _Advances in Cryptology,_
_Proc. Crypto'89, LNCS 435,_ G. Brassard, Ed., Springer-Verlag, 1990, pp. 416-427.
3. B. den Boer, A. Bosselaers, "An attack on the last two rounds of MD4," _Advances_
_in Cryptology, Proc. Crypto'91, LNCS 576,_ J. Feigenbaum, Ed., Springer-Verlag,
1992, pp. 194-203.
4. B. den Boer, A. Bosselaers, "Collisions for the compression function of MD5," _Ad-_
_vances in Cryptology, Proe. Euroerypt'93, LNCS 765, T. Helleseth, Ed., Springer-_
Verlag, 1994, pp. 293-304.
**5. H.** Dobbertin, "RIPEMD with two-round compress function is not collisionfree,"
_Journal of Cryptology,_ to appear.
6. H. Dobbertin, "Cryptanalysis of MD4," _Fast Soft~oare Encryption,_ this volume.
7. FIPS 180-1, Secure hash standard, NIST, US Department of Commerce, Washing-
ton D.C., April 1995.
8. R. Merkle, "One way hash functions and DES," _Advances in Cryptology, Proc._
_Crypto'89, LNCS 435, G. Brassard, Ed., Springer-Verlag, 1990, pp. 428-446._
9. C.H. Meyer, M. Schilling, "Secure program load with Manipulation Detection
Code," _Proc. Securicom 1988,_ pp. 111-130.
10. B. Preneel, R. Govaerts, J. VandewaUe, "Hash functions based on block ciphers:
a synthetic approach," _Advances in Cryptology, Proc. Crypto'93, LNCS 773,_
D. Stinson, Ed., Sprlnger-Verlag, 1994, pp. 368-378.
11. B. Preneel, _Cryptographic Hash Functions,_ Kluwer Academic Publishers, to ap-
pear.
12. RIPE, _"Integrity Primitives for Secure Information Systems. Final Report_
_of RACE Integrity Primitives Evaluation (RIPE-RACE 1040)," LNCS 1007,_
Springer-Verlag, 1995.
13. R.L. Rivest, "The MD4 message digest algorithm," _Advances in Cryptology, Proe._
_Crypto'90, LNCS 537,_ S. Vanstone, Ed., Springer-Verlag, 1991, pp. 303-311.
14. R.L. Rivest, "The MD4 message-digest algorithm," _Request for Comments (RFC)_
_1320,_ Internet Activities Board, Internet Privacy Task Force, April 1992.
15. R.L. Rivest, "The MD5 message-dlgest algorithm," _Request for Comments (RFC)_
_1321,_ Internet Activities Board, Internet Privacy Task Force, April 1992.
16. J. Touch, "Report on MD5 performance," _Request for Comments (RFC) 1810,_
Internet Activities Board, Internet Privacy Task Force, June 1995.
17. P.C. van Oorschot, M.J. Wiener, "Parallel collision search with application to hash
functions and discrete logarithms," _Proc. 2nd A CM Conference on Computer and_
_Communications Security,_ ACM, 1994, pp. 210-218.
18. S. Vaudenay, "On the need for multipermutations: cryptanalysis of MD4 and
SAFER," _Fast Software Eneryption, LNCS 1008, B. Preneel, Ed., Springer-Verlag,_
1995, pp. 286-297.
19. G. Yuval, "How to swindle Rabin," _Cryptologia,_ Vol. 3, No. 3, 1979, pp. 187-189.
-----
#### A Pseudo-code for RIPEMD-160
RIPEMD-160 is an iterative hash function that operates on 32-bit words. The round function takes as input a 5-word chaining variable and a 16-word message block and maps this to a new chaining variable. All operations are defined on 32-bit words. Padding is identical to that of MD4 [13, 14]. Test values are listed in Appendix B. First we define all the constants and functions.
**RIPEMD-160: definitions**

_nonlinear functions at bit level: exor, mux, -, mux, -_

f(j, x, y, z) = x ⊕ y ⊕ z (0 ≤ j ≤ 15)
f(j, x, y, z) = (x ∧ y) ∨ (¬x ∧ z) (16 ≤ j ≤ 31)
f(j, x, y, z) = (x ∨ ¬y) ⊕ z (32 ≤ j ≤ 47)
f(j, x, y, z) = (x ∧ z) ∨ (y ∧ ¬z) (48 ≤ j ≤ 63)
f(j, x, y, z) = x ⊕ (y ∨ ¬z) (64 ≤ j ≤ 79)

_added constants (hexadecimal)_

K(j) = 00000000x (0 ≤ j ≤ 15)
K(j) = 5A827999x (16 ≤ j ≤ 31) ⌊2^30 √2⌋
K(j) = 6ED9EBA1x (32 ≤ j ≤ 47) ⌊2^30 √3⌋
K(j) = 8F1BBCDCx (48 ≤ j ≤ 63) ⌊2^30 √5⌋
K(j) = A953FD4Ex (64 ≤ j ≤ 79) ⌊2^30 √7⌋
K'(j) = 50A28BE6x (0 ≤ j ≤ 15) ⌊2^30 ∛2⌋
K'(j) = 5C4DD124x (16 ≤ j ≤ 31) ⌊2^30 ∛3⌋
K'(j) = 6D703EF3x (32 ≤ j ≤ 47) ⌊2^30 ∛5⌋
K'(j) = 7A6D76E9x (48 ≤ j ≤ 63) ⌊2^30 ∛7⌋
K'(j) = 00000000x (64 ≤ j ≤ 79)

_selection of message word_

r(j) = j (0 ≤ j ≤ 15)
r(16..31) = 7, 4, 13, 1, 10, 6, 15, 3, 12, 0, 9, 5, 2, 14, 11, 8
r(32..47) = 3, 10, 14, 4, 9, 15, 8, 1, 2, 7, 0, 6, 13, 11, 5, 12
r(48..63) = 1, 9, 11, 10, 0, 8, 12, 4, 13, 3, 7, 15, 14, 5, 6, 2
r(64..79) = 4, 0, 5, 9, 7, 12, 2, 10, 14, 1, 3, 8, 11, 6, 15, 13
r'(0..15) = 5, 14, 7, 0, 9, 2, 11, 4, 13, 6, 15, 8, 1, 10, 3, 12
r'(16..31) = 6, 11, 3, 7, 0, 13, 5, 10, 14, 15, 8, 12, 4, 9, 1, 2
r'(32..47) = 15, 5, 1, 3, 7, 14, 6, 9, 11, 8, 12, 2, 10, 0, 4, 13
r'(48..63) = 8, 6, 4, 1, 3, 11, 15, 0, 5, 12, 2, 13, 9, 7, 10, 14
r'(64..79) = 12, 15, 10, 4, 1, 5, 8, 7, 6, 2, 13, 14, 0, 3, 9, 11

_amount for rotate left (rol)_

s(0..15) = 11, 14, 15, 12, 5, 8, 7, 9, 11, 13, 14, 15, 6, 7, 9, 8
s(16..31) = 7, 6, 8, 13, 11, 9, 7, 15, 7, 12, 15, 9, 11, 7, 13, 12
s(32..47) = 11, 13, 6, 7, 14, 9, 13, 15, 14, 8, 13, 6, 5, 12, 7, 5
s(48..63) = 11, 12, 14, 15, 14, 15, 9, 8, 9, 14, 5, 6, 8, 6, 5, 12
s(64..79) = 9, 15, 5, 11, 6, 8, 13, 12, 5, 12, 13, 14, 11, 8, 5, 6
s'(0..15) = 8, 9, 9, 11, 13, 15, 15, 5, 7, 7, 8, 11, 14, 14, 12, 6
s'(16..31) = 9, 13, 15, 7, 12, 8, 9, 11, 7, 7, 12, 7, 6, 15, 13, 11
s'(32..47) = 9, 7, 15, 11, 8, 6, 6, 14, 12, 13, 5, 14, 13, 13, 7, 5
s'(48..63) = 15, 5, 8, 11, 14, 14, 6, 14, 6, 9, 12, 9, 12, 5, 15, 8
s'(64..79) = 8, 5, 12, 9, 12, 5, 14, 6, 8, 13, 6, 5, 15, 13, 11, 11

_initial value (hexadecimal)_

h0 = 67452301x; h1 = EFCDAB89x; h2 = 98BADCFEx;
h3 = 10325476x; h4 = C3D2E1F0x;
It is assumed that the message after padding consists of t 16-word blocks that will be denoted with X_i[j], with 0 ≤ i ≤ t − 1 and 0 ≤ j ≤ 15. The symbol ⊞ denotes addition modulo 2^32 and rol_s denotes cyclic left shift (rotate) over s positions. The pseudo-code for RIPEMD-160 is then given below.
**RIPEMD-160: pseudo-code**

for i := 0 to t − 1 {
    A := h0; B := h1; C := h2; D := h3; E := h4;
    A' := h0; B' := h1; C' := h2; D' := h3; E' := h4;
    for j := 0 to 79 {
        T := rol_s(j)(A ⊞ f(j, B, C, D) ⊞ X_i[r(j)] ⊞ K(j)) ⊞ E;
        A := E; E := D; D := rol_10(C); C := B; B := T;
        T := rol_s'(j)(A' ⊞ f(79 − j, B', C', D') ⊞ X_i[r'(j)] ⊞ K'(j)) ⊞ E';
        A' := E'; E' := D'; D' := rol_10(C'); C' := B'; B' := T;
    }
    T := h1 ⊞ C ⊞ D'; h1 := h2 ⊞ D ⊞ E'; h2 := h3 ⊞ E ⊞ A';
    h3 := h4 ⊞ A ⊞ B'; h4 := h0 ⊞ B ⊞ C'; h0 := T;
}
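As a cross-check of the definitions and pseudo-code above, here is a direct Python transcription. This is an illustrative sketch rather than a reference implementation (the function and table names are ours); the tables are exactly those listed in this appendix, and the result reproduces the RIPEMD-160 test values of Appendix B.

```python
import struct

_M = 0xFFFFFFFF  # 32-bit mask

def _f(j, x, y, z):
    # the five nonlinear functions, selected by step index j
    if j < 16: return x ^ y ^ z
    if j < 32: return (x & y) | ((x ^ _M) & z)
    if j < 48: return (x | (y ^ _M)) ^ z
    if j < 64: return (x & z) | (y & (z ^ _M))
    return x ^ (y | (z ^ _M))

_K  = (0x00000000, 0x5A827999, 0x6ED9EBA1, 0x8F1BBCDC, 0xA953FD4E)
_Kp = (0x50A28BE6, 0x5C4DD124, 0x6D703EF3, 0x7A6D76E9, 0x00000000)
_R  = (tuple(range(16)),
       (7,4,13,1,10,6,15,3,12,0,9,5,2,14,11,8),
       (3,10,14,4,9,15,8,1,2,7,0,6,13,11,5,12),
       (1,9,11,10,0,8,12,4,13,3,7,15,14,5,6,2),
       (4,0,5,9,7,12,2,10,14,1,3,8,11,6,15,13))
_Rp = ((5,14,7,0,9,2,11,4,13,6,15,8,1,10,3,12),
       (6,11,3,7,0,13,5,10,14,15,8,12,4,9,1,2),
       (15,5,1,3,7,14,6,9,11,8,12,2,10,0,4,13),
       (8,6,4,1,3,11,15,0,5,12,2,13,9,7,10,14),
       (12,15,10,4,1,5,8,7,6,2,13,14,0,3,9,11))
_S  = ((11,14,15,12,5,8,7,9,11,13,14,15,6,7,9,8),
       (7,6,8,13,11,9,7,15,7,12,15,9,11,7,13,12),
       (11,13,6,7,14,9,13,15,14,8,13,6,5,12,7,5),
       (11,12,14,15,14,15,9,8,9,14,5,6,8,6,5,12),
       (9,15,5,11,6,8,13,12,5,12,13,14,11,8,5,6))
_Sp = ((8,9,9,11,13,15,15,5,7,7,8,11,14,14,12,6),
       (9,13,15,7,12,8,9,11,7,7,12,7,6,15,13,11),
       (9,7,15,11,8,6,6,14,12,13,5,14,13,13,7,5),
       (15,5,8,11,14,14,6,14,6,9,12,9,12,5,15,8),
       (8,5,12,9,12,5,14,6,8,13,6,5,15,13,11,11))

def _rol(x, s):
    return ((x << s) | (x >> (32 - s))) & _M

def ripemd160(msg: bytes) -> bytes:
    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    # MD4-style padding: 0x80 byte, zeros, 64-bit little-endian bit count
    msg = msg + b"\x80" + b"\x00" * ((55 - len(msg)) % 64) \
              + struct.pack("<Q", 8 * len(msg))
    for off in range(0, len(msg), 64):
        X = struct.unpack("<16I", msg[off:off + 64])
        a, b, c, d, e = h          # left line
        ap, bp, cp, dp, ep = h     # right (primed) line
        for j in range(80):
            r, i16 = j // 16, j % 16
            t = (_rol((a + _f(j, b, c, d) + X[_R[r][i16]] + _K[r]) & _M,
                      _S[r][i16]) + e) & _M
            a, e, d, c, b = e, d, _rol(c, 10), b, t
            t = (_rol((ap + _f(79 - j, bp, cp, dp) + X[_Rp[r][i16]] + _Kp[r]) & _M,
                      _Sp[r][i16]) + ep) & _M
            ap, ep, dp, cp, bp = ep, dp, _rol(cp, 10), bp, t
        h = [(h[1] + c + dp) & _M, (h[2] + d + ep) & _M,
             (h[3] + e + ap) & _M, (h[4] + a + bp) & _M,
             (h[0] + b + cp) & _M]
    return struct.pack("<5I", *h)
```

For example, `ripemd160(b"abc").hex()` yields the Appendix B value for "abc".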
-----
#### B Test Values
RIPEMD-160:
""
9c1185a5c5e9fc54612808977ee8f548b2258d31
"a"
0bdc9d2d256b3ee9daae347be6f4dc835a467ffe
"abc"
8eb208f7e05d987a9b044a8e98c6b087f15a0bfc
"message digest"
5d0689ef49d2fae572b881b123a85ffa21595f36
"abcdefghijklmnopqrstuvwxyz"
f71c27109c692c1b56bbdceb5b9d2865b3708dbc
"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq"
12a053384a9c0c88e405a06c27dcf49ada62eb2b
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
b0e20b6e3116640286ed3a87a5713079b21f5189
8 times "1234567890"
9b752e45573d4b39f4dbd3323cab82bf63326bfb
RIPEMD-128:
""
cdf26213a150dc3ecb610f18f6b38b46
"a"
86be7afa339d0fc7cfc785e72f578d33
"abc"
c14a12199c66e4ba84636b0f69144c77
"message digest"
9e327b3d6e523062afc1132d7df9d1b8
"abcdefghijklmnopqrstuvwxyz"
fd2aa607f71dc8f510714922b371834e
"abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq"
a1aa0689d0fafa2ddc22e88b49133a06
"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"
d1e959eb179c911faea4624c60c5c702
8 times "1234567890"
3f45ef194732c2dbb2c4a2c769795fa3
-----
# information
_Article_
## PKCHD: Towards a Probabilistic Knapsack Public-Key Cryptosystem with High Density
**Yuan Ping** **[1,2,]*** **, Baocang Wang** **[1,3,]*, Shengli Tian** **[1], Jingxian Zhou** **[2]** **and Hui Ma** **[1]**
1 School of Information Engineering, Xuchang University, Xuchang 461000, China; cb_fan@126.com (S.T.);
bsdczy@163.com (H.M.)
2 Information Technology Research Base of Civil Aviation Administration of China,
Civil Aviation University of China, Tianjin 300300, China; yzzxtj@aliyun.com
3 Key Laboratory of Computer Networks and Information Security, Ministry of Education, Xidian University,
Xi’an 710071, China
***** Correspondence: pyuan.lhn@xcu.edu.cn (Y.P.); bcwang79@aliyun.com (B.W.)
Received: 22 January 2019; Accepted: 19 February 2019; Published: 21 February 2019
**Abstract: By introducing an easy knapsack-type problem, a probabilistic knapsack-type public key**
cryptosystem (PKCHD) is proposed. It uses a Chinese remainder theorem to disguise the easy knapsack
sequence. Thence, to recover the trapdoor information, the implicit attacker has to solve at least two hard
number-theoretic problems, namely integer factorization and simultaneous Diophantine approximation
problems. In PKCHD, the encryption function is nonlinear about the message vector. Under the
re-linearization attack model, PKCHD obtains a high density and is secure against the low-density subset
sum attacks, and the success probability for an attacker to recover the message vector with a single
call to a lattice oracle is negligible. The infeasibilities of other attacks on the proposed PKCHD are also
investigated. Meanwhile, it can use the hardest knapsack vector as the public key if its density evaluates
the hardness of a knapsack instance. Furthermore, PKCHD only performs quadratic bit operations which
confirms the efficiency of encrypting a message and deciphering a given cipher-text.
**Keywords: public key cryptography; knapsack problem; low-density attack; lattice reduction**
**1. Introduction**
A public key cryptosystem (PKC), a concept introduced by Diffie and Hellman in their landmark
paper [1], is a critical cryptographic primitive in the area of network and information security. Traditional
PKCs such as RSA [2] and ElGamal [3] suffer from the same drawback of relatively low speed, which
hampers the further applications of public-key cryptography and also motivates the cryptographers to
design faster PKCs. Among the first public-key schemes, knapsack-type cryptosystems were invented as
fast PKCs. Due to the high speed of encryption and decryption and their NP-completeness, they were
considered to be the most attractive and the most promising for a long time. However, some attacks
lowered the initial enthusiasm and even announced the premature death of trapdoor knapsacks.
Following the first knapsack system developed by Merkle and Hellman [4], many knapsack-type
cryptosystems can be found. However, only a few of them are considered to be secure, including the
most resistant one, the Chor–Rivest knapsack system [5,6]. In the literature, many techniques were
developed and many trapdoors were found to hide information, i.e., using the 0–1 knapsack problem [4],
compact knapsack problem [7], multiplicative knapsack problem [8,9], modular knapsack problem [10,11],
matrix cover problem [12], group factorization problem [13,14], polynomials over GF(2) [15], Diophantine
-----
_Information 2019, 10, 75_ 2 of 27
equations [16], complementing sets [17], and so on. However, almost all the additive knapsack-type
cryptosystems are vulnerable to low-density subset sum attacks [18–20], GCD attack [21], simultaneous
Diophantine approximation attack [22] or orthogonal lattice attack [14]. Additionally, Refs. [23,24] show
the rise and fall of knapsack cryptosystems.
Three reasons clarify the insecurities of the additive knapsack-type cryptosystems. Firstly, as observed
in [21], these systems are basically linear. Secondly, for some of them, the trapdoor information is easy to
recover. In particular, some systems use the size conditions to disguise an easy knapsack problem that
make them vulnerable to simultaneous Diophantine approximation attacks [22]. Thirdly, the densities of
some systems are not high enough. Coster et al. [20] showed that, if the density is < 0.9408..., a single call to a lattice oracle will lead to polynomial time solutions.
Like the aforementioned, to design a secure knapsack-type PKC, we must ensure that
- in the system, the encryption function is nonlinear about the message vector;
- to disguise the easy knapsack problem, the size conditions should be excluded;
- the encryption function must be non-injective. A cipher-text must have so many preimages that it is
computationally infeasible for the attacker to list all the preimages.
It is believed in [23] that, if someone invents a knapsack cryptosystem that fully exploits the difficulty
of the knapsack problem, with a high density and a difficult-to-discover trapdoor, then it will be a system
better than those based on integer factorization and discrete logarithms. Can such a knapsack-type PKC
satisfying the requirements above be developed, or, in other words, may any efficient yet straightforward
constructions have been overlooked? In this paper, we will try to provide an affirmative answer.
Based on a new easy knapsack-type problem, a probabilistic knapsack public-key cryptosystem with
high density (PKCHD) is proposed, which has the following properties:
- PKCHD is a probabilistic knapsack-type PKC.
- The multivariate polynomial encryption function is nonlinear about the message vector, and its
degrees are controlled by the randomly-chosen small integers.
- The secret key is disguised via Chinese remainder theorem (CRT) rather than the size conditions.
Thus, PKCHD is secure against simultaneous Diophantine approximation attacks.
- The density of PKCHD is sufficiently high under the relinearization attack model. A cipher-text has
too many plaintexts for the attacker to enumerate all of them in polynomial time.
- If its density evaluates the hardness of a knapsack instance, PKCHD can always use the hardest
knapsack vector as the public-key.
- The attacker has to solve at least two hard number-theoretic problems, namely integer factorization
and simultaneous Diophantine approximation problems, to recover the trapdoor information.
- PKCHD is more efficient than RSA [2] and ElGamal [3]. The encryption and the decryption of the system only perform O(n^2) bit operations.
The rest of the paper is organized as follows. In Section 2, we give some preliminaries on concepts and
definitions about lattices, low-density subset sum attacks, and simultaneous Diophantine approximation.
The easy knapsack-type problems are presented in Section 3, as well as several examples to make the
problems more understandable. The detailed description of the proposed PKCHD is given in Section 4.
Section 5 discusses the performance related issues and specifies the parameter selection. Section 6 discusses
several attacks on our system including key-recovery attacks, low-density attacks, and simultaneous
Diophantine approximation attacks. The security of the system is carefully examined in this section.
Section 7 gives some concluding remarks.
-----
**2. Preliminaries**
Throughout this paper, the following notations will be used:
- **R**, the field of real numbers.
- **Z**, the ring of integers; **Z**^+, the set of all positive integers.
- **Z**n = {0, · · ·, n − 1}, the complete system of least nonnegative residues modulo n; **Z**n^∗, the reduced residue system modulo n.
- gcd(a, b), the greatest common divisor of a and b; lcm(a, b), the least common multiple of a and b.
- If gcd(a, b) = 1, a^{−1} mod b denotes the inverse of a modulo b.
- a | b, a divides b.
- a mod p, the least nonnegative remainder of a divided by p.
- a = b mod N means that a is the least nonnegative remainder of b modulo N; a ≡ b (mod N) means that a and b are congruent modulo N.
- For (a, b) ∈ (**Z**^+)^2 and an integer m, m mod (a, b) denotes the 2-tuple (m mod a, m mod b).
- u ≢ v (mod (a, b)) means that u mod a ≠ v mod a or u mod b ≠ v mod b.
- |A|, the cardinality of a set A.
- |a|2, the binary length of an integer a.
- ⌈r⌉, the smallest integer greater than or equal to r.
Throughout this paper, we also adopt some customary parlance. For example, when we say a value is negligible, we mean that the value is a negligible function v(k) : **N** → [0, 1], i.e., for any polynomial p(·), there exists k0 ≥ 1 such that v(k) < 1/p(k) for any k > k0. The length of a vector means its norm (L1, L2 or L∞ norm).
_2.1. Lattice_
A lattice is a discrete additive subgroup of **R**^n. An equivalent definition is that a lattice consists of all integral linear combinations of a set of linearly independent vectors, i.e.,

L = { ∑_{i=1}^{d} zi bi | zi ∈ **Z** },

where b1, · · ·, bd are linearly independent over **R**. Such a set of vectors {bi} is called a lattice basis.
In the lattice theory, three important algorithmic problems are the shortest vector problem (SVP),
the closest vector problem (CVP) and the smallest basis problem (SBP). The SVP asks for the shortest
non-zero vector in a given lattice L. Given a lattice L and a vector v, the CVP is to find a lattice vector s
minimizing the length of the vector v − _s. Then, the SBP aims at finding a lattice basis minimizing the_
maximum of the lengths of its elements. The problems are of special significance in complexity theory
and cryptology. The SVP can be approximated by solving SBP. No polynomial-time algorithm is known
for the three problems. The best polynomial time algorithms to solve the SVP achieve only slightly
sub-exponential factors, and are based on the LLL algorithm [25].
Before 1996, the lattice theory only applies to cryptanalysis [14,18–22,26–29], especially in breaking
some knapsack cryptosystems. However, positive applications of the lattice theory in cryptology [30–33]
have been witnessed in the last ten years. Some cryptographers even introduce the knapsack cryptosystems
into the lattice-based cryptosystems due to the applications of lattice reduction algorithms in breaking
the knapsack-type cryptosystems. For example, Sakurai [34] viewed the lattice-based cryptosystems as
the revival of the knapsack trapdoors. More negative and positive applications of the lattice theory in
cryptology can be found in [34,35].
-----
The SVP and CVP are widely believed as difficult problems. However, interestingly, experimental
results showed that lattice reduction algorithms behave much more nicely, especially in the
low-dimensional (<300) lattices, than was expected from the worst-case proved bounds. When the
dimension of a lattice is low, the lattice reduction algorithms can serve as a lattice oracle (SVP or CVP
oracle). Therefore, to make a PKC invulnerable to lattice attacks, generally, the dimension is required
to be sufficiently high (>500) without reducing the practicability, e.g., NTRU [32]. In this paper, a new
method of constructing knapsack-type cryptosystem is presented. The dimension of the lattice underlying
the cryptosystem is low (about 150), and it is still secure against lattice attacks under some reasonable
assumptions.
_2.2. Low-Density Subset Sum Attacks_
Given a cargo vector A = (a1, · · ·, an) and an integer s, the 0–1 knapsack problem, or more precisely the subset-sum problem, is to determine a binary vector X = (x1, · · ·, xn) such that the scalar product of A and X is s. More generally, we define the general knapsack problem or compact knapsack problem as to find a vector X = (x1, · · ·, xn) with xi ∈ [0, 2^b − 1] such that

∑_{i=1}^{n} ai xi = s. (1)

Note that Equation (1) is linear about the variable X. However, when the linearity restriction is removed and a new function f quadratic about X is defined such that f(X) = s, i.e., X A X^T = ∑_{i=1}^{n} ∑_{j=1}^{n} aij xi xj = s, we call it a matrix cover problem. Especially when the matrix A is diagonal, A = diag(a1, · · ·, an), the matrix cover problem turns out to find the vector X = (x1, · · ·, xn) subject to ∑_{i=1}^{n} ai xi^2 = s. This problem is called a quadratic knapsack problem. These problems had been used to construct knapsack-type PKCs [4,7,12].
In a compact knapsack cryptosystem, the public key of the system is a cargo vector A = (a1, · · ·, an). A message M = (m1, · · ·, mn) with mi ∈ [0, k] is encrypted into

s = ∑_{i=1}^{n} ai mi. (2)
An important characteristic of a knapsack cryptosystem is the density of the cryptosystem.
A cryptosystem’s density has a great effect on its security against lattice-based attacks such as low-density
subset-sum attack and on whether it can be used to generate digital signatures for data origin authentication
purposes. In a high density cryptosystem, almost all the messages can be signed. Informally, the density of
a knapsack cryptosystem is defined as the fraction of the signable messages among all the messages [36],
or the density is approximately the information rate, which is the ratio of the number of bits in plaintext
message over the average number of bits in cipher-text [23]. Now, we provide the formal definition
of density.
**Definition 1 (Density [37]).** _The density d of the compact knapsack problem (2) is defined by_

d = (∑_{i=1}^{n} ei) / log2 Cmax, (3)

_where Cmax = k ∑_{i=1}^{n} ai is the maximum value of the cipher-text in the system and ei = |mi|2 = ⌈log2(k + 1)⌉._
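Definition 1 is straightforward to evaluate numerically. The sketch below computes d for a toy cargo vector in Python; the function name and parameters are our own choices, not part of the paper.

```python
import math

def density(a, k):
    """Density d of the compact knapsack (2) per Definition 1:
    d = (n * e) / log2(Cmax), with e = ceil(log2(k+1)) and Cmax = k * sum(a)."""
    e = math.ceil(math.log2(k + 1))  # bits per message coordinate, e_i
    return len(a) * e / math.log2(k * sum(a))
```

For instance, the binary (k = 1) cargo vector (3, 5, 7, 11) has density 4 / log2(26), roughly 0.85, i.e., below the 0.9408... bound.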
-----
We want to give two remarks about the definition here. Firstly, ⌈log2(k + 1)⌉ bits are needed to represent the k + 1 integers in [0, k]. Thus, we set ei = ⌈log2(k + 1)⌉. Secondly, some different definitions can be found in the literature. For example, Orton [7] defined the density of Equation (2) as

d = n⌈log2(k + 1)⌉ / log2 max ai.

However, Ref. [37] gave a smaller density definition than that given in [7]. Thus, we adopt the smaller definition.
When the density d of a knapsack problem is too low, there exists an efficient reduction from the knapsack problem to the SVP over a lattice. Coster et al. [20] showed that, if d < 0.9408..., which is the improvement of the earlier bound 0.6463... [19], then the knapsack problem can be easily solved with a non-negligible probability with a single call to a lattice oracle.

Given a knapsack system A = (a1, · · ·, an) and a sum s = ∑_{i=1}^{n} ai xi, the basic idea of the low-density attack [20] runs as follows. Using the public key, the attacker first constructs a matrix V whose rows v1, · · ·, v_{n+1} are

vi = (0, · · ·, 0, 1, 0, · · ·, 0, N ai) (a single 1 in position i), for 1 ≤ i ≤ n,
v_{n+1} = (1/2, 1/2, · · ·, 1/2, −N s),

where N > √n / 2. The integral combinations of the row vectors v1, · · ·, v_{n+1} of V form an (n + 1)-dimensional lattice L. Suppose that e = (e1, · · ·, en) is a solution to s = ∑_{i=1}^{n} ai xi. Note that the vector

f = (f1, · · ·, fn, 0) = (e1 + 1/2, · · ·, en + 1/2, 0) = e1 v1 + · · · + en vn + v_{n+1} ∈ L,

which contains enough information for the attacker to solve a solution to s = ∑_{i=1}^{n} ai xi. The length of f is relatively small. The short vector f can be found with non-negligible probability by using lattice basis reduction algorithms.
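The basis V and the target vector f can be written down mechanically. The sketch below builds them with exact rational arithmetic and checks that the last coordinate of f vanishes for a valid solution e (it does not run a reduction algorithm; the helper names are ours).

```python
from fractions import Fraction

def cjloss_basis(a, s, N):
    """Rows v1..vn: unit vector i with N*a_i appended;
    row v_{n+1}: (1/2, ..., 1/2, -N*s)."""
    n = len(a)
    rows = [[Fraction(int(i == j)) for j in range(n)] + [Fraction(N * a[i])]
            for i in range(n)]
    rows.append([Fraction(1, 2)] * n + [Fraction(-N * s)])
    return rows

def short_vector(e, rows):
    """f = e1*v1 + ... + en*vn + v_{n+1}; the last entry is N*(sum a_i e_i - s),
    which is 0 exactly when e solves the knapsack instance."""
    coeffs = list(e) + [1]
    m = len(rows)
    return [sum(coeffs[i] * rows[i][j] for i in range(m)) for j in range(m)]
```

For a = (3, 5, 7), s = 10 and the solution e = (1, 0, 1), f comes out as (3/2, 1/2, 3/2, 0), short compared with the other lattice vectors.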
In fact, even if we design a knapsack system with the density close to 1 and greater than 0.9408..., we cannot claim that it is secure against low-density subset sum attacks. Let the length of the message vectors be bounded by r and N(n, r) be the number of integral lattice points with length at most r in the n-dimensional sphere of radius r centered at the origin. Assume that the lattice points in the sphere have the same length and that the lattice reduction algorithms can find a lattice point in the sphere. Thus, the lattice point output by the lattice reduction algorithm is exactly the message vector with a probability Pr = 1/N(n, r). However, if the density is slightly greater than 0.9408..., N(n, r) is bounded by a constant O(1) or a polynomial function O(p(n)). In such a case, the probability Pr = 1/N(n, r) is non-negligible. This is why Omura et al. [26] showed that the low-density attack can be applied to the Chor–Rivest [5] and Okamoto–Tanaka–Uchiyama cryptosystems [38].
_2.3. Simultaneous Diophantine Approximation_
The simultaneous Diophantine approximation problem is a basic problem in Diophantine
approximation theory, which has found uses both in cryptanalysis [22,28] and cryptography [39].
The problem is defined as follows.
-----
**Definition 2 (Simultaneous Diophantine approximation).** _The simultaneous Diophantine approximation problem is: given n + 1 real numbers r1, · · ·, rn, ε > 0, and an integer Q > 0, find integers p1, · · ·, pn and q : 0 < q ≤ Q, such that_

|ri − pi/q| ≤ ε/q.

Informally speaking, this problem asks for a set of fractions with a common and relatively small denominator approximating the given set of real numbers. There is a solution to the simultaneous Diophantine approximation problem if Q ≥ ε^{−n}, but no efficient algorithm is found. However, when viewed as a problem involving lattices, the problem can be approximated by lattice basis reduction algorithms. Note that the integral linear combinations of the row vectors a1, · · ·, a_{n+1} of the (n + 1) × (n + 1) matrix A form a lattice L, where

ai = (0, · · ·, 0, 1, 0, · · ·, 0, 0) (a single 1 in position i), for 1 ≤ i ≤ n,
a_{n+1} = (−r1, −r2, · · ·, −rn, ε/Q).

Lattice basis reduction algorithms can be applied to the lattice L to output a reduced basis. The shortest vector b in the reduced basis can be used to approximate the simultaneous Diophantine approximation problem. Since b ∈ L, there exist integers p1, · · ·, pn and q such that

b = ∑_{i=1}^{n} pi ai + q a_{n+1} = (p1 − q r1, · · ·, pn − q rn, qε/Q).

Since b is short, each pi − q ri is small, which is equivalent to saying that |ri − pi/q| is also small. Thus, {pi/q} is a set of fractions, with a common denominator q, approximating {ri}. This informal demonstration reveals the relation between lattice reduction algorithms and the simultaneous Diophantine approximation problem.
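The lattice construction above can be made concrete with a few lines of exact-rational Python. This sketch (helper names are our own) only builds the basis and verifies the stated form of a combination b; it does not perform lattice reduction.

```python
from fractions import Fraction

def sda_basis(r, eps, Q):
    """Rows a_1..a_n: unit vectors padded with a trailing 0;
    row a_{n+1}: (-r_1, ..., -r_n, eps/Q)."""
    n = len(r)
    rows = [[Fraction(int(i == j)) for j in range(n)] + [Fraction(0)]
            for i in range(n)]
    rows.append([-Fraction(x) for x in r] + [Fraction(eps) / Q])
    return rows

def combine(p, q, rows):
    """b = p_1 a_1 + ... + p_n a_n + q a_{n+1}
         = (p_1 - q r_1, ..., p_n - q r_n, q*eps/Q)."""
    coeffs = list(p) + [q]
    m = len(rows)
    return [sum(coeffs[i] * rows[i][j] for i in range(m)) for j in range(m)]
```

With r = (1/2, 3/4), ε = 1/8, Q = 4 and the combination p = (1, 2), q = 2, one gets b = (0, 1/2, 1/16): the first n entries are exactly the approximation errors pi − q ri.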
**3. Easy Knapsack-Type Problems**
Knapsack-type PKCs always follow a common design morphology [9], that is:
- Construct an easy instance P[easy] from an intractable problem P.
- Shuffle P[easy] to make the resultant problem P[shuffle] seemingly-hard and indistinguishable
from P.
- _P[shuffle] is published as the encryption key. The information s by means of which P[shuffle] is_
reduced to P[easy] is kept as the secret key.
- The authorized receiver knowing s solves P[easy] to recover a message, whereas the task for the
attacker is to solve P[shuffle].
In the knapsack public-key cryptography, several kinds of easy knapsack problems have been considered,
e.g., super-increasing sequences [4], the cargo vectors used in the Graham–Shamir cryptosystem [40] and
the knapsack sequences [41] used for attacking a knapsack-type cryptosystem [16] based on Diophantine
equations. In this section, we propose several new easy knapsack problems, which can be viewed as the
generalizations of those problems presented in [42,43].
-----
_3.1. An Easy Compact Knapsack Problem_
Simultaneous compact knapsack problem is considered in this section: given the sums (s1, s2) ∈ (**Z**^+)^2 and two cargo vectors A = (a1, · · ·, an), B = (b1, · · ·, bn) ∈ (**Z**^+)^n, find a vector X = (x1, · · ·, xn) such that s1 = ∑_{i=1}^{n} ai xi and s2 = ∑_{i=1}^{n} bi xi. The problem has a solution only if gcd(a1, · · ·, an) | s1 and gcd(b1, · · ·, bn) | s2. Without loss of generality, in this paper, we always assume that gcd(a1, · · ·, an) = gcd(b1, · · ·, bn) = 1. The following theorem gives an easy instance of the simultaneous compact knapsack problem.
**Theorem 1.** _Given two cargo vectors A = (a1, · · ·, an) and B = (b1, · · ·, bn). Denote by ci and di the gcd of the first i components of A and B, respectively, i.e., ci = gcd(a1, · · ·, ai), di = gcd(b1, · · ·, bi). If 2 ≤ k ≤ λi = lcm(ci−1/ci, di−1/di), i = 2, · · ·, n, the following simultaneous compact knapsack problem_

∑_{i=1}^{n} ai xi = s1, (4)

∑_{i=1}^{n} bi xi = s2, 0 ≤ xi ≤ k − 1, (5)

_can be solved in polynomial (in n) time. Furthermore, the problem has at most one solution._
**Proof.** Note that cn−1 | ai, i = 1, · · ·, n − 1, so Equation (4) mod cn−1 gives an xn ≡ s1 (mod cn−1). Thus, we can invert an and obtain xn ≡ s1 an^{−1} (mod cn−1). Similarly, we get xn ≡ s2 bn^{−1} (mod dn−1). Then, we can determine a unique xn ∈ **Z**λn according to CRT, where λn = lcm(cn−1/cn, dn−1/dn) = lcm(cn−1, dn−1) ≥ k. If the unique xn obtained is greater than k − 1, we can conclude that the simultaneous compact knapsack problem has no solutions. Otherwise, we determine an xn, 0 ≤ xn ≤ k − 1.

Suppose that the values of xi+1, · · ·, xn, i = n − 1, · · ·, 2 have been determined, then

∑_{j=1}^{i} aj xj = s1 − ∑_{j=i+1}^{n} aj xj, (6)

and

∑_{j=1}^{i} bj xj = s2 − ∑_{j=i+1}^{n} bj xj. (7)

Note that Equation (6) modulo ci−1 gives

ai xi ≡ s1 − ∑_{j=i+1}^{n} aj xj (mod ci−1).

It is easy to verify that gcd(ai, ci−1) = ci and gcd(ai/ci, ci−1/ci) = 1. If ci | (s1 − ∑_{j=i+1}^{n} aj xj), we have

(ai/ci) xi ≡ (s1 − ∑_{j=i+1}^{n} aj xj)/ci (mod ci−1/ci); (8)

otherwise, the simultaneous compact knapsack problems (4) and (5) have no solutions. By inverting ai/ci, we obtain according to Equation (8)

xi ≡ ((s1 − ∑_{j=i+1}^{n} aj xj)/ci) · (ai/ci)^{−1} (mod ci−1/ci). (9)
-----
Similarly, we can deduce that problems (4) and (5) have no solutions or have a congruence

xi ≡ ((s2 − ∑_{j=i+1}^{n} bj xj)/di) · (bi/di)^{−1} (mod di−1/di). (10)

From (9) and (10), we can determine a unique xi ∈ **Z**λi according to the CRT, where λi = lcm(ci−1/ci, di−1/di) ≥ k. Thus, if (4) and (5) have solutions, we can determine a unique xi : 0 ≤ xi ≤ k − 1.
With the determined values of x2, · · ·, xn, we get

a1 x1 = s1 − ∑_{j=2}^{n} aj xj =: r1,

and

b1 x1 = s2 − ∑_{j=2}^{n} bj xj =: r2.

If a1 | r1 and b1 | r2, respectively, and the two quotients are identical, i.e.,

0 ≤ r1/a1 = r2/b1 =: r ≤ k − 1,

we set x1 = r; otherwise, we deduce that the problems (4) and (5) have no solutions. Even if the unique values of x1, · · ·, xn have been determined, we cannot claim that they are the solutions to (4) and (5). We need to verify whether x1, · · ·, xn satisfy (4) and (5). If yes, then X = (x1, · · ·, xn) is a solution to (4) and (5); otherwise, (4) and (5) have no solutions.
To determine each xi, we need to solve two modular equations by using CRT. This problem can be
solved only by computing 2n modular equations. Thus, the simultaneous compact knapsack problems (4)
and (5) can be solved in polynomial (in n) time. If the problem has solutions, each xi is uniquely determined
according to CRT. Thus, the simultaneous compact knapsack problem has at most one solution.
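The proof is constructive, so the back-substitution it describes can be transcribed directly. The sketch below is our own Python rendering of that procedure (function names and the CRT helper are ours), including the final verification step:

```python
from math import gcd

def _crt(r1, m1, r2, m2):
    # combine x = r1 (mod m1) and x = r2 (mod m2); moduli need not be coprime
    g = gcd(m1, m2)
    if (r2 - r1) % g:
        return None
    m = m1 // g * m2
    t = (r2 - r1) // g * pow(m1 // g, -1, m2 // g) % (m2 // g) if m2 > g else 0
    return (r1 + m1 * t) % m

def solve_simultaneous(a, b, s1, s2, k):
    """Theorem 1: solve sum a_i x_i = s1, sum b_i x_i = s2, 0 <= x_i <= k-1,
    or return None when no solution exists."""
    n = len(a)
    c, d = [a[0]], [b[0]]                        # running gcds c_i, d_i
    for i in range(1, n):
        c.append(gcd(c[-1], a[i]))
        d.append(gcd(d[-1], b[i]))
    x, t1, t2 = [0] * n, s1, s2
    for i in range(n - 1, 0, -1):                # determine x_n, ..., x_2
        if t1 % c[i] or t2 % d[i]:
            return None
        m1, m2 = c[i - 1] // c[i], d[i - 1] // d[i]
        r1 = t1 // c[i] * pow(a[i] // c[i], -1, m1) % m1 if m1 > 1 else 0
        r2 = t2 // d[i] * pow(b[i] // d[i], -1, m2) % m2 if m2 > 1 else 0
        xi = _crt(r1, m1, r2, m2)                # unique residue mod lambda_i
        if xi is None or xi > k - 1:
            return None
        x[i] = xi
        t1, t2 = t1 - a[i] * xi, t2 - b[i] * xi
    if t1 % a[0] or t2 % b[0] or t1 // a[0] != t2 // b[0]:   # determine x_1
        return None
    x[0] = t1 // a[0]
    if not 0 <= x[0] <= k - 1:
        return None
    ok = (sum(u * v for u, v in zip(a, x)) == s1 and
          sum(u * v for u, v in zip(b, x)) == s2)
    return x if ok else None
```

For example, with A = (12, 6, 1), B = (10, 5, 3) and k = 2 (so λ2 = lcm(2, 2) = 2 and λ3 = lcm(6, 5) = 30), the sums s1 = s2 = 13 decode uniquely to X = (1, 0, 1).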
However, a high-density knapsack-type cryptosystem cannot be designed based on this easy knapsack problem. It should be generalized in some other way.
_3.2. Generalization of the Simultaneous Compact Knapsack Problem_
Before generalizing the simultaneous compact knapsack problem, we first introduce some useful notations to make the discussion more convenient. Given I ⊂ **Z**, K ⊂ **Z**^+ and J = {j = (j1, j2) | j1, j2 ∈ **Z**^+}, we use I^K to denote the set {i^k | i ∈ I, k ∈ K}. For every j = (j1, j2) ∈ J, I^K mod j represents the set {i^k mod j = (i^k mod j1, i^k mod j2) | i ∈ I, k ∈ K}. Generally speaking, we have the following inequalities:

∀j ∈ J, |I^K mod j| ≤ |I^K| ≤ |I| × |K|.

The second "≤" holds in that it is possible for different i1, i2 and k1, k2 to give an identical i1^{k1} = i2^{k2}, for example, 2^2 = 4^1; of course, two different i1^{k1} and i2^{k2} mod j also can give rise to the same value.
-----
**Definition 3.** If ∀j ∈ J, |I^K mod j| = |I^K| = |I| × |K|, we call the set I truly-distinguishable (T-DIST) modulo
the set J under the indices of K; if ∀j ∈ J, |I^K mod j| = |I^K| < |I| × |K|, we call the set I pseudo-distinguishable
(P-DIST) modulo the set J under the indices of K; if ∃j ∈ J, |I^K mod j| < |I^K|, we call the set I indistinguishable
(IND) modulo the set J under the indices of K. If different (i1, k1) and (i2, k2) result in the same i1^{k1} ≡ i2^{k2} (mod j),
we call the 3-tuple ((i1, k1), (i2, k2), j) a collision. In particular, collisions in the P-DIST case are called
trivial collisions; collisions in the IND case are called non-trivial collisions.
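Definition 3 can be checked mechanically for small sets. The sketch below (the function name `classify` is ours) enumerates I^K and its residue pairs:

```python
def classify(I, K, J):
    """Classify I modulo the pair set J under the indices of K as
    'T-DIST', 'P-DIST' or 'IND', following Definition 3."""
    IK = {i**k for i in I for k in K}                    # the value set I^K
    for j1, j2 in J:
        if len({(v % j1, v % j2) for v in IK}) < len(IK):
            return 'IND'                                 # a non-trivial collision
    # here |I^K mod j| = |I^K| for every j in J
    return 'T-DIST' if len(IK) == len(I) * len(K) else 'P-DIST'
```

For example, `classify(range(8), (1, 2, 3), [(1, 51)])` returns `'P-DIST'`: the 24 pairs (i, k) give only 19 distinct powers (trivial collisions such as 2^2 = 4^1), but all 19 remain distinct modulo (1, 51).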
**Theorem 2.** A set I is T-DIST (P-DIST, or IND, respectively) modulo the set J under the indices of K iff I is T-DIST
(P-DIST, or IND, respectively) modulo the set J^T under the indices of K, where J^T = {(j2, j1) | (j1, j2) ∈ J}.

**Proof.** It suffices to note that ∀j = (j1, j2) ∈ J, i1^{k1} mod (j1, j2) = i2^{k2} mod (j1, j2) iff
i1^{k1} mod (j2, j1) = i2^{k2} mod (j2, j1).
Considering the definitions: in the T-DIST case, no collisions occur, so given i^k mod j we can
uniquely determine the corresponding (i, k). In the P-DIST case, when a collision occurs, we can only
determine a unique value r from i^k mod j; however, there exist at least two integer pairs (i1, k1) and
(i2, k2) such that i1^{k1} = i2^{k2} = r. A collision occurs in the IND case iff (i1, k1) ≠ (i2, k2), i1^{k1} ≠ i2^{k2} and
i1^{k1} mod j = i2^{k2} mod j.
**Theorem 3.** Given two cargo vectors A = (a1, · · ·, an), B = (b1, · · ·, bn) and two sets I, K ⊂ Z^+ with |I|, |K| =
O(1), let ci and di respectively denote the gcd of the first i components of A and B, and let J = {(ci−1/ci, di−1/di) | i =
2, · · ·, n}. If I is T-DIST modulo the set J under the indices of K, the simultaneous Diophantine equations

∑_{i=1}^{n} ai xi^{ki} = s1,  ∑_{i=1}^{n} bi xi^{ki} = s2,  (11)

with xi ∈ I and ki ∈ K, can be solved in polynomial (in n) time. Furthermore, the problem has at most one solution
in X = (x1, · · ·, xn).
**Proof.** Note that |I|, |K| = O(1), so we can construct a table of I modulo J under the indices of K in
polynomial time, and its query operations can also be carried out in polynomial time.

The proof of the theorem is analogous to that of Theorem 1. The only distinction is: in Theorem 1,
we use the CRT to determine a unique xi ∈ Z_{λi}; whereas, in Theorem 3, when we obtain a unique
xi^{ki} mod (ci−1/ci, di−1/di), we look up the table to determine the unique xi and xi^{ki}.

It can be concluded that, if the simultaneous Diophantine equations have solutions, there exists only
one solution, and the problem can be solved in polynomial (in n) time.
Algorithm 1 formalizes the computational method for solving the simultaneous Diophantine
Equations (11).

The requirement "T-DIST" is not necessary. In fact, if I is P-DIST modulo the set J under the indices of
K, Theorem 3 and hence Algorithm 1 also work. In such a case, each xi^{ki} is uniquely determined, whereas
some values of xi are not uniquely determined. Now, we give the following theorem.
**Algorithm 1. Solving the simultaneous Diophantine equations**

1 Construct a table T showing that I is T-DIST modulo J under the indices of K, and store the table.
2 Compute l1n = s1 an^{−1} (mod cn−1), l2n = s2 bn^{−1} (mod dn−1).
   1) Look up T and decide whether an entry matches (l1n, l2n).
   2) If no, output "No Solutions" and exit;
   3) Otherwise, determine and store the values of xn and xn^{kn}.
3 For i = n − 1, · · ·, 2:
   1) Decide whether ci and di divide r1i = s1 − ∑_{j=i+1}^{n} aj xj^{kj} and r2i = s2 − ∑_{j=i+1}^{n} bj xj^{kj}, respectively.
   2) If no, output "No Solutions" and exit;
   3) Otherwise, calculate l1i = (r1i/ci)(ai/ci)^{−1} mod (ci−1/ci), l2i = (r2i/di)(bi/di)^{−1} mod (di−1/di).
      If no entries in T match (l1i, l2i), exit with "No Solutions";
      Otherwise, determine and store the unique xi and xi^{ki}.
4 Check whether c1 = a1 divides r11 = s1 − ∑_{j=2}^{n} aj xj^{kj}, d1 = b1 divides r21 = s2 − ∑_{j=2}^{n} bj xj^{kj},
   and r11/a1 = r21/b1.
   1) If yes, set x1^{k1} = r11/a1 = r21/b1;
   2) Otherwise, output "No Solutions" and exit.
   3) Solve x1 from x1^{k1}, and store x1 and x1^{k1}.
5 Decide whether ∑_{i=1}^{n} ai xi^{ki} = s1 and ∑_{i=1}^{n} bi xi^{ki} = s2.
   1) If yes, output X = (x1, · · ·, xn) and exit;
   2) Otherwise, output "No Solutions" and exit.
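A minimal Python sketch of Algorithm 1 follows. It recovers the powers yi = xi^{ki} (which are unique even in the P-DIST case), assumes cn = dn = 1 as holds for vectors produced by Algorithm 2, and is exercised on the n = 9 instance of Table 1 in Section 6:

```python
from math import gcd

def solve_simultaneous(A, B, s1, s2, I, K):
    """Algorithm 1 (sketch): solve sum(a_i*y_i) = s1, sum(b_i*y_i) = s2
    for y_i in I^K, assuming gcd(A) = gcd(B) = 1.
    Returns [y_1, ..., y_n] with y_i = x_i^{k_i}, or None."""
    n = len(A)
    IK = sorted({i**k for i in I for k in K})          # the lookup table T
    c, d = [A[0]], [B[0]]                              # gcd chains c_i, d_i
    for i in range(1, n):
        c.append(gcd(c[-1], A[i]))
        d.append(gcd(d[-1], B[i]))
    r1, r2, Y = s1, s2, [0] * n
    for i in range(n - 1, 0, -1):                      # steps 2-3
        if r1 % c[i] or r2 % d[i]:
            return None
        m1, m2 = c[i - 1] // c[i], d[i - 1] // d[i]
        l1 = (r1 // c[i]) * pow(A[i] // c[i], -1, m1) % m1
        l2 = (r2 // d[i]) * pow(B[i] // d[i], -1, m2) % m2
        match = [y for y in IK if y % m1 == l1 and y % m2 == l2]
        if len(match) != 1:                            # table entry must be unique
            return None
        Y[i] = match[0]
        r1 -= A[i] * Y[i]
        r2 -= B[i] * Y[i]
    if r1 % A[0] or r2 % B[0] or r1 // A[0] != r2 // B[0]:   # step 4
        return None
    Y[0] = r1 // A[0]
    return Y if Y[0] in IK else None                   # step 5 holds by construction

# the n = 9 toy instance of Table 1 (Section 6); s1, s2 are the subset sums (18)
A = [10000, 6000, 7000, 5800, 5300, 5840, 8210, 6662, 5113]
B = [10000, 5000, 8000, 5500, 5100, 6150, 5830, 5335, 6007]
Y = solve_simultaneous(A, B, 574994, 553113, {0, 1, 2, 3}, {1, 2, 3})
# Y == [4, 27, 3, 27, 2, 27, 0, 1, 4], the powers m_i^{g_i} of Table 1
```

Here the gcd ratios of A and B are (5, 2) and (2, 5) in alternation, so each table lookup at step 3 has a unique match.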
**Theorem 4.** Given two cargo vectors A = (a1, · · ·, an), B = (b1, · · ·, bn) and two sets I, K ⊂ Z^+ with
|I|, |K| = O(1), denote by ci and di the gcd of the first i components of A and B, respectively, and let J =
{(ci−1/ci, di−1/di) | i = 2, · · ·, n}. If I is P-DIST modulo the set J under the indices of K, the simultaneous
Diophantine equations

∑_{i=1}^{n} ai xi^{ki} = s1,  ∑_{i=1}^{n} bi xi^{ki} = s2,

with xi ∈ I and ki ∈ K, can be solved in polynomial (in n) time. Furthermore, the problem has at most one solution in
(x1^{k1}, · · ·, xn^{kn}).
**4. The Proposed PKCHD Cryptosystem**
This section presents the proposed PKCHD, a probabilistic knapsack-type cryptosystem. The public
information consists of two sets I, K ⊂ Z^+ with |I|, |K| = O(1), and n ∈ Z^+, the dimension of a message
vector. Let

µ = max{i^k | i ∈ I, k ∈ K}.  (12)
The cryptographic algorithm consists of three sub-algorithms: key generation, encryption and decryption.
_4.1. Key Generation_
Randomly choose two cargo vectors A = (a1, · · ·, an) and B = (b1, · · ·, bn) ∈ (Z^+)^n, and denote by ci
and di the gcd of the first i components of A and B, respectively. Let J = {(ci−1/ci, di−1/di) | i = 2, · · ·, n}.
The randomly-chosen A and B must satisfy the following condition:
**Con: I is T-DIST modulo the set J under the indices of K.**
Randomly choose two prime numbers p ≠ q such that

p ≥ µ ∑_{i=1}^{n} ai,  q ≥ µ ∑_{i=1}^{n} bi.  (13)

Let N = pq. Compute the vector E = (e1, · · ·, en) according to the CRT,

ei ≡ ai (mod p),  ei ≡ bi (mod q).  (14)

Compute w = en^{−1} (mod N). The public encrypting vector is F = (f1, · · ·, fn) = (f1, · · ·, fn−1, 1) with each

fi ≡ w ei (mod N).  (15)

The secret key consists of p, q and en. For decrypting cipher-texts, the receiver also stores the values of ci and di.
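The CRT construction of E and F in Equations (14)–(15) can be sketched as follows. The function name `make_keys` is ours; the values checked below are reproduced from Table 1 in Section 6:

```python
def make_keys(A, B, p, q):
    """Key generation sketch (Eqs. (13)-(15)): build E by the CRT
    (e_i = a_i mod p, e_i = b_i mod q), then F = w*E mod N with
    w = e_n^{-1} mod N."""
    N = p * q
    # CRT basis: cp is 1 mod p and 0 mod q; cq is 0 mod p and 1 mod q
    cp = q * pow(q, -1, p)
    cq = p * pow(p, -1, q)
    E = [(a * cp + b * cq) % N for a, b in zip(A, B)]
    w = pow(E[-1], -1, N)
    F = [w * e % N for e in E]
    return N, E, w, F

# reproduce the toy keys of Table 1 (Section 6)
A = [10000, 6000, 7000, 5800, 5300, 5840, 8210, 6662, 5113]
B = [10000, 5000, 8000, 5500, 5100, 6150, 5830, 5335, 6007]
N, E, w, F = make_keys(A, B, 999979, 999983)
# N == 999962000357, w == 759237254392, F[-1] == 1
```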
_4.2. Encryption_
Let M = (m1, · · ·, mn), mi ∈ I, be the message to be encrypted, and let G = (g1, · · ·, gn), gi ∈ K, be a
randomly chosen index vector. Using the public key F, the cipher-text c is computed by

c = ∑_{i=1}^{n} fi mi^{gi}.  (16)
_4.3. Decryption_
To decipher a cipher-text c, the receiver first computes sp and sq by

sp ≡ en c ≡ ∑_{i=1}^{n} ei mi^{gi} ≡ ∑_{i=1}^{n} ai mi^{gi} (mod p),
sq ≡ en c ≡ ∑_{i=1}^{n} ei mi^{gi} ≡ ∑_{i=1}^{n} bi mi^{gi} (mod q).  (17)

From Equations (12) and (13), we know that

sp = ∑_{i=1}^{n} ai mi^{gi},  sq = ∑_{i=1}^{n} bi mi^{gi}.  (18)

According to the key generation algorithm and Theorem 3, Equation (18) is a pair of easy
simultaneous Diophantine equations. The receiver can recover the message M by solving Equation (18)
according to Algorithm 1.
_4.4. Remarks_
Even though the parameter N is not an RSA integer, the system works. The "T-DIST" requirement for
the cargo vectors A and B in Con is not necessary. In fact, A and B may instead meet the weaker requirement

**Con^∗: I is P-DIST modulo the set J under the indices of K.**

In that case, a cipher-text will not be uniquely deciphered. The sender can add some redundant information
to the message vector so that the receiver can pick out the exact message from all the plaintexts he deciphers.
Alternatively, both parties can agree on an encoding method by means of which the messages are encoded as
plaintext vectors so that no collision occurs among the encoded plaintext vectors.
_4.5. A Practical Implementation_
To implement the PKCHD in real-life practice, we choose I = {0, 1, · · ·, 7}, K = {1, 2, 3} and n = 150.
Thus, µ = max i^k = 7^3 = 343. Let W be a set consisting of the following pairs (w1, w2) ∈ (Z^+)^2: (1,51),
(1,65), (1,66), (2,33), (2,37), (2,39), (2,41), (2,43), (2,47), (3,17), (3,22), (3,25), (3,26), (3,29), (3,32), (4,23), (5,13),
(5,16), (5,19), (6,11), (6,13), (7,11), (8,11), (9,11). We have the following theorem.
**Theorem 5.** I is P-DIST modulo the set J = W ∪ W^T under the indices of K.

**Proof.** According to Theorem 2, we only need to show that I is P-DIST modulo the set W under the indices
of K, which can be proved by verifying that for every (w1, w2) ∈ W,

|I^K mod (w1, w2)| = |I^K| < |I| × |K|.

Take (1,51) as an example:

I^K mod (1, 51) = {(0, i) | i = 0, · · ·, 9, 12, 13, 16, 23, 25, 27, 36, 37, 49}.

Thus, |I^K mod (1, 51)| = |I^K| = 19 < |I| × |K| = 24.
In fact, J consists of all the 48 integer pairs j = (u, v) with uv < 100 such that I is P-DIST modulo the set
{(u, v)} under the indices of K = {1, 2, 3}.
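The computational verification behind Theorem 5 can be reproduced directly by enumerating I^K and its residue pairs for every pair of W:

```python
# Verify Theorem 5: I = {0,...,7} is P-DIST modulo every pair of W
# (hence, by Theorem 2, modulo J = W ∪ W^T) under the indices K = {1,2,3}.
I, K = range(8), (1, 2, 3)
W = [(1,51),(1,65),(1,66),(2,33),(2,37),(2,39),(2,41),(2,43),(2,47),
     (3,17),(3,22),(3,25),(3,26),(3,29),(3,32),(4,23),(5,13),(5,16),
     (5,19),(6,11),(6,13),(7,11),(8,11),(9,11)]
IK = {i**k for i in I for k in K}
assert len(IK) == 19 and len(I) * len(K) == 24   # |I^K| = 19 < 24
for w1, w2 in W:
    # P-DIST: the 19 residue pairs stay distinct (no non-trivial collisions)
    assert len({(v % w1, v % w2) for v in IK}) == len(IK)
```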
We randomly choose two cargo vectors A = (a1, · · ·, an) and B = (b1, · · ·, bn) such that

(ci−1/ci, di−1/di) ∈ J = W ∪ W^T,  i = 2, · · ·, n,

where ci = gcd(a1, · · ·, ai) and di = gcd(b1, · · ·, bi). According to Theorem 5, the generated vectors
A and B meet the requirement of Con^∗. We also generate RSA integers N = pq with p, q primes and
p ≥ 343 ∑_{i=1}^{n} ai, q ≥ 343 ∑_{i=1}^{n} bi. We compute the public vector F according to Equations (14) and (15).
The message M is split into n = 150 blocks with each block mi ∈ I. When generating G = (g1, · · ·, gn),
we should note that, if mi = 2, the corresponding gi ≠ 2. The cipher-text is computed as

c = ∑_{i=1}^{n} fi mi^{gi},  mi ∈ I and gi ∈ K.  (19)

The decryption is the same as in Equations (17) and (18). However, if we obtain some mi^{gi} = 4, we should
decipher mi as 4 rather than 2. When confronted with some mi^{gi} = 0 or 1, we can uniquely determine
mi = 0 or 1 (of course, gi is not uniquely determined). Thus, the message can be uniquely recovered.
One further observation is that the proposed implementation can be modified into a deterministic
encryption algorithm. We can develop an encoding algorithm which encodes messages into an n-dimensional
vector Y = (y1, · · ·, yn) with every yi ∈ M^G = {mi^{gi} | 0 ≤ mi ≤ 7, 1 ≤ gi ≤ 3}. In such a case, the decryption
also works: after deciphering a cipher-text into a Y ∈ (M^G)^n, the receiver can decode Y to recover the message.
Of course, the modification is of no special significance for either efficiency or security. However, it will be
very useful when we discuss the low-density attacks on our system.
**5. Performance and Parameter Specifications**
This section specifies the parameter selection and analyzes performance-related issues: the key
generation, the computational complexity of the encryption and decryption algorithms, the public key
size, and the information rate.
_5.1. Parameter Specifications_
The primes p and q should be slightly greater than µ ∑_{i=1}^{n} ai and µ ∑_{i=1}^{n} bi, respectively. When
generating the public and secret keys, |I|, |K| = O(1) is not strictly required. However, this requirement does
improve the efficiency of decryption: to decrypt a cipher-text, n table-query operations are needed by the
receiver, and if |I|, |K| = O(1), the table only includes |I| × |K| = O(1) rows, which makes the table-query
operations more efficient. In order to keep the data sizes of the public and secret keys acceptable, we should
require that ∀i ∈ I, k ∈ K, |i|2, |k|2 = O(1). From Equations (12) and (13), we know that, if the lengths of i and k
are relatively large, then the length of N and hence the lengths of the public and secret keys will be very
large, which would make the proposed PKCHD system impractical.
If factoring the generated modulus N is hard, N can be published without compromising the security.
Moreover, if the sender knows N, he can encrypt a message vector M by

c = ∑_{i=1}^{n} fi mi^{gi} (mod N),  (20)

which reduces the bit-length of the cipher-text. The public vector F can also be permuted and
re-indexed for increased security.
_Remark._ The public key size of the proposed system is about (n − 1)|N|2. Thus, the considerable
public data size may be a burden for realizing the PKC. In practice, the public key of a PKC is stored in a
certificate issued by a trusted third party; if the public key is too large, we can store a hashed value in the
certificate instead of the public key itself. To encrypt a message, the sender asks the intended receiver
for the public key F. If the public key F′ sent by the receiver matches the hashed value stored in the
receiver's certificate, the sender is convinced that the vector F′ is exactly the public key F of the receiver and
then uses it to encrypt the message. This method was suggested in [4] to compress the public key data size.
_5.2. On Generating the Keys_
Algorithm 2 generates the secret cargo vectors A = (a1, · · ·, an) and B = (b1, · · ·, bn) subject to Con^∗.

**Algorithm 2. Generating the secret cargo vectors A, B**

1 Given I and K, compute a set J ⊂ (Z^+)^2 such that I is P-DIST modulo J under the indices of K.
2 Randomly choose n − 1 integer pairs (ui, vi) ∈ J, i = 1, · · ·, n − 1, with repetition permitted.
3 1) Randomly choose 2(n − 1) numbers s2, · · ·, sn and t2, · · ·, tn with
      gcd(si, uj) = gcd(ti, vj) = 1 for i = 2, · · ·, n − 1, and gcd(si, si+1) = gcd(ti, ti+1) = 1.
   2) Set s1 = t1 = un = vn = 1, and for i = 1, · · ·, n calculate ai = si ∏_{j=i}^{n} uj, bi = ti ∏_{j=i}^{n} vj.
4 Output A = (a1, · · ·, an), B = (b1, · · ·, bn).
Given I and K, the set J of integer pairs can be generated by exhaustive computation over all the
integer pairs (u, v) with the product uv bounded by a small constant (for example, 100). On the
basis of Theorem 6, the generated vectors A and B indeed satisfy the requirement of Con^∗.
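A simplified sketch of Algorithm 2 follows, taking all si = ti = 1 (a legitimate but size-unbalanced choice); the gcd-ratio property claimed in Theorem 6 then holds by construction. The function name `gen_cargo` is ours:

```python
import random
from math import gcd, prod

def gen_cargo(J, n, rng):
    """Algorithm 2, simplified with all s_i = t_i = 1: the gcd ratios
    (c_{i-1}/c_i, d_{i-1}/d_i) then equal the chosen pairs (u_i, v_i) in J."""
    uv = [rng.choice(J) for _ in range(n - 1)] + [(1, 1)]   # u_n = v_n = 1
    A = [prod(u for u, _ in uv[i:]) for i in range(n)]      # a_i = prod_{j>=i} u_j
    B = [prod(v for _, v in uv[i:]) for i in range(n)]      # b_i = prod_{j>=i} v_j
    return A, B

A, B = gen_cargo([(5, 2), (2, 5), (3, 17)], 10, random.Random(42))
```

Running the gcd chains of the generated A and B confirms that every ratio pair lies in J and that gcd(A) = gcd(B) = 1.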
**Theorem 6.** Generated by Algorithm 2, the secret cargo vectors A and B are subject to Con^∗.

**Proof.** Let ci and di denote the gcd of the first i components of A and B, respectively. To prove that A and
B are subject to Con^∗, we only need to show that, for each i = 2, · · ·, n, the pair (ci−1/ci, di−1/di) belongs to the
generated set J.

It is easy to verify that

ci = gcd(a1, · · ·, ai) = gcd(s1 ∏_{j=1}^{n} uj, · · ·, si ∏_{j=i}^{n} uj) = gcd(∏_{j=1}^{n} uj, · · ·, ∏_{j=i}^{n} uj) = ∏_{j=i}^{n} uj.

Similarly,

di = gcd(b1, · · ·, bi) = ∏_{j=i}^{n} vj.

Therefore,

(ci−1/ci, di−1/di) = (ui−1, vi−1) ∈ J,

as desired.
In Algorithm 2, si and ti should be carefully chosen to guarantee that the generated ai and bi are not
too large and always have the same binary length. For example, we can choose those si and ti with lengths

|si|2 = |∏_{j=1}^{n} uj|2 − |∏_{j=i}^{n} uj|2,  |ti|2 = |∏_{j=1}^{n} vj|2 − |∏_{j=i}^{n} vj|2.  (21)

Thus,

|ai|2 ≈ |bi|2 ≈ |a1|2 ≈ |b1|2 ≈ |∏_{i=1}^{n−1} ui|2.
Note that p and q are slightly greater than µ ∑_{i=1}^{n} ai = 343 ∑_{i=1}^{n} ai and µ ∑_{i=1}^{n} bi = 343 ∑_{i=1}^{n} bi,
and that ui vi < 100. Then, for each fi, the length is

|fi|2 ≈ |N|2 ≈ |p|2 + |q|2 ≈ |343 n a1|2 + |343 n b1|2 ≈ |343^2 n^2 a1 b1|2
      ≈ 2|343n|2 + |∏_{i=1}^{n−1} ui vi|2 < 2|343n|2 + |100^{n−1}|2 ≈ 2|343n|2 + (n − 1)|100|2,  (22)
which is bounded by O(n). If the selected pairs (ui, vi) are uniformly distributed over the set J = W ∪ W^T,
the expected (geometric mean) value of ui · vi is

ui · vi ≈ (∏_{(u,v)∈J} uv)^{1/48} = (∏_{(w1,w2)∈W} w1 w2)^{1/24} ≈ 76.1.

Thus,

fi ≈ N ≈ 343^2 · n^2 · 76.1^{n−1}.  (23)
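The 76.1 figure is the geometric mean of the products w1·w2 over W (equivalently over J = W ∪ W^T, where each product appears twice), which can be checked numerically:

```python
from math import prod

W = [(1,51),(1,65),(1,66),(2,33),(2,37),(2,39),(2,41),(2,43),(2,47),
     (3,17),(3,22),(3,25),(3,26),(3,29),(3,32),(4,23),(5,13),(5,16),
     (5,19),(6,11),(6,13),(7,11),(8,11),(9,11)]
# J = W ∪ W^T has 48 pairs; the geometric mean of u*v over J is the
# 24th root of the product of w1*w2 over the 24 pairs of W
gm = prod(w1 * w2 for w1, w2 in W) ** (1 / 24)
# gm ≈ 76.1
```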
The two estimations from Equations (22) and (23) are critical for examining the effects of the
low-density subset sum attacks on the implementation of the proposed cryptosystem.
To defend against multiple transmission attacks, one way is to change the secret/public keys
frequently. However, since the proposed PKCHD cryptosystem requires an RSA modulus, we prefer a slight
modification to it in practical use: we can randomly choose two coprime numbers p and q, calculate
the modulus N = pq, and keep it secret. Notice that p and q are not necessarily primes.
_5.3. Computational Complexity_
In this section, we evaluate the computational complexity of the proposed PKCHD cryptosystem by
analyzing the costs for encrypting a message and decrypting a cipher-text. Since the length of fi is bounded
by O(n) (see Equation (22)), encrypting a message (Equation (16)) needs n − 1 multiplications and additions,
and n exponentiations. (1) Generally, the computation for the n − 1 additions is inexpensive. (2) As pointed
out earlier, the lengths of mi ∈ I and gi ∈ K are bounded by O(1), so it takes O(n) bit operations to perform
the n exponentiations; naturally, the binary length of mi^{gi} is also O(1). (3) Meanwhile, O(|fi|2 × |mi^{gi}|2) =
O(n) bit operations are required to do each multiplication fi × mi^{gi}. Thus, the computational complexity for
carrying out the n − 1 multiplications is O(n^2). Consequently, the computational complexity for
message encryption is O(n^2).
To decrypt a cipher-text, the receiver should do a modular multiplication in (17) and solve the
easy simultaneous Diophantine equations in (18). For the modular multiplication, O((|N|2)^2) =
O(n^2) bit operations are required. To solve the Diophantine Equations (18) for M, the receiver only
needs O(n) division, subtraction, multiplication and table-query operations. Generally, the O(n)
divisions and multiplications are the most costly. The bit lengths of the two integers involved in a
division (or a multiplication) are bounded by O(n) and O(1), respectively. Thus, the computational
complexity for doing the O(n) division, subtraction, multiplication and table-query operations is O(n^2).
Hence, the computational complexity of the decryption algorithm is also O(n^2).
Compared with the traditional asymmetric encryption primitives RSA [2] and El Gamal [3],
the proposed PKCHD cryptosystem improves on efficiency. For instance, both the encryption
and decryption of the proposed PKCHD cryptosystem are only of quadratic bit complexity, whereas
RSA [2] and El Gamal [3] reach cubic complexity in the security parameter. (If the length of the encryption
exponent e of RSA is bounded by O(1), for example, e = 3 or 2^{17} + 1, the encryption only performs
O(log2^2 N) bit operations.) To make the comparison more concrete, we take the encryption of the proposed
implementation as an example. If n = 150, from (23), we have

|fi|2 ≈ |343^2 · n^2 · 76.1^{n−1}|2 = 963.

Thus, about (n − 1) |fi|2 |mi^{gi}|2 = 149 · 963 · 9 ≈ 1.3 × 10^6 bit operations are required to finish the
encryption. The computational cost is only about 1.3 × 10^6/1024^2 ≈ 1.24 times that of a standard RSA-1024
modular multiplication.
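These bit-length and cost figures can be reproduced numerically from (23):

```python
from math import log2

n = 150
# |f_i|_2 from Eq. (23): f_i ≈ 343^2 * n^2 * 76.1^(n-1)
fi_bits = log2(343**2 * n**2) + (n - 1) * log2(76.1)
# encryption: (n-1) multiplications of a |f_i|_2-bit by a 9-bit number
bitops = (n - 1) * round(fi_bits) * 9
ratio = bitops / 1024**2       # vs. one RSA-1024 modular multiplication
```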
_5.4. Information Rate_
The information rate ρ of a cryptosystem is defined as the ratio of the binary length of the message to
that of the cipher-text. In the proposed PKCHD cryptosystem, the information rate turns out to be

ρ = 3n / log2 Cmax.

We need to evaluate the binary length of Cmax. Note that

Cmax = 343 ∑_{i=1}^{n} fi ≈ 343 [(n − 1) f1 + 1] ≈ 343 (n − 1) f1 ≈ 343^3 · (n − 1) n^2 · 76.1^{n−1}.  (24)

Thus, the information rate is evaluated by

ρ ≈ 3n / log2[343^3 · (n − 1) n^2 · 76.1^{n−1}].

When n = 150, the information rate ρ is about 0.46.
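The value ρ ≈ 0.46 follows directly from (24):

```python
from math import log2

n = 150
# Eq. (24): Cmax ≈ 343^3 * (n-1) * n^2 * 76.1^(n-1)
log2_Cmax = log2(343**3 * (n - 1) * n**2) + (n - 1) * log2(76.1)
rho = 3 * n / log2_Cmax        # information rate, ≈ 0.46 for n = 150
```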
**6. Security Analysis**
Suppose that an attacker tries to cryptanalyze the proposed PKCHD cryptosystem. Given a
cipher-text c, the attacker has two methods of attack. The first is to solve the cracking problem [44], that is,
to determine the unique message vector M = (m1, · · ·, mn) from his knowledge of the public information
and the enciphering function (16) such that (16) is satisfied for some small integers g1, · · ·, gn. The second is
to solve the trapdoor problem, that is, to reverse the basic mathematical construction of the trapdoor in a
PKC. If the attacker finds an efficient algorithm for the trapdoor problem, he also has an algorithm for the
cracking problem. This section investigates the hardness of both the cracking problem and the trapdoor
problem. To make our discussion more concrete, we only consider the attacks on the implementation
described in Section 4.
_6.1. On Solving the Cracking Problem_
6.1.1. Brute Force Attacks
One straightforward way to attack the system is to solve (19) for M = (m1, · · ·, mn) directly. Let M^G =
{mi^{gi} | 0 ≤ mi ≤ 7, 1 ≤ gi ≤ 3}. To determine whether (19) has a solution, and if so, to find it, the attacker
can compute all the sums ∑_{i=1}^{n} fi mi^{gi} with mi^{gi} ∈ M^G. However, note that |M^G| = 19, so the
brute force attack will take on the order of 19^n steps. A better method is to compute and sort each of the sets

S1 = {∑_{i=1}^{n/2} fi mi^{gi} | mi^{gi} ∈ M^G}

and

S2 = {c − ∑_{i=n/2+1}^{n} fi mi^{gi} | mi^{gi} ∈ M^G},

and then scan S1 and S2, looking for a common element. If a common element s = ∑_{i=1}^{n/2} fi mi^{gi} =
c − ∑_{i=n/2+1}^{n} fi mi^{gi} is found, then c = ∑_{i=1}^{n} fi mi^{gi}. The entire procedure takes on the order of
n · 19^{n/2} steps [24]. For proper parameters n, the attack is computationally infeasible.
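This meet-in-the-middle procedure can be sketched on a toy instance. The small weights F below are made up for illustration only; the function name `mitm` is ours:

```python
from itertools import product

def mitm(F, c, values):
    """Meet-in-the-middle sketch for c = sum(F[i]*y_i) with y_i in
    `values`: about |values|^(n/2) time/space instead of |values|^n."""
    h = len(F) // 2
    left = {sum(f * y for f, y in zip(F[:h], ys)): ys
            for ys in product(values, repeat=h)}            # the set S1 as a dict
    for ys in product(values, repeat=len(F) - h):           # scan S2
        rest = c - sum(f * y for f, y in zip(F[h:], ys))
        if rest in left:
            return left[rest] + ys
    return None

# toy demo: n = 6, values = I^K for I = {0..3}, K = {1,2,3}
vals = sorted({i**k for i in range(4) for k in (1, 2, 3)})
F = [97, 131, 173, 229, 283, 349]
c = sum(f * y for f, y in zip(F, (4, 27, 0, 1, 9, 3)))
Y = mitm(F, c, vals)
```

Note that `mitm` may return any preimage with the right sum, not necessarily the one used to build c, which anticipates the non-injectivity discussed in Section 6.1.3.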
6.1.2. Low-Density Attack
Low-density subset sum attacks only apply to linear multivariate equations. Note that the encryption
function (19) is nonlinear in the message vector M, so the low-density attacks cannot be used to
cryptanalyze the proposed cryptosystem directly. However, the attacker can re-linearize the encryption
function: by setting yi = mi^{gi} ∈ M^G, the attacker obtains a linear function from the encryption function (19),

c = ∑_{i=1}^{n} fi yi,  yi ∈ M^G.  (25)

Notice that the problem (25) is not a standard compact knapsack problem. Analogous to the case of the
standard knapsack problem, the best known method for solving the problem (25) seems to be the "brute
force attack" of Ref. [24]. However, if the attacker wants to use low-density attacks to recover the
corresponding message from a given cipher-text c, he cannot ensure that the solution to (25) belongs to
M^G. The attacker can instead solve the problem (25) by solving the compact knapsack problem defined below,

c = ∑_{i=1}^{n} fi yi,  0 ≤ yi ≤ 343.  (26)
The attacker hopes to find a solution Y = (y1, · · ·, yn) to (26) using the low-density attacks.
Now assume that the attacker has found such a solution Y to the compact knapsack problem (26).
If every yi ∈ M^G, then the attacker can simply solve the n equations yi = mi^{gi} to recover the message M.
Thus, we call such a vector Y a message plaintext, since it contains enough information about the message
M. On the contrary, if there exists a yi ∉ M^G, then Y contains little information about M and hence is
useless for the attacker in deciphering the cipher-text. Because such a vector Y is still a solution to (26), we
call it a plaintext vector. In other words, in the relinearization attack model, we view the plaintext
space as {0, · · ·, 343}^n and the message plaintext space as (M^G)^n. The difference between the two sets,
{0, · · ·, 343}^n − (M^G)^n, is the redundant information added to the messages; equivalently, we pick out
some elements of the whole plaintext space as the message plaintexts. This method has been used in
the Chor–Rivest [5] and Okamoto–Tanaka–Uchiyama [38] schemes, in which only those vectors
whose Hamming weight is exactly h are the message plaintexts.
Now, we investigate the effects of the powerful low-density attacks on the security of the
proposed PKCHD. When applied to a specific knapsack instance, the low-density attacks depend on the
density of the knapsack. To estimate the density of the compact knapsack problem (26) using the definition
of (3), we must evaluate all the ei = |mi|2 and Cmax. The estimation of Cmax is given in (24) and each
ei = |mi|2 = ⌈log2(343 + 1)⌉ = 9, so the density is

d = 9n / log2 Cmax ≈ 9n / log2[343^3 · (n − 1) n^2 · 76.1^{n−1}].  (27)

If we choose n = 150, the density is about 1.38 > 0.9408 · · · .
If the public vector F is evaluated via (22), we can give a lower bound on the density. According to
(22) and (24), we can evaluate

Cmax ≈ 343 (n − 1) f1 < 343^3 (n − 1) n^2 100^{n−1}.

Thus, the density is lower-bounded by

d > 9n / log2[343^3 (n − 1) n^2 100^{n−1}].

In the case of n = 150, the lower bound is about 1.3 > 0.9408 · · · . If we adopt the definition of density
given in [7], the estimation will be even larger.
With an appropriate choice of the parameters, the PKCHD can obtain a high density even in the
worst case. However, we cannot claim its security against low-density subset-sum attacks by a
density argument alone. In the history of knapsack-type cryptography, many cryptosystems
have been broken by the powerful low-density attacks. Even cryptosystems with high density, such
as the Chor–Rivest [5] and Okamoto–Tanaka–Uchiyama [38] schemes, were shown to be vulnerable to
low-density attacks [26,27]. Thus, we must be cautious in claiming the proposed PKCHD's security against
the low-density attacks. Other lattice-based attacks on the system also need to be examined carefully. Once
we have shown that the proposed cryptosystem is invulnerable to the known lattice attacks, we consider
the security of the cryptosystem against the lattice-reduction-based attacks to be convincing.
6.1.3. On the Number of Plaintext Vectors That a Cipher-Text Has
The low-density subset-sum attacks always assume that practical lattice reduction algorithms
can serve as an SVP oracle, at least in the case of low-dimensional lattices. In fact, lattice reduction
algorithms perform well in practice, and some current experimental records can be found in [27]. Thus,
we assume that lattice reduction algorithms can obtain the shortest vector in a lattice of low dimension.
Meanwhile, another fact is that the encryption function of the proposed PKCHD is non-injective under
the relinearization attack model. Hence, for a given cipher-text c, 0 ≤ c ≤ 343 ∑_{i=1}^{n} fi, there are many
preimages Y such that (26) is satisfied. The lengths of the preimages are bounded by the length r of the
vector Ymax = (343, · · ·, 343). Thus, all the preimages are lattice points in the n-dimensional sphere of
radius r centered at the origin. The number N(n, r) of the lattice points in the sphere is exactly the number
of the preimages corresponding to a given cipher-text c. Furthermore, all the preimages have almost the
same length. No evidence shows that the message is the shortest vector among all the plaintext vectors.
In fact, Refs. [42,43] give a small example in which the message plaintext is not the shortest vector
no matter what norm is used. Thus, the lattice reduction algorithms just find a random vector among the
N(n, r) preimages. We use the following assumption to formalize what we have discussed.

**Unif**: Given a cipher-text c, the vector output by the lattice reduction algorithms is uniformly distributed
over the N(n, r) plaintext vectors.
**Theorem 7.** Under the assumption Unif, the probability δ of the lattice algorithms finding the message vector
is negligible.
**Proof.** Based on the assumption Unif, we can conclude that δ = 1/N(n, r); therefore, N(n, r) needs to be
evaluated. Since Ref. [27] presented an estimation of the upper bound of N(n, r), to complete this proof,
a lower bound is required. Notice that the expected number N(n, r) should be the ratio of the number of
all the plaintext vectors to that of the possible cipher-texts, i.e.,

N(n, r) ≈ 344^n / (343 ∑_{i=1}^{n} fi + 1) ≈ 344^n / Cmax ≈ 344^n / [343^3 · (n − 1) n^2 · 76.1^{n−1}] > 2^n

for sufficiently large n. Obviously,

δ = 1/N(n, r) < 1/2^n

is negligible.
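The bound N(n, r) > 2^n for n = 150 can be checked in log scale:

```python
from math import log2

n = 150
log2_Cmax = log2(343**3 * (n - 1) * n**2) + (n - 1) * log2(76.1)
log2_Nnr = n * log2(344) - log2_Cmax     # N(n, r) ≈ 344^n / Cmax
# log2_Nnr > n, hence N(n, r) > 2^n and delta = 1/N(n, r) < 2^(-n)
```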
The above evaluation of the number of preimages of a cipher-text is somewhat rough. However,
it suffices to show the non-injectivity of the encryption function under the relinearization attack model.
Hence, another way of evaluating the number of preimages is presented. Note that any vector Y ∈
{0, 1, · · ·, 343}^n satisfying (26) must be a solution to the modular knapsack problem defined below,

c = ∑_{i=1}^{n} fi yi (mod N),  0 ≤ yi ≤ 343.
It is easy to verify that this problem is equivalent to the following simultaneous compact
knapsack problem,

c en (mod p) = ∑_{i=1}^{n} ai yi,  c en (mod q) = ∑_{i=1}^{n} bi yi.

To solve the problem, the method given in Theorem 1 is preferred. According to the CRT, a unique
yi modulo λi = lcm(ci−1/ci, di−1/di) can be determined. However, since λi = lcm(ci−1/ci, di−1/di) =
lcm(ui−1, vi−1) ≤ ui−1 vi−1 < 100 and 0 ≤ yi ≤ 343, at least three values can be determined for each yi.
Finally, there are at least 3^n vectors Y = (y1, · · ·, yn) consistent with a given cipher-text c. Of course,
not all these vectors are solutions to (26). However, even if only a small fraction of the vectors
satisfy (26), it suffices to show that a given cipher-text c has exponentially many plaintext vectors.
Now, a small example (see Table 1) is used to illustrate what we have discussed. To simplify the
discussion, we set I = {0, 1, 2, 3}, K = {1, 2, 3}, and n = 9. In this case, the cipher-text c = 44190990551868
has ten preimages Y under the relinearization attack model. However, there exists only one message
plaintext vector, Y1 = (4, 27, 3, 27, 2, 27, 0, 1, 4), amongst all ten preimages; the remaining nine preimages
Y2, · · ·, Y10 are merely plaintext vectors. Thus, we conclude that the low-density subset sum attack will find
the message plaintext vector Y1 with probability δ = 1/10 under the assumption Unif. Additionally,
the message plaintext vector Y1 is not the shortest non-zero vector in the lattice involved in the
low-density subset sum attack, no matter what norm is used. If we use (20) to encrypt the message,
the encryption function

c = ∑_{i=1}^{9} fi yi (mod N) = 192662536160,  0 ≤ yi ≤ 27

even has 237 preimages in all, which are not listed in Table 1 for space limitations. In this case, the parameter
n is too small to achieve practical security. However, if a relatively large n (e.g., 150) is chosen, the number
of preimages of a given cipher-text will be very large. This is what we have claimed in the proof of
Theorem 7.
**Table 1. The non-injectivity of the encryption function under the relinearization attack model.**
I = {0, 1, 2, 3}
K = {1, 2, 3}
µ = 27
n = 9
A = (10000, 6000, 7000, 5800, 5300, 5840, 8210, 6662, 5113)
B = (10000, 5000, 8000, 5500, 5100, 6150, 5830, 5335, 6007)
p = 999979
q = 999983
N = 999962000357
E = (10000, 250000750, 999712012607, 75004225, 50004250, 499903507646, 594995715, 750303249963, 499757509985)
e_9^{−1} = 759237254392
F = (661037209656, 7824090728, 451539481682, 866739311295, 192593114076, 586570143338, 753328582077, 356431315295, 1)
M = (2, 3, 3, 3, 2, 3, 0, 1, 2)
G = (2, 3, 1, 3, 1, 3, 2, 3, 2)
c = 44190990551868
Y = (4, 27, 3, 27, 2, 27, 0, 1, 4), (10, 5, 12, 19, 19, 7, 10, 1, 4),
    (5, 12, 9, 13, 9, 27, 10, 1, 4), (18, 6, 4, 25, 13, 4, 0, 11, 4),
    (13, 13, 1, 19, 3, 24, 0, 11, 4), (5, 8, 19, 27, 4, 1, 0, 21, 4),
    (2, 0, 15, 27, 24, 1, 0, 21, 4), (1, 0, 22, 7, 1, 21, 10, 21, 4),
    (2, 3, 16, 8, 1, 21, 21, 1, 14), (3, 2, 5, 23, 0, 12, 12, 11, 24)
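The non-injectivity just illustrated can be checked mechanically from the data in Table 1. The following minimal Python sketch (the helper name `relinearized_ciphertext` is ours, not the paper's) verifies that all ten preimages Y yield the same integer knapsack sum c = ∑ f_i y_i under the relinearization attack model, and that reducing c modulo N gives the value 192662536160 of the encryption function (20).

```python
# Public vector F, modulus N, and cipher-text c from Table 1.
F = [661037209656, 7824090728, 451539481682, 866739311295,
     192593114076, 586570143338, 753328582077, 356431315295, 1]
N = 999962000357
c = 44190990551868

# The ten preimages Y listed in Table 1.
preimages = [
    (4, 27, 3, 27, 2, 27, 0, 1, 4), (10, 5, 12, 19, 19, 7, 10, 1, 4),
    (5, 12, 9, 13, 9, 27, 10, 1, 4), (18, 6, 4, 25, 13, 4, 0, 11, 4),
    (13, 13, 1, 19, 3, 24, 0, 11, 4), (5, 8, 19, 27, 4, 1, 0, 21, 4),
    (2, 0, 15, 27, 24, 1, 0, 21, 4), (1, 0, 22, 7, 1, 21, 10, 21, 4),
    (2, 3, 16, 8, 1, 21, 21, 1, 14), (3, 2, 5, 23, 0, 12, 12, 11, 24),
]

def relinearized_ciphertext(Y):
    """Integer knapsack sum used in the relinearization attack model."""
    return sum(f * y for f, y in zip(F, Y))

# Every preimage yields the same cipher-text: the map is non-injective.
assert all(relinearized_ciphertext(Y) == c for Y in preimages)
# Reducing modulo N recovers the value of the encryption function (20).
assert c % N == 192662536160
```

Running the sketch confirms that an attacker who inverts the relinearized map has no way of telling which of the ten candidates is the message plaintext vector.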
6.1.4. On Reducing to the CVP
Nguyen and Stern [27] found that the knapsack problem can also be reduced to the CVP. Note that
the solutions of

∑_{i=1}^{n} z_i f_i = 0 (28)

form an (n − 1)-dimensional linear space over R. Thus, the integral solutions of (28) form an (n − 1)-dimensional
lattice L. Given a cipher-text c, we can compute, by using an extended Euclidean algorithm,
integers x_1, · · ·, x_n such that c = ∑_{i=1}^{n} x_i f_i. Let Y = (y_1, · · ·, y_n) be a plaintext vector (not necessarily the
message plaintext vector). Then the vector u = (x_1 − y_1, · · ·, x_n − y_n) belongs to L, since

∑_{i=1}^{n} (x_i − y_i) f_i = ∑_{i=1}^{n} x_i f_i − ∑_{i=1}^{n} y_i f_i = c − c = 0.
In addition, u is fairly close to the vector X = (x_1, · · ·, x_n). Thus, the closest vector u ∈ L to X is
expected to be found by accessing the CVP-oracle, and X − u is then a plaintext vector. However, we should
observe that the success probability of the reduction depends on the number N(n, r) of integer points in
the (n − 1)-dimensional spheres. According to Theorem 7, we can conclude that the closest vector output
by the CVP-oracle is the exact message plaintext vector only with negligible probability.
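The reduction can be made concrete with the toy parameters of Table 1. Since f_9 = 1 in that example, the extended Euclidean step is trivial and X = (0, · · ·, 0, c) is already a particular solution of ∑ x_i f_i = c (in general the x_i must be computed with the extended Euclidean algorithm, as described above). The sketch below, a minimal illustration with our own variable names, forms u = X − Y_1 for the message plaintext vector and checks that u lies in the lattice L of (28); a CVP-oracle queried at X would have to return exactly this u to recover Y_1.

```python
# Toy public data from Table 1.
F = [661037209656, 7824090728, 451539481682, 866739311295,
     192593114076, 586570143338, 753328582077, 356431315295, 1]
c = 44190990551868
Y1 = (4, 27, 3, 27, 2, 27, 0, 1, 4)  # message plaintext vector

# Particular integer solution of sum(x_i * f_i) = c; trivial because f_9 = 1.
X = [0] * 8 + [c]
assert sum(x * f for x, f in zip(X, F)) == c

# u = X - Y1 lies in the (n - 1)-dimensional lattice L defined by (28).
u = [x - y for x, y in zip(X, Y1)]
assert sum(ui * f for ui, f in zip(u, F)) == 0

# If the CVP-oracle returned u, the attacker would recover Y1 as X - u.
assert [x - ui for x, ui in zip(X, u)] == list(Y1)
```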
Furthermore, the cryptanalysis of low-weight knapsacks [26,27] does not compromise the security
of the system as long as low-weight vectors are not selected as message vectors. To date, it is therefore safe
to claim that the cryptosystem is secure against the known lattice-based attacks, including low-density
subset-sum attacks.
_6.2. On Solving the Trapdoor Problem_
When we discuss the cracking problem, we only consider the infeasibility of solving (19), regardless
of the structure of the public vector F = (f_1, · · ·, f_n). In other words, the public vector
F = (f_1, · · ·, f_n) is considered to be indistinguishable from a randomly generated n-dimensional vector.
However, (19) is only a seemingly-hard compact knapsack problem. If the public key reveals enough
information for the attacker to reverse the basic mathematical construction of the trapdoor in the proposed
PKCHD system, then he can also act as an authorized receiver and decipher any cipher-text. Thus, key
recovery attacks on the cryptographic scheme also need to be carefully studied.
6.2.1. Simultaneous Diophantine Approximation Attack
Most knapsack-type cryptosystems use size conditions to disguise an easy knapsack problem.
The designer randomly generates an easy knapsack problem, y = ∑_{i=1}^{n} a_i x_i with x_i ∈ [0, 2^b − 1], and chooses
a modulus m and a multiplier w with gcd(m, w) = 1. He uses the size condition m > (2^b − 1) ∑_{i=1}^{n} a_i to
disguise the easy cargo vector A = (a_1, · · ·, a_n) as a seemingly-hard knapsack sequence B = (b_1, · · ·, b_n),
b_i = wa_i (mod m). The size condition can be exploited by a simultaneous Diophantine approximation
attack to obtain useful information about (w, m). See [22,28] for more on the
relationship between the simultaneous Diophantine approximation problem and cryptanalysis.
The trapdoor of the proposed PKCHD system is disguised using the CRT, which involves no size
conditions. Thus, a simultaneous Diophantine approximation attack cannot find valuable
information about the trapdoor. Even though a size condition is used in (13), the attacker
must first peel off the outermost shuffle in (14) and (15) before he can launch a simultaneous Diophantine
approximation attack. Unfortunately, that is also a difficult task.
6.2.2. Known N Attack
The exact value of N is assumed to be known by the attacker, who wants to learn some information
about the secret key. A straightforward approach is to search for e_n and factor N to recover the trapdoor
information. To evaluate to what extent the attacker can succeed, we must decide whether the public key
F = (f_1, · · ·, f_n) and N provide the attacker with enough information to compromise the cryptosystem. If
the public vector F is indistinguishable from a randomly-chosen n-dimensional vector F* over Z_N (in fact,
only the first n − 1 components of F* are randomly chosen, and the last component of F* must be 1;
otherwise, since f_n = 1, it would make no sense to say that the public vector F is indistinguishable from a
randomly-chosen n-dimensional vector), then we can conclude that the public key F and N provide no useful
information for the attacker to recover the secret key. In other words, it is impossible for the attacker to
retrieve the integer e_n ∈ Z_N from a random n-dimensional vector F.
According to Algorithm 2, the only distinction between the generated a_i, b_i and random integers
of the same binary length is this: when i is small enough, the generated a_i, b_i are smooth integers (i.e., they
contain only small prime factors), whereas random integers usually are not. However, the public vector F is
scrambled by (14) and (15), which at the same time disguises the smoothness of the two vectors A and B.
After the two shuffles (14) and (15), this only distinction disappears, and the generated vector F is
indistinguishable from a random n-dimensional vector over Z_N. Thus, the publication of N does not
affect the security of the system. On the contrary, it reduces the length of the cipher-text and improves
the transmission efficiency.
The attacker cannot expect to recover the secret key by searching for the integer e_n that makes all the
a_i = f_i e_i (mod p) and b_i = f_i e_i (mod q) smooth simultaneously, where i < n is a relatively small integer.
In fact, the best way of retrieving the trapdoor seems to be to factor N first and then recover the secret vectors
A and B. It is easy to verify that a_n w ≡ 1 (mod p) and b_n w ≡ 1 (mod q), where w = e_n^{−1} (mod N). If
we write a_n^{−1} and b_n^{−1} for the inverses of a_n (mod p) and b_n (mod q), respectively, and set f_{ip} = f_i (mod p),
f_{iq} = f_i (mod q), i = 1, · · ·, n − 1, then reducing (15) modulo p and q yields

f_{ip} ≡ a_n^{−1} a_i (mod p),    f_{iq} ≡ b_n^{−1} b_i (mod q).
Note that the vectors A and B have some special structure. Therefore, if the modulus N is factored,
the attacker will obtain useful information from the integers f_{ip} and f_{iq}. To examine the potential
threats against the proposed PKCHD cryptosystem, we consider a stronger assumption, namely that the attacker
has factored the modulus N.
6.2.3. Known p and q Attack
Now, we consider the scenario in which the attacker has factored the modulus N = pq. It is then easy for
the attacker to compute the f_{ip}'s and f_{iq}'s. The remaining task for the attacker is to recover a_n and b_n,
since the other a_i and b_i can then be easily reconstructed via

a_i ≡ a_n f_{ip} (mod p),    b_i ≡ b_n f_{iq} (mod q).

In addition, the gcd's c_i and d_i are easily determined by using the Euclidean algorithm. Thus, the secret
key is recovered.
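This recovery step can be replayed on the toy instance of Table 1: assuming the attacker knows the factors p and q together with the last components a_9 and b_9, every other secret component follows from a_i ≡ a_n f_{ip} (mod p) and b_i ≡ b_n f_{iq} (mod q). A minimal Python sketch (variable names ours) confirms this with the published toy data:

```python
# Toy secret/public data from Table 1.
p, q = 999979, 999983
A = [10000, 6000, 7000, 5800, 5300, 5840, 8210, 6662, 5113]
B = [10000, 5000, 8000, 5500, 5100, 6150, 5830, 5335, 6007]
F = [661037209656, 7824090728, 451539481682, 866739311295,
     192593114076, 586570143338, 753328582077, 356431315295, 1]

an, bn = A[-1], B[-1]  # the two components the structural attack targets

# Knowing (p, q) and (a_n, b_n), every other secret component follows
# from a_i = a_n * f_ip mod p and b_i = b_n * f_iq mod q.
recovered_A = [an * (f % p) % p for f in F]
recovered_B = [bn * (f % q) % q for f in F]
assert recovered_A == A
assert recovered_B == B
```

This is why the attack below concentrates entirely on recovering the single pair (a_n, b_n) once N is factored.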
(a) Structural attack: In fact, if the attacker obtains two pairs (a_i, f_{ip}) and (b_j, f_{jq}), he can determine the
exact values of a_n and b_n. Note that a_1 and b_1 have special structures (see Algorithm 2). To launch a
structural attack, the attacker therefore performs an exhaustive search over all possible integer pairs (a_1, b_1).
Assume n = 150; the n − 1 integer pairs (u_i, v_i) are randomly chosen with repetition permitted such that
(u_i, v_i) ∈ J = W ∪ W^T. For each i, (u_i, v_i) takes 48 possible values. The number of possible choices
for the pair (a_1, b_1) is then given in the following theorem.
**Theorem 8.** When n = 150, the number t of choices for generating (a_1, b_1) is t = C(197, 47).
**Proof.** If we denote the set J = {j_i | i = 1, · · ·, 48} and regard each j_i as an apple of color i, then we are
confronted with the following "apple" counting model: choose n = 150 apples from 48 colors of apples
with repetition permitted.

Now, consider a line on which 197 dots are scattered. We choose 47 of the 197 dots and
view them as boards, denoted b_i, i = 1, · · ·, 47, from left to right. The dots to the left
of b_1 are the apples with color 1, and the dots to the right of b_47 are the apples with color 48. The dots
between board i and board i + 1 are the apples with color i + 1, for i = 1, · · ·, 46. Thus, every choice of the
47 boards corresponds to a choice of the integer pair (a_1, b_1), and we have t = C(197, 47) choices in total. Thus, we
complete the proof.
Since t = C(197, 47) ≈ 2^152, it is apparent that trying all the possibilities is computationally infeasible for the attacker.
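The stars-and-bars count in Theorem 8 can be sanity-checked numerically. The sketch below (the helper name `multiset_count` is ours) brute-forces the multiset count for small parameters, instantiates the identity for n = 150 items drawn from |J| = 48 values, and shows that t = C(197, 47) is a number of roughly 152 bits, far beyond exhaustive search.

```python
from itertools import combinations_with_replacement
from math import comb

def multiset_count(items, colors):
    """Number of ways to choose `items` from `colors` kinds with repetition:
    the stars-and-bars formula C(items + colors - 1, colors - 1)."""
    return comb(items + colors - 1, colors - 1)

# Brute-force check of the stars-and-bars formula for small parameters.
small = sum(1 for _ in combinations_with_replacement(range(5), 4))
assert small == multiset_count(4, 5)  # choose 4 apples from 5 colors

# Theorem 8: n = 150 pairs drawn from the 48 values of J, with repetition.
t = multiset_count(150, 48)
assert t == comb(197, 47)
assert t.bit_length() == 153          # t is about 2^152: search is infeasible
```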
(b) Simultaneous Diophantine approximation attack: Without loss of generality, let

a_n f_{ip} − l_i p = a_i,    i = 1, · · ·, n − 1. (29)

Dividing both sides of (29) by pa_n, we obtain

f_{ip}/p − l_i/a_n = a_i/(pa_n). (30)
Note that p ≈ 343 ∑_{j=1}^{n} a_j ≈ 343na_i ≈ 343n√(76.1^{n−1}). Thus, from (21), (23) and (30), we have

|f_{ip}/p − l_i/a_n| = a_i/(pa_n) ≈ 1/p ≈ 1/(343n√(76.1^{n−1})).

If we note again that a_n ≈ p/(343n), we can claim that {l_i/a_n} is a set of fractions
with a common and relatively small denominator a_n approximating the set of fractions {f_{ip}/p}. More
formally, we can assume that the fractions l_i/a_n are simultaneous Diophantine approximations of the
fractions f_{ip}/p. If there were an efficient algorithm to solve this problem, the attacker could retrieve the secret
vector A = (a_1, · · ·, a_n). Using a similar method, he could also recover the vector B = (b_1, · · ·, b_n), and
the gcd's c_i and d_i would then also be obtained.
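With the toy data of Table 1, the quality of this approximation is easy to observe for a single index. Taking i = 1 (an illustrative choice; the same holds for the other indices), the sketch below computes f_{1p} = f_1 mod p, solves (29) for l_1, and checks via exact rational arithmetic that f_{1p}/p − l_1/a_n equals a_1/(pa_n), far smaller than the 1/a_n gap one would expect from a generic fraction with denominator a_n.

```python
from fractions import Fraction

# Toy data from Table 1.
p = 999979
A = [10000, 6000, 7000, 5800, 5300, 5840, 8210, 6662, 5113]
f1 = 661037209656                    # first public-key component
an = A[-1]                           # a_n = a_9 = 5113

f1p = f1 % p                         # f_{1p} = f_1 mod p
l1, r = divmod(an * f1p - A[0], p)   # (29): a_n f_{1p} - l_1 p = a_1
assert r == 0                        # l_1 is an exact integer

# (30): f_{1p}/p - l_1/a_n = a_1/(p a_n) -- an extremely good rational
# approximation of f_{1p}/p with the small common denominator a_n.
gap = Fraction(f1p, p) - Fraction(l1, an)
assert gap == Fraction(A[0], p * an)
assert gap < Fraction(1, an)         # far closer than a generic fraction
```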
The simultaneous Diophantine approximation problem is widely believed to be intractable,
and no efficient algorithm has been found for it. From the discussion above, it can be deduced that,
to reconstruct the secret key, the attacker must search for the modulus N and then solve two hard
number-theoretic problems, namely the integer factorization problem and the simultaneous Diophantine
approximation problem. This is a property shared with the scheme presented in [39].
_6.3. Generating the Hardest Knapsack Instances_
It is common knowledge that public key cryptography as a whole rests on computational
complexity theory. One might hope that PKCs based on proven intractability assumptions, e.g.,
the knapsack problem, would be unbreakable super-codes. However, this is not the case; many PKCs
based on NP-complete problems, such as the knapsack problem and multivariate quadratic
polynomials [45], have been shown to be insecure, while some PKCs based on unproven mathematical
assumptions remain unbroken. Following the work of [45], this phenomenon can be explained as follows.
The security of the integer-factorization-based and discrete-logarithm-based PKCs rests
not only on the hardness of factoring an integer or solving the discrete logarithm problem defined over
some cyclic group, but also on the key generation algorithms. For example, factoring a randomly-chosen
large integer may not be difficult, since such an integer typically contains some small prime factors.
The RSA system, however, does not use such easy-to-factor integers; it always selects the hardest
factorization instances as the basis for its security. The knapsack problem is NP-complete,
but computational complexity only deals with worst-case complexity. If the hardest
knapsack instances are excluded from use in public key cryptography, we cannot expect a knapsack cryptosystem
to be an unbreakable super-code. In fact, knapsack problems with density < 0.9408 . . . are known to be easy
to solve [20]. Many cryptographers have also pointed out that knapsack instances with density greater
than 1 cannot be used in public key cryptography in that the cipher-texts are not uniquely decipherable.
The room left for designing a secure knapsack cryptosystem is therefore relatively narrow. For further discussion of
the relationship between knapsack cryptography and computational complexity, see [36].
Schnorr and Euchner [29] showed that the hardest knapsack instances are those with density
d ≈ 1 + log_2(n/2)/n, which is slightly larger than 1. The density of the proposed PKCHD is given in (27).
When n approaches infinity,

lim_{n→∞} 9n / log_2[343^3 · (n − 1)n^2 · 76.1^{n−1}] = 9/log_2 76.1 ≈ 1.44,

and

lim_{n→∞} (1 + log_2(n/2)/n) = 1.

Thus, for a sufficiently large n, we always have

9n / log_2[343^3 · (n − 1)n^2 · 76.1^{n−1}] > 1 + log_2(n/2)/n.
In other words, the proposed PKCHD cryptosystem can always use a knapsack problem with
density d > 1 + log_2(n/2)/n as the encryption function. To generate the hardest knapsack instances,
the cryptosystem can choose two larger primes p and q so that the density d ≈ 1 + log_2(n/2)/n.
For a knapsack problem to be hardest, the cargo vector should also be indistinguishable from a
random vector; in fact, we have shown that the public vector of the PKCHD system is indistinguishable
from a randomly-chosen vector. Consequently, if the hardness of a knapsack instance is measured by its
density, the PKCHD system can always use the hardest knapsack vectors as the public key.
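The comparison between the PKCHD density and the hardest-instance density can be reproduced numerically. The sketch below (function names ours; the density formula is taken from (27) as restated above) evaluates both quantities for n = 150 and checks the limiting value 9/log_2 76.1 ≈ 1.44.

```python
from math import isclose, log2

def pkchd_density(n):
    """Density of the PKCHD knapsack instance, from (27)."""
    return 9 * n / log2(343**3 * (n - 1) * n**2 * 76.1**(n - 1))

def hardest_density(n):
    """Density of the hardest knapsack instances (Schnorr and Euchner [29])."""
    return 1 + log2(n / 2) / n

n = 150
# PKCHD's density exceeds the hardest-instance threshold at practical sizes.
assert pkchd_density(n) > hardest_density(n)
# The limiting density of PKCHD as n grows is 9 / log2(76.1) ~ 1.44.
assert isclose(9 / log2(76.1), 1.44, abs_tol=0.005)
```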
_6.4. Provable Security Remarks_
In public key cryptography, two typical methods are employed for security analysis. One is
provable security theory [46], whose basic idea is to reduce the security of a PKC under some attack model to
a mathematically hard problem. The other is to deliver the PKC to the cryptologic community for attack,
which may be called enumerative security. Provable security has been widely accepted as a standard method for
the security analysis of PKCs. However, for the following reasons, we do not pursue
provable security results for the proposed PKCHD cryptosystem in this study. Firstly, we should note that almost all
provably secure PKCs are constructed from number-theoretic problems, i.e., the integer factorization
and discrete logarithm problems. Secondly, provable security theory is not well suited to analyzing the
security of PKCs based on NP-complete problems. Such PKCs are always constructed from an easy
problem, so the problem of reversing the encryption function is only seemingly hard rather than
truly hard, and it makes no sense to reduce the security of a PKC to a seemingly-hard problem.
Thirdly, security analysis for a newly-designed trapdoor one-way function should center on
estimating the hardness of reversing the encryption function and of retrieving the trapdoor information.
If no efficient algorithm is found for a long time to compromise its security, we can assume its
one-wayness and then consider adding paddings to it so as to achieve provable security objectives.
It would be a significant theoretical result if one could prove that reversing the encryption function is
equivalent to solving the mathematical problems used in constructing the PKC; however, this is an
extremely tough task [44].
**7. Conclusions**
Due to their performance advantages over other cryptosystems, knapsack cryptosystems, as a
typical class of PKCs, play an important role in the wide variety of available cryptosystems. In particular,
new knapsack-type cryptographic primitives have been developed in recent years, e.g., non-injective
knapsack cryptosystems [47], the knapsack Diffie–Hellman problem [48], and an elliptic curve discrete
logarithm based knapsack public-key cryptosystem [49].

In this paper, a probabilistic knapsack-type PKC, namely PKCHD, which uses the CRT to disguise an
easy knapsack sequence, has been constructed and analyzed with careful attention to security. So far, no practical
attacks have been found to compromise the security of PKCHD. However, the history that almost all additive
knapsack-type cryptosystems were eventually shown to be vulnerable to some attack calls for caution.
Thus, novel attacks should continue to be investigated so that the scheme can be made more secure.
**Author Contributions: Conceptualization, Y.P. and B.W.; methodology, Y.P. and B.W.; validation, Y.P., B.W., S.T. and**
J.Z.; formal analysis, Y.P., B.W. and J.Z.; investigation, S.T. and H.M.; resources, S.T. and H.M.; writing–original draft
preparation, Y.P.; writing–review and editing, Y.P. and B.W.; supervision, B.W.; project administration, Y.P. and B.W.;
funding acquisition, Y.P. and B.W.
**Funding: This work is supported by the National Key R&D Program of China under Grant No. 2017YFB0802000,**
the National Natural Science Foundation of China under Grant No. U1736111, the Plan For Scientific Innovation
Talent of Henan Province under Grant No. 184100510012, the Program for Science and Technology Innovation
Talents in the Universities of Henan Province under Grant No. 18HASTIT022, the Key Technologies R&D Program of
Henan Province under Grant No. 182102210123 and 192102210295, the Foundation of Henan Educational Committee
under Grant No. 16A520025 and 18A520047, the Foundation for University Key Teacher of Henan Province under
Grant No. 2016GGJS-141, the Open Project Foundation of Information Technology Research Base of Civil Aviation
Administration of China under Grant No. CAAC-ITRB-201702, and the Innovation Scientists and Technicians Troop
Construction Projects of Henan Province.
**Acknowledgments: The authors would like to thank the anonymous reviewers for their carefulness and patience,**
and thank Sheng Tong for the proof of Theorem 8 and Fagen Li for paper preparation.
**Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study;**
in the collection, analyses or interpretation of data; in the writing of the manuscript or in the decision to publish
the results.
**References**
1. Diffie, W.; Hellman, M.E. New Directions in Cryptography. IEEE Trans. Inf. Theory 1976, IT-22, 644–654.
[[CrossRef]](http://dx.doi.org/10.1109/TIT.1976.1055638)
2. Rivest, R.L.; Shamir, A.; Adleman, L.M. A Method for Obtaining Digital Signature and Public Key Cryptosystems.
_[Commun. ACM 1978, 21, 120–126. [CrossRef]](http://dx.doi.org/10.1145/359340.359342)_
3. ElGamal, T. A Public Key Cryptosystem and a Signature Scheme Based on Discrete Logarithms. IEEE Trans.
_[Inf. Theory 1985, IT-31, 469–472. [CrossRef]](http://dx.doi.org/10.1109/TIT.1985.1057074)_
4. Merkle, R.C.; Hellman, M.E. Hiding Information and Signatures in Trapdoor Knapsacks. IEEE Trans. Inf. Theory
**[1978, IT-24, 525–530. [CrossRef]](http://dx.doi.org/10.1109/TIT.1978.1055927)**
5. Chor, B.; Rivest, R.L. A Knapsack-Type Public Key Cryptosystem Based on Arithmetic in Finite Fields. IEEE Trans.
_[Inf. Theory 1988, IT-34, 901–909. [CrossRef]](http://dx.doi.org/10.1109/18.21214)_
6. [Vaudenay, S. Cryptanalysis of The Chor–Rivest Cryptosystem. J. Cryptol. 2001, 14, 87–100. [CrossRef]](http://dx.doi.org/10.1007/s001450010005)
7. Orton, G. A Multiple-Iterated Trapdoor for Dense Compact Knapsacks. In Advances in Cryptology–Eurocrypt 1994
_(LNCS); Springer-Verlag: Perugia, Italy, 1995; Volume 950, pp. 112–130._
8. Morii, M.; Kasahara, M. New Public Key Cryptosystem Using Discrete Logarithm Over GF(p). IEICE Trans. Fund.
**1988, J71-D, 448–453.**
9. Naccache, D.; Stern, J. A New Public-Key Cryptosystem. In Advances in Cryptology–Eurocrypt 1997 (LNCS);
Springer-Verlag: Konstanz, Germany, 1997; Volume 1233, pp. 27–36.
10. Goodman, R.M.F.; McAuley, A.J. New Trapdoor-Knapsack Public-Key Cryptosystem. IEE Proc. 1985, 132 Pt E,
282–292.
11. Niemi, V. A New Trapdoor in Knapsacks. In Advances in Cryptology–Eurocrypt 1990 (LNCS); Springer-Verlag:
Aarhus, Denmark, 1990; Volume 473, pp. 405–411.
12. Janardan, R.; Lakshmanan, K.B. A Public-Key Cryptosystem based on The Matrix Cover NP-Complete Problem.
In Advances in Cryptology–Crypto 1982; Plenum: New York, NY, USA, 1983; pp. 21–37.
13. Blackburn, S.R.; Murphy, S.; Stern, J. Weaknesses of A Public Key Cryptosystem based on Factorization of Finite
Groups. In Advances in Cryptology–Eurocrypt 1993 (LNCS); Springer-Verlag: Lofthus, Norway, 1994; Volume 765,
pp. 50–54.
14. Nguyen, P.; Stern, J. Merkle-Hellman Revisited: A cryptanalysis of The Qu-Vanstone Cryptosystem based on
Group Factorizations. In Advances in Cryptology–Crypto 1997 (LNCS); Springer-Verlag: Santa Barbara, CA, USA,
1997; Volume 1294, pp. 198–212.
15. Pieprzyk, J.P. On Public-Key Cryptosystems Built Using Polynomial Rings. In Advances in Cryptology–Eurocrypt 1985
_(LNCS); Springer-Verlag: Linz, Austria, 1985; Volume 219, pp. 73–80._
16. Lin, C.H.; Chang, C.C.; Lee, R.C.T. A New Public-Key Cipher System based upon The Diophantine Equations.
_[IEEE Trans. Comput. 1995, 44, 13–19. [CrossRef]](http://dx.doi.org/10.1109/12.368013)_
17. Webb, W.A. A Public Key Cryptosystem based on Complementing Sets. _Cryptologia 1992, XVI, 177–181._
[[CrossRef]](http://dx.doi.org/10.1080/0161-119291866865)
18. Brickell, E.F. Solving Low Density Knapsacks. In Advances in Cryptology–Crypto 1983; Plenum: New York, NY,
USA, 1984; pp. 24–37.
19. [Lagarias, J.C.; Odlyzko, A.M. Solving Low-Density Subset Sum Problems. J. ACM 1985, 32, 229–246. [CrossRef]](http://dx.doi.org/10.1145/2455.2461)
20. Coster, M.J.; LaMacchia, B.A.; Odlyzko, A.M.; Schnorr, C.P. An Improved Low-Density Subset Sum Algorithm.
In Advances in Cryptology–Eurocrypt 1991 (LNCS); Springer-Verlag: Brighton, UK, 1991; Volume 547, pp. 54–67.
21. Brickell, E.F.; Odlyzko, A.M. Cryptanalysis: A Survey of Recent Results. In Contemporary Cryptology, The Science
_of Information Integrity; IEEE Press: New York, NY, USA, 1992; pp. 501–540._
22. Lagarias, J.C. Knapsack Public Key Cryptosystems and Diophantine Approximation. In Advances in
_Cryptology–Crypto 1983; Plenum: New York, NY, USA, 1984; pp. 3–23._
23. [Lai, M.K. Knapsack Cryptosystems: The Past and The Future. Available online: http://www.ics.uci.edu/](http://www.ics.uci.edu/~{}mingl/knapsack.html)
[~{}mingl/knapsack.html (accessed on 20 December 2003).](http://www.ics.uci.edu/~{}mingl/knapsack.html)
24. Odlyzko, A.M. The Rise and Fall of Knapsack Cryptosystems. Am. Math. Soc. Proc. Symp. Appl. Math 1990, 42,
75–88.
25. Lenstra, A.K.; Lenstra, H.W., Jr.; Lovász, L. Factoring Polynomials with Rational Coefficients. Math. Ann. 1982,
_[261, 513–534. [CrossRef]](http://dx.doi.org/10.1007/BF01457454)_
26. Omura, K.; Tanaka, K. Density Attack to The Knapsack Cryptosystems with Enumerative Source Encoding.
_IEICE Trans. Fund. 2001, E84-A, 1564–1569._
27. Nguyen, P.; Stern, J. Adapting Density Attacks to Low-Weight Knapsacks. In Advances in Cryptology–Asiacrypt
_2005 (LNCS); Springer-Verlag: Chennai, India, 2005; Volume 3788, pp. 41–58._
28. Wang, B.; Hu, Y. Diophantine Approximation Attack on A Fast Public Key Cryptosystem. In The 2nd Information
_Security Practice and Experience Conference–ISPEC 2006 (LNCS); Springer: Hangzhou, China, 2006; Volume 3903,_
pp. 25–32.
29. Schnorr, C.P.; Euchner, M. Lattice Basis Reduction: Improved Practical Algorithms and Solving Subset Sum
[Problems. Math. Progr. 1994, 66, 181–191. [CrossRef]](http://dx.doi.org/10.1007/BF01581144)
30. Ajtai, M.; Dwork, C. A Public-Key Cryptosystem with Worst-Case/Average-Case Equivalence. In Proceedings of
the 29th ACM STOC, El Paso, TX, USA, 4–6 May 1997; pp. 284–293.
31. Goldreich, O.; Goldwasser, S.; Halvei, S. Public-Key Cryptosystems from Lattice Reduction Problems. In Advances
_in Cryptology–Crypto 1997 (LNCS); Springer-Verlag: Santa Barbara, CA, USA, 1997; Volume 1294, pp. 112–131._
32. Hoffstein, J.; Pipher, J.; Silverman, J.H. NTRU: A New High Speed Public Key Cryptosystem. In Proceedings the of
_Algorithm Number Theory–ANTS III (LNCS); Springer-Verlag: Portland, OR, USA, 1998; Volume 1423, pp. 267–288._
33. [Cai, J.Y.; Cusick, T.W. A lattice-based Public-Key Cryptosystem. Inf. Comput. 1999, 151, 17–31. [CrossRef]](http://dx.doi.org/10.1006/inco.1998.2762)
34. Sakurai, K. A Progress Report on Lattice-based Public-Key Cryptosystems—Theoretical Security Versus Practical
Cryptanalysis. IEICE Trans. Inf. Syst. 2000, E83-D, 570–579.
35. Nguyen, P.; Stern, J. The Two Faces of Lattices in Cryptology. In Proceedings of the Cryptography and Lattices–CaLC
_(LNCS); Springer-Verlag: Providence, RI, USA, 2001; Volume 2146, pp. 146–180._
36. Shamir, A. On The Cryptocomplexity of Knapsack Systems. In Proceedings of the Eleventh Annual ACM
Symposium on Theory of Computing, Atlanta, GA, USA, 30 April–2 May 1979; pp. 118–129.
37. Katayangi, K.; Murakami, Y. A New Product-Sum Public-Key Cryptosystem Using Message Extension.
_IEICE Trans. Fund. 2001, E84-A, 2482–2487._
38. Okamoto, T.; Tanaka, K.; Uchiyama, S. Quantum Public-Key Cryptosystems. In Advances in Cryptology–Crypto
_2000 (LNCS); Springer-Verlag: Santa Barbara, CA, USA, 2000; Volume 1880, pp. 147–165._
39. Wang, B.; Hu, Y. Public Key Cryptosystem based on Two Cryptographic Assumptions. IEE Proc. Commun. 2005,
_152, 861–865._
40. Shamir, A.; Zippel, R.E. On The Security of The Merkle-Hellman Cryptographic Scheme. IEEE Trans. Inf. Theory
**[1980, 26, 339–340. [CrossRef]](http://dx.doi.org/10.1109/TIT.1980.1056197)**
41. Laih, C.S.; Gau, M.J. Cryptanalysis of A Diophantine Equation Oriented Public Key Cryptosystem.
_[IEEE Trans. Comput. 1997, 46, 511–512. [CrossRef]](http://dx.doi.org/10.1109/12.588074)_
42. Eier, R.; Lagger, H. Trapdoors in Knapsack Cryptosystems. In Cryptography–EUROCRYPT 1982 (LNCS); Springer:
Berlin/Heidelberg, Germany, 1982; Volume 149, pp. 316–322.
43. Wang, B.; Wu, Q.; Hu, Y. A Knapsack-based Probabilistic Encryption Scheme. Inf. Sci. 2007, 177, 3981–3994.
[[CrossRef]](http://dx.doi.org/10.1016/j.ins.2007.03.010)
44. Koblitz, N. Algebraic Aspects of Cryptography; Springer-Verlag: Berlin, Germany, 1998.
45. Wolf, C. Multivariate Quadratic Polynomials in Public Key Cryptography. Ph.D. Thesis, Katholieke
[Universiteit Leuven, Leuven, Belgium, 2005. Available online: http://eprint.iacr.org/2005/393 (accessed](http://eprint.iacr.org/2005/393)
on 1 November 2005).
46. [Koblitz, N.; Menezes, A.J. Another Look at “Provable Security”. J. Cryptol. 2007, 20, 3–37. [CrossRef]](http://dx.doi.org/10.1007/s00145-005-0432-z)
47. Koskinen, J.A. Non-Injective Knapsack Public-Key Cryptosystems. Theor. Comput. Sci. 2001, 255, 401–422.
[[CrossRef]](http://dx.doi.org/10.1016/S0304-3975(99)00297-2)
48. Han, S.; Chang, E.; Dillon, T. Knapsack Diffie-Hellman: A New Family of Diffie-Hellman. Cryptology ePrint
_[Archive: Report 2005/347. Available online: http://eprint.iacr.org/2005/347 (accessed on 22 August 2006).](http://eprint.iacr.org/2005/347)_
49. Su, P.C.; Lu, E.; Chang, H. A Knapsack Public-Key Cryptosystem based on Elliptic Curve Discrete Logarithm.
_[Appl. Math. Comput. 2005, 168, 40–46. [CrossRef]](http://dx.doi.org/10.1016/j.amc.2004.08.027)_
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article
distributed under the terms and conditions of the Creative Commons Attribution (CC BY)
[license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.)
"title": "Knapsack Diffie-Hellman: A New Family of Diffie-Hellman"
},
{
"paperId": "07d67c825c17fbf4d28e2e1427570589ff028f80",
"title": "Another Look at \"Provable Security\""
},
{
"paperId": "3ccd1ba63457ba2f3fdb3d72b84cc1cdd601df64",
"title": "The Rise and Fall of Knapsack Cryptosystems"
},
{
"paperId": "57ad302c286983f3f4de4941208ba76a59652e55",
"title": "Algebraic aspects of cryptography"
},
{
"paperId": "ae0849114d71cf7c9c18e350384c7da99216b422",
"title": "A new public-key cipher system based upon the diophantine equations"
},
{
"paperId": "224ea03df11a9b10a0a77e089cf22f915ace138f",
"title": "Cryptanalysis: A Survey of Recent Results"
},
{
"paperId": "a1cd437a924849d19e0713f042e45e79dc8b95a1",
"title": "A public key cyryptosystem and signature scheme based on discrete logarithms"
},
{
"paperId": "3ba5657eb9a1d7175a2b133df62c70f5b9655327",
"title": "On Public-Key Cryptosystems Built using Polynomial Rings"
},
{
"paperId": "e3138f7a1af1150106ad75a6536089db43db312b",
"title": "Knapsack Public Key Cryptosystems and Diophantine Approximation"
},
{
"paperId": "0b3b13efb2e0e9383b0e6a39f0382a95a1a978e3",
"title": "Solving Low Density Knapsacks"
},
{
"paperId": "d8b8b2abd4bd1f41be93f0d5ca163cf2775b1434",
"title": "Factoring polynomials with rational coeficients"
}
] | 28,704
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/033a22ad78fec9b60fd5456514583d24f4964b52
|
[
"Computer Science"
] | 0.852217
|
SEMANTIC APPROACH TO SMART CONTRACT VERIFICATION
|
033a22ad78fec9b60fd5456514583d24f4964b52
|
[
{
"authorId": "48257932",
"name": "N. Petrovic"
},
{
"authorId": "34726256",
"name": "M. Tosic"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
|
Vulnerabilities of smart contract are certainly one of the limiting factors for wider adoption of blockchain technology. Smart contracts written in Solidity language are considered due to common adoption of the Ethereum blockchain platform. Despite its popularity, the semantics of the language is not completely documented and relies on implicit mechanisms not publicly available and as such vulnerable to possible attacks. In addition, creating formal semantics for the higher-level language provides support to verification mechanisms. In this paper, a novel approach to smart contract verification is presented that uses ontologies in order to leverage semantic annotations of the smart contract source code combined with semantic representation of domain-specific aspects. The following aspects of smart contracts, apart from source code are taken into consideration for verification: business logic, domain knowledge, run-time state changes and expert knowledge about vulnerabilities. Main advantages of the proposed verification approach are platform independence and extendability.
|
**FACTA UNIVERSITATIS**
Series: **Automatic Control and Robotics, Vol. 19, No. 1, 2020, pp. 21-37**
https://doi.org/10.22190/FUACR2001021P
## SEMANTIC APPROACH TO SMART CONTRACT VERIFICATION
UDC ((336.744:004.736+004.6):004.7)
Nenad Petrović, Milorad Tošić
University of Niš, Faculty of Electronic Engineering, Niš, Republic of Serbia
**Abstract. Vulnerabilities of smart contract are certainly one of the limiting factors for**
_wider adoption of blockchain technology. Smart contracts written in Solidity language_
_are considered due to common adoption of the Ethereum blockchain platform. Despite_
_its popularity, the semantics of the language is not completely documented and relies_
_on implicit mechanisms not publicly available and as such vulnerable to possible_
_attacks. In addition, creating formal semantics for the higher-level language provides_
_support to verification mechanisms. In this paper, a novel approach to smart contract_
_verification is presented that uses ontologies in order to leverage semantic annotations_
_of the smart contract source code combined with semantic representation of domain-_
_specific aspects. The following aspects of smart contracts, apart from source code are_
_taken into consideration for verification: business logic, domain knowledge, run-time_
_state changes and expert knowledge about vulnerabilities. Main advantages of the_
_proposed verification approach are platform independence and extendability._
**Key words: blockchain, Ethereum, semantic technology, smart contract, Solidity, software**
_verification_
1. INTRODUCTION
Since the breakthrough of the Bitcoin cryptocurrency in 2009, blockchain has been
considered one of the most influential emerging technologies of the last decade [1-3].
Back then, its main purpose was to enable decentralized, safe and trustworthy transfer of
financial assets worldwide without fees or intermediaries.
Due to its quickly growing popularity, a large community has been built around
blockchain technology enthusiasts (including researchers, industry professionals and
hobbyists), which has led to the development of a new generation of cryptocurrencies. One
of the most important representatives of the new generation is the widely accepted Ethereum[1] [2,
4]. In addition to applications involving financial transactions, there is a whole spectrum of
novel use cases relying on blockchain. From logistics, robotics, transportation, energy
trading and government to healthcare [5], there have been many attempts to adopt blockchain
technology to create added value.

Received May 04, 2020
**Corresponding author: Nenad Petrović**
Faculty of Electronic Engineering, Aleksandra Medvedeva 14, 18000 Niš, Republic of Serbia
E-mail: nenad.petrovic@elfak.ni.ac.rs
[1 https://www.ethereum.org/](https://www.ethereum.org/)
© 2020 by University of Niš, Serbia | Creative Commons License: CC BY-NC-ND
Smart contracts are of key importance in the blockchain system architecture because they
describe the flow of actions taken during a transaction. They are implemented as program code
similar to any other software and are therefore susceptible to different vulnerabilities,
such as integer overflow/underflow. Several smart contract attacks have been
identified, such as reentrancy and timestamp exploits [6].
A lack of resilience to these vulnerabilities in various domains and use cases can lead
to huge financial losses and catastrophic results, even physical damage to the environment
and infrastructure, as well as harm to human beings. The changes applied once a transaction is
executed are immutable, which makes the consequences of an exploited smart contract
permanent. For that reason, the verification of smart contracts within blockchain platforms is
of utmost importance.
Creating formal semantics for a higher-level language can enable the creation of verified
compilers and support verification mechanisms [7]. However, despite the popularity of the
Ethereum blockchain platform, the semantics of its accompanying contract specification
language Solidity[2] is not completely documented and publicly available. Therefore, adding
explicitly defined semantics on top of the Solidity language would be highly
beneficial for the detection of vulnerabilities [7, 8].
In this paper, we propose a semantics-based approach to smart contract verification
targeting the Solidity language used within the Ethereum blockchain platform. The main
novelty of the idea presented in this paper is the use of ontologies for leveraging semantic
annotations of the smart contract source code, combined with a semantic representation of
domain and expert knowledge, in order to perform the verification and detect potential
vulnerabilities. Moreover, the semantic technology proposed in the paper provides the
means for a novel, platform-independent representation of these aspects in a generic way,
enabling much easier extendability and even interoperability between different blockchain
platforms in the case of highly complex business processes and transactions.
2. BACKGROUND AND RELATED WORK
**2.1. Blockchain**
Blockchain is a data structure that consists of an append-only sequence of blocks holding
information about the executed transactions [1-3]. It also refers to a distributed ledger
system that stores copies of this data structure within a peer-to-peer network of
nodes. Each user (also called a node) has an alphanumeric address, ensuring the user's anonymity
and, at the same time, transaction record transparency. In the context of cryptocurrency
blockchain applications, the transaction represents transfer of a value and ownership of
digital tokens between sender and recipient, recorded on the distributed ledger [1-3]. Tokens
are used to represent tangible as well as intangible assets – from cash and physical objects to
copyrights and intellectual property [1-3]. Each block in the blockchain contains a
cryptographic hash of the previous block and a timestamp in order to ensure that no one can
modify or delete blocks once they are recorded in the ledger. The more blocks the chain
contains, the more secure and reliable it becomes.

[2 https://solidity.readthedocs.io/en/v0.5.5/](https://solidity.readthedocs.io/en/v0.5.5/)
Two types of blockchain networks can be identified: public and private. Anyone can
join public blockchain networks, while each node maintains its own copy of the ledger. In
private networks, the ledger is often permissioned such that only authorized entities are able
to act on the ledger. When a new transaction occurs, it has to be validated and accepted by
the nodes within the network, which act as miners rewarded for the effort they put in [1,
3]. After the agreement, the ledger is in a state of consensus. Several consensus protocols
are accepted as standard in blockchain networks, such as Practical Byzantine Fault
Tolerance (consensus based on majority) and Proof-of-Work (based on computing effort
instead of majority) [1, 3]. The blockchain network is resilient to malicious attacks because,
in order to subvert the consensus, it would be necessary to create a whole new blockchain of
modified records, which is an enormously expensive and time-consuming task.
However, there are certain performance drawbacks and limitations of blockchain
technology. It is not suitable for storing data at high volume or velocity, as the data could
be too large to be copied to each individual node, while the time and processing effort
required for the validation and verification of a block are often too high [1, 3].
**2.2. Smart contract**
A smart contract is a protocol intended to digitally facilitate, verify, or enforce the
negotiation and performance of a contract [1, 4]. In the context of the blockchain
technology, a smart contract is software code that defines and executes transactions on the
target blockchain platform, where performed transactions are trackable and irreversible
[1-3]. Its distinctive feature is that it enables the execution of credible transactions without
involving third parties.
A smart contract consists of business logic definition and operations that affect the
state of the blockchain ledger. It modifies ownership and value of assets (represented as
digital tokens) [1-3]. It can be implemented using any programming language and secured
using encryption and digital signing. In the case of Ethereum, smart contracts are written
using the high-level object-oriented language Solidity, developed by the Ethereum
Foundation. It is far more expressive and powerful than Bitcoin's script language,
which was originally used for smart contract definition.
Despite the fact that Solidity seems quite similar to JavaScript, it also includes additional
features that support the implementation of transaction mechanisms in the distributed
environment of the Ethereum blockchain network. It uses the concept of a class for the
representation of smart contracts. Similarly to other object-oriented languages, instances of
Solidity smart contracts contain fields and methods. While fields represent the state of the
contract, methods represent the contract-specific operations that are invoked in order to
perform the transaction. However, when uploaded to the Ethereum network, smart contracts
perform the transaction. However, when uploaded to the Ethereum network, smart contracts
are translated to a lower level bytecode executed by the Ethereum Virtual Machine (EVM).
Once a smart contract enters the blockchain it cannot be removed.
**2.3. Smart contract vulnerabilities**
The most characteristic known smart contract vulnerabilities [6, 9] identified for
Solidity language within the Ethereum platform are given in Table 1.
**Table 1 Summary of smart contract vulnerabilities in Solidity**

**Reentrancy**
Description: Calling external contracts that can take over the control flow and make changes to the data that the calling function was not expecting.
Example: Exploiting functions that can be called repeatedly before their first invocation has finished. This may cause the different invocations of the function to interact in destructive ways. A possible solution to avoid this threat is to use transfer() and send() instead of call(), as they are safe against reentrancy attacks since they limit the code execution to 2300 gas, which is enough to log the event. Otherwise, when using call(), the internal state modification (such as the change of balances) should always be done before the external call.

**Integer overflow/underflow**
Description: Overflow: if a uint exceeds its maximum value, it wraps modulo 2^256 and circles back to zero, which can defeat a condition check. Underflow: if a uint is made to be less than zero, it will cause an underflow and get set to its maximum value.
Example: Any user apart from the administrator can call functions that update the value of the uint number.

**Timestamp dependence**
Description: In Solidity, there are many block state variables, like the timestamp, random seed and block number. Since these state variables are written at the head of each block, a malicious miner may modify them in order to profit, leveraging them to make the transaction flow go along different program paths.
Example: Locking a contract for a period of time, and various exploits of conditional statements based on time-varying states.

**Revert-based DoS**
Description: Causing the malfunction of a system by exploiting unexpected recursive calls of revert functions.
Example: If an attacker bids using a smart contract which has a fallback function that reverts any payment, the attacker can win any auction. When the auction tries to refund the old leader, it reverts if the refund fails. This means that a malicious bidder can become the leader while making sure that any refunds to their address will always fail. In this way, they can prevent anyone else from using the bidding function and stay the leader forever.
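The reentrancy pattern from Table 1 can be illustrated with a plain-Python toy model (not Solidity; the class and method names here are invented for illustration). The vulnerable withdraw() performs the external call before updating its own state:

```python
# Toy model of reentrancy: the "bank" pays out (invokes the recipient's
# callback) BEFORE zeroing the recipient's credit, so a malicious recipient
# can re-enter withdraw() and be paid several times for a single credit.

class Bank:
    def __init__(self):
        self.credit = {}   # participant -> credited amount
        self.pot = 0       # funds the bank actually holds

    def deposit(self, who, amount):
        self.credit[who] = self.credit.get(who, 0) + amount
        self.pot += amount

    def withdraw(self, who):
        amount = self.credit.get(who, 0)
        if amount > 0 and self.pot >= amount:
            who.receive(amount)        # external call first (the bug)
            self.pot -= amount
            self.credit[who] = 0       # state updated only afterwards

class Honest:
    def receive(self, amount):
        pass

class Attacker:
    def __init__(self, bank):
        self.bank, self.got, self.depth = bank, 0, 0

    def receive(self, amount):
        self.got += amount
        if self.depth < 3:             # re-enter while still credited
            self.depth += 1
            self.bank.withdraw(self)

bank = Bank()
bank.deposit(Honest(), 90)
attacker = Attacker(bank)
bank.deposit(attacker, 10)
bank.withdraw(attacker)
# attacker.got == 40: paid four times for a single 10-unit credit
```

Moving the two state-update lines above the external call (the "update state first" fix described in Table 1) makes every nested call see a zero credit and stops the drain.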
**2.4. Ontologies and semantic technology**
The term ontology is used in different scientific fields. It was initially used to define the
philosophical branch studying ways of being, basic concepts of being and relations between
them. In computer science, ontology refers to a formal representation of conceptualization
used for materialization of knowledge about given domain of discourse. This implies
formalization of knowledge and its representation in a form suitable for use by computers.
Ontology is often defined as a representational artifact, comprising a taxonomy as a proper
part, whose representations are intended to designate some combination of universals,
defined classes, and relations between them [10].
Every ontology consists of classes, individuals, attributes, and relations. Classes represent
abstract groups, collections or types of objects. Individuals are instances of classes. Attributes
are related properties, characteristics or parameters that classes can have. Relations define ways
in which classes and individuals can be related to each other. Individuals specified according to
the conceptualization defined by some ontology are sometimes called facts. Collection of facts
is often stored separately from the corresponding ontology and called knowledge base.
Ontology is augmented with a set of rules that are used to generate new knowledge from the
existing set of facts. Rules are defined within the ontology language used, but can also be
specified by means of some of the rules definition languages.
The role of the semantic technology in software systems is to encode the meaning of data
separately from its content and application code. This way, it is possible for machines to
understand data, exchange the understanding and perform reasoning on top of it. In the context
of semantic technologies, ontologies are used to describe the shared conceptualization of a
particular domain [10]. Semantic descriptions are represented using the RDF[3]-related standard
languages in the form of (subject, predicate, object) triples and persisted on disk in so-called triple stores. SPARQL[4] is a language used for querying RDF semantic triple stores.
By executing queries against the triple store, it is possible to retrieve the results that may
support different reasoning mechanisms to infer new knowledge based on the existing facts.
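As a toy illustration of the triple model and pattern-based querying described above (plain Python rather than a real RDF store and SPARQL; all facts and prefixes below are invented for the example):

```python
# A tiny (subject, predicate, object) store and a SPARQL-like pattern match,
# where None plays the role of a query variable.

triples = {
    ("transfer", "rdf:type",          "sco:Function"),
    ("transfer", "sco:hasArgument",   "amount"),
    ("amount",   "sco:hasType",       "uint256"),
    ("transfer", "sco:callsFunction", "send"),
}

def match(pattern, store):
    """Return all triples matching the pattern; None matches anything."""
    s, p, o = pattern
    return sorted(t for t in store
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# "Which functions does `transfer` call?"
called = match(("transfer", "sco:callsFunction", None), triples)
```

A real system would delegate such queries to an RDF triple store via SPARQL; the point here is only the shape of the data and of a query.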
**2.5. Hoare logic**
The Hoare logic refers to a formal system with a set of logical rules enabling reasoning
about the correctness of computer programs, proposed in 1969 [11]. The central concept of
the Hoare logic is the Hoare triple. It describes how the execution of a piece of code changes
the state of the computation. A Hoare triple has the form {P}C{Q}, where P and Q represent
assertions, while C is an executable command. Assertions are represented as predicate logic
formulae. P is named the precondition; Q is the postcondition. When the precondition is
satisfied, the command execution will establish the postcondition.
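Operationally, the triple {P}C{Q} can be checked by brute force over a small set of sample states (a Python sketch of the idea only, not a proof system):

```python
# {P} C {Q}: for every state satisfying precondition P, executing command C
# must produce a state satisfying postcondition Q.

def triple_holds(pre, cmd, post, states):
    return all(post(cmd(dict(s))) for s in states if pre(s))

# {x >= 0}  x := x + 1  {x > 0}
pre  = lambda s: s["x"] >= 0
cmd  = lambda s: {**s, "x": s["x"] + 1}
post = lambda s: s["x"] > 0

states = [{"x": i} for i in range(-3, 4)]
ok  = triple_holds(pre, cmd, post, states)            # valid triple
bad = triple_holds(lambda s: True, cmd, post, states) # too-weak precondition fails
```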
In [12], the AutoProof tool, aimed at the verification of object-oriented programs based on
concepts of the Hoare logic, was presented with promising results. It offers a prover, based
on the Boogie verifier, targeting Eiffel programs annotated with full-fledged functional
specifications in the form of contracts that consist of pre- and postconditions, class
invariants, and other kinds of annotations.
[3 https://www.w3.org/RDF/](https://www.w3.org/RDF/)
[4 https://www.w3.org/TR/rdf-sparql-query/](https://www.w3.org/TR/rdf-sparql-query/)
Considering the fact that Solidity is quite similar to object-oriented languages
(especially to Eiffel, which is based on design by contract), concepts of the Hoare logic are
adopted in this paper as well. However, the smart contract verification mechanism presented
in this paper leverages semantic representations of the source code, domain knowledge and
verification methodology. The assertions related to preconditions and postconditions are
implemented as queries against the semantic knowledge base, interpreted as _true_ (if they
return at least one instance) or _false_ (if no instance is found).
**2.6. Related work**
A summarized overview of the related solutions for smart contract verification is given in
Table 2. The first column gives the reference publication for the considered solution, the
second shows the underlying approach to smart contract verification, the third shows the
aspects of verification considered by the corresponding mechanism, and the fourth shows
the case study used for the evaluation.
**Table 2 Overview of existing solutions aiming smart contract verification**

| Reference | Approach | Aspects | Case study |
|---|---|---|---|
| (Z. Nehai et al., 2018) [13] | Model-checking based on temporal propositional logic | Business logic, overflow/underflow | Energy transaction in electric transmission network |
| (W. Ahrendt et al., 2018) [14] | Meta-theoretical reasoning | Business logic | Crowdfunding |
| ConCert [15] | Static verification leveraging Java translation | Reentrancy | Reentrancy in casino game |
| solc-verify [16] | Source code reasoning using the Solidity compiler, Boogie and SMT solvers | Common vulnerabilities | Overflow and reentrancy bugs |
| Vandal [17] | Low-level Ethereum Virtual Machine (EVM) bytecode converted to semantic logic relations; security analysis expressed in a logic specification | Both common and specific vulnerabilities | Unchecked send; reentrancy; unsecured balance; destroyable contract; use of ORIGIN |
| Mythril[5] [18] | Symbolic execution, SMT solving and taint analysis used to detect a variety of security vulnerabilities | Security vulnerabilities | Parity bug |
Most of the existing solutions are designed for a specific blockchain technology, type of
contract and use case, and are not easily extendable. Note, on the other side, the advantage
of the solution proposed in this paper related to the ability to easily add support for
different blockchain platforms and technologies: it is possible to enable verification of smart
contracts written in other languages by just providing a parser that performs semantic
annotation of the source code, together with the corresponding ontology. At the same time,
the representation of domain and verification mechanisms does not need to be changed.
Moreover, the existing verification mechanisms can be easily extended by adding expert
knowledge facts, without any modification of the verifier's source code.

[5 https://github.com/ConsenSys/mythril](https://github.com/ConsenSys/mythril)
3. PROBLEM DEFINITION
The research problem addressed in this paper is how to verify smart contracts before the
actual execution of the corresponding transaction in a platform-independent way by
integration of: 1) semantic description of smart contract source code, 2) semantic representation
of business logic and domain rules, 3) run-time behavior of smart contracts, and 4) expert
knowledge about known flaws and vulnerabilities of smart contracts. In this way, custom
verification rules for checking whether certain conditions hold before (pre-conditions) and after
(post-conditions) the execution of the smart contract could be defined in order to guide the
verification process in a desired direction. In the context of this paper, verification rule refers to
the smallest unit of the smart contract verification process. Each verification rule ri consists of
sets of pre-conditions (pre1…prem) and post-conditions (post1…postn) and refers to a range of
source code lines from a line a to the line b within the smart contract s. Verification flow f is a
set of verification rules (r1…rp) whose pre- and post- conditions are checked during the
verification process.
In the first step of the verification process, before the smart contract execution, each
verification rule ri within the verification flow f is evaluated by checking whether
pre1 ∧ … ∧ prem holds. After that, the specified part of the smart contract s is executed within
the simulated execution environment. The obtained results and states are interpreted and
stored within the semantic knowledge base. After the simulated smart contract execution, it
is checked whether post1 ∧ … ∧ postn holds, in a similar way as for the pre-conditions.
If the smart contract s passes the verification (meaning that pre1 ∧ … ∧ prem holds
before the execution and post1 ∧ … ∧ postn holds after the execution, for all verification
rules within the verification flow), then the transaction will be executed and its information
recorded within the blockchain.
4. IMPLEMENTATION
**4.1. Semantic framework**
The semantic framework considers the following aspects: 1) semantic representation of the
smart contract source code, 2) expert knowledge about vulnerabilities, 3) business logic/rules
and domain knowledge, 4) expert knowledge about verification rules, and 5) run-time
behavior of the verified contract. In what follows, the proposed ontologies are presented
and described.
1) Smart contract source code representation ontology (Fig. 1): Each contract consists
of participants, parameters, functions and attributes. Participants correspond to the parties
involved in the transaction as either sender or receiver. A function has arguments and
local parameters. It can affect the state of a set of variables. Moreover, a function can
call another function at a certain line within the code. There are specific-purpose functions,
such as revert, which form a subclass of the function class. Parameters and arguments are both
variables with a name, type and value.
[Ontology diagram: a Contract hasParticipant Participant (whose Role is Sender or Receiver), hasFunction Function and hasAttribute Variable. A Function callsFunction another Function, is calledAt a Line Number, affects Variables, and hasArgument/hasLocalVariable Variables; Revert is a subclass of Function. A Variable becomesZero at a Line Number and hasName, hasType and hasValue.]

**Fig. 1 Smart contract source code representation ontology**
2) Vulnerability queries: Refers to a set of queries to the semantic triple store that
describe the conditions that hold for specific types of vulnerabilities. They are used as asserts
within the pre- and post-conditions. For the purpose of vulnerability detection (such as
reentrancy), some specific aspects of smart contracts are captured within the semantic
description, such as the line number at which a variable becomes zero.
3) Business rules ontology: Consists of relations and concepts specific to the considered
domain. Examples are given in the section on case studies.
4) Verification rule ontology (Fig. 2): Each verification rule consists of a pre-condition, a
post-condition and the targeted smart contract code. Each pre- and post-condition contains a
query which is used for assert testing. The targeted code can be a whole smart contract or a
part of it within a given range of lines of code. A set of verification rules makes a verification flow.
[Ontology diagram: a Verification flow hasVerificationRule Verification rule; each Verification rule hasPrecond and hasPostcond an Assert, which hasQuery a Query, and targetsCode a Code range defined by fromLine, toLine and inContract (Contract Name).]

**Fig. 2 Verification rule ontology**
5) Transaction run-time ontology (Fig. 3): The role of this ontology is to describe the
state before the transaction and after the simulated execution of the part of the code being
verified. For this aspect, the balance of each participant both before and after the contract
execution is relevant. Moreover, the timestamp of the current time, coming from a trusted
authority at the beginning and end of the execution, is also taken into account.
[Ontology diagram: a Participant hasPreBalance PreBalance and hasPostBalance PostBalance, and is inContract a Contract, which hasPreTime PreTime and hasPostTime PostTime.]

**Fig. 3 Transaction run-time ontology**
The ontologies from Fig. 1-3 are referred to as Smart Contract Ontologies (SCO) in
SPARQL queries that are given later.
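As a hypothetical illustration of how such annotations could feed a vulnerability query (plain Python standing in for SPARQL; the facts and property names below are illustrative, not the actual SCO queries):

```python
# Using "calledAt"/"becomesZero" line-number annotations for a reentrancy
# check: the code is suspect when the external call happens on an earlier
# line than the one where the relevant balance variable becomes zero.

facts = {
    ("withdraw", "sco:callsFunction", "call"),
    ("withdraw", "sco:calledAt",      7),   # external call on line 7
    ("balance",  "sco:becomesZero",   9),   # state cleared on line 9
}

def value_of(subject, predicate, facts):
    return next(o for s, p, o in facts if s == subject and p == predicate)

call_line = value_of("withdraw", "sco:calledAt", facts)
zero_line = value_of("balance", "sco:becomesZero", facts)
reentrancy_suspect = zero_line > call_line   # state updated after the call
```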
**4.2. Architecture and working principle**
The working principle and underlying architecture of the proposed approach are given in
Fig. 4. First, the smart contract’s source code is parsed and semantically annotated based on
the conceptualization implemented in the smart contract source code representation ontology
(Fig. 1). During the traversal of its syntax tree, semantic annotations of the code are inserted
into the semantic knowledge base.
On the other side, the user defines verification rules by means of the verification flow
modeling environment. The rules are also transformed into a form suitable for ontological
representation within the semantic knowledge base, according to the _Verification rule
ontology_ (Fig. 2). When checking pre- and post-condition asserts, the queries are executed
against the semantic knowledge base. The returned query results are interpreted to determine
whether the specified conditions hold or not. If they hold, the transaction described by the
smart contract will be executed. Otherwise, there are two possibilities: either the original
contract will be fixed (if possible) by inserting additional lines of code, or it will not be
executed.
[Architecture diagram: in the modeling environment, the user defines verification flows with the modeling tool; the smart contract is parsed and its source code representation stored in the semantic framework together with domain knowledge/business rules, vulnerabilities, verification rules and run-time behavior; the verification framework (precondition check, execution environment, code generator) queries the knowledge base and executes or fixes the contract.]

**Fig. 4** Overview of the framework for semantic-driven smart contract verification
1: Semantic annotations of source code 2: Semantically annotated verification flow
3: Queries/results 4: Semantic annotations of changes occurring as a result of execution
5: Queries/results 6: Transaction execution 7: Modified smart contract
In Listing 1, pseudocode of the verification process leveraging semantic descriptions is
given.
**_Input: smart contract source code, verification flow, from_line, to_line_**
**_Output: true/false_**
Steps:
1. Obtain all the verification rules from the verification flow;
2. Perform the semantic annotation of the smart contract using the Smart contract source code
representation ontology, from from_line to to_line;
3. result := true;
4. For each verification rule vr in verification flow:
   result := result AND ExecuteSPARQLquery(vr.hasPrecond.Assert.hasQuery.Query)
   end for;
5. SimulatedExecution(smart contract source code, from_line, to_line);
6. For each verification rule vr in verification flow:
   result := result AND ExecuteSPARQLquery(vr.hasPostcond.Assert.hasQuery.Query)
   end for;
7. Return result;
8. End.
**Listing 1 Semantic-driven smart contract verification algorithm**
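The steps above can be sketched in plain Python. The RDF triple store and SPARQL engine are stood in for by a dictionary knowledge base and rule predicates, so all names and values below are illustrative assumptions rather than the framework's actual API:

```python
# Minimal, self-contained sketch of the verification algorithm in Listing 1.
# Pre-conditions are evaluated, a simulated execution annotates the knowledge
# base, and post-conditions are evaluated over the updated state.

def verify(kb, rules, simulate):
    """Return True only if every pre- and post-condition holds."""
    result = all(rule["pre"](kb) for rule in rules if "pre" in rule)   # steps 3-4
    simulate(kb)                                                       # step 5
    result = result and all(rule["post"](kb) for rule in rules if "post" in rule)  # step 6
    return result                                                      # step 7

# Toy knowledge base for the music-licensing example: validity window and balances.
kb = {"begin": 10, "end": 100, "now": 42, "pre_balance": 5, "price": 1}

def simulate(kb):
    # Simulated execution records the post-transaction balance in the KB.
    kb["post_balance"] = kb["pre_balance"] - kb["price"]

rules = [
    {"pre": lambda kb: kb["begin"] < kb["now"] < kb["end"]},                     # rule 1
    {"post": lambda kb: kb["pre_balance"] - kb["post_balance"] == kb["price"]},  # rule 2
]

print(verify(kb, rules, simulate))  # True
```

In the framework itself, each `rule["pre"]`/`rule["post"]` corresponds to one SPARQL query executed against the semantic knowledge base.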
Semantic Approach to Smart Contract Verification 31
**4.3. Verification flow modeling tool**
As a part of the semantic-driven framework for smart contract verification, we
propose the verification flow modeling tool. It enables users to define the set of
verification rules used during smart contract verification. Each verification rule
consists of: 1) a pre-condition, 2) a code range, 3) a target contract and 4) a
post-condition. Once created, the verification flow is forwarded from the modeling
environment to the components responsible for verification. The implementation of the
modeling tool is based on Node-RED[6], built upon the SCOR coordination flow editor [19] and
SMADA-Fog's adaptation strategy modelling tool [20]. In Fig. 5, an illustration of the
modeling environment is given.
[Figure: Node-RED-style editor showing a verification flow (start → verification_rule → verification_rule → end) and the edit dialog of a verification_rule node with its parameters: pre-condition "Contract.BeginDate<Now and Now<Contract.EndDate", range "contract.hasFunction.send().hasArgument.amount", target "energy_trading", post-condition "contractPostBalance==contractPreBalance"]
**Fig. 5 Verification flow modeling tool**
5. CASE STUDIES
**5.1. Music sample licensing**
Let us assume that an independent songwriter wants to use loops from a package
produced by another artist (referred to as the _loopmaker_). They negotiate the price,
license duration and distribution rights. In the end, they agree on the following contract
conditions: the buyer can use the samples freely within a period of two years, while each
commercial release containing samples from that library will be charged 1 currency unit.
After that period, the samples can no longer be used. The described contract is adopted
from [21] and given in Listing 2.
```
pragma solidity ^0.4.21;

contract SampleLibrary {
    // BeginDate, EndDate, Songwriter, Loopmaker, Price and balances are
    // symbolic placeholders, kept as in the original excerpt from [21]
    uint begin = BeginDate;
    uint end = EndDate;

    event Sent(address from, address to, uint amount);

    function send() public {
        if (balances[Songwriter] < Price) return;
        balances[Songwriter] -= Price;
        balances[Loopmaker] += Price;
        emit Sent(Songwriter, Loopmaker, Price);
    }
}
```
**Listing 2 Sample license selling smart contract**
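To make concrete the balance bookkeeping that the post-condition rule later inspects, `send()` can be mocked in a few lines of Python. This is an illustration only, not part of the contract; the account names and amounts are assumptions:

```python
# Mock of Listing 2's send(): refuse the transfer if the songwriter cannot pay,
# otherwise move `price` tokens and return the pre/post balance snapshots that
# the post-condition rule compares.

def send(balances, songwriter, loopmaker, price):
    pre = dict(balances)                  # snapshot before the transaction
    if balances[songwriter] < price:
        return pre, balances              # transaction refused, balances unchanged
    balances[songwriter] -= price
    balances[loopmaker] += price
    return pre, balances

pre, post = send({"songwriter": 5, "loopmaker": 0}, "songwriter", "loopmaker", 1)
# Conservation: what the sender lost equals what the receiver gained.
print(pre["songwriter"] - post["songwriter"] == post["loopmaker"] - pre["loopmaker"])  # True
```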
6 https://nodered.org/
An excerpt from the music license selling platform domain ontology is given in Fig. 6.
Note that the complete ontology depends on operational details that may differ between
practical environments and is not covered in this paper.
[Figure: ontology graph in which Music trade is linked to Buyer (with Songwriter as a subclass), Seller (with Loopmaker as a subclass), Price, BeginDate and EndDate through the properties hasBuyer, hasSeller, hasPrice, hasBeginDate and hasEndDate]
**Fig. 6 Music license selling platform ontology sample**
Next, the descriptions of the verification rules and the corresponding SPARQL queries
used in the experiments are given. For this case study, two verification rules were used.
The first verification rule contains a pre-condition that checks whether the contract
between the involved parties is still valid. If it is, the contract will be executed;
otherwise, the user is informed that contract renewal is required in order to proceed.
The corresponding SPARQL query for this pre-condition is:
```
PREFIX sco: <http://www.example.com/SCO/>
PREFIX mlspo: <http://www.example.com/MLSPO/>
SELECT ?c
WHERE {
GRAPH <http://www.example.com/music_verification> {
?c mlspo:hasBeginDate ?bd.
?c mlspo:hasEndDate ?ed.
?c sco:hasPreTime ?cd.
FILTER(?cd>?bd && ?cd<?ed)
}
}
```
On the other side, the second rule consists of a post-condition that checks whether the
balance after the transaction execution equals the difference between the initial balance
and the value of the transferred tokens, i.e., that the sender's decrease matches the
receiver's increase. The following SPARQL query is used in this case:
```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX sco: <http://www.example.com/SCO/>
SELECT ?s ?r
WHERE {
GRAPH <http://www.example.com/music_verification> {
?s rdf:type sco:Sender.
?s sco:hasPostBalance ?post.
?s sco:hasPreBalance ?pre.
?r rdf:type sco:Receiver.
?r sco:hasPostBalance ?post2.
?r sco:hasPreBalance ?pre2.
FILTER(?pre-?post=?post2-?pre2)
}
}
```
**5.2. Autonomous car charging**
Let us consider an autonomous car that recharges its battery at a charging station for a
certain amount of energy, where the charging cost depends on the distribution cost to the
target charging station. The smart contract code for this case study, inspired by [22], is
given in Listing 3, while the descriptions of the considered verification rules and the
corresponding SPARQL queries are given afterwards.
```
pragma solidity ^0.4.21;

contract EnergyTrade {
    // buyer, generator, amount, transfer_cost, generation_cost and balances are
    // symbolic placeholders, kept as in the original excerpt inspired by [22]
    event Sent(address buyer, address generator, uint amount,
               uint transfer_cost, uint generation_cost);
    uint price;
    uint token_price;

    function trade() public {
        price = (amount * transfer_cost * generation_cost) / token_price;
        if (balances[buyer] < price) return;
        balances[buyer] -= price;
        balances[generator] += price;
        emit Sent(buyer, generator, amount, transfer_cost, generation_cost);
    }
}
```
**Listing 3 Autonomous car charging smart contract**
The segment of the underlying domain ontology for energy trading that is relevant to our
example is shown in Fig. 7.
[Figure: ontology graph in which Energy exchange is linked to Buyer, Generator, Value, TransferCost, GenerationCost, PreEnergy and PostEnergy through the properties hasBuyer, hasGenerator, hasAmount, hasTransferCost, hasGenerationCost, hasPreEnergy and hasPostEnergy]
**Fig. 7 Energy trade ontology**
In this case study, there are four verification rules (two pre-conditions and two
post-conditions). In the following, these verification rules and the corresponding SPARQL
queries are given. The first verification rule contains a pre-condition that checks whether
the energy sender has enough energy to perform the transaction. The SPARQL query used for
this rule is:
```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX sco: <http://www.example.com/SCO/>
PREFIX eto: <http://www.example.com/ETO/>
SELECT DISTINCT ?r
WHERE {
GRAPH <http://www.example.com/energy_verification> {
?c sco:hasAmount ?eta.
?r rdf:type sco:Receiver.
?r eto:hasPreEnergy ?pre.
FILTER(?pre >= ?eta)
}
}
```
The second verification rule also contains a pre-condition, which checks that reentrancy
is not present. The corresponding SPARQL query is:
```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX sco: <http://www.example.com/SCO/>
SELECT ?f2 ?variable
WHERE {
GRAPH <http://www.example.com/energy_verification> {
?c rdf:type sco:Contract.
?c sco:hasFunction ?f1.
?f1 sco:callsFunction ?f2.
?f2 sco:calledAt ?call_line.
?f1 sco:affects ?variable.
?variable sco:becomesZero ?zero_line.
FILTER(?call_line<?zero_line)
}
}
```
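The same reentrancy check can be expressed directly in code once the call lines and the lines where the affected variable is zeroed are known. Below is a minimal sketch; the field names and line numbers are assumptions for illustration:

```python
# Flag functions whose external call happens before the affected state variable
# is zeroed -- the ordering that the SPARQL FILTER(?call_line < ?zero_line)
# above detects as a reentrancy risk.

def reentrancy_suspects(call_graph):
    """call_graph: list of dicts with 'callee', 'call_line' and 'zero_line'."""
    return [c["callee"] for c in call_graph if c["call_line"] < c["zero_line"]]

calls = [
    {"callee": "msg.sender.call", "call_line": 12, "zero_line": 14},  # vulnerable
    {"callee": "token.transfer", "call_line": 20, "zero_line": 18},   # safe
]
print(reentrancy_suspects(calls))  # ['msg.sender.call']
```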
On the other side, the third and the fourth verification rules contain only post-conditions.
The third is the same as the post-condition rule from the previous case study. The fourth
verification rule is described as follows: after the transaction, the energy buyer (receiver)
must hold an amount of energy equal to the sum of the previously available energy and the
energy received from the generator (sender), while the generator must hold an amount equal
to the difference between its previously available energy and the energy sent to the buyer.
Here, "pre" denotes the energy available before the transaction, while "post" denotes the
energy state after it. For this post-condition, the following SPARQL query was used:
```
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX sco: <http://www.example.com/SCO/>
PREFIX eto: <http://www.example.com/ETO/>
SELECT ?s ?r
WHERE {
GRAPH <http://www.example.com/energy_verification> {
?s rdf:type sco:Sender.
?s eto:hasPostEnergy ?post.
?s eto:hasPreEnergy ?pre.
?r rdf:type sco:Receiver.
?r eto:hasPostEnergy ?post2.
?r eto:hasPreEnergy ?pre2.
FILTER(?post-?pre=?pre2-?post2)
}
}
```
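The FILTER arithmetic of this query can be checked on a sample variable binding directly; the values below are illustrative:

```python
# Evaluate FILTER(?post-?pre = ?pre2-?post2) for one sender/receiver binding.
s = {"pre": 100, "post": 70}    # sender (generator): loses 30 units
r = {"pre": 10, "post": 40}     # receiver (buyer): gains 30 units

holds = (s["post"] - s["pre"]) == (r["pre"] - r["post"])
print(holds)  # True: the energy lost by the sender equals the energy gained
```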
6. EVALUATION
In this section, the proposed approach is evaluated with respect to the execution speed of
the verification process. The execution was performed on a laptop equipped with an Intel
i7-7700HQ quad-core CPU running at 2.80 GHz and 16 GB of DDR4 RAM, with the RDF triple
store deployed in the cloud. The results are compared to relevant existing solutions.
In Table 3, an overview of the obtained results is given, where each row represents a
single experiment. The first column denotes the case study for the considered experiment.
The second column references the verification rules involved in the experiment. The third
column shows the time needed for smart contract parsing and construction of the semantic
representation, and the fourth the time needed for verification based on SPARQL queries.
Finally, the last column shows the number of triples inserted into the RDF triple store
during the experiment. All execution times are given in seconds as averages over 20
executions.
**Table 3 Smart contract verification evaluation results**

| Case study | Verification rule | Parsing and semantic representation [s] | Verification [s] | Triples |
|---|---|---|---|---|
| Music | 1 | 1.55 | 0.028 | 25 |
| Music | 2 | 1.55 | 0.019 | 25 |
| Energy | Reentrancy | 1.66 | 0.033 | 28 |
| Energy | 1 | 1.66 | 0.026 | 28 |
| Energy | 2 | 1.66 | 0.034 | 28 |
| Energy | 1 and 2 | 1.66 | 0.041 | 28 |
| Energy | 3 | 1.66 | 0.029 | 28 |
According to the achieved results, most of the execution time was spent on parsing and
construction of the semantic smart contract representation, while the verification itself
is much faster. This is explained by the fact that constructing the semantic representation
involves inserting many triples into the RDF triple store, while each verification rule is
translated into a single SPARQL query. Moreover, processing of the music contract is
shorter than that of the energy trading contract, since the latter required more triples
for its semantic representation.
Furthermore, the verification time increases with the number of rules, as more SPARQL
queries must be executed. The queries for the first rule in the music case study and for
the second and third rules in the energy exchange case study take longer than the other
queries, as they involve arithmetic operations.
Finally, the overhead introduced by smart contract verification, covering parsing, triple
insertion and SPARQL query execution, does not exceed the order of magnitude of 1 s in the
presented experiments. The achieved overall average execution speed is faster than the
solutions presented in [18] (approximately 84 s per contract [23]) and [17], which achieved
an average processing time of 4.15 s, while showing performance similar to [16].
7. CONCLUSION AND FUTURE WORK
In this paper, a semantic approach to smart contract verification and code generation
aimed at avoiding known bugs and vulnerabilities is presented. As an outcome, an easily
extendable framework is proposed and described. The usage of the proposed framework is
illustrated in two case studies: music industry license selling and energy trading.
According to the initial results, the approach seems promising: in the presented
experiments, the overall verification overhead was on the order of magnitude of 1 s.
Moreover, one of the future goals of the proposed framework is to leverage the semantic
annotations to generate code that is added to the original smart contract in order to
avoid known bugs and vulnerabilities. In that case, a new contract is constructed by
adding the generated lines of code to the original contract: for each detected
vulnerability, additional lines of code are generated and inserted into the original
smart contract at the specific position.
The framework is designed to be easily extendable to cover new business cases and rules,
to support other smart contract languages (apart from Solidity) and to handle newly
discovered smart contract bugs and vulnerabilities by extending the existing semantic
knowledge base, without direct modifications to the verification mechanisms themselves.
In the future, we plan to evaluate extendability quantitatively and to adapt the framework
to other blockchain platforms and smart contract languages beyond Ethereum and Solidity.
REFERENCES
[1] N. Balani and R. Hathi, Enterprise Blockchain: A Definitive Handbook, 2017.
[2] S. Palladino, "Ethereum for Web Developers", Chapter 1, pp. 1-16, 2019. https://doi.org/10.1007/978-1-4842-5278-9_1
[3] A. Narayanan and J. Clark, "Bitcoin's academic pedigree", Communications of the ACM, 60(12), pp. 36-45, 2017.
[4] "A Next-Generation Smart Contract and Decentralized Application Platform". [Online]. Available: https://github.com/ethereum/wiki/wiki/White-Paper. Last accessed: 24/03/2019.
[5] K. Zīle, R. Strazdiņa, "Blockchain Use Cases and Their Feasibility", Applied Computer Systems, 23(1), pp. 12-20, 2018. https://doi.org/10.2478/acss-2018-0002
[6] X. Feng, Q. Wang, X. Zhu, S. Wen, "Bug Searching in Smart Contract", pp. 1-8, 2019. [Online]. Available: https://arxiv.org/abs/1905.00799
[7] D. Harz, W. Knottenbelt, "Towards Safer Smart Contracts: A Survey of Languages and Verification Methods", pp. 1-20, 2018. [Online]. Available: https://arxiv.org/abs/1809.09805v4
[8] V. Mathur, "Literature Review: Smart Contract Semantics", pp. 1-9, 2018.
[9] "Smart Contract Best Practices: Known Attacks". [Online]. Available: https://consensys.github.io/smart-contract-best-practices/known_attacks/. Last accessed: 12/10/2019.
[10] T. Gruber, "Toward Principles for the Design of Ontologies Used for Knowledge Sharing", International Journal of Human-Computer Studies, 43(5-6), pp. 907-928, 1995.
[11] C. A. R. Hoare, "An axiomatic basis for computer programming", Communications of the ACM, 12(10), pp. 576-580, 1969. https://doi.org/10.1145/363235.363259
[12] C. Furia, C. Poskitt, J. Tschannen, "The AutoProof Verifier: Usability by Non-Experts and on Standard Code", EPTCS 187, pp. 42-55, 2015.
[13] Z. Nehai, P. Y. Piriou, F. Daumas, "Model-Checking of Smart Contracts", 2018 IEEE International Conference on Blockchain, pp. 1-8, 2018. https://doi.org/10.1109/Cybermatics_2018.2018.00185
[14] D. Annenkov, J. B. Nielsen, B. Spitters, "ConCert: a smart contract certification framework in Coq", CPP 2020: Proceedings of the 9th ACM SIGPLAN International Conference on Certified Programs and Proofs, pp. 215-228, 2020. https://doi.org/10.1145/3372885.3373829
[15] W. Ahrendt et al., "Verification of Smart Contract Business Logic Exploiting a Java Source Code Verifier", FSEN 2019, LNCS 11761, pp. 228-243, 2019.
[16] A. Hajdu and D. Jovanovic, "solc-verify: A Modular Verifier for Solidity Smart Contracts", Verified Software: Theories, Tools, and Experiments (VSTTE 2019), pp. 1-18, 2019.
[17] L. Brent et al., "Vandal: A Scalable Security Analysis Framework for Smart Contracts", pp. 1-28, 2018. https://arxiv.org/pdf/1809.03981.pdf
[18] B. Mueller, "Introducing Mythril: A framework for bug hunting on the Ethereum blockchain". [Online]. Available: https://medium.com/hackernoon/introducing-mythril-a-framework-for-bug-hunting-on-the-ethereum-blockchain-9dc5588f82f6. Last accessed: 06/03/2020.
[19] V. Nejkovic, N. Petrovic, M. Tosic, N. Milosevic, "Semantic approach to RIoT autonomous robots mission coordination", Robotics and Autonomous Systems, 103438, pp. 1-19, 2020. https://doi.org/10.1016/j.robot.2020.103438
[20] N. Petrovic, M. Tosic, "SMADA-Fog: Semantic model driven approach to deployment and adaptivity in Fog Computing", Simulation Modelling Practice and Theory, 102033, pp. 1-25, 2019. https://doi.org/10.1016/j.simpat.2019.102033
[21] N. Petrovic, "Adopting Semantic-Driven Blockchain Technology to Support Newcomers in Music Industry", CIIT 2019, Mavrovo, North Macedonia, pp. 2-7, 2019.
[22] N. Petrović, Đ. Kocić, "Data-driven Framework for Energy-Efficient Smart Cities", Serbian Journal of Electrical Engineering, 17(1), pp. 41-63, 2020. https://doi.org/10.2298/SJEE2001041P
[23] T. Durieux, J. F. Ferreira, R. Abreu, P. Cruz, "Empirical Review of Automated Analysis Tools on 47,587 Ethereum Smart Contracts", pp. 1-12, 2020. [Online]. Available: https://arxiv.org/pdf/1910.10601.pdf
|
# MACHINE-LEARNING AND STATISTICAL METHODS
FOR
DDOS ATTACK DETECTION AND DEFENSE SYSTEM IN
SOFTWARE DEFINED NETWORKS
by
**Merlin James Rukshan Dennis**
Master of Engineering, Anna University, India, 2006
Bachelor of Engineering, Manonmaniam Sundaranar University, India, 2003
A thesis
presented to Ryerson University
in partial fulfillment of the
requirements for the degree of
Master of Applied Science
in the Program of
Computer Networks
Toronto, Ontario, Canada, 2018
© Merlin James Rukshan Dennis 2018
AUTHOR’S DECLARATION
I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including
any required final revisions as accepted by my examiners.
I authorize Ryerson University to lend this thesis to other institutions or individuals for the purpose
of scholarly research.
I further authorize Ryerson University to reproduce this thesis by photocopying or by other means,
in total or in part, at the request of other institutions or individuals for the purpose of scholarly
research.
I understand that my thesis may be made electronically available to the public.
**Machine-Learning and Statistical Methods**
**For**
**DDoS Attack Detection and Defense System in**
**Software Defined Networks**
by
Merlin James Rukshan Dennis
Master of Applied Science
Computer Networks
Ryerson University, 2018
**Abstract**
Distributed Denial of Service (DDoS) attack is a serious threat on today's Internet. As
the traffic across the Internet increases day by day, it is a challenge to distinguish
between legitimate and malicious traffic. This thesis proposes two different approaches to
build an efficient DDoS attack detection system in the Software Defined Networking (SDN)
environment. SDN is a recent networking approach built around a centralized, programmable
controller. The central control and the programming capability of the controller are used
in this thesis to implement the detection and mitigation mechanisms.
In this thesis, two approaches, a statistical approach and a machine-learning approach,
are proposed for DDoS detection. The statistical approach implements entropy computation
and flow statistics analysis. It uses the mean and standard deviation of destination
entropy, new flow arrival rate, packets per flow and flow duration to compute various
thresholds, which are then used to distinguish normal and attack traffic. The
machine-learning approach uses a Random Forest classifier to detect the DDoS attack. We
fine-tune the Random Forest algorithm to make it more accurate in DDoS detection; in
particular, we introduce weighted voting instead of the standard majority voting to
improve the accuracy. Our results show that the proposed machine-learning approach
outperforms the statistical approach, and it also outperforms other machine-learning
approaches found in the literature.
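The entropy-based thresholding summarized above can be sketched in a few lines of Python. This is a minimal illustration rather than the thesis's implementation; the window contents, addresses and the factor k are assumptions:

```python
# Shannon entropy of destination addresses in a traffic window, plus a
# mean - k*stddev threshold computed from normal-traffic baselines: a DDoS
# attack concentrates traffic on one victim and collapses the entropy.
import math
from collections import Counter

def entropy(destinations):
    counts = Counter(destinations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def threshold(baseline_entropies, k=3):
    n = len(baseline_entropies)
    mean = sum(baseline_entropies) / n
    var = sum((e - mean) ** 2 for e in baseline_entropies) / n
    return mean - k * math.sqrt(var)   # windows below this are flagged

normal = ["10.0.0.%d" % (i % 8) for i in range(80)]  # traffic spread over 8 hosts
attack = ["10.0.0.1"] * 80                           # traffic concentrated on one victim
print(entropy(normal) > entropy(attack))  # True: the attack collapses entropy
```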
Acknowledgements
I wish to express my sincere gratitude to my supervisor Dr. Ngok-Wah Ma for his continuous
support in the MASc program. Without him, the completion of this study would not have been
possible.
I would like to thank my co-supervisor Dr. Xiaoli Li, who gave lots of ideas to improve my results
despite her busy schedule.
Special thanks to the Computer Networks Department and Yeates School of Graduate Studies at
Ryerson University, for giving me this great opportunity and their financial support throughout
my studies.
Last, but not the least, I would like to thank my parents, husband and my cute little daughter for
their love and encouragement. Without their patience and sacrifice, I could not have completed
this thesis.
**Table of Contents**
List of Figures .............................................................................................................................. viii
List of Tables ................................................................................................................................. ix
List of Abbreviations ...................................................................................................................... x
Chapter 1 ......................................................................................................................................... 1
1 Introduction .................................................................................................................................. 1
1.1 Problem Statement ................................................................................................................ 1
1.2 Research Objective and Contribution ................................................................................... 2
1.3 Thesis Organization............................................................................................................... 3
Chapter 2 ......................................................................................................................................... 4
2 Background and Related Work .................................................................................................... 4
2.1 Introduction to Software Defined Networking...................................................................... 4
2.2 Benefits of SDN .................................................................................................................... 5
2.3 OpenFlow Protocol ............................................................................................................... 6
2.4 SDN controller ...................................................................................................................... 7
2.5 DDoS Attacks ........................................................................................................................ 8
2.5.1 Types of DDoS attack ................................................................................................ 9
2.6 Introduction to Machine Learning....................................................................................... 10
2.6.1 Types of Machine learning algorithms .................................................................... 10
2.7 Supervised Machine learning .............................................................................................. 10
2.7.1 Random Forest algorithm ........................................................................................ 11
2.7.2 Feature Selection ...................................................................................................... 15
2.7.3 Classifier Accuracy Estimation ................................................................................ 15
2.7.4 Advantages of Random Forest ................................................................................. 16
2.7.5 Disadvantages of Random Forest ............................................................................ 16
2.8 Related Work in SDN based DDoS attack detection .......................................................... 17
2.8.1 DDoS attack Detection using Statistical Approach ................................................. 17
2.8.2 DDoS attack Detection using Machine Learning Approach .................................... 18
2.9 Description of the Original Approach ................................................................................. 19
2.9.1 Stage I: Detection based on Entropy Variation........................................................ 19
2.9.2 Detection based on the number of Flows ................................................................. 20
2.9.3 Stage II: Detection based on Analysis of Flow statistics: ........................................ 21
2.9.4 Mitigation Module ................................................................................................... 21
Chapter 3 ....................................................................................................................................... 22
3 Proposed Methods ...................................................................................................................... 22
3.1 Proposed Method - Statistical Approach ............................................................................. 22
3.1.1 Computation of Threshold values ............................................................................ 22
3.1.2 Mitigation Module ................................................................................................... 23
3.2 Machine Learning Approach ............................................................................................... 24
3.2.1 Building the Machine Learning Model .................................................................... 24
3.2.2 Random Forest classifier .......................................................................................... 25
3.2.3 UCLA Dataset .......................................................................................................... 26
3.2.4 Training Phase ......................................................................................................... 28
3.2.5 Testing Phase ........................................................................................................... 30
3.2.6 Preparation of Training Data ................................................................................... 31
3.2.7 Training the classifier .............................................................................................. 31
3.2.8 Implementation of RF Classifier in the Controller .................................................. 32
Chapter 4 ....................................................................................................................................... 33
4 Performances and Analyses ....................................................................................................... 33
4.1 Mininet ................................................................................................................................ 33
4.2 POX Controller ................................................................................................................... 33
4.3 Traffic generator .................................................................................................................. 33
4.4 Performance Metrics of Machine Learning Approach ........................................................ 35
4.4.1 Confusion Matrix ..................................................................................................... 35
4.5 Simulation Scenario and Results of Our Proposed Method ................................................ 36
4.6 Performance of the Statistical Approach ............................................................................. 38
4.6.1 Comparison of the Basic Approach with Our modified approach ........................... 38
4.7 Mitigation Results ............................................................................................................... 39
4.7.1 Performance Analysis of Mitigation Module .......................................................... 40
4.8 Performance of the Machine Learning approach ................................................................ 41
4.8.1 Feature Selection ...................................................................................................... 41
4.8.2 ROC Plot .................................................................................................................. 43
4.9 Comparison of Our Statistical and Machine Learning Approach ....................................... 44
4.10 Comparison of Machine Learning Approach with Existing ML Approach...................... 45
Chapter 5 ....................................................................................................................................... 47
5 Conclusion and Future Work ..................................................................................................... 47
5.1 Conclusion ........................................................................................................................... 47
5.2 Future work ......................................................................................................................... 47
Appendix ....................................................................................................................................... 48
Bibliography ................................................................................................................................. 64
vii
-----
**List of Figures**
Fig 2.1: SDN Architecture [2] ........................................................................................................ 5
Fig 2.2: OpenFlow switch [3] ......................................................................................................... 6
Fig 2.3: Flow Table Entries [4] ....................................................................................................... 7
Fig 2.4: Types of SDN controller [5] .............................................................................................. 8
Fig 2.5: DDoS Attack on SDN controller [7] ................................................................................. 9
Fig 2.6: Supervised Learning classifier ........................................................................................ 11
Fig 2.7: Random Forest Algorithm [8] ......................................................................................... 13
Fig 2.8: Example Dataset of people who buys computer [39] ...................................................... 13
Fig 2.9: Dataset divided into 3 Subsets [39] ................................................................................. 14
Fig 2.10: Single Decision Tree [39] .............................................................................................. 14
Fig 2.11: 10-fold cross-validation [46] ......................................................................................... 16
Fig 3.1: Machine Learning Methodology ..................................................................................... 25
Fig 3.2: Sample UCLA Dataset .................................................................................................... 27
Fig 3.3: Flowchart for Training Phase .......................................................................................... 29
Fig 3.4: Flowchart for Testing Phase ............................................................................................ 30
Fig 4.1: Traffic Generation using Scapy ....................................................................................... 34
Fig 4.2: Network Setup ................................................................................................................. 37
Fig 4.3: Comparison of Performance of Statistical Approaches................................................... 39
Fig 4.4: Execution of Mitigation Module ..................................................................................... 39
Fig 4.5: Wireshark Output: Attack Traffic Mitigated ................................................................... 40
Fig 4.6: Feature Importance Plot .................................................................................................. 42
Fig 4.7: Exemplary ROC Plot [8] ................................................................................................. 43
Fig 4.8: ROC Plot of Proposed Method ........................................................................................ 44
Fig 4.9: Performance Comparison of Statistical & ML Approaches ............................................ 45
Fig 4.10: Performance Comparison of our ML Approach with other ML Approaches ............... 46
viii
-----
**List of Tables**
Table 3.1: Features in UCLA Dataset ........................................................................................... 27
Table 4.1: Normal Traffic Pattern ................................................................................................. 34
Table 4.2: Attack Traffic Pattern .................................................................................................. 34
Table 4.3: Confusion Matrix ......................................................................................................... 35
Table 4.4: False Positive and False Negative Values ................................................................... 38
Table 4.5: Performance Analysis of Mitigation Module .............................................................. 41
Table 4.6: Performance Metrics .................................................................................................... 42
ix
-----
**List of Abbreviations**
API Application Programming Interface
CPU Central Processing Unit
DDoS Distributed Denial of Service
FN False Negative
FP False Positive
FPR False Positive Rate
IP Internet Protocol
ML Machine Learning
OF OpenFlow
OS Operating Systems
RF Random Forest
ROC Receiver Operating Characteristics
SDN Software Defined Networking
TN True Negative
TP True Positive
TPR True Positive Rate
UDP User Datagram Protocol
x
-----
**Chapter 1**
**1 Introduction**
**1.1 Problem Statement**
Software Defined Networking is an emerging technology, which enables the network to be
programmable, centralized and flexible. The SDN architecture has a separate control plane and
data plane. System administrators can control the entire network through the centralized control
plane (controller). These features of SDN can be used in the construction of intelligent and
automated networks. Also, the operational costs in large data centers have been greatly reduced
with the implementation of SDN.
However, this centralized feature of the SDN controller makes it an ideal target for the attackers.
On the other hand, the same feature can also provide a new and efficient way to detect network
attacks. One of the major network attacks is the Distributed Denial of Service (DDoS) attack.
With the immense growth of the internet, a large number of hosts are vulnerable to such attacks.
Most DDoS attacks are generated by attack software that is installed on vulnerable hosts without
their owners' knowledge.
This thesis proposes two approaches for the detection of the DDoS attack. The first approach, the
statistical approach, uses destination Entropy and Flow statistics measurements to distinguish the
normal and attack traffic. The second approach uses a machine-learning algorithm based on
Random Forest (RF) classifier to classify the normal and attack traffic.
We will compare the two approaches based on three performance parameters: detection accuracy,
false negative rate and false positive rate. In addition, we will also compare our Machine Learning approach
with other Machine learning approaches in the literature.
1
-----
**1.2 Research Objective and Contribution**
The main goal of this research is to develop a detection system to identify Distributed Denial of
Service (DDoS) attacks in the SDN environment. In this thesis, we use the traffic parameters of
normal traffic, such as payload size and packets per flow, to identify the attack. In the statistical
approach, the means and standard deviations of these parameters are measured to compute various
thresholds. These thresholds are used to distinguish the normal and attack traffic. Whereas in the
machine learning approach, an RF classifier with appropriate modifications is implemented and
used to classify the traffic.
Furthermore, a mitigation method is also proposed to mitigate the effect of the attack.
Our contributions in this thesis are,
1 Design and implementation of a DDoS attack detection system based on the statistical
approach proposed by Kia [1]. We have modified and improved the approach of [1] by
computing the threshold values based on the mean and standard deviations of the normal traffic
parameters.
3 Propose an efficient mitigation method based on pushing a drop-flow rule to block the attack
traffic, thus protecting the controller and switch.
3 Design and implementation of a DDoS attack detection system based on the machine learning
model using the Random Forest algorithm. The RF algorithm is modified in such a way that it
uses weighted voting instead of the standard majority voting for attack prediction used by
Alpna et al. [14], Malik et al. [15], and Farnaaz et al. [16].
4 Compare, analyze and evaluate the proposed detection and mitigation techniques with the
approaches found in the literature.
2
-----
**1.3 Thesis Organization**
The report consists of 5 chapters. The rest of the thesis is organized as follows.
**Chapter 2 introduces SDN architecture and the OpenFlow protocol. It also gives a brief**
description of different types of DDoS attacks and a survey on DDoS detection methods found in
the literature. It also covers the machine learning fundamentals and the RF algorithm in particular.
The chapter ends with a summary of the DDoS detection system proposed in [1].
**Chapter 3 describes the details of the two proposed approaches.**
**Chapter 4 discusses the experimental setup for both statistical approach and machine learning**
approach. It also provides the detailed results, including the detection rate, False Positive (FP) and
False Negative (FN) values of the two approaches. The performances of the proposed approaches
are compared with each other. We also compare the performance of our approach with other
methods found in the literature.
**Chapter 5 concludes this thesis and provides a brief description of the further research to make**
the detection system more efficient.
3
-----
**Chapter 2**
**2 Background and Related Work**
**2.1 Introduction to Software Defined Networking**
SDN is an emerging network architecture which is dynamic, manageable and cost-effective. It is
based on the abstraction of forwarding plane from the control plane. This abstraction makes the
network directly programmable and flexible, which is ideal for configuring, managing, securing
and optimizing the network resources dynamically and automatically.
In a traditional network, the switch’s proprietary control logic tells the switch where to forward
each network packet. The switch treats all the packets belonging to the same destination equally. This
has been changed with the introduction of SDN technology.
In SDN, decisions about how packets should flow through the forwarding plane are made by a
controller, a software application running on a remote server. Packet-handling rules are sent from
the controller to the switches, and the switches seek guidance from the controller for packet
handling.
Switches and controller communicate via the controller’s south-bound interface. This
communication is achieved by the OpenFlow protocol. Similarly, applications can talk to the
controller via the controller’s north-bound interface. The SDN architecture is shown in Fig-2.1.
4
-----
Fig-2.1 shows the concept of decoupling data plane and control plane in SDN. The function of the
control plane is to make decisions on where the traffic should be sent. The control plane consists
of one or more SDN controllers. The controller is nothing but a software program. The controller
is centralized, and it maintains a global view of the network in the control plane. A single control
plane can control a number of forwarding devices such as OpenFlow switches. It defines the
forwarding rules of the devices in the data plane and can remotely configure all the devices in the
data plane. The network devices in the data plane forward traffic, according to these rules.
**2.2 Benefits of SDN**
The decoupled nature of the control plane and data plane makes SDN technology programmable.
The controller is a logical entity that gives a global view of the network. It can communicate with
both the SDN applications and the hardware network devices about network statistics and events.
SDN facilitates automated load balancing and it has the ability to scale network resources
dynamically. The open standard implementation simplifies the network design and operations. It
is ideal for today’s high-bandwidth applications.
5
Fig 2.1: SDN Architecture [2]
-----
Today it is easier to unify cloud resources with SDN. Large data center platforms can be easily
managed from an SDN controller, and SDN is also used to implement centralized security.
**2.3 OpenFlow Protocol**
The communication between the controller and the data plane devices needs a suitable standard.
OpenFlow is the communication interface defined between the control and forwarding layers of
an SDN architecture [3]. OpenFlow manages the switches in the network and allows the controller
to manipulate the flow of packets through the network.
Fig 2.2: OpenFlow switch [3]
Fig-2.2 shows the structure of an OpenFlow switch. An OpenFlow switch consists of one or more
flow tables, a group table and a secure channel to an external controller. The switch communicates
with the controller, and the controller manages the switch via OpenFlow protocol. Each flow table
in the switch contains a set of flow entries. Using the OpenFlow protocol, the controller can add,
delete and update flow entries in the flow table.
6
-----
Fig 2.3: Flow Table Entries [4]
When a new packet arrives at an OpenFlow switch, it will look into the flow table to find a match.
If a match is found, the action assigned to that entry is applied and the counter for the entry will
be updated. If there is no match in the table, it is called a table miss and the switch sends a Packet_IN
message to the controller through the secure channel. The controller processes the packet and sends a
Packet_OUT and/or Flow_MOD message back to the switch. The Packet_OUT message is an
instruction to the switch on what to do with the packet. Whereas Flow_MOD message instructs
the switch to install a new flow entry in the flow table. Hence, the packet is forwarded according
to this new rule.
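The lookup-then-miss behaviour described above can be sketched as a small Python model. This is only an illustration of the logic; the field names, actions, and table representation are invented here and are not real OpenFlow data structures:

```python
# Minimal model of OpenFlow table lookup (illustrative field names, not
# actual switch code): a match applies the entry's action and updates its
# counter; a table miss hands the packet to the controller via Packet_IN.
def lookup(flow_table, packet):
    for entry in flow_table:
        # An entry matches when every field it specifies equals the packet's value.
        if all(packet.get(field) == value for field, value in entry["match"].items()):
            entry["counter"] += 1
            return entry["action"]        # e.g. "output:2"
    return "packet_in"                    # table miss: ask the controller

table = [{"match": {"dst_ip": "10.0.0.2"}, "action": "output:2", "counter": 0}]

print(lookup(table, {"dst_ip": "10.0.0.2"}))   # matching packet -> forwarded
print(lookup(table, {"dst_ip": "10.0.0.9"}))   # unmatched packet -> controller
```

In a real switch, the controller would then answer the miss with a Flow_MOD that installs a new entry, so subsequent packets of the same flow match in the table directly.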
**2.4 SDN controller**
The controller is considered as the core of an SDN network. The controller uses protocols like
OpenFlow to communicate with networking devices. There are different types of controllers like
POX, Ryu, Open Day Light, Beacon, etc. The Fig-2.4 shows the different SDN controllers. In this
research, POX controller is used.
7
-----
Fig 2.4: Types of SDN controller [5]
**POX**
POX is derived from the NOX controller [5]. It is an open-source development platform used to
create an SDN controller in the Python programming language. The POX controller provides an
efficient way to handle OpenFlow devices and can run different applications such as a hub, switch,
load balancer, and firewall. It is a popular tool for SDN research. The proposed algorithms in this
research are implemented in the POX controller.
**2.5 DDoS Attacks**
Ensuring security in SDN is very important to provide secure communication. This research
concentrates on the Distributed Denial of Service (DDoS) attack on the data plane. A DDoS attack
makes a machine or network resource unavailable to its users [6]. This is achieved by consuming
the entire network bandwidth or the resources of the network nodes (such as memory and CPU).
Fig-2.5 shows the DDoS attack on an SDN controller.
8
-----
Fig 2.5: DDoS Attack on SDN controller [7]
**2.5.1 Types of DDoS attack**
**UDP Flood [6] is a type of attack that aims to bring down a server by sending a large**
number of UDP packets to random ports on the targeted host. The attackers usually exploit
UDP’s connectionless nature to send a stream of UDP packets to the victim machine. The
victim machine’s queue becomes filled and it can no longer respond to legitimate users’ requests.
Usually, in these types of attacks, the attacker spoofs the source IP address of the UDP packets to
hide the locations of the attack machines.
**SYN Flood is an attack that exploits TCP connection initiation to target the victim’s machine. A large**
number of SYN packets are sent to the victim, but the final ACK is never returned, leaving many
half-open connections that consume a large amount of resources at the victim’s machine and make
the machine unavailable to legitimate users.
**The DNS Reflection attack sends DNS requests with the victim’s IP address spoofed as the**
source, causing responses that are much larger than the requests to be directed at the victim.
**HTTP Flood sends a huge number of requests to a web server and overwhelms it to the point**
where it cannot respond to legitimate requests.
**ICMP Flood is another type of attack that exhausts the resources of the victim by sending a very**
large number of ICMP pings (echo request), which keeps the server busy in sending responses
(echo replies).
9
-----
**2.6 Introduction to Machine Learning**
Machine learning is an Artificial Intelligence application which provides the computer program
the ability to learn from input data [8]. It allows us to use historical data as the input for the
prediction of future data. Thus, the accuracy of the output is solely based on the quality of the
historical data.
Nowadays, machine learning techniques are used in various fields to solve different problems. For
example, they are used in Email spam filtering, pattern and image recognition, search engines
filtering, healthcare applications, etc.
**2.6.1 Types of Machine learning algorithms**
Machine learning algorithms can be broadly classified as Supervised learning algorithms and
Unsupervised learning algorithms.
**Supervised Learning algorithms**
A supervised learning algorithm is mainly used to solve classification and regression problems, as
it makes the detection or decision-making process easier. It uses previously learned data to predict
future events. The input data used to train the learning algorithm are labeled; that is, each input
sample carries one of a set of labels, e.g. in our thesis “attack” and “no attack” are the labels used to
classify the traffic data. After appropriate training, the system can classify unknown data. This
research uses supervised learning algorithms.
**Unsupervised learning algorithms**
An unsupervised learning algorithm uses unlabeled input data to train the system. That is, the input
data are not tagged with labels. It finds hidden structure in the unlabeled input and groups similar
samples into clusters.
is poor, but the system can tune itself to improve the performance.
**2.7 Supervised Machine learning**
This is the most commonly used technique in machine learning. Our research implements a
supervised machine learning algorithm to classify network traffic as legitimate or malicious. Here
the classifier receives as input a set of feature values, also called an input
10
-----
vector and outputs the predicted value called class. Fig-2.6 shows the supervised learning
classifier.
Fig 2.6: Supervised Learning classifier
Here the training data are given as an input to the learning algorithm which results in a classifier
model. The performance of the classifier can be evaluated using unseen data.
**2.7.1 Random Forest algorithm**
Random forest is one of the most powerful algorithms used for predictive modeling [8]. The
underlying principle is the construction of multiple decision trees by randomizing the combination
of variables. That is, multiple decision trees are constructed from the given data set and the results
are combined to make predictions. To construct multiple decision trees, the data set is divided
repeatedly into subsets by changing the combination of variables. The challenge here is to find
the best combination of variables which gives the highest accuracy in prediction.
The accuracy of the Random Forest algorithm can be tuned by increasing the number of trees
generated. Each individual decision tree generated makes its own prediction. Some may be right
and some wrong. The individual trees that produced correct predictions reinforce each other, while
wrong predictions get canceled. For this to happen, the individual trees generated must be
11
-----
uncorrelated. Here comes the Bagging technique which helps in generating the decision trees with
minimal correlation.
Random Forest is an ensemble classifier that implements Bootstrap aggregation (Bagging) to
improve accuracy. Normally, a learning algorithm may choose the split point from all the available
features. The Random Forest algorithm, however, restricts each split to a randomly chosen subset
of the features while constructing the individual trees. Here every node is a condition on a single
feature that splits the dataset. The number of randomly selected features x is calculated by the
following formula:
x = √p    (2.1)

where p is the total number of input features.

For example, if a dataset has 16 input variables for a classification problem, then the number of
randomly selected features x is given by

x = √16 = 4
Thus, the individual decision trees are constructed based on the randomly selected 4 features.
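The per-node feature-subset selection of Equation (2.1) can be sketched in a few lines of Python. This is a toy illustration, not the thesis implementation; the feature names are invented:

```python
import math
import random

# Per-node random feature selection (Eq. 2.1): each split considers only
# x = sqrt(p) of the p available features.
def random_feature_subset(features, rng=random):
    x = int(math.sqrt(len(features)))
    return rng.sample(features, x)

features = ["f%d" % i for i in range(16)]   # p = 16 input variables
subset = random_feature_subset(features)
print(len(subset))                          # 4 features, as in the worked example
```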
Fig-2.7 shows the process involved in the RF algorithm to create decision trees and to derive the
predictions from them. Here the training dataset D has variables d1, d2, ..., dN. Using the bootstrap
process, it creates m decision trees. The results of the individual decision trees are then combined
(averaged) to produce the final prediction.
12
-----
Fig 2.7: Random Forest Algorithm [8]
The process of creating a single decision tree can be explained with an example [39]. Fig-2.8 shows
a dataset of people who buy a computer.
Fig 2.8: Example Dataset of people who buys computer [39]
In Fig-2.9 the dataset is divided into 3 subsets based on the attribute “age”.
13
-----
Fig 2.9: Dataset divided into 3 Subsets [39]
The attribute age has three different values: <=30, 31...40, >40. From the table, we can learn that
all students with age <=30 buy a computer, everyone with age 31...40 buys a computer, and people
with age >40 and a fair credit rating buy a computer. Based on these analyses the decision
tree is generated.
Fig 2.10: Single Decision Tree [39]
14
-----
**Extracting Classification Rules**
Each attribute-value pair along a path from the root to leaf forms a rule. The leaf node holds the
class prediction. For e.g.
If age = “<=30” and student = “no” then buys_computer = “no”
If age = “<=30” and student = “yes” then buys_computer = “yes”
Consider a new data point: (<=30, yes, excellent, ?). To find its class value, the tree is
traversed, and the class value is computed as “yes”.
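The extracted rules translate directly into code. A minimal sketch of the example tree as a rule-lookup function, using the attribute values of Fig 2.8 (this covers the example tree only, not a general rule extractor):

```python
# The classification rules extracted from the example tree (Fig 2.10),
# written as executable code.
def buys_computer(age, student, credit_rating):
    if age == "<=30":
        return "yes" if student == "yes" else "no"
    if age == "31...40":
        return "yes"                 # everyone in this age band buys
    # age > 40: the decision depends on the credit rating
    return "yes" if credit_rating == "fair" else "no"

# The new data point (<=30, yes, excellent, ?) from the text:
print(buys_computer("<=30", "yes", "excellent"))   # -> yes
```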
**2.7.2 Feature Selection**
Though there may be numerous features available in the given dataset, only features relevant to
the problem to be solved are selected. This is called feature selection. To select the relevant
features, we need to find the importance of each feature in predicting the results. This is done by
calculating how much the impurity drops when a feature is used at each split point. Features that
produce a large drop in impurity are considered more important for the classification problem. The
impurity is measured by the Gini score, which can be calculated by the following formula,
G(D) = 1 − Σ_{j=1}^{n} pj²    (2.2)

where D is a dataset with n classes and pj is the relative frequency of class j in D.
The average Gini-based importance of a feature across all the decision trees gives the overall
importance of that feature. Based on these importances, we can select the most relevant features
from the given dataset, which yields accurate prediction results.
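Equation (2.2) is straightforward to compute. A minimal sketch of the Gini impurity for a list of class labels (the label names here are illustrative):

```python
from collections import Counter

def gini(labels):
    """Gini impurity G(D) = 1 - sum over classes of pj^2  (Eq. 2.2)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

print(gini(["attack"] * 4))                            # pure node -> 0.0
print(gini(["attack", "attack", "normal", "normal"]))  # 50/50 split -> 0.5
```

A split that separates the classes well produces child nodes with low Gini impurity, which is exactly the impurity drop used above to rank feature importance.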
**2.7.3 Classifier Accuracy Estimation**
Estimation of the predictive accuracy of a classifier is done to know how good the prediction will
be. Common methods used for accuracy estimation are [40]: Validation set approach and k-fold
cross-validation.
In the validation set approach, the dataset is divided into a training dataset (50%) and a testing
dataset (50%). Once the classifier model is built using the training dataset, its accuracy is estimated
using the unseen testing dataset.
15
-----
K-fold cross-validation [40], in contrast, is performed by dividing the entire dataset into k folds.
For each fold, the model is built using the other k−1 folds of the dataset and then tested on the
held-out fold. This procedure is repeated until every fold has served as test data. Fig-2.11
shows k-fold cross-validation for k = 10.
Fig 2.11: 10-fold cross-validation [46]
The value of k should be chosen carefully: a low value of k produces a more biased accuracy
estimate, while a high value of k requires more computation time. In our thesis, we use the k-fold
cross-validation method with k = 10.
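The fold-splitting procedure can be sketched in plain Python. This handles only the index bookkeeping; actually training and testing a model on each split is out of scope here:

```python
def kfold_indices(n_samples, k):
    """Yield (train, test) index lists for k-fold cross-validation.
    Each sample appears in exactly one test fold."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples) if i < start or i >= start + size]
        yield train, test
        start += size

folds = list(kfold_indices(100, 10))
print(len(folds))          # 10 folds
print(len(folds[0][1]))    # each held-out fold has 10 samples
```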
**2.7.4 Advantages of Random Forest**
- The Random Forest algorithm can handle large datasets.
- The accuracy is high compared to other machine learning algorithms.
- The implementation is straightforward, and training and prediction are relatively fast.
- It overcomes the problem of overfitting (model error due to noise in the training data) if
many trees are grown.
**2.7.5 Disadvantages of Random Forest**
- Utilizes more memory for building a forest.
- Random forest models are black boxes which are hard to interpret.
- A random forest can overfit when too few trees are generated.
16
-----
**2.8 Related Work in SDN based DDoS attack detection**
We discuss the related work based on two groups of detection methods. One group uses statistical
approach while the other group uses machine learning approach.
**2.8.1 DDoS attack Detection using Statistical Approach**
Researchers can find many studies on DDoS attack detection methodologies. The method proposed
by Seyed et al. [6] is based on the entropy comparison of consecutive packet samples to identify
changes in their randomness. A window of 50 packets is collected, and the entropy is calculated
from their destination IP addresses. If the entropy is less than the threshold, an attack is reported.
Surender Singh et al. [9] proposed a distributed framework, which analyzes the behavior of the
packet flows. The proposed method uses entropy and a traceback algorithm to distinguish the
malicious flows from the legitimate flow.
Jisa David et al. [10] proposed a DDoS attack detection system which is based on fast entropy
using flow-based analysis. Their proposed method shows better detection accuracy. They analyze
network traffic and compute the fast entropy of requests per flow.
SPHINX [11] is a framework proposed to detect attacks in SDN in real-time with low performance
overheads. It can detect both known and potentially unknown attacks on network topology. It is
mainly based on abstracting the real network into flow graphs, which it uses to detect security
threats in the network topology.
Lei et al. [12] proposed a system called FloodGuard. It concentrates on an SDN-specific attack
called the data-to-control-plane saturation attack. It implements two modules: a proactive flow rule
analyzer, which preserves network policy enforcement, and packet migration, which protects the
controller from being overloaded.
Qin et al. [13] proposed a method for intrusion detection with a time window of 0.1 seconds and
three levels of threshold. This method tries to reduce false positive and false negative values. It is
found that the time and resource consumption of the method is high.
Our proposed method extends the recent work done by Kia [1] on Early Detection and Mitigation of
DDoS Attacks in Software Defined Networks, which is based on an Entropy variation of the
destination IP address, Flow Initiation Rate and Study of Flow Specification. The proposed method
17
-----
is a lightweight DDoS attack detection scheme that works at an early stage of the attack. In our
modified method, we compute adaptive thresholds from the moving mean and standard deviation
of the traffic parameters, and we introduce an improved mitigation module.
**2.8.2 DDoS attack Detection using Machine Learning Approach**
DDoS Attack Detection and Prevention based on an Ensemble Classifier (RF), proposed by Alpna et
al. [14], uses a combination of classifiers to improve the performance of the model. Experiments
were conducted on the UCLA dataset, and the results show high accuracy with minimal error.
In [15], [16], the authors proposed network IDSs using the Random Forest algorithm. They
classified the DDoS attack as one type of network intrusion attack, but did not consider the enormous
volume of attack packets that a DDoS detection system has to handle in comparison with intrusion
attacks. Their methods are therefore only suitable for fighting intrusion attacks. Moreover, their
approaches cannot be used to mitigate the attacks. In our thesis, by contrast, we classify the attack
based on features like payload size, packet count per flow and flow duration, which are responsible
for the breakdown of the server. Our approach can also successfully mitigate the attacks.
Keisuke Kato et al. [17] proposed an intelligent DDoS attack detection system using packet
analysis and a Support Vector Machine. The detection system used an SVM with a Radial Basis
Function (RBF) neural network. Experiments were done using the CAIDA DDoS Attack 2007 dataset.
Sivatha Sindhu et al. [18] proposed a neural decision tree for feature selection and classification.
The proposed method uses six decision tree classifiers namely Decision Stump, C4.5, Naive
Baye’s Tree, Random Forest, Random Tree and Representative Tree model to detect the
anomalous network pattern. They used sensitivity and specificity for the performance evaluation.
Saurav Nanda et al. [19] studied the attack patterns in the network using ML approach. The
methodology uses four different ML algorithms: C4.5, Naive Bayes, Bayes Net and Decision Table.
The prediction accuracy of the algorithms was compared, and they concluded that the Bayesian
network has the highest prediction rate.
Zhong et al. [20] proposed a DDoS attack detection method using a fuzzy c-means (FCM)
clustering algorithm. To extract the features in network traffic they used Apriori association
algorithm.
18
-----
MD Al Mehedi Hasan et al. [21] developed two IDS models, one using SVM and one using RF,
and compared the performance of the two models based on their detection rate, precision and false
negative rate.
The practical drawback of the above-mentioned approaches is that the authors implemented
the RF algorithm with its default prediction method, majority voting. Majority voting treats all
trees equally and can therefore be misled by weak trees. This drawback is eliminated in our
approach by replacing majority voting with weighted voting, which gives more accurate results. We
have discussed the implementation of weighted voting in detail in chapter 3.
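The difference between the two voting schemes can be sketched as follows. The weights here are illustrative per-tree accuracies chosen for the example, not the exact weighting scheme of chapter 3:

```python
from collections import defaultdict

def majority_vote(predictions):
    """Standard RF prediction: every tree casts one equal vote."""
    tally = defaultdict(int)
    for p in predictions:
        tally[p] += 1
    return max(tally, key=tally.get)

def weighted_vote(predictions, weights):
    """Weighted voting: a tree's vote counts in proportion to its weight
    (here taken as per-tree accuracy -- an illustrative choice)."""
    tally = defaultdict(float)
    for p, w in zip(predictions, weights):
        tally[p] += w
    return max(tally, key=tally.get)

preds = ["attack", "normal", "normal"]
weights = [0.95, 0.40, 0.45]     # one strong tree against two weak ones

print(majority_vote(preds))           # the two weak trees win: normal
print(weighted_vote(preds, weights))  # the strong tree prevails: attack
```

The example shows why weighted voting can help: a single accurate tree (weight 0.95) outweighs two weak trees (0.40 + 0.45 = 0.85), whereas under majority voting the weak trees would win two votes to one.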
In this thesis, both a statistical and a supervised machine learning based DDoS attack detection
method are proposed for the SDN environment.
**2.9 Description of the Original Approach**
The proposed statistical approach is based on the work done by Kia [1]; here we call her work
the original approach, and in this section we briefly describe it. The approach is designed
around three main concepts: entropy variation of the destination IP address, flow initiation
rate, and the study of flow specifications.
**2.9.1 Stage I: Detection based on Entropy Variation**
Entropy is a measure of uncertainty or randomness associated with a random variable [1]. This
research implements the computation of entropy to detect DDoS attacks in the first stage with less
computation time. Here, the entropy is computed as a measure of traffic randomness with respect
to the destinations in the network. The entropy drops considerably during a single-victim
attack, as all the packets are destined to the same destination address, whereas in the
no-attack scenario the entropy tends to be larger because the traffic is normally spread over
many destinations.
Entropy computation is done by collecting the incoming packets in a packet window of fixed size
n; that is, each window holds n packets. For each window, the incoming traffic is analyzed and
classified according to the frequencies of occurrence of the destination IP addresses.
The frequency of occurrence, F_i, of destination IP address IP_i is calculated by

F_i = n_i / n (2.3)
where n_i is the number of packets with destination address IP_i.
The entropy is then calculated as

H = − ∑_{i=1}^{n} F_i log₂ F_i (2.4)

Since 0 ≤ F_i ≤ 1, we have H ≥ 0. The maximum entropy occurs when every packet in the window is
destined to a distinct host, and the minimum entropy (H = 0) occurs when all the packets in a
window are destined for a single host. A small entropy is therefore a good indication of a DDoS
attack on a single victim. The detection stage derives an entropy threshold, Eth, based on the
average entropy of the normal traffic. If H < Eth, an attack is suspected, and a warning is issued.
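A minimal Python sketch of the per-window entropy computation in equations (2.3) and (2.4); the window contents and IP addresses are illustrative:

```python
import math
from collections import Counter

def window_entropy(dest_ips):
    """Entropy of the destination IPs in one packet window (eqs. 2.3-2.4)."""
    n = len(dest_ips)
    counts = Counter(dest_ips)            # n_i for each destination IP_i
    entropy = 0.0
    for ni in counts.values():
        fi = ni / n                       # F_i = n_i / n
        entropy -= fi * math.log2(fi)
    return entropy

# All packets to one victim -> entropy 0; spread-out traffic -> larger entropy.
attack_window = ["10.0.0.5"] * 50
normal_window = ["10.0.0.%d" % (i % 10) for i in range(50)]
print(window_entropy(attack_window))      # 0.0
print(window_entropy(normal_window))      # ~3.32 (ten equally likely hosts)
```

A window whose entropy falls below Eth would then trigger the stage I warning.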
**2.9.2 Detection based on the number of Flows**
Entropy-based DDoS attack detection, described in the previous section, is best suited for
detecting single-victim attacks. For multiple-victim attacks we cannot rely on the entropy
alone, as the attack is targeted at multiple destinations. Hence, along with the entropy
variation, the system incorporates another detection method based on the flow rate.
Many DDoS attacks send a large number of packets with spoofed source IP addresses to the
switch. The switch, in turn, sends many packet-in messages to the controller to set up flows.
This increases the CPU usage of the controller and depletes switch memory and network
bandwidth; the consequences can bring down the controller and/or crash the switch.
To detect such an attack, the new-flow rate is computed in each window by

flow_rate = n / t (2.5)

where t is the time taken to collect the n packet-in messages and n is the window size. The
calculated flow rate is compared against a threshold derived from the average flow rate of the
normal traffic. If the current flow rate exceeds the threshold, an attack is suspected and the
algorithm enters stage II to confirm the attack. If the flow rate is below the threshold, the
network is considered attack free.
**2.9.3 Stage II: Detection based on Analysis of Flow statistics:**
In stage II, the system analyzes the following flow characteristics: the number of packets per
flow (Pf), the number of received bytes (Bf) and the flow duration (Df). The controller collects
these flow statistics from the switches every 10 seconds.
Pf, Bf and Df are checked against the packet count threshold Pth, the payload size threshold Bth
and the flow duration threshold Dth, respectively. These threshold values are derived from the
averages of Pf, Bf and Df.
The packet count, byte count and duration of each flow are obtained from the default counters of
the switch's flow table and checked against the following conditions:
1. Is the packet count of the flow less than the threshold Pth? (Pf < Pth)
2. Is the payload size less than the threshold Bth? (Bf < Bth)
3. Is the flow duration less than the threshold Dth? (Df < Dth)
If any two of the conditions are true, a counter (fcount) is increased by one. After all the
flows have been examined, the attack rate is calculated by dividing the fcount value by the
number of flows. If the calculated attack rate exceeds its threshold value, an alarm is raised
confirming the attack.
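The stage II check described above can be sketched as follows; the flow records and threshold values are illustrative, not results from the thesis:

```python
def stage_two_attack_rate(flows, p_th, b_th, d_th):
    """Fraction of flows matching at least two 'short flow' conditions.

    Each flow is a (packet_count, byte_count, duration) tuple read from the
    switch flow-table counters; p_th, b_th, d_th are the Pth/Bth/Dth thresholds.
    """
    fcount = 0
    for pf, bf, df in flows:
        conditions = [pf < p_th, bf < b_th, df < d_th]
        if sum(conditions) >= 2:          # any two conditions are true
            fcount += 1
    return fcount / len(flows)

# Two single-packet, zero-payload flows and one long normal flow:
flows = [(1, 0, 0.2), (1, 0, 0.1), (40, 2400, 9.0)]
rate = stage_two_attack_rate(flows, p_th=5, b_th=100, d_th=1.0)  # 2/3
# An alarm would be raised if this rate exceeded its threshold.
```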
**2.9.4 Mitigation Module**
The next goal is to protect the switches and the controller under attack. Usually, the
controller will not crash easily, as it is designed with high capacity, but the switches are not
very robust against attacks due to their limited resources. During an attack, the flow tables of
the switches fill up with a large number of short flows, which eventually breaks the switch.
Once the attack is detected, to prevent the breakdown of the switch, the default value of the
flow idle_timer is changed to a mitigating value. The mitigated value, which is smaller than the
default, makes the short flows time out quickly so that they are deleted from the switch flow tables.
**Chapter 3**
**3 Proposed Methods**
The focus of our research is on the security challenges of DDoS attack detection in the SDN
environment. Two DDoS attack detection methods are proposed: a statistical approach and a
machine learning approach.
**3.1 Proposed Method - Statistical Approach**
Our proposed method is based on the original approach from Kia [1]; here, we call it the
modified approach. Our modified approach differs from [1] in the computation of the adaptive
threshold values and in the implementation of the mitigation module.
**3.1.1 Computation of Threshold values**
In [1], the thresholds are derived from the average traffic parameters only, whereas in our
thesis the proposed algorithm uses an exponential moving mean and standard deviation to
calculate dynamic threshold values. The use of the mean and standard deviation provides more
accurate measurements of the traffic parameters and thus leads to more appropriate thresholds.
Our proposed statistical detection method requires four threshold values to detect the attack:
the entropy threshold Eth, the flow rate threshold Fth, the packet count threshold Pth and the
payload threshold Bth. The threshold values are computed from normal traffic; that is, the
detection system analyzes the normal behaviour of the network without any attack.
The threshold values used in our thesis are dynamic and are updated based on the current traffic
load. Let x_n be the traffic parameter measured in the n-th window; then the moving mean, x̄_n,
and standard deviation, σ_n, calculated at the end of the n-th window are given by equations 3.1
and 3.2, respectively:

x̄_n = α · x_n + (1 − α) · x̄_{n−1} (3.1)

σ_n = √( α · (x_n − x̄_{n−1})² + (1 − α) · σ_{n−1}² ) (3.2)

where α is a constant whose value is between 0 and 1.
The adaptive threshold, th_n, calculated in the n-th window is given by equation 3.3:

th_n = x̄_n + k · σ_n (3.3)

where k is a constant.
In our thesis, k is set to 1 to derive the threshold values used in the first stage of attack
detection and to 2 to derive the threshold values used in the second stage. By choosing k = 1 in
the first stage, the system detects the attack earlier; the false positives caused by the small
k can then be eliminated by the second stage of detection, where k is set to 2 to prevent the
final false positive rate from growing too large.
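The update rules of equations 3.1-3.3 can be sketched as below; the α value and the seeding of the initial mean are illustrative assumptions (the thesis only requires 0 < α < 1):

```python
import math

class AdaptiveThreshold:
    """Exponential moving mean/std threshold (equations 3.1-3.3)."""

    def __init__(self, alpha, k, initial_mean=0.0):
        self.alpha = alpha                 # smoothing constant, 0 < alpha < 1
        self.k = k                         # k = 1 (stage I) or k = 2 (stage II)
        self.mean = initial_mean           # seeded from normal traffic
        self.var = 0.0

    def update(self, x):
        a = self.alpha
        # Eq. 3.2 uses the previous mean, so update the variance first.
        self.var = a * (x - self.mean) ** 2 + (1 - a) * self.var
        self.mean = a * x + (1 - a) * self.mean          # eq. 3.1
        return self.mean + self.k * math.sqrt(self.var)  # eq. 3.3

th = AdaptiveThreshold(alpha=0.3, k=1, initial_mean=10.0)
for flow_rate in [11.0, 9.0, 10.0, 12.0]:
    threshold = th.update(flow_rate)       # threshold tracks the traffic load
```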
**3.1.2 Mitigation Module**
In the previous section, we described the statistical approach to DDoS attack detection. The
next goal of our research is to protect the switches and the controller under attack. In the
method proposed in [1], the mitigation module replaces the default value of the flow idle timer
with a small mitigated value, which causes the malicious flows to time out quickly and be
deleted from the switch flow table. This mechanism works when the rate of attack is small, but
under a large attack the switch breaks down and loses communication with the controller; hence,
changing the flow idle timer alone is not beneficial.
Various authors have implemented different mitigation methods. Brainard et al. [43] mention
random early dropping as one of the earliest solutions for handling attacks, but this method has
the highest risk of dropping legitimate traffic. Buragohain et al. [44] introduce a mitigation
method based on how many times a suspicious source address attempts to attack: if the source
address attempts to send more flows than a random legitimate counter value, it is blocked. Xu et
al. [45] propose IP traceback to filter packets during the attack.
In our proposed method, the defense system of the controller is activated once it receives the attack
confirmation from the detection system. The mitigation process can be explained as follows.
- In every 3-second window, the number of new flows coming into the controller is counted
and checked against the flow rate threshold Fth, where the flow rate is computed using equation 2.5.
- If the count of new flows exceeds the threshold, the packets are dropped until the end of
the time window.
- The count will be reset at the beginning of the new window.
- The controller pushes new flow rules into the flow table of the switch to block similar
malicious flows.
By limiting the number of flows per 3-sec window, the system can defend a large DDoS attack as
shown by the results in Chapter 4.
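The windowed rate limiting above can be sketched as a simplified stand-in for the POX mitigation logic; the threshold value and the clock handling are assumptions:

```python
import time

class FlowRateLimiter:
    """Drop new flows beyond the threshold within each 3-second window."""

    def __init__(self, flow_threshold, window=3.0, clock=time.monotonic):
        self.limit = flow_threshold        # Fth from the detection stage
        self.window = window
        self.clock = clock
        self.window_start = clock()
        self.count = 0

    def allow(self):
        now = self.clock()
        if now - self.window_start >= self.window:
            self.window_start = now        # new window: reset the count
            self.count = 0
        self.count += 1
        return self.count <= self.limit    # False -> drop until window ends

limiter = FlowRateLimiter(flow_threshold=2)
decisions = [limiter.allow() for _ in range(4)]  # [True, True, False, False]
```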
**3.2 Machine Learning Approach**
There are many machine learning techniques available for DDoS detection. In our research, we
have used a classification technique based on the Random Forest (RF) classifier.
**3.2.1 Building the Machine Learning Model**
Nowadays there are many software tools which can be used to identify DDoS attacks, but malicious
traffic can take many forms and attackers change their attack patterns regularly. There is
therefore a need to learn from experience, which can be achieved with machine learning: a
machine learning algorithm can analyze traffic to recognize an attack pattern.
The basic steps of Machine Learning approach can be summarized as follows.
1. Collect the raw data. (UCLA Dataset)
2. Preprocess the dataset to fill in missing values, extract features, etc.
3. Identify the most important features.
4. Create a sub-dataset with the most relevant features.
5. Train the Random Forest classifier.
6. Calculate the accuracy of the model.
7. Test with unseen data.
8. Evaluate the results.
The above process can be represented by Fig-3.1.
Fig 3.1: Machine Learning Methodology
**3.2.2 Random Forest classifier**
The random forest (RF) classifier is constructed by combining unpruned, randomized decision
trees; it uses the power of many decision trees to generate predictive models. In this phase, we
built the random forest classifier using the Python machine learning library scikit-learn and
the data analysis library pandas.
**How Random Forest Works**
**Random Forest Creation Algorithm**
1. Let the number of training cases in the dataset be N.
2. Randomly select m features from the total of M features, such that m is much less than M.
3. Among the m features, determine the node d using the best split point.
4. Split node d into daughter nodes.
5. Repeat steps 2 to 4 until a leaf node is reached.
6. Build the forest by repeating steps 2 to 5 i times, where i is the number of trees to be
created.
**Random Forest Prediction Algorithm**
After the forest has been created from the training dataset, we can perform predictions for
unknown test data using majority voting:
1. Select the test features and use the rules of each randomly created decision tree to predict
the result. Save the result as a target.
2. Calculate the votes for each predicted target.
3. The target with the highest number of votes is declared the final prediction.
Internally, random forest creates many independent decision trees and sets the rules for each
decision tree based on the values of the input variables. There is no need to set the
classification rules manually, so the dataset plays an important role here: to get accurate
results, the dataset should be error free.
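To illustrate why the voting rule matters, the sketch below contrasts plain majority voting with a weighted vote. The per-tree weights are hypothetical (e.g. each tree's validation accuracy); the exact weighting used in the thesis may differ:

```python
from collections import defaultdict

def majority_vote(predictions):
    """Default RF prediction: every tree gets one equal vote."""
    votes = defaultdict(int)
    for label in predictions:
        votes[label] += 1
    return max(votes, key=votes.get)

def weighted_vote(predictions, weights):
    """Weight each tree's vote, e.g. by its validation accuracy (assumed)."""
    votes = defaultdict(float)
    for label, w in zip(predictions, weights):
        votes[label] += w
    return max(votes, key=votes.get)

tree_preds = ["attack", "normal", "normal"]
tree_weights = [0.95, 0.40, 0.45]          # hypothetical per-tree accuracies
print(majority_vote(tree_preds))                # normal (2 of 3 trees)
print(weighted_vote(tree_preds, tree_weights))  # attack (0.95 > 0.85)
```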
**3.2.3 UCLA Dataset**
Building the training data is the most important step in the implementation of the machine
learning approach, and an adequate dataset must be selected to train the model. In this
research, we used the UCLA dataset [22] to build the training data. The UCLA dataset contains
real UDP flood attack traces. As our thesis is based on the SDN architecture, we chose to modify
the UCLA dataset by adding traffic flow entries of the simulated traffic in addition to the real
traces. The dataset is downloaded, preprocessed for any missing values, and then converted to a
comma-separated (.csv) file which can be read by the machine learning module developed in
Python. The features in the UCLA dataset include Packet_TIME, IP_from, IP_to, PORT_from,
PORT_to and LENGTH.
|Feature|Description|
|---|---|
|Packet_TIME|Time when the packet was sent|
|IP_from|Number masking the IP address of the packet source|
|IP_to|Number masking the IP address of the packet destination|
|PORT_from|Original source port|
|PORT_to|Original destination port|
|U|UDP packet|
|LENGTH|Length of packet (without header) in bytes|
Table 3.1: Features in UCLA Dataset
The attack traces in the UCLA dataset were generated by the Tribe Flood Network attack tool. The
features used in our training data were altered to match our simulated network setup. Fig-3.2
shows part of the training dataset used in our research.
Fig 3.2: Sample UCLA Dataset
**3.2.4 Training Phase**
With a training dataset and features selected, we can now train the machine learning models. The
training phase is shown in Fig-3.3.
The first step is to obtain the input dataset (UCLA) and process it: the features or columns
that have zero values need to be edited or removed depending on the importance of the data, and
values containing characters must be converted to numeric form so the data can be processed by
the algorithms. The next step is to select the features which are relevant to attack detection.
We then train the models using the random forest algorithm from the Python scikit-learn library,
and finally we save the trained model, record the results of cross-validation and use them for
future predictions.
Fig 3.3: Flowchart for Training Phase
**3.2.5 Testing Phase**
Fig-3.4 shows the testing phase of the machine learning model in our proposed system. Input is
obtained from the controller every 10 seconds and passed to the random forest classifier model
built in the training phase. If an attack is detected, the alarm is raised and the attack is
recorded in a log file; otherwise, the system continues collecting input.
Fig 3.4: Flowchart for Testing Phase
**3.2.6 Preparation of Training Data**
The first step in our proposed detection method is the preparation of the training data. The
dataset downloaded from the UCLA website is converted to a .csv file, the duplicate values are
removed, and the missing parameters are added. After the dataset has been processed manually, it
is loaded by the detection script using the read_csv("filename") function, and the .csv file is
converted to a pandas DataFrame to perform the classification process.
The DataFrame consists of two types of data: feature data, labeled X, and target data, labeled
Y. Every row of X is a datapoint (i.e. a network traffic flow) and every column of X is a
feature (e.g. byte length, packet count). For a classification problem, Y contains the class
value (attack or no attack) of every datapoint.
The Y column of the DataFrame is removed and saved as a separate numpy array (a Python data
structure) labeled Result. The remaining DataFrame then holds only the feature data X, which is
saved as a numpy array (matrix X) using the pandas function as_matrix.
The class values in the Result array (attack or no attack) are replaced with the binary values 1
and 0, where 1 represents attack and 0 represents no attack.
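These steps can be sketched with pandas as below. The column names and rows are a hypothetical miniature of the preprocessed .csv, and to_numpy() is used in place of the older as_matrix():

```python
from io import StringIO

import pandas as pd

# Hypothetical miniature of the preprocessed training .csv.
csv_data = StringIO(
    "packet_count,byte_length,flow_duration,Result\n"
    "1,0,0.05,attack\n"
    "6,360,4.20,no attack\n"
)
df = pd.read_csv(csv_data)

# Remove the target column Y and map its labels to 1 (attack) / 0 (no attack).
result = df.pop("Result").map({"attack": 1, "no attack": 0}).to_numpy()

# The remaining columns form the feature matrix X.
X = df.to_numpy()
```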
**3.2.7 Training the classifier**
Once the training data is ready, we build an RF classifier, apply it to the training data and
use 10-fold cross-validation to compute its accuracy. The RF classifier in our proposed method
is built using the scikit-learn Python library. The next step is to generate the decision trees.
The number of decision trees can be specified and affects the accuracy: as the number of trees
increases, the accuracy also increases. In our thesis, we generated 100 decision trees to
provide good detection accuracy without introducing significant processing overhead.
By default, random forest decision trees are generated from random samples of the training data,
and splitting on a feature in a decision tree considers only a random subset of variables. This
randomness may affect the prediction accuracy.
In our research, we have made the following changes to improve the prediction accuracy. First,
we implemented the bootstrap technique: instead of selecting a random number of features, we
select m features based on equation 2.1. Secondly, to compute the best split, we calculated
variable importance based on the Gini index.
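For reference, the Gini index of a candidate split can be computed as follows (a generic sketch, not the Appendix code from the thesis):

```python
def gini(labels):
    """Gini index of a set of class labels: 1 - sum over classes of p_c^2."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def split_gini(left, right):
    """Weighted Gini of a candidate split; the best split minimizes this."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

print(split_gini([1, 1], [0, 0]))   # 0.0  (pure split)
print(split_gini([1, 0], [1, 0]))   # 0.5  (completely mixed split)
```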
Finally, the RF classifier is applied to the training data and the accuracy is calculated using
10-fold cross-validation. Our goal is to classify the incoming network traffic as attack or
no-attack traffic.
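A minimal scikit-learn sketch of this training step; the data below is a synthetic stand-in for the UCLA features (packet count, byte length, flow duration), not the real dataset:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic flows: attacks are single-packet, zero-payload, short-lived.
attack = rng.normal([1, 0, 0.05], [0.5, 5, 0.02], size=(100, 3))
normal = rng.normal([6, 360, 4.0], [2.0, 60, 1.0], size=(100, 3))
X = np.vstack([attack, normal])
y = np.array([1] * 100 + [0] * 100)   # 1 = attack, 0 = no attack

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
print(scores.mean())                          # near 1.0 on this easy data
```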
**3.2.8 Implementation of RF Classifier in the Controller**
Every 10 seconds, the POX controller sends the flow statistics collected from the switches to
the RF classifier. The model accepts these as unseen inputs and tries to predict whether there
is an attack. In the random forest method, the prediction is done by calling the predict_proba
method, which returns the prediction probabilities. For example, for a given point X there may
be a 60% probability that it belongs to class 1 and a 40% probability that it belongs to class
0. The classifier's probabilities are then converted to predictions.
However, when the classes in the training data are unbalanced (e.g. when the number of attack
instances is much larger than the number of no-attack instances), the predictions calculated by
the classifier become inaccurate. This happens because the RF classifier learns the pattern of
the training data to predict the unseen output; when the training data itself is unbalanced, the
results turn out to be inaccurate. The default behavior of random forest can be changed by
choosing an appropriate threshold value, and an analysis of the precision rate can help in
choosing the appropriate threshold probability.
In our thesis, the output probability threshold is tuned to achieve a higher precision rate.
More specifically, we changed this default threshold α to 0.25 based on an analysis of the
precision rate on the training set. The precision rate is calculated from the True Positives
(TP) and False Positives (FP):

Precision = TP / (TP + FP) (3.4)

TP and FP are defined in the subsequent chapter. The implementation of weighted voting instead
of majority voting further improves the precision rate.
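Converting the predict_proba output to class predictions with a tuned cut-off can be sketched as follows (the probability rows are illustrative):

```python
def classify(prob_attack, threshold=0.25):
    """Flag a flow as attack when P(attack) reaches the tuned threshold."""
    return 1 if prob_attack >= threshold else 0

# predict_proba returns [P(class 0), P(class 1)] per flow; keep column 1.
probabilities = [[0.90, 0.10], [0.60, 0.40], [0.20, 0.80]]
predictions = [classify(p[1]) for p in probabilities]   # [0, 1, 1]
# With the default 0.5 cut-off only the last flow would be flagged;
# at 0.25 the second flow is flagged as well.
```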
**Chapter 4**
**4 Performances and Analyses**
In this chapter, we implement and evaluate the statistical and machine learning DDoS detection
approaches presented in Chapter 3. The results of the detection system based on the statistical
approach are compared with the results of the original method [1]. Similarly, the results of the
machine learning DDoS detection approach are compared with the results of an existing machine
learning approach from the literature.
To test the performance of our proposed methods, a virtual network is simulated using the
following technologies:
**4.1 Mininet**
Mininet is a well-known network emulator for SDN research problems. It uses process-based
virtualization to run many hosts and switches on a single OS kernel [23]. Mininet's virtual
hosts, switches, controllers and links can be used to create any type of network topology. Its
hosts run Linux network software, and its switches support OpenFlow, which helps in developing
OpenFlow-based applications for the SDN environment. It also provides an extensible Python API
for creating networks.
**4.2 POX Controller**
The POX controller comes pre-installed with the Mininet virtual machine. Using the POX
controller, simple OpenFlow devices can be turned into hubs, switches, load balancers or
firewalls. POX provides an easy way to implement OpenFlow/SDN experiments. Different parameters
can be passed to POX according to real or experimental topologies, allowing experiments to run
on real hardware, on testbeds or in the Mininet emulator.
**4.3 Traffic generator**
Our thesis is based on the UDP flood attack, and we use the traffic generator Scapy [24] to
generate UDP packets and spoof their source IP addresses. Scapy is a powerful interactive packet
manipulation program written in Python. It can forge packets of different protocols and can
perform tasks such as scanning, tracerouting, probing, unit tests, attacks and network
discovery. It can be used as a replacement for hping, arpspoof, arping, tcpdump, etc.
The Appendix presents the code for generating both the normal and the attack traffic. Fig-4.1
shows the traffic generated using the Scapy script.
Fig 4.1: Traffic Generation using Scapy
The traffic patterns used in the project are given below:

**Normal Traffic Pattern**

|Parameter|Value|
|---|---|
|Packet Type|UDP|
|Packet Payload|60 bytes|
|No. of Packets Sent per Flow|Random between 1 and 8|
|Packet Inter-Arrival Interval|0.1 sec|
|Traffic Rate|10 packets/sec|

Table 4.1: Normal Traffic Pattern

**Attack Traffic Pattern**

|Parameter|Value|
|---|---|
|Packet Type|UDP|
|Packet Payload|0|
|No. of Packets Sent per Flow|1|
|Packet Inter-Arrival Interval|0.05 sec|
|Traffic Rate|20 packets/sec|

Table 4.2: Attack Traffic Pattern
**4.4 Performance Metrics of Machine Learning Approach**
The performance of our proposed detection system using the ML approach is evaluated using the
accuracy, error and precision parameters. We use the confusion matrix to calculate these
performance metrics.
**4.4.1 Confusion Matrix**
A confusion matrix is an N × N matrix, where N is the number of classes; in this thesis there
are two classes (attack and normal). The columns of the matrix represent the predicted classes
and the rows represent the actual classes. The confusion matrix gives the number of correctly
and incorrectly predicted results of the model. Table 4.3 shows the confusion matrix of our
proposed approach.
| |Predicted: Attack|Predicted: Normal|
|---|---|---|
|Actual: Attack|TP|FN|
|Actual: Normal|FP|TN|

Table 4.3: Confusion Matrix
where
TP = True Positive: the number of times attack traffic was correctly classified as attack.
FN = False Negative: the number of times attack traffic was classified as normal traffic.
TN = True Negative: the number of times normal traffic was correctly classified as normal.
FP = False Positive: the number of times normal traffic was classified as attack traffic.
From the confusion matrix, we can define the following performance metrics:
**Accuracy Rate**

Accuracy rate = No. of correct predictions / Total no. of predictions. That is,

Accuracy = (TP + TN) / (TP + FN + FP + TN) (4.1)

**Error Rate**

Error rate = No. of wrong predictions / Total no. of predictions. That is,

Error rate = (FP + FN) / (TP + FN + FP + TN) (4.2)

**Precision**

Precision = No. of relevant items among the selected items. That is,

Precision = TP / (TP + FP) (4.3)
In the subsequent sections, we present the results of the different methods based on FP, FN and
the accuracy rate.
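Equations 4.1-4.3 follow directly from the confusion-matrix counts; the counts below are illustrative only, not results from the thesis:

```python
def metrics(tp, fn, fp, tn):
    """Accuracy, error rate and precision from the confusion matrix (4.1-4.3)."""
    total = tp + fn + fp + tn
    accuracy = (tp + tn) / total
    error_rate = (fp + fn) / total
    precision = tp / (tp + fp)
    return accuracy, error_rate, precision

acc, err, prec = metrics(tp=45, fn=1, fp=2, tn=52)
# acc = 0.97, err = 0.03, prec = 45/47 (about 0.957)
```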
**4.5 Simulation Scenario and Results of Our Proposed Method**
The experiment was done on a DELL Inspiron 5558 laptop with an Intel(R) Core(TM) i5-5250U CPU
@ 1.60 GHz (2 cores, 4 logical processors). Using Mininet, a tree-type network of depth two with
five switches and 20 hosts is created; Fig-4.2 shows the network. Open vSwitch (OVS) was used
for the network switches. OVS is a software switch that can run both on hardware and in
software. In Fig-4.2, all switches are OpenFlow-enabled switches. The L2_multi.py module of POX
was used for the controller.
To implement the modified statistical approach, we added four modules to the controller program:
an entropy computation module, a flow rate computation module, a flow statistics collection
module and a mitigation module.
The machine learning approach uses the same network setup; however, the controller program is
modified to include two machine learning modules: a classifier module, which has access to the
training dataset, and a prediction module, which classifies the incoming network traffic as
attack or no-attack traffic.
Fig 4.2: Network Setup
The experiment mainly concentrates on multiple-victim attacks, as a single-victim attack can be
easily detected. During the simulation, two Scapy programs are running: the one generating the
attack sends packets faster than the one generating normal traffic. The traffic patterns shown
in Table 4.1 and Table 4.2 are used for this purpose. Attack traffic is run from 4 randomly
chosen hosts toward four different destinations, while the remaining 16 hosts generate the
legitimate traffic. The Mininet IP addresses of the hosts are assigned from 10.0.0.1 to
10.0.0.20.
**4.6 Performance of the Statistical Approach**
Based on the calculated threshold values, the performance of our detection system is analyzed by
running the simulation 50 times. For each run, attack traffic following the pattern in Table 4.2
is generated on four hosts, and each simulation lasts about 30 minutes. The results were
recorded, and the False Positive and False Negative values are summarized in Table 4.4:
**FP & FN Values of the Multiple Victim Attack**

|Metric|Value|
|---|---|
|No. of Attacks|50|
|FP (avg. count)|3|
|FN (avg. count)|1|
|Accuracy Rate|92%|

Table 4.4: False Positive and False Negative Values
Based on the values in Table 4.4, there is still a possibility for attack traffic to pass
through the detection system as normal traffic, which is harmful to the system.
**4.6.1 Comparison of the Original Approach with Our Modified Approach**
The main difference between the original approach and our modified approach is the way the
thresholds are derived. Fig-4.3 shows a comparison of the accuracy rates achieved by the two
approaches.
Fig 4.3: Comparison of Performance of Statistical Approaches
From the result, we can say that the accuracy of the detection system improved by using
thresholds derived with the mean and standard deviation method.
**4.7 Mitigation Results**
After the attack has been confirmed by Stage II of our detection system, the mitigation module is
called, and the attack traffic is dropped to prevent the breakdown of the switches in the network.
Fig-4.4 shows the output of the execution of the mitigation module of the POX controller to drop
the attack traffic.
Fig 4.4: Execution of Mitigation Module
Fig-4.5 shows the entire process of attack detection and mitigation of the proposed statistical
approach.
Fig 4.5: Wireshark Output: Attack Traffic Mitigated
**4.7.1 Performance Analysis of Mitigation Module**
To demonstrate the performance of the defense system in defeating DDoS attacks, we compare
results in two cases. In the first case, we start the attack on our simulation network without enabling
any DDoS defense mechanisms. The switch at the victim end breaks down due to the large attack
traffic. In the second case, we deploy the mitigation technique in the POX controller. After
receiving the attack alert, the controller drops the attack traffic in order to protect the switch.
The simulation is run 10 times for each case with different attack rates. Table 4.5 shows the
results obtained in each case:
|Packets/Sec|With Defense|Without Defense|
|---|---|---|
|10|Operational|Operational|
|20|Operational|Operational|
|50|Operational|Operational|
|100|Operational|Crash|
|200|Operational|Crash|
|300|Operational|Crash|
|400|Operational|Crash|
|500|Operational|Crash|
|1000|Operational|Crash|
Table 4.5: Performance Analysis of Mitigation Module
Comparing the proposed method with the original method, the obvious difference is that our
detection system succeeds in lowering the attack traffic. This demonstrates that our proposed
DDoS defense system is able to differentiate between attack and legitimate traffic with high
accuracy.
**4.8 Performance of the Machine Learning approach**
The UCLA dataset is fairly balanced and contains a total of 1200 instances: 60% (720) attack and
40% (480) no-attack instances. A Python script was used for training and testing the classifier,
and 10-fold cross-validation is used to evaluate it. The accuracy of the resulting classifier is
compared with that of Kato et al. [17].
**4.8.1 Feature Selection**
We need to decide which features to train on. The UCLA dataset has a variety of features, as
summarized in Table 3.1; however, not all of these features are used in our approach to classify
the traffic flow. Hence, we selected three features from the UCLA dataset, Payload, Packet count
and Flow duration, and calculated their feature importance based on the Gini score. The code to
compute the Gini score can be found in the Appendix.
Fig-4.6 shows the calculated feature importances. Based on the obtained result, we selected the
following features to build our model: number of packets, byte length (payload) and flow duration:
Fig 4.6: Feature Importance Plot
We evaluated the performance of our proposed detection system using the accuracy rate, error
rate, precision and the ROC plot, computing accuracy, error and precision from the confusion
matrix. Table 4.6 shows the calculated performance measures of our proposed system.
**Performance Metrics**

|Metric|Value|
|---|---|
|Accuracy|97.70%|
|Error Rate|2.30%|
|Precision|95.74%|
Table 4.6: Performance Metrics
**4.8.2 ROC Plot**
A Receiver Operating Characteristic (ROC) plot is a graphical way of inspecting the performance
of our random forest classifier: it shows the rate at which the classifier makes correct
predictions. It applies only to binary classification models. It plots the False Positive Rate
(FPR) on the x-axis against the True Positive Rate (TPR) on the y-axis. Fig-4.7 shows an
exemplary ROC plot:
Fig 4.7: Exemplary ROC Plot [8]
AUC, or Area Under the Curve, is the area underneath the ROC curve. A perfect classifier has an
AUC of 1. The AUC value is used to compare different models and indicates the performance of the
classifier.
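For reference, the AUC can be computed from a list of (FPR, TPR) points with the trapezoidal rule; scikit-learn's roc_curve and auc functions perform the same computation:

```python
def auc(fpr, tpr):
    """Area under a ROC curve given as (FPR, TPR) points, via trapezoids."""
    points = sorted(zip(fpr, tpr))
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

print(auc([0.0, 1.0], [0.0, 1.0]))             # 0.5 -> random classifier
print(auc([0.0, 0.0, 1.0], [0.0, 1.0, 1.0]))   # 1.0 -> perfect classifier
```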
The ROC Plot of our proposed method is given in Fig-4.8.
Fig 4.8: ROC Plot of Proposed Method
The AUC value of our classifier is 0.93.
**4.9 Comparison of Our Statistical and Machine Learning Approach**
In this section, we compare the accuracy of our machine learning approach with that of the
statistical approach.
Fig 4.9: Performance Comparison of Statistical & ML Approaches
From the above analysis, we can conclude that the proposed detection system implemented using
a machine learning approach successfully detects DDoS attacks with higher accuracy.
**4.10 Comparison of Machine Learning Approach with Existing ML Approach**
In this section, we compare the proposed machine-learning-based DDoS detection method with
the approach of Kato et al. [17].
In [17], the authors proposed a DDoS detection system using a Support Vector Machine (SVM)
algorithm. They used the CAIDA dataset to analyze the attack patterns. The main idea behind their
approach is to perform network packet analysis and to study the patterns of DDoS attacks using
the machine learning algorithm. They prepared three different variants of the dataset and tested
their model. Their detection system is 85% accurate with three features.
To compare the performance based on detection accuracy, we implemented the SVM algorithm
using the UCLA training dataset prepared in section 3.2.6. The experimental result shows that this
detection system detects the attack traffic with an accuracy of 88%.
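The comparison can be reproduced in outline as below. The sketch trains both classifiers on a synthetic three-feature dataset, a stand-in for the UCLA flow features; the accuracies it prints are illustrative, not the figures reported above.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the flow-feature dataset (3 features, binary label);
# the UCLA-derived training data itself is described in section 3.2.6
X, y = make_classification(n_samples=2000, n_features=3, n_informative=3,
                           n_redundant=0, random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

accs = {}
for name, clf in [("RF", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    clf.fit(X_tr, y_tr)                 # train on the same split
    accs[name] = clf.score(X_te, y_te)  # held-out accuracy
    print("%s accuracy: %.1f%%" % (name, 100 * accs[name]))
```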
Fig-4.10 shows the comparison of the performance of our ML approach using the RF classifier
with the ML approach using the SVM classifier.
Fig 4.10: Performance Comparison of our ML Approach with other ML Approaches
From the results of the comparison, it can be seen that the proposed ML approach gives the best
performance among the approaches considered in this thesis. It surpasses the other approaches in
practical implementation for the following reasons:
- It uses a random forest classifier, which has fast computation times and is robust against noisy
data.
- It implements a weighted voting method instead of standard majority voting.
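One possible form of the weighted voting mentioned above is sketched below: each tree's vote is weighted by its accuracy on a held-out validation set instead of all votes counting equally. This is a hedged illustration of the idea on synthetic data, not the exact implementation used in the thesis.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary flow data stands in for the real feature set
X, y = make_classification(n_samples=1500, n_features=3, n_informative=3,
                           n_redundant=0, random_state=1)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

# Weight each tree by its accuracy on the held-out validation set
weights = np.array([t.score(X_val, y_val) for t in rf.estimators_])

def weighted_vote(X_new):
    # Each row of votes holds one tree's class predictions
    votes = np.array([t.predict(X_new) for t in rf.estimators_])
    # Weighted fraction of trees voting "attack" (class 1)
    score = weights @ votes / weights.sum()
    return (score >= 0.5).astype(int)

pred = weighted_vote(X_val)
print("Weighted-vote accuracy: %.1f%%" % (100 * (pred == y_val).mean()))
```

In a forest where all trees are equally reliable, this reduces to ordinary majority voting; the weights only matter when some trees are noticeably weaker than others.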
**Chapter 5**
**5 Conclusion and Future Work**
**5.1 Conclusion**
Detecting and defending against DDoS attacks is a complex task. We initially aimed to improve the
DDoS detection system proposed by Kia [1]. We used the mean and standard deviation to compute
the detection thresholds, and we introduced a better mitigation method to protect the controller and
the switches. Even though we managed to improve the detection rate, we found that it was still not
entirely satisfactory. Hence, to improve the detection rate further, we proposed an ML approach
based on the RF algorithm with weighted voting. Our results show that the proposed approach has
the best performance among all the approaches considered in this thesis.
**5.2 Future work**
As future work, we recommend developing a controller that can detect any type of network attack
and that employs deep packet inspection, so that the detection accuracy is even higher. Moreover,
SDN also supports distributed network security enforcement, in which each network element
potentially becomes a smart enforcement node; this can readily be achieved with a machine
learning approach.
**Appendix**
**Entropy Computation Module**
import math

# 'log' is assumed to be the POX logger (core.getLogger()) of the
# controller component this class is embedded in
class Entropy(object):
    # Counter for the window size (100 packets)
    count = 0
    # Dictionary to store destination-IP occurrence counts
    ipDic = {}
    # List to store IP addresses
    ipList = []
    # List to store entropies
    lstEnt = []
    value = 1

    # Collect destination IPs until the window is full
    def colectIP(self, element):
        self.count += 1
        self.ipList.append(element)
        # Check whether the window size is reached
        if self.count == 100:
            for i in self.ipList:
                if i not in self.ipDic:
                    self.ipDic[i] = 0
                self.ipDic[i] += 1
            # Compute the entropy of the completed window
            self.entropy(self.ipDic)
            log.info(self.ipDic)
            self.ipDic = {}
            self.ipList = []
            self.count = 0

    # Entropy computation over one window of destination IPs
    def entropy(self, lists):
        # Total number of packets in the window (the window size, 100);
        # the original hard-coded l = 50 halved the true window size
        total = float(sum(lists.values()))
        elist = []
        for k, p in lists.items():
            # Probability of each destination IP in the window
            c = p / total
            # Per-IP entropy contribution
            elist.append(-c * math.log(c, 2))
        log.info('Entropy = ')
        log.info(sum(elist))
        self.lstEnt.append(sum(elist))
        if len(self.lstEnt) == 80:
            # Print the collected entropies
            print self.lstEnt
            self.lstEnt = []
        return sum(elist)

    def __init__(self):
        pass
**Flow Statistics Collection**
#Flow statistics module
# standard includes
from pox.core import core
from pox.lib.util import dpidToStr
import pox.openflow.libopenflow_01 as of
from pox.lib.revent import *
# included as part of the betta branch
from pox.openflow.of_json import *

log = core.getLogger()

# Timer handler that sends flow-stats requests to all the
# switches connected to the controller
def _timer_func():
    for connection in core.openflow._connections.values():
        connection.send(of.ofp_stats_request(body=of.ofp_flow_stats_request()))
    log.debug("Sent %i flow/port stats request(s)", len(core.openflow._connections))

# Handler for flow statistics received in JSON format;
# the structure of event.stats is defined by ofp_flow_stats()
def _handle_flowstats_received(event):
    stats = flow_stats_to_list(event.stats)
    log.info("flow statistics received from %s", dpidToStr(event.connection.dpid))
    # Dictionary to store the flow details
    flowlist = {}
    # Counters for flows, packets and bytes
    flow_count = 0
    p_count = 0
    b_count = 0
    for flow in event.stats:  # for each flow
        if flow.match.dl_type == 0x0800:  # only IP packets
            # Collect the flow statistics
            flowlist = {"flow_Duration": flow.duration_sec,
                        "packet_count": flow.packet_count,
                        "byte_count": flow.byte_count}
            # Increment the counters
            p_count += flow.packet_count
            b_count += flow.byte_count
            if flow.packet_count != 0:
                flow_count = flow_count + 1
    print "Traffic from %s: %s bytes, %s packets, %s flows" % (
        dpidToStr(event.connection.dpid), b_count, p_count, flow_count)

# main function to launch the module
def launch():
    from pox.lib.recoco import Timer
    # attach handlers to listeners
    core.openflow.addListenerByName("FlowStatsReceived", _handle_flowstats_received)
    # timer set to execute every five seconds
    Timer(5, _timer_func, recurring=True)
**Mitigation Module:**
# Check if this is an IP packet
if packet.type == ethernet.IP_TYPE:
    # Record the current time
    self.start = time.time()
    # Window interval in seconds
    time_interval = 3
    # Increment the flow counter
    self.flow_list += 1
    if self.flow_list == 1:
        # Start of a new window: set its end time
        self.end = self.start + time_interval
    # Check whether the flow counter exceeds the threshold (fth)
    # before the window expires
    if self.flow_list > fth and self.start < self.end:
        # Attack is confirmed: turn on mitigation
        cprint(' Mitigation ON ', 'blue', 'on_cyan')
        # Drop the attack packets
        drop()
        self.flow_list = 0
    elif self.start >= self.end:
        # The window expired: reset the flow counter
        self.flow_list = 0
**Traffic Generation Code:**
**Normal Traffic**
import sys
import time
from os import popen
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import sendp, IP, UDP, Ether, TCP
from random import randrange

# Generate a random source IP address, skipping first octets
# that would produce reserved or private ranges
def sourceIPgen():
    not_valid = [10, 127, 254, 1, 2, 169, 172, 192]
    first = randrange(1, 256)
    while first in not_valid:
        first = randrange(1, 256)
    ip = ".".join([str(first), str(randrange(1, 256)),
                   str(randrange(1, 256)), str(randrange(1, 256))])
    return ip

# Generate a random destination IP in the 10.0.0.2-59 host range
def gendest():
    first = 10
    second = 0
    third = 0
    start = 2
    end = 60
    ip = ".".join([str(first), str(second), str(third),
                   str(randrange(start, end))])
    return ip

def genTraffic():
    # open interface eth0 to send packets
    interface = popen('ifconfig | awk \'/eth0/ {print $1}\'').read()
    for i in xrange(1000):
        # form the packet
        packets = Ether()/IP(dst=gendest(), src=sourceIPgen())/UDP(dport=80, sport=2)
        print(repr(packets))
        # send each packet five times with the defined interval (seconds)
        m = 0
        while m <= 4:
            sendp(packets, iface=interface.rstrip(), inter=0.1)
            m += 1

def main():
    # run for one minute
    timeout = time.time() + 60 * 1
    while True:
        genTraffic()
        if time.time() > timeout:
            break

if __name__ == "__main__":
    main()
**Attack Traffic**
import sys
import time
from os import popen
import logging
logging.getLogger("scapy.runtime").setLevel(logging.ERROR)
from scapy.all import sendp, IP, UDP, Ether, TCP
from random import randrange

# Generate a random spoofed source IP address, skipping first octets
# that would produce reserved or private ranges
def sourceIPgen():
    not_valid = [10, 127, 254, 255, 1, 2, 169, 172, 192]
    first = randrange(1, 256)
    while first in not_valid:
        first = randrange(1, 256)
    ip = ".".join([str(first), str(randrange(1, 256)),
                   str(randrange(1, 256)), str(randrange(1, 256))])
    return ip

# Form and send the spoofed attack packets
def mymain():
    # victim address(es) passed on the command line
    dstIP = sys.argv[1:]
    src_port = 80
    dst_port = 1
    # open interface eth0 to send packets
    interface = popen('ifconfig | awk \'/eth0/ {print $1}\'').read()
    # form and send four spoofed packets per call
    for _ in xrange(4):
        packets = Ether()/IP(dst=dstIP, src=sourceIPgen())/UDP(dport=dst_port, sport=src_port)
        print(repr(packets))
        # send the packet with the defined interval (seconds)
        sendp(packets, iface=interface.rstrip(), inter=0.03)

def main():
    # flood for 30 seconds
    timeout = time.time() + 30 * 1
    while True:
        mymain()
        if time.time() > timeout:
            break

if __name__ == "__main__":
    main()
**Machine Learning code**
**Classifier Module**
#Standard includes
import numpy as np
import pandas as pd
from pandas import read_csv
import matplotlib.pyplot as plt
#The machine learning algorithm
from sklearn.ensemble import RandomForestClassifier
# Train/test split
from sklearn.model_selection import train_test_split
# Switch off a pandas warning
pd.options.mode.chained_assignment = None
# Used to write our model to a file
from sklearn.externals import joblib

#Open the data set
data = read_csv("tableset.csv")
print data.head()
print data.columns

#Select the features (double brackets select a column subset)
data_inputs = data[["Duration", "No. of packets", "Byte length"]]
print data_inputs.head()
ex_outputs = data[["result"]]
print ex_outputs.head()

#Create the classifier with 100 trees
rf = RandomForestClassifier(n_estimators=100)
rf.fit(data_inputs, ex_outputs.values.ravel())

#Accuracy on the training data
accuracy = rf.score(data_inputs, ex_outputs)
print "Accuracy = {}%".format(accuracy * 100)

#Save the ML model
joblib.dump(rf, "test_model1", compress=9)

#Calculate and plot the Gini-based feature importances
importances = rf.feature_importances_
indices = np.argsort(importances)
plt.figure(1)
plt.title('Feature Importances')
plt.barh(range(len(indices)), importances[indices], color='b', align='center')
plt.yticks(range(len(indices)), data_inputs.columns[indices])
plt.xlabel('Relative Importance')
plt.show()
**Prediction Module**
#Standard includes
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas import read_csv
from sklearn.metrics import confusion_matrix, roc_curve, auc
# Switch off a pandas warning
pd.options.mode.chained_assignment = None
# Used to load our saved model from a file
from sklearn.externals import joblib

#Open the test data
data = read_csv("data.csv")
test_input = data[["Duration", "packet", "byte length"]]
test_output = data[["result"]]

#Open the saved model
rf = joblib.load("test_model1")

#Predict the attack (class labels, not probabilities)
pred = rf.predict(test_input)
print pred

#Calculate the accuracy
accuracy = rf.score(test_input, test_output)
print "Accuracy = {}%".format(accuracy * 100)

#Confusion matrix: true labels first, predictions second
results = confusion_matrix(test_output, pred)
print "----Confusion Matrix-----"
print results
plt.matshow(results)
plt.title('Confusion Matrix')
plt.colorbar()
plt.ylabel('Actual')
plt.xlabel('Predicted')

#True positive rate and false positive rate from the class-1 scores
fpr, tpr, _ = roc_curve(test_output, rf.predict_proba(test_input)[:, 1])
roc_auc = auc(fpr, tpr)
print 'ROC AUC: %0.2f' % roc_auc

# Plot the ROC curve
plt.figure()
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc="lower right")
plt.show()
**Bibliography**
[1]. Maryam Kia, “Early Detection and Mitigation of DDoS Attacks in Software Defined
Networks”, Master’s thesis, Ryerson University, Canada 2015.
[2]. Javed Ashraf and Seemab Latif, “Handling Intrusion and DDoS Attacks in Software Defined
Networks Using Machine Learning Techniques”, National Software Engineering
Conference, 2014.
[3]. opennetworking.org, “OpenFlow”, [Online]. Available: https://www.opennetworking.org.
[Accessed February 2016].
[4]. slideshare.net, “OpenFlow Tutorial”, [Online] Available: https://slideshare.net/openflow/.
[Accessed September 2016].
[5]. Sukhveer Kaur, Japinder Singh and Navtej Singh Ghumman, “Network Programmability
Using POX Controller”, International Conference on Communication, Computing &
Systems, pp. 134-138, 2014.
[6]. Seyed Mohammad Mousavi and Marc St. Hilaire, “Early Detection of DDoS Attacks against
SDN Controllers”, International Conference on Computing, Networking and
Communications, Communications and Information Security Symposium, 2015.
[7]. Duohe Ma, Zhen Xu, and Dongdai Lin, “Defending Blind DDoS Attack on SDN Based on
Moving Target Defense”, ResearchGate, November 2015.
[8]. Meiling Liu, Xiangnan Liu, Jin Li and Jiale Jiang, “Evaluating total inorganic nitrogen in
coastal waters through the fusion of multi-temporal RADARSAT-2 and optical imagery
using random forest algorithm”, International Journal of Applied Earth Observation and
Geoinformation 33(1), pp. 192-202, December 2014.
[9]. Surender Singh and Sandeep Jain, “A Review of Detection of DDoS Attack Using Entropy
Based Approach”, IJCST Vol 4, Issue 2, April 2013.
[10]. Jisa David and Ciza Thomas, “DDoS Attack Detection using Fast Entropy Approach on
Flow-Based Network Traffic”, Procedia Computer Science, pp. 30 – 36, 2015.
[11]. Mohan Dhawan, Rishabh Poddar, Kshiteej Mahajan and Vijay Mann, “SPHINX: Detecting
Security Attacks in Software-Defined Networks”, IBM Research, 2009.
[12]. Haopei Wang, Lei Xu, Guofei Gu, "FloodGuard: A DoS Attack Prevention Extension in
Software-Defined Networks”, 45th Annual IEEE/IFIP International Conference on
Dependable Systems and Networks (DSN'15), 2015.
[13]. Z. Qin, L. Ou, J. Liu, A. X. Liu J. Zhang, "An Advanced Entropy-Based DDoS Detection
Scheme”, IEEE, 2010.
[14]. Alpna and Sona Malhotra, “DDoS Attack Detection and Prevention Using Ensemble
Classifier (RF)”, IJARCSSE, 2016.
[15]. Arif Jamal Malik, Waseem Shahzad, and Farrukh Aslam Khan, “Network Intrusion
Detection Using Hybrid Binary PSO and Random Forests Algorithm”, Security and
Communication Networks, 2012.
[16]. Nabila Farnaaz and M.A. Jabbar, “Random Forest Modelling for Network Intrusion
Detection System”, IMCIP, 2016.
[17]. Keisuke Kato and Vitaly Klyuev, “An Intelligent DDoS attack Detection System Using
Packet analysis and Support Vector Machine”, IJICR, Volume 5, Issue 3, September 2014.
[18]. Sivatha Sindhu, Geetha Subbiah and Kannan Arputharaj, “Decision tree based lightweight
intrusion detection using a wrapper approach”, Expert Systems with Applications, pp. 129-141,
January 2012.
[19]. Saurav Nanda, Faheem Zafari, Casimer DeCusatis, Eric Wedaa and Baijian Yang,
“Predicting Network Attack Patterns in SDN using Machine Learning Approach”, IEEE
2016.
[20]. Zhong-dong Wu, Wei-Xin Xie and Jian-ping Yu, “Fuzzy C-Means Clustering Algorithm
Based on Kernel Method”, ICCIMA, 2003
[21]. Md. Al Mehedi Hasan, Mohammed Nasser, Biprodip and Shamim Ahmad, “Support Vector
Machine and Random Forest Modeling for IDS”, JILSA, pp. 45-52, 2014.
[22]. lasr.cs.ucla.edu, “UCLA Dataset”, [Online]. Available: https://lasr.cs.ucla.edu/ddos/traces/.
[Accessed April 2017].
[23]. mininet.org, “Mininet”, [Online]. Available: http://mininet.org/. [Accessed June 2016].
[24]. secdev.org, “Scapy”, [Online]. Available: http://www.secdev.org/. [Accessed January 2016].
[25]. Xin Xu and Xuening Wang, “An adaptive network intrusion detection method based on PCA
and support vector machines”, Advanced Data Mining and Applications, Springer, pp. 696-703,
2005.
[26]. Ming-Yang Su, “Real-time anomaly detection systems for Denial-of-Service attacks by
weighted k-nearest-neighbor classifiers”, Expert Systems with Applications, pp. 3492-3498,
2011.
[27]. Sindia and Julia Punitha Malar Dhas, “SDN Based DDoS Attack Detection and Mitigation
in Cloud”, IJCTA, pp. 39-47, 2017.
[28]. Kuan-yin Chen, Anudeep Reddy Junuthula, Ishant Kumar Siddhrau, Yang Xu, H. Jonathan
Chao, “SDNShield: Towards More Comprehensive Defense against DDoS Attacks on SDN
Control Plane”, IEEE, 2016.
[29]. Christos Douligeris and Aikaterini Mitrokotsa, “DDoS attacks and defense mechanisms:
classification and state-of-the-art”, Computer Networks, pp. 643–666, 2004.
[30]. Mohammed Alenezi and Martin J Reed, “Methodologies for detecting DoS/DDoS attacks
against network servers”, The Seventh International Conference on Systems and Networks
Communications, 2012.
[31]. Manjula Suresh and R. Anitha, “Evaluating Machine Learning Algorithms for Detecting
DDoS Attacks”, Communications in Computer and Information Science, January 2011.
[32]. Stefan Seufert and Darragh O’Brien, “Machine Learning for Automatic Defence against
Distributed Denial of Service Attacks”, ICC, 2007.
[33]. Bhatia, Sajal, Schmidt, Desmond, & Mohay, George M, “Ensemble based DDoS detection
and mitigation model”, Fifth International Conference on Security of Information and
Networks, pp.79-86, 2012.
[34]. Yang Xu and Yong Liu, “DDoS Attack Detection under SDN Context”, IEEE INFOCOM,
2016.
[35]. pythonprogramming.net, “Python for Machine Learning”, [Online]. Available:
https://pythonprogramming.net/machine-learning-tutorial-python-introduction. [Accessed
June 2017].
[36]. scikit-learn.org, “Sklearn”, [Online]. Available: https://scikit-learn.org/. [Accessed June
2017].
[37]. Przemysław Berezinski, Bartosz Jasiul, and Marcin Szpyrka, “An Entropy-Based Network
Anomaly Detection Method”, Entropy, 2015.
[38]. H. Kim and N. Feamster, “Improving network management with software-defined
networking,” IEEE Communications Magazine, vol. 51, no. 2, pp. 114–119, February 2013.
[39]. www.cs.iit.edu, “Decision tree classification”, [Online]. Available: http://www.cs.iit.edu/.
[Accessed Aug 2017].
[40]. www.analyticsvidhya.com, “Cross-validation in random forest”, [Online]. Available:
https://www.analyticsvidhya.com/blog/2015/11 [Accessed Aug 2017].
[41]. Jorge Proenca, Tiago Cruz, Edmundo Monterio and Paul Simoes, “How to use Software
Define Networking to improve Security - a Survey”, ECCWS2015-Proceedings of the 14th
European Conference on Cyber Warfare and Security, 2015.
[42]. Rodrigo Braga, Edjard Mota, and Alexandre Passito, “Lightweight DDoS Flooding Attack
Detection Using NOX/OpenFlow”, 35th Annual IEEE Conference on Local Computer
Networks, 2010.
[43]. Juels, A. and J. G. Brainard, Client Puzzles, “A Cryptographic countermeasure against
connection depletion attacks”, NDSS, 1999.
[44]. Chaitanya Buragohain, Nabajyoti Medhi, “FlowTrApp: An SDN Based Architecture for
DDoS Attack Detection and Mitigation in Data Centers”, 3rd International Conference on
Signal Processing and Integrated Networks, 2016.
[45]. Sung, M. and J. Xu, “Ip traceback-based intelligent packet filtering: A novel technique for
defending against internet DDoS attacks”, ICNP ’02. IEEE Computer Society, Washington,
DC, USA, 2002.
[46]. www.rapidminer.com, “k-fold cross-validation”, [Online]. Available: https://rapidminer.com.
[Accessed August 2016].
## sustainability
_Article_
# Distributed Control Scheme for Clusters of Power Quality Compensators in Grid-Tied AC Microgrids
**Manuel Martínez-Gómez** **[1,2,3,]*** **, Claudio Burgos-Mellado** **[3]** **, Helmo Kelis Morales-Paredes** **[4]** **,**
**Juan Sebastián Gómez** **[5]** **, Anant Kumar Verma** **[3]** **and Jakson Paulo Bonaldo** **[6]**
1 Electrical Engineering Department, Universidad de Chile, Santiago 8370451, Chile
2 Power Electronics, Machines and Control Group (PEMC), University of Nottingham,
Nottingham NG7 2R, UK
3 Electric Power Conversion Systems Laboratory (SCoPE Lab), Institute of Engineering Sciences,
Universidad de O’Higgins, Rancagua 2841959, Chile; claudio.burgos@uoh.cl (C.B.-M.);
anant.kumar@uoh.cl (A.K.V.)
4 Institute of Science and Technology of Sorocaba, São Paulo State University (UNESP), Av. Três de Março 511,
Sorocaba 18087-180, Brazil; helmo.paredes@unesp.br
5 Energy Transformation Center, Engineering Faculty, Universidad Andres Bello, Santiago 7500971, Chile;
juan.gomez@unab.cl
6 Department of Electrical Engineering, Federal University of Mato Grosso (UFMT), Cuiabá 78060-900, Brazil;
jakson.bonaldo@ufmt.br
***** Correspondence: manuel.martinez.gmz@ieee.org
**Citation:** Martinez-Gomez, M.; Burgos-Mellado, C.; Morales-Paredes, H.K.; Gomez, J.S.; Verma, A.K.; Bonaldo, J.P. Distributed Control Scheme for Clusters of Power Quality Compensators in Grid-Tied AC Microgrids. Sustainability 2023, 15, 15698. [https://doi.org/10.3390/su152215698](https://doi.org/10.3390/su152215698)
Academic Editor: Pablo García Triviño
Received: 28 August 2023; Revised: 24 October 2023; Accepted: 2 November 2023; Published: 7 November 2023
**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons [Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).](https://creativecommons.org/licenses/by/4.0/)
**Abstract: Modern electrical systems are required to provide increasing standards of power quality,**
so converters in microgrids need to cooperate to accomplish the requirements efficiently in terms of
costs and energy. Currently, power quality compensators (PQCs) are deployed individually, with
no capacity to support distant nodes. Motivated by this, this paper proposes a consensus-based
scheme, augmented by the conservative power theory (CPT), for controlling clusters of PQCs aiming
to improve the imbalance, harmonics and the power factor at multiple nodes of a grid-tied AC
microgrid. The CPT calculates the current components that need to be compensated at the point of
common coupling (PCC) and local nodes; then, compensations are implemented by using each grid-following converter’s remaining volt-ampere capacity, converting them into PQCs and improving the
system’s efficiency. The proposal yields the non-active power balancing among PQCs compounding
a cluster. Constraints of cumulative non-active contribution and maximum disposable power are
included in each controller. Also, grid-support components are calculated locally based on shared
information from the PCC. Extensive simulations show a seamless compensation (even with time
delays) of unbalanced and harmonics current (below 20% each) at selected buses, with control
convergences of 0.5–1.5 [s] within clusters and 1.0–3.0 [s] for multi-cluster cooperation.
**Keywords: AC microgrids; distributed control; power quality; conservative power theory; cluster**
control; smart grids
**1. Introduction**
In recent years, there has been a growing interest in protecting the environment and
promoting energy sustainability. As a result, research has focused on finding alternative
sources of renewable energy to replace the use of fossil fuels. The integration of distributed
non-conventional renewable energy (NCRE) sources into the electrical grid can be realized
by means of the microgrid (MG) concept, which combines local generation, energy storage, and loads for an autonomous operation [1–4]. This integration is enabled by power
electronic converters in charge of interfacing the NCRE distributed generation units with
the MG. Note that MGs have two operating conditions: (i) connected to the main grid
(grid-tied) and (ii) disconnected to the main power grid (isolated) [1,3].
-----
_Sustainability 2023, 15, 15698_ 2 of 23
Focusing on grid-tied MGs, the control over the power converters is implemented
using a grid-following (grid-feeding) mode, meaning a behaviour approximate to a current
source. In this sense, grid-follower converters provide power based on energy harvesting
from the distributed NCRE sources [3]. Due to the variability of natural resources, it is
expected that the nominal capacity of power converters is not fully used for long periods.
It is worth noting that in grid-tied systems, in case of need, power quality compensation
can seemingly be supplied by the grid-following converters due to the fact that the grid
can support the MG’s active power consumption [5–7].
Considering the intermittency of generators and the fact that MGs are inherently
unbalanced and distorted electrical systems—which is heightened when small and low-voltage (LV) distribution systems are considered—the power quality observed in MGs
could advance the ageing of electrical/electronic devices connected to them [4,5]. Indeed,
for LV MGs, imbalance and harmonics issues must be considered and properly managed to
ensure a reliable and safe operation. For instance, in an unbalanced MG operation, there
are (i) oscillations in the converter’s DC-link, (ii) circulation of a neutral current through
the neutral wire for three-phase four-wire MGs (the one considered in this paper) [8], and
(iii) a double-frequency oscillating active power component if synchronous or a doubly fed
induction generator-based micro-generator is part of the MG [5]. This double frequency
translates into an oscillating torque component in these generators, producing mechanical
stress, torque pulsations and noise, affecting the efficiency and life span of these machines.
Harmonic issues can (i) produce harmonic interaction and harmonic resonance between
the inverters and the network, (ii) produce harmonics that contribute to decreasing the
maximum load which can be drawn from NCRE generation units, and (iii) since an MG
is a weak system, harmonic loads could produce harmonic voltages, which could spread
through the system [8].
It is expected that future power grids could cope with power demands while maintaining system power quality. Therefore, there is a need for technical solutions that take
advantage of existing infrastructure and equipment, like power converters, to achieve the
MG operational goal cost effectively. In the next subsection, to offer a background on which
this paper bases its proposal, a summary of related works using power converters in MGs
for power quality compensation is presented.
_1.1. Literature Review_
To address the imbalance and harmonic issues described above, various technologies have been deployed in the last decade which commonly rely on power converters
used as part of the distributed flexible AC transmission system (DFACTS) family [4,5,8];
the DFACTS are distributed across the MG to provide support at specific points. Inside
DFACTSs, two major technologies are preferred, namely the shunt active power filter
(SAPF)—for harmonic and imbalance compensations—and the static synchronous compensator (STATCOM)—for reactive power compensations [8]. The control objectives commonly
include compensating current imbalance and harmonics while managing the reactive power
at specific points of a grid-tied MG system. For this, the use of dedicated power quality
compensators (PQCs) based on a combination of SAPFs and STATCOMs is a valid option
explored by many authors in the literature, such as in [6,9–11]. However, these early
solutions require additional hardware in the system, increasing costs. For example, in
reference [11], the harmonic distortion of the current at the PCC is reduced by a coordinated
allocation strategy of APFs. The proportional distribution of harmonic compensation is
performed by a central controller, which estimates weighting parameters using average
values of the current and estimated harmonic compensation rates.
Some approaches have proposed using high power units to perform power quality
compensations [12,13]. Recent examples in the literature are [6], in which multiple interconnected converters are used, and [14], in which a modular multilevel converter is
used; both methods reduce the number of required devices spread on the MG (simplifying
the communication network). Nevertheless, this kind of approach usually requires complex construction topologies, which are not exempt from control difficulties and elevated production costs.

_Sustainability 2023, 15, 15698_
To overcome these cost issues, the authors of [7,15–18] propose a cost-effective solution
that embeds compensation functionalities on the control system of power converters already
installed in the MG, maximizing hardware utilization. For instance, in [15], a control scheme
for SAPFs in a smart grid is proposed. The control scheme aims to improve the power
quality at the point of common coupling (PCC) (in terms of imbalances and harmonics) by
coordinating the compensation effort of the SAPFs present in the system; also, a centralized
controller is in charge of calculating the compensation effort for each SAPF. A similar
approach is proposed in [17] for MG applications. The control system is based on a
multi-primary–secondary architecture, where the primary converter regulates both the
voltage and the frequency of the system and the secondary converters act as grid-following
converters; a supervisory control system calculates the current reference for the latter
converters. In reference [7], a two-layer optimization model was recently proposed for the
allocation of active power injection and harmonic mitigation of multi-functional inverters.
The model is solved by particle swarm optimization relying on centralized communications,
and it can achieve both an economical operation and the mitigation of undesired harmonic
levels.
It is worth remembering that proposals [7,15–18] are based on a centralized control
approach. However, as discussed in [2,19], the distributed approach has advantages over
the centralized approach: better reliability, flexibility, scalability, plug-and-play operation,
and tolerance to single-point failures. We note that consensus-based distributed control
schemes have been widely used for power converters in grid-forming isolated MGs operating under droop control schemes [2]; however, this approach has been little explored in the
literature for grid-follower converters.
Regarding compensation over multiple buses in an MG, this topic is addressed in [20–24]. In [20], a decentralized local compensation scheme is developed in each converter (i.e.,
PQC) at sub-buses to reduce the current harmonic propagation to the MG’s PCC. However,
with this decentralized approach, the DGs cannot support the harmonic compensation at
other buses, like the PCC. Therefore, it is clear that better performance could be achieved
via cooperation/coordination between clusters of PQCs using centralized or distributed
approaches to support selected buses. In [21], the authors explore an optimization scheme
for compensating harmonics in multiple buses using a single SAPF. The controller relies on
a model predictive optimizer applied on a two-bus shipboard isolated MG. The approach
requires a fixed MG topology, and its application in a large system with plug-and-play
generators is not practical to realize. In reference [22], a cooperative control scheme for
power electronic converters in an MG is addressed using information received from local
loads and neighbouring units. Notably, consensus terms for stationary and oscillatory
power components are used to mitigate imbalances and harmonics at local nodes and the
PCC. Consequently, the aforementioned methodology allows for an effective compensation
but without choosing individual levels of harmonic distortion, reactive power, and current
imbalance. Finally, in reference [23], an optimal installation of SAPFs is proposed, where
comprehensive harmonic mitigation in a distribution network can be achieved. A model of
SAPF with extended-range compensation is developed to assess the distance and sensitivity
of harmonic compensation for harmonic disturbance sources across multiple buses. The
optimization problem is solved using a centralized control. However, the model requires
knowledge of the system’s harmonic impedance matrix and makes assumptions that may
not be accurate in real-time operations with plug-and-play loads.
_1.2. Contributions_
Motivated by the above discussion, this paper proposes a novel distributed control
scheme based on the consensus theory [2] and augmented by the conservative power
theory (CPT) [25] to improve the power quality of grid-connected MGs. The proposed
control scheme uses the remaining volt-ampere (VA) power capacity of multi-functional
power converters for compensating imbalances and harmonics at some critical nodes of
the MG. The consensus control requires only low-bandwidth communications, so it is a low-cost investment compatible with any existing infrastructure. Also, due to its distributed
nature, the proposed control scheme does not require a central controller, unlike the models
previously reported in papers in this area [7,15–18].
The use of CPT allows the decomposition of the power in independent current and
voltage components, which represent reactivity, distortion and imbalance. This provides a
feasible instrument for delivering significant data in real-world MG operational scenarios.
CPT is appealing for integrating auxiliary features in grid-tied inverters because of its
ability to characterize the load under imperfect voltage conditions (such as distorted
or unbalanced voltages), as shown in [15,26]. Further, it offers the selectivity and the
adaptability to generate reference currents without requiring coordinate transformations
or the implementation of any synchronization algorithms [27]. This means that with the
proposed control protocol, accurate current compensations could be made without using
Delayed Signal Cancellation (DSC) [27–29] or other methods, which are highly susceptible
to noise. Furthermore, CPT is versatile and can be used in a variety of systems, regardless
of the number of phases and conductors.
For compensation over multiple buses, groups of PQCs, physically close to each other,
are considered. Unlike individual PQCs per bus, a cluster of PQCs allows compensation of
higher powers by combining the capacity of multiple PQCs [11,17]. Thus, a cluster of PQCs
could be viewed as an alternative to high-power FACTS (like multi-level topologies [12]).
It is important to note that the PQC’s cluster approach does not necessarily require all
hardware to be located in the same physical place.
To the best of the authors’ knowledge, regarding consensus algorithms for the coordination of CPT-based converters in grid-tied MGs, the conference papers [22,30], recently published by some of the authors of this proposal, were the first to address this topic.
In this sense, the current proposal corresponds to an extension of [30] since it addresses
some of its limitations and develops a more general approach. Indeed, the control scheme
reported in [30] requires the initial conditions of the grid-following converters to be known
before its activation, hindering its application to real systems. This is avoided in the new
proposed control scheme by using novel distributed consensus observers to estimate such
initial conditions. In addition, in contrast to [30], where a single-bus MG topology was
addressed, this paper follows the topological guidelines of [22] and thereafter extends the
proposal to operate in multiple buses. Similar to [22], this proposal controls the PQCs in a
distributed fashion. However, in this work, multiple clusters of PQCs are controlled and
coordinated in a distributed fashion using the CPT approach.
The features of the proposed control strategy are highlighted in a comparative table
regarding the published literature as shown below (Table 1).
**Table 1. Comparison of the proposed method with the published works in the literature.**

| **References** | **Additional Hardware** | **Communication (If Available)** | **Multi-Bus Compensation** | **Need Synchronizer or DSC** | **Implementation Costs** |
|---|---|---|---|---|---|
| [6,14] | ✓ | centralized | × | ✓ | $ $ $ |
| [10,11,21,23,24] | ✓ | centralized | ✓ | ✓ | $ $ $ |
| [16–18] | × | centralized | × | ✓ | $ $ |
| [7] | × | centralized | ✓ | ✓ | $ $ |
| [15,26] | × | centralized | × | × | $ $ |
| [31] | × | centralized | ✓ | × | $ $ |
| [20,22] | × | distributed | ✓ | ✓ | $ |
| [30] | × | distributed | × | × | $ |
| Proposal | × | distributed | ✓ | × | $ |
The contributions of this work are summarized as follows:
- A distributed control protocol using the CPT for non-active power-sharing in a cluster
of parallel PQCs connected to a grid-tied MG is proposed. This protocol allocates the
contributions of the converters concerning the per-unit (p.u.) available power.
- A new observer-based control loop for controlling the sharing of compensations for
non-active power in a PQC cluster is presented. Also, a stability analysis is included.
- A cooperative multi-purpose control scheme for current imbalance and harmonics in multiple buses is described. Each cluster of PQCs performs both a local and a grid-side compensation using the CPT framework.
- Online regulation via adjusted weights for the trade-off between grid-side and local
CPT current components compensations is presented. The weights are adjusted
according to deviations in defined power quality factors.
The rest of the paper is organized as follows: Section 2 presents preliminaries about
CPT and graph theory. Section 3 describes the design of the cooperative control of PQCs
in a cluster. Section 4 explains the multi-purpose control scheme between the clusters of
PQCs considering the trade-off regulations of local and grid-side compensations. Section 5
describes the methodology for simulation analysis. Section 6 demonstrates the results and
discussions. Finally, conclusions are presented in Section 7.
**2. Preliminaries**
The notation of bold letters in equations stands for vectors. Also, whenever possible,
capitalized letters in equations represent matrices or RMS magnitudes.
_2.1. Conservative Power Theory_
The instantaneous quantities in a three-phase system can be defined for any phase voltage (v) and current (i) waveforms as

$$
p(t) = \mathbf{v}(t) \circ \mathbf{i}(t) = v_a(t) i_a(t) + v_b(t) i_b(t) + v_c(t) i_c(t),
$$
$$
w(t) = \hat{\mathbf{v}}(t) \circ \mathbf{i}(t) = \hat{v}_a(t) i_a(t) + \hat{v}_b(t) i_b(t) + \hat{v}_c(t) i_c(t), \tag{1}
$$

where $p(t)$ and $w(t)$ are the instantaneous active power and reactive energy, respectively. The term $\hat{v}$ is the unbiased phase voltage integral (i.e., the phase voltage integral without its DC component). It was shown in [25] that $p(t)$ and $w(t)$ are conservative for every network, irrespective of the voltage and current waveforms.
**Remark 1. For the sake of simplicity, all of the time dependencies of variables are omitted here and**
_on for the rest of the article (e.g., va(t) = va)._
The corresponding average terms of (1) are

$$
P = \langle \mathbf{v}, \mathbf{i} \rangle = \frac{1}{T}\int_0^T (\mathbf{v} \circ \mathbf{i})\,dt = \frac{1}{T}\int_0^T (v_a i_a + v_b i_b + v_c i_c)\,dt,
$$
$$
W = \langle \hat{\mathbf{v}}, \mathbf{i} \rangle = \frac{1}{T}\int_0^T (\hat{\mathbf{v}} \circ \mathbf{i})\,dt = \frac{1}{T}\int_0^T (\hat{v}_a i_a + \hat{v}_b i_b + \hat{v}_c i_c)\,dt, \tag{2}
$$
where $P$ and $W$ are the active power and the reactive energy. Based on (2), the current of a generic three-phase power system can be characterized with orthogonal components as follows [25]:

$$
\mathbf{i} = \mathbf{i}_a^b + \mathbf{i}_r^b + \mathbf{i}_u + \mathbf{i}_v = \mathbf{i}_a^b + \mathbf{i}_{NA}, \tag{3}
$$
where $\mathbf{i}_a^b$ and $\mathbf{i}_r^b$ are the active and reactive balanced current components, $\mathbf{i}_u$ and $\mathbf{i}_v$ are the unbalanced and void (distorted) current components, and $\mathbf{i}_{NA}$ is the non-active current.

As the components of (3) are orthogonal to each other, the estimation of the RMS collective value (norm) of the current results in the following equation:

$$
I = \sqrt{(i_{a,\mathrm{RMS}}^b)^2 + (i_{r,\mathrm{RMS}}^b)^2 + (i_{u,\mathrm{RMS}})^2 + (i_{v,\mathrm{RMS}})^2} = \sqrt{(I_a^b)^2 + (I_r^b)^2 + (I_u)^2 + (I_v)^2} = \sqrt{(I_a^b)^2 + (I_{NA})^2}, \tag{4}
$$

where $I_a^b$, $I_r^b$, $I_u$, $I_v$ and $I_{NA}$ are the RMS collective values of the current components.
Therefore, the apparent power, $S$, can be estimated using (4) as

$$
S = V I = V\sqrt{(I_a^b)^2 + (I_r^b)^2 + (I_u)^2 + (I_v)^2} = \sqrt{P^2 + Q^2 + U^2 + D^2} = \sqrt{P^2 + NA^2}, \tag{5}
$$

where $V = \sqrt{V_a^2 + V_b^2 + V_c^2}$ is the RMS collective value of the voltage, $P = V I_a^b$ is the active power, $Q = V I_r^b$ is the reactive power, $U = V I_u$ is the unbalance power, $D = V I_v$ is the void (distortion) power, and $NA = V I_{NA}$ is the non-active power.
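To make the decomposition concrete, the following is a minimal numerical sketch (not the authors' implementation) of the CPT current decomposition in (3)–(4). It assumes waveforms sampled over exactly one fundamental period; the unbiased voltage integral is approximated by a cumulative sum, and all names and signal values are illustrative.

```python
import numpy as np

def cpt_decompose(v, i, dt):
    """Sketch of the CPT current decomposition, eq. (3).

    v, i: arrays of shape (3, N): three phases sampled over one period.
    Returns the component waveforms i_a^b, i_r^b, i_u and i_v.
    """
    # Unbiased phase voltage integral (integral without its DC component)
    v_hat = np.cumsum(v, axis=1) * dt
    v_hat -= v_hat.mean(axis=1, keepdims=True)

    def inner(x, y):  # collective inner product <x, y> over one period
        return (x * y).sum(axis=0).mean()

    P, W = inner(v, i), inner(v_hat, i)          # active power, reactive energy
    V2, Vh2 = inner(v, v), inner(v_hat, v_hat)   # squared collective RMS values

    # Balanced components use the collective quantities ...
    i_ab = (P / V2) * v
    i_rb = (W / Vh2) * v_hat
    # ... while per-phase projections expose the unbalanced part
    Pm = (v * i).mean(axis=1, keepdims=True)
    Wm = (v_hat * i).mean(axis=1, keepdims=True)
    Vm2 = (v * v).mean(axis=1, keepdims=True)
    Vhm2 = (v_hat * v_hat).mean(axis=1, keepdims=True)
    i_u = (Pm / Vm2) * v + (Wm / Vhm2) * v_hat - i_ab - i_rb
    i_v = i - i_ab - i_rb - i_u                  # residual: void (distorted) current
    return i_ab, i_rb, i_u, i_v
```

Because the components are (near-)orthogonal, the squared collective RMS of the load current is recovered as the sum of the squared component norms, which is exactly the property exploited in (4) and (5).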
_2.2. Graph Theory_
A distributed and bidirectional communication network of a multi-agent system can be modelled as an undirected graph $\mathcal{G}(\mathcal{N}, \xi, A)$ among agents $\mathcal{N} = \{1, \ldots, N\}$, where $\xi$ is the set of communication links and $A$ is the non-negative $N \times N$ weighted adjacency matrix. The elements of $A$ can be assigned as binary values such that

$$
a_{ij} = \begin{cases} 1 & \text{if data from agent } j \text{ arrives at agent } i, \ \forall i \neq j, \\ 0 & \text{otherwise.} \end{cases}
$$
We let $x_i \in \mathbb{R}$ be the value of some quantity of interest at node $i$; it is said that the multi-agent system achieves consensus if and only if $[x_j - x_i] \to 0$ as $t \to \infty$ $\forall\, i, j \in \mathcal{N}$. Then, the consensus of a first-order multi-agent system can be achieved via the consensus protocol

$$
\dot{x}_i = c \sum_{j \in \mathcal{N}_i} a_{ij} \left(x_j - x_i\right), \tag{6}
$$

where $c$ is the coupling gain regulating the convergence speed [19]. It is worth highlighting that consensus is achieved if and only if the graph $\mathcal{G}$ has a spanning tree [2].

In matrix form, the global system dynamics using (6) are given by

$$
\dot{\mathbf{x}} = -c L \mathbf{x}, \tag{7}
$$

where $\mathbf{x} = (x_1, \ldots, x_N)^T$, and $L$ is the Laplacian matrix such that $L = D - A$, with $D := \mathrm{diag}(A \cdot \mathbf{1}_{N \times 1})$ being the in-degree matrix.
In this work, the information is exchanged between inverters, measurement devices,
and controllers.
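As an illustration, the following sketch simulates the protocol (6)–(7) on a small line graph with an illustrative gain and step size. Since the graph is undirected and connected (hence it has a spanning tree), all states converge to the average of the initial values.

```python
import numpy as np

# Minimal sketch of the consensus protocol (6)-(7) on a 4-node line graph.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))       # in-degree matrix
L = D - A                        # Laplacian matrix
c, dt = 1.0, 0.05                # coupling gain and Euler step (illustrative)
x = np.array([4.0, 0.0, 2.0, 6.0])
for _ in range(2000):
    x = x - dt * c * (L @ x)     # Euler step of x_dot = -c L x
# An undirected connected graph preserves the average of the states,
# so every state converges to the initial mean (here 3.0).
```

The same `L @ x` structure reappears later in the paper, where the consensus state is the per-unit non-active power of each PQC rather than a generic scalar.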
**3. Design of Cooperative Control for Power Quality Compensators**
Starting from an MG with power converters that could be used as PQCs (obeying the
logic behind SAPF and STATCOM), clusters of PQCs can be produced by adding communication links between units. The criteria for forming such clusters of PQCs could vary; some
authors even proposed optimization methods to achieve their optimum allocation [23]. The
main factor when choosing the clusters of PQCs in this case is the proximity in order to
reduce implementation costs of communications. Figure 1 represents an example of the
structure of a cluster of PQCs for the purposes of this work.
**Figure 1. Illustration of a cluster of PQCs used in this work.**
Evidently, the main control goal of each PQC is to provide active power to the MG,
according to the coupled NCRE source generation. Nonetheless, each cluster of PQCs
includes an additional control goal to improve the power quality at a local load using
the remaining power capacity. To this end, each PQC decomposes the load measured
current iL into CPT current components [30]. Provided that the grid supplies balanced and
distortion-free voltages, harmonics and power imbalances could be entirely compensated
by the PQCs through the injection of currents into the MG. Then, inspired by [26], weights
are incorporated for the current components to offer flexibility in the prioritization of
compensations, i.e.,
$$
\mathbf{i}_L^{ref} = k_r^L \mathbf{i}_{L,r}^b + k_u^L \mathbf{i}_{L,u} + k_v^L \mathbf{i}_{L,v}, \tag{8}
$$

where $k_r^L$, $k_u^L$ and $k_v^L$ are the weights for reactive power, unbalanced power and void power, respectively; it is considered that $k_r^L = k_u^L = k_v^L = 1$ for full compensation.
We let $h_i \in [0, 1]$ be the relative amount of non-active power (reactive, unbalanced and void power) to be compensated by the $i$th PQC inside a PQC cluster. The required non-active power for full compensation is named $NA^{req}$, so $NA^{req} \sum h_i = \sum NA_i$. For the sake of simplicity, all of the PQCs in a cluster know the required non-active power to compensate. Based on that, the local control of each PQC determines its current reference autonomously as

$$
\mathbf{i}_i^{ref*} = \mathbf{i}_{P,i}^{ref} - h_i \mathbf{i}_L^{ref} = \mathbf{i}_{P,i}^{ref} - (n_i + z_i)\,\mathbf{i}_L^{ref}, \tag{9}
$$

where $\mathbf{i}_{P,i}^{ref}$ is a reference current for the active power supply ($P_i^{ref}$) given by an internal power loop, such as a maximum power point tracking (MPPT) algorithm. We note that the contribution $h_i$ is defined as $h_i = n_i + z_i$, with $n_i$ and $z_i$ compensating terms; in particular, $n_i$ compensates according to the non-active power-sharing, whereas $z_i$ compensates using a total contribution ($\sum h_i$) constraint. The formulation for obtaining $n_i$ and $z_i$ is described next.
_3.1. Consensus Algorithm for Non-Active Power-Sharing_
In order to achieve an egalitarian distribution of the compensating power between the PQCs in the same cluster, the non-active power supplied by each PQC is estimated based on [15,26] as

$$
NA_i = \sqrt{Q_i^2 + U_i^2 + D_i^2}\,. \tag{10}
$$
Then, the power-sharing can be achieved by means of the consensus protocol

$$
\dot{n}_i = c_n^{cl} \sum_{j=1}^{N} a_{ij} \left(\frac{NA_j}{S_j^{max}} - \frac{NA_i}{S_i^{max}}\right), \tag{11}
$$

where $S_i^{max}$ is the maximum apparent power of the PQC and $c_n^{cl} > 0$ is a control gain which modifies the dynamic response of the consensus in the $cl$th cluster. When (11) is applied in (9), it allows sharing the effort of the compensating power, $NA$, between the PQCs according to their maximum available power.
_3.2. Control Loop for Fulfilling the Required Compensation of Non-Active Power_
To guarantee a safe and adequate operation of the PQC cluster, a control loop related to meeting the required contribution $NA^{req}$ is proposed. The sole application of (11) could lead to a deterioration of the power quality at the PCC when the condition

$$
\sum h_i = 1 \tag{12}
$$
is not met. This could be avoided by knowing the cluster topology and, consequently, the initial conditions of any $h_i$ [30]. In [30], the authors calculated initial conditions ($h_{0i}$) to ensure that the consensus algorithm seamlessly achieves its control goal. However, such a calculation of initial parameters is not robust in the face of the connection/disconnection of PQCs. Alternatively, a feedback control loop can be designed to force the fulfilment of the condition $\sum h_i = 1$; to this end, the feedback control has the following input:

$$
u = \sum_{i=1}^{N} h_i - 1 = \bar{h} N - 1, \tag{13}
$$

where $\bar{h}$ is the time-varying average contribution value among the PQCs in a given cluster, and $N$ is the number of active PQCs in the cluster. The value $\bar{h}$ can be estimated through a distributed observer (see the definition of dynamic average consensus in [2,32]), whereas $N$ is given by design. $N$ can be obtained/updated by the communication protocol or a discovery method [33]; however, in this work, $N$ is assumed to be fixed. Thereafter, using (13), the following control loop is proposed:

$$
z_i = k_p^z \left(1 - \bar{h}_i N\right) + k_i^z \int_0^t \left(1 - \bar{h}_i N\right) dx, \tag{14}
$$
$$
\bar{h}_i = h_i + c_z \int_0^t \sum_{j=1}^{N} a_{ij} \left(\bar{h}_j - \bar{h}_i\right) dx, \tag{15}
$$

where $z_i$ is a compensating term used in (9) to adjust $h_i$, $k_p^z$ and $k_i^z$ are PI control parameters, $\bar{h}_i$ is the local estimate of the average value $\bar{h}$, and $c_z > 0$ is a consensus speed gain.
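The interplay between the sharing consensus (11), the PI loop (14) and the observer (15) can be sketched with a simple Euler simulation. All gains, ratings and the demanded power below are illustrative values, not the paper's tuned parameters. At steady state, the contributions sum to one and the ratios $NA_i/S_i^{max}$ equalize, so each contribution settles at $h_i = S_i^{max}/\sum_j S_j^{max}$.

```python
import numpy as np

# Three PQCs in one cluster on a line communication graph: 1 -- 2 -- 3.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
S_max = np.array([10.0, 20.0, 30.0])   # maximum apparent power of each PQC
NA_req = 12.0                          # total non-active power to compensate
N = len(S_max)

c_n, k_p, k_i, c_z = 0.5, 0.2, 1.0, 50.0   # consensus, PI and observer gains
dt, steps = 1e-3, 40000

def lap(x):
    """Consensus coupling sum_j a_ij (x_j - x_i) for every agent i."""
    return A @ x - A.sum(axis=1) * x

n = np.zeros(N)         # consensus state n_i, eq. (11)
z = np.zeros(N)         # PI output z_i, eq. (14)
z_int = np.zeros(N)     # PI integrator state
obs_int = np.zeros(N)   # observer integrator, eq. (15)

for _ in range(steps):
    h = n + z                        # contribution h_i = n_i + z_i
    h_bar = h + obs_int              # local estimate of the average contribution
    u = 1.0 - h_bar * N              # total-contribution error, cf. (13)
    z_int += dt * u
    NA = NA_req * h                  # non-active power taken by each PQC
    n += dt * c_n * lap(NA / S_max)  # power-sharing consensus, eq. (11)
    obs_int += dt * c_z * lap(h_bar)
    z = k_p * u + k_i * z_int        # PI loop, eq. (14)

h = n + z
```

Note that the undirected consensus preserves $\sum n_i = 0$, so the PI loop alone is responsible for driving $\sum h_i$ to one, exactly the separation of roles described above for $n_i$ and $z_i$.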
_3.3. Control Loop for Fulfilling Power Limit Constraints of PQCs_
Power limit constraints of the PQCs inside a cluster could be violated when applying
(9); for safe operation, fulfilment of the following condition is required:
$$
NA_i + P_i^{ref} < S_i^{max}. \tag{16}
$$
Hence, an additional compensation needs to be designed according to the disposable power of each PQC. The disposable power is calculated as $S_i^{disp} = S_i^{max} - P_i^{ref}$. Then, (11) is modified by changing $S_i^{max}$ to $S_i^{disp}$, and a compensation is added into (9) as follows:

$$
\mathbf{i}_i^{ref*} = \mathbf{i}_{P,i}^{ref} - (h_i - \delta h_i)\,\mathbf{i}_L^{ref} = \mathbf{i}_{P,i}^{ref} - (n_i + z_i - \delta h_i)\,\mathbf{i}_L^{ref}, \tag{17}
$$
$$
\dot{\delta h}_i = \frac{k_h^{cl}}{k_1} \max\left(0,\ \frac{NA_i - S_i^{disp}}{S_i^{disp}} + \frac{k_2}{k_h^{cl}}\,\delta h_i\right) - \frac{k_2}{k_1}\,\delta h_i, \tag{18}
$$

where $k_1$, $k_2$ and $k_h^{cl}$ are control parameters (see [34] for an example of the structure of (18)). The parameter $k_h^{cl}$ is related to $c_n^{cl}$, which depends on the cluster's communications. The term $\delta h_i$ is added to relax (13), reducing $h_i$ while avoiding the operation of a PQC above its available power capacity. $\delta h_i(0) = 0$ is defined as the initial condition. Also, we note that if $NA_i$ is greater than $S_i^{disp}$, then $\delta h_i > 0$; otherwise, $\delta h_i = 0$.
By applying (11), (14), (15), (17) and (18), the terms $n_i$ and $z_i$ initially ensure the non-active power consensus and the fulfilment of the total contribution constraint ($\sum h_i = 1$), respectively. However, when the demanded non-active power is greater than the PQC's power capacity, $\delta h_i$ commences to increase. Then, the parameter $n_i$ is recalculated according to the neighbours to maintain the non-active power-sharing. Consequently, the parameter $z_i$ is adjusted to ensure the condition $\sum h_i = 1$, i.e., compensating for the variation of $n_i$. As a result, $\sum h_i = 1$, whereas $\sum (h_i - \delta h_i) < 1$, so the actual non-active power delivered by the PQC is reduced to fulfil the maximum power capacity.
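The behaviour of the relaxation term (18) can be checked with a quick Euler integration in which the delivered non-active power is coupled back as $NA_i = NA^{req}(h_i - \delta h_i)$; the gains and ratings below are illustrative. $\delta h_i$ grows only while the demand exceeds the disposable power and settles where $NA_i$ meets $S_i^{disp}$.

```python
def delta_h_step(dh, NA_i, S_disp, k1, k2, k_h, dt):
    """One Euler step of the power-limit relaxation dynamics, eq. (18)."""
    inner = max(0.0, (NA_i - S_disp) / S_disp + (k2 / k_h) * dh)
    return dh + dt * ((k_h / k1) * inner - (k2 / k1) * dh)

# Closed loop: the delivered non-active power shrinks as dh grows.
NA_req, h_i, S_disp = 10.0, 0.5, 4.0      # demand 5 vs 4 units disposable
k1, k2, k_h, dt = 1.0, 0.5, 2.0, 0.01
dh = 0.0
for _ in range(5000):
    dh = delta_h_step(dh, NA_req * (h_i - dh), S_disp, k1, k2, k_h, dt)
# Equilibrium: NA_req * (h_i - dh) = S_disp, i.e. dh = 0.1
```

With the demand removed ($NA_i < S_i^{disp}$ and hence a zero `inner` term), the leak $-(k_2/k_1)\,\delta h_i$ drives $\delta h_i$ back to zero, matching the one-sided behaviour stated after (18).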
**Remark 2. To handle over-currents, saturations need to be included in the integrators with an**
_appropriate anti-windup._
_3.4. Stability Analysis_
Let us assume MG operation inside the power limits ($\delta h_i = 0\ \forall i$). By combining the time derivatives of $h_i$ and $\bar{h}_i$, the control protocol for the cluster coordination can be expressed as

$$
\dot{h}_i = \frac{k_i^z N}{1 - k_p^z N}\left(\frac{1}{N} - \bar{h}_i\right) + \frac{c_z}{1 - k_p^z N}\sum_{j=1}^{N} a_{ij}\left(\bar{h}_j - \bar{h}_i\right) + \frac{c_n^{cl}\, NA^{req}}{1 - k_p^z N}\sum_{j=1}^{N} a_{ij}\left(\frac{h_j}{S_j^{max}} - \frac{h_i}{S_i^{max}}\right). \tag{19}
$$
It is worth noting that $NA_i = NA^{req} h_i$, where $NA^{req}$ is the total amount of required non-active power. If the dynamics of the consensus of $NA_i$ are sufficiently slow, i.e., the value of $c_n^{cl}$ is small, then (19) can be viewed and analyzed as a first-order consensus with leader units (all units in this case) approaching the reference signal $1/N$ (see [35] for a complete analysis of such systems). Then, the system is asymptotically stable as long as there is a spanning tree in the communication graph.
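The claimed leader-consensus behaviour of (19) is easy to verify numerically: with $\delta h_i = 0$ and a slow consensus term, every state is pulled toward the reference $1/N$ while diffusing toward its neighbours. The graph, gains and initial states below are illustrative, not taken from the paper.

```python
import numpy as np

# Three units on a line graph; each acts as a "leader" tracking the reference 1/N.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
N, k, c, dt = 3, 2.0, 1.0, 0.01
x = np.array([0.6, 0.1, 0.3])
for _ in range(3000):
    # leader term (pull to 1/N) + consensus term (diffusion over the graph)
    x = x + dt * (k * (1.0 / N - x) + c * (A @ x - A.sum(axis=1) * x))
```

Since the system matrix $kI + cL$ is positive definite for any connected graph, all states converge to $1/N$ regardless of the initial conditions, which is the asymptotic stability claimed above.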
**4. Design of Cooperative Control for Clusters of Power Quality Compensators**
The coordination of clusters (groups) of PQCs in an MG can be realized by sharing
the compensating current components estimated by CPT transformation. This can be
implemented by adding communication links that communicate a measurement equipment
at the PCC with designated leader units at each PQC cluster. In this way, other clusters in
the MG might support the power quality correction at the PCC.
Then, weighting factors are used to regulate the trade-offs between the local compensation (near the node of connection of the cluster) and the grid-side PCC. These trade-offs could minimize both the power flow in the distribution lines and, eventually, the distribution losses.
**Remark 3. As the shared compensating currents are instantaneous variables sensitive to distur-**
_bances, the transformation of currents to power commands and vice versa can be performed to_
_improve the robustness of the system, as described in [36]._
_4.1. Multi-Purpose Compensation of PQCs for Power Quality Improvement in PCC and Local Node_
Once the leader of each cluster of PQCs receives the data from the PCC, it re-transmits
the data to the PQCs inside the cluster. With this, each PQC executes a control action
using the compensating current components measured from the PCC. These compensating
currents are weighted in the process according to parameters that are designed in the rest
of this section.
Based on (17), a compensating current component from the PCC can be incorporated as

$$
\mathbf{i}_i^{ref*} = \mathbf{i}_{P,i}^{ref} - (h_i - \delta h_i)\left(\mathbf{i}_{G,i}^{ref} + \mathbf{i}_{L,i}^{ref}\right), \tag{20}
$$

where the currents $\mathbf{i}_{G,i}^{ref}$ and $\mathbf{i}_{L,i}^{ref}$ correspond to generated references relating to the compensations in the grid and the local node, respectively. The component $\mathbf{i}_{L,i}^{ref}$ is obtained following (8), whereas $\mathbf{i}_{G,i}^{ref}$ is calculated by a current controller that distinguishes CPT components.
The proposed current controller is summarized in matrix form as follows:

$$
\left[\mathbf{i}_{G,i}^{ref}(t)\right]_{3\times 1} = \left(1 - e^{-\omega_F t}\right)\left(\left[\mathbf{k}_p^u\right]_{1\times 3} \odot \begin{bmatrix} \mathbf{i}_{G,r}^b(t-\tau) \\ \mathbf{i}_{G,u}(t-\tau) \\ \mathbf{i}_{G,v}(t-\tau) \end{bmatrix}^{T}_{3\times 3} + \int_0^t \left[\mathbf{k}_i^u\right]_{1\times 3} \odot \begin{bmatrix} \mathrm{avg}\!\left(\mathbf{i}_{G,r}^b(x-\tau)\right) \\ \mathrm{avg}\!\left(\mathbf{i}_{G,u}(x-\tau)\right) \\ \mathrm{avg}\!\left(\mathbf{i}_{G,v}(x-\tau)\right) \end{bmatrix}^{T}_{3\times 3} dx\right) \cdot \left[\mathbf{k}^G\right]^{T}_{3\times 1}, \tag{21}
$$

where $\mathbf{i}_{G,i}^{ref} = (i_{G,i,a}^{ref}, i_{G,i,b}^{ref}, i_{G,i,c}^{ref})$; $\mathbf{i}_{G,r}^b$, $\mathbf{i}_{G,u}$ and $\mathbf{i}_{G,v}$ are the three-phase CPT current components measured from the PCC; $\omega_F$ is a low-pass filter frequency used for a “soft start” of the current injection; and $\mathbf{k}_p^u = (k_{p,r}^u, k_{p,u}^u, k_{p,v}^u)$ and $\mathbf{k}_i^u = (k_{i,r}^u, k_{i,u}^u, k_{i,v}^u)$ are vectors of control gains. The average function is defined as $\mathrm{avg}(\cdot) := \frac{1}{T}\int_0^T (\cdot)\,dt$ (the average of the samples along $T$), and it is applied per phase. Also, $\odot$ is the Hadamard product, and $\mathbf{k}^G = (k_r^G, k_u^G, k_v^G)$, where $k_r^G$, $k_u^G$ and $k_v^G$ have the same purpose as the ones in (8).
All the control parameters in (21) need to be the same for each PQC of a cluster. Further
details about the method to select the coefficients in k[G] are given in the next subsection.
**Remark 4.** _The use of a current controller that performs integration for estimating $\mathbf{i}_{G,i}^{ref}$ is proposed because it adds a layer of robustness under parameter uncertainties and delays whilst it permits a defined control bandwidth, decoupling the control effort from the local compensation._
**Remark 5. The notation of time dependency is included in (21) to highlight the phenomenon of**
_transport delay (τ)._
_4.2. Adaptive Weightings for Trade-Off between Grid and Local Power Quality Regulation_
The control proposed in (20) and (21) inherently reduces the power quality of the adjacent distribution lines by injecting distorted and unbalanced currents. Moreover, the introduction of such currents by a distant cluster of PQCs increases the power losses and may jeopardize the power quality of local loads in the path to the PCC (propagating harmonic [20] and unbalanced components), producing the so-called “whack-a-mole” effect [24]. To regulate this issue, a dynamic adjustment of $k_r^G$, $k_u^G$ and $k_v^G$ is proposed.
The proposed adjustments of CPT weights are realized according to the measured
power quality indexes (PQIs). Based on the CPT current/power components, various
performance indexes were analyzed in [26,30]. The main advantage of handling the CPT’s
factors to evaluate the power quality of the MG, instead of conventional PQIs, is that
the so-called load conformity factors are concentrated on the load characteristics and
not just on the current waveforms. Moreover, they represent the impact of each power
quality disturbance on the load by correlating three-phase variables collectively, instead of
single-phase-equivalent variables. Therefore, as discussed in [37], the MG operation can be
characterized using the CPT’s factors, i.e.,
- General power factor:

$$
\lambda = \frac{I_a^b}{I}. \tag{22}
$$

- Reactivity factor:

$$
\lambda_Q = \frac{I_r^b}{\sqrt{(I_a^b)^2 + (I_r^b)^2}}. \tag{23}
$$

- Unbalance factor:

$$
\lambda_U = \frac{I_u}{\sqrt{(I_a^b)^2 + (I_r^b)^2 + (I_u)^2}}. \tag{24}
$$

- Distortion factor:

$$
\lambda_D = \frac{I_v}{\sqrt{(I_a^b)^2 + (I_r^b)^2 + (I_u)^2 + (I_v)^2}}. \tag{25}
$$
From a practical point of view, the measurement of these indexes is performed at an
arbitrary line “l” of the MG system (in the path between the PQC’s cluster and the PCC).
Therefore, the proposed dynamics for the CPT weights are the following:

$$
k_r^G = k_{r0} - \delta k_r, \qquad \dot{\delta k}_r = k_{kr,l}^{cl}\, \max\left(0,\ \lambda_{Q,l} - \lambda_{Q,l}^{max}\right), \tag{26}
$$
$$
k_u^G = k_{u0} - \delta k_u, \qquad \dot{\delta k}_u = k_{ku,l}^{cl}\, \max\left(0,\ \lambda_{U,l} - \lambda_{U,l}^{max}\right), \tag{27}
$$
$$
k_v^G = k_{v0} - \delta k_v, \qquad \dot{\delta k}_v = k_{kv,l}^{cl}\, \max\left(0,\ \lambda_{D,l} - \lambda_{D,l}^{max}\right), \tag{28}
$$

where $\lambda_{Q,l}^{max}$, $\lambda_{U,l}^{max}$ and $\lambda_{D,l}^{max}$ are the maximum allowable PQI factors defined by Line $l$. The parameters $k_{kr}^{cl}$, $k_{ku}^{cl}$ and $k_{kv}^{cl}$ should be selected according to the PQC cluster and the selected Line $l$; in this case, they are assumed to be equal for all clusters. The initial values $k_{r0}$, $k_{u0}$ and $k_{v0}$ can be selected according to the MG topology to avoid further deterioration of the power quality in specific lines and nodes; for the sake of simplicity, these initial values are assumed to be unitary.
**Remark 6. Note that (26)–(28) could have the same form as (18); however, to avoid control coupling**
_between clusters (especially when time delays exist), k2 = 0 is preferred in the design._
The application of (11)–(18) and (20)–(28) is summarized in Figure 2, which represents the control of a PQC in a generic MG system, where the PQCs, loads and sensors are arbitrarily placed. It should be noted that the measured CPT components are sent in packages named $\Psi_L$ and $\Psi_G$ for the load-side and grid-side measurements, respectively.
**Figure 2. Schematic representation of the proposed control loops for PQCs.**
To exemplify the operation of a PQC, a summary flowchart is provided in Figure 3.
**Figure 3. Summary flowchart of the main operation of PQCs.**
**5. Case Study**
The performance of the control scheme proposed for the clusters of PQCs is evaluated through experiments in a simulated environment. The simulations are carried out in the PLECS software [38], version 4.5.9, using a discrete solver with a sample step of 200 (µs). The simulated model is described next.
_5.1. Microgrid Model_
The simulated MG is depicted in Figure 4. It contains four nodes, two balanced loads,
three unbalanced loads, and three clusters of PQCs. The unbalanced loads are resistive–
inductive and star-connected with a neutral wire. In particular, Unbalanced load #1 has
a diode in the c-phase to produce distortion in the current waveforms (mainly a DC
component plus a double frequency harmonic). Each PQC is modelled as an ideal current
source. The grid is represented as an ideal three-phase voltage source with a fundamental
frequency of 50 (Hz) and a voltage amplitude of 220 (Vrms/ph). There are three PQCs
composing Cluster #1, two PQCs composing Cluster #2 and only one PQC in Cluster #3.
[Figure 4 shows the four-node MG: the 220 (Vrms), 50 (Hz) grid at Node #1, with Cluster #1 (PQC #1, PQC #2 and PQC #3), Cluster #2 (PQC #5 and PQC #6), and Cluster #3 (PQC #4) distributed among Nodes #2–#4.]

**Figure 4. Single-line diagram of the studied MG system used for simulations.**
The parameters of the electrical system are summarized in Table 2 while the control
parameters are summarized in Table 3.
**Table 2. Electrical parameters for simulation.**

| Variable | Value | Variable | Value | Variable | Value |
|---|---|---|---|---|---|
| R_line 1–2 | 0.3 (Ω) | L_line 1–2 | 1.0 (mH) | R^3ph_Load 1 | 150 (Ω) |
| R_line 2–3 | 0.1 (Ω) | L_line 2–3 | 0.3 (mH) | R^3ph_Load 3 | 100 (Ω) |
| R_line 2–4 | 0.3 (Ω) | L_line 2–4 | 1.0 (mH) | | |
| R^a_Load 1 | 10 (Ω) | R^b_Load 1 | 15 (Ω) | R^c_Load 1 | 7 (Ω) |
| L^a_Load 1 | 10 (mH) | L^b_Load 1 | 15 (mH) | L^c_Load 1 | 15 (mH) |
| R^a_Load 3 | 40 (Ω) | R^b_Load 3 | 56 (Ω) | R^c_Load 3 | 36 (Ω) |
| L^a_Load 3 | 15 (mH) | L^b_Load 3 | 15 (mH) | L^c_Load 3 | 15 (mH) |
| R^a_Load 4 | 100 (Ω) | R^b_Load 4 | 150 (Ω) | R^c_Load 4 | 100 (Ω) |
| L^a_Load 4 | 1 (mH) | L^b_Load 4 | 1 (mH) | L^c_Load 4 | 1 (mH) |
The communication layer of PQCs is constructed considering the bidirectional flow of
information. The adjacency matrices for Clusters #1, #2, and #3 are
$$A_1 = \begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{bmatrix}, \qquad A_2 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \qquad A_3 = \begin{bmatrix} 0 \end{bmatrix}.$$
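As a quick sanity check of these topologies, the sketch below runs standard average consensus over the Cluster #1 graph. This is illustrative Python, not the PLECS model; the gain and step size are arbitrary.

```python
import numpy as np

# Adjacency matrix of Cluster #1 (three fully connected PQCs).
A1 = np.array([[0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]], dtype=float)

def consensus_step(x, A, gain, dt):
    """One Euler step of x_i' = gain * sum_j a_ij (x_j - x_i)."""
    L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
    return x - dt * gain * (L @ x)

x = np.array([0.9, 0.4, 0.2])        # arbitrary initial states
avg = x.mean()                       # symmetric graph: the average is preserved
for _ in range(2000):
    x = consensus_step(x, A1, gain=4.0, dt=0.001)
# x has converged to the initial average on all three agents
```

Because the adjacency matrices are symmetric (bidirectional links), the agents converge to the average of their initial states regardless of those initial values, which is the property the distributed controller relies on.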
**Table 3. Control parameters for simulation.**

| Variable | Value | Variable | Value | Variable | Value |
|---|---|---|---|---|---|
| S^max_1 | 1.5 (kVA) | S^max_2 | 1.5 (kVA) | S^max_3 | 1.0 (kVA) |
| S^max_4 | 1.7 (kVA) | S^max_5 | 1.5 (kVA) | S^max_6 | 2.0 (kVA) |
| P^ref_1 | 0.3 (kW) | P^ref_2 | 0.2 (kW) | P^ref_3 | 0.2 (kW) |
| P^ref_4 | 0.5 (kW) | P^ref_5 | 0.5 (kW) | P^ref_6 | 0.5 (kW) |
| λ^max_Q,l | 0.50 | λ^max_U,l | 0.20 | λ^max_D,l | 0.20 |
| c^1_n | 4.00 | c^2_n | 10.00 | c^3_n | 1.00 |
| c_z | 3.00 | k^z_p | 0.00 | k^z_i | 14.64 |
| k^1_h | 0.18 | k^2_h | 0.75 | k^3_h | 2.00 |
| k^1_kr,l | 2.00 | k^1_ku,l | 2.00 | k^1_kv,l | 0.60 |
| k^u_p,r | 2.00 | k^u_p,u | 2.00 | k^u_p,v | 0.60 |
| k^u_i,r | 24.66 | k^u_i,u | 24.66 | k^u_i,v | 2.46 |
| ω_F | 8.97 (rad/s) | k_1 | 0.20 | k_2 | 100 |
_5.2. Performance Tests_
To evaluate the performance of the controllers during simulations, different scenarios
are used; they are described as follows.
5.2.1. Case 1. Multi-Mode Operation
The MG is initially operated without power quality compensation; then, at time t = 3 (s), the local PQC control is activated but without power boundary saturations. Power saturations are activated at t = 6 (s). After that, at t = 9 (s), the cooperative control between clusters of PQCs is activated. Finally, at t = 12 (s), the restriction based on PQIs (λQ,l, λU,l and λD,l) is put into action.
5.2.2. Case 2. Communication Issues within a Cluster of PQCs
This case is divided into two tests. In contrast to Case 1 and Case 3, these tests do not include the activation of (18) or (21). In the first test, the performance of the control strategy is analyzed when a PQC loses its communication before and after the activation of the maximum power constraints (at t = 5 (s) and t = 8 (s), respectively); during the communication failure, the parameter N is not updated, in order to represent the worst-case scenario. The second test concerns transport time delays: delays are introduced in the communication links between neighbouring PQCs within a cluster. Delay (τ) values of 0, 100, and 200 (ms) are tested. These values are selected because of their gradual proximity to the convergence rate in the design of (11), which is close to 500 (ms) for Cluster #1 and 1500 (ms) for Cluster #2 when using Table 3.
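To make the delay scenario concrete, a minimal two-agent consensus (the Cluster #2 topology) with a transport delay on the neighbour channel can be sketched as follows; the gain, step size, and delays are illustrative placeholders, not the tuned values of Table 3.

```python
import numpy as np
from collections import deque

def run_consensus(delay_steps, n_steps=20000, dt=0.001, gain=4.0):
    """Two-agent consensus where each agent sees its neighbour's state
    delayed by `delay_steps` samples (delay_steps * dt seconds)."""
    x = np.array([1.0, 0.2])
    hist = deque([x.copy()] * (delay_steps + 1), maxlen=delay_steps + 1)
    for _ in range(n_steps):
        xd = hist[0]  # neighbour information as seen through the delayed link
        x = x + dt * gain * np.array([xd[1] - x[0], xd[0] - x[1]])
        hist.append(x.copy())
    return x

x_no_delay = run_consensus(0)    # agrees at the initial average
x_delayed = run_consensus(200)   # 200 ms delay: slower, but still converges
```

For delays small relative to the consensus bandwidth the agents still reach agreement, consistent with the gradual-proximity rationale above; a large enough delay would instead destabilize the loop.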
5.2.3. Case 3. Communication Issues in PQI Compensation
This test builds on the conditions of Case 1, adding a constant transport delay to the communication links between clusters and to the PQI measurement on Line 1–2. Delay values of 0, 100 and 200 (ms) are tested.
**6. Results and Discussions**
_6.1. Case 1_
The results of Case 1 are shown in Figures 5 and 6. Different variables are grouped
and shown to better understand the system’s behavior at various stages of the test. Figure 5
focuses on showing the control variables, whilst Figure 6 shows the effects in the system’s
currents and PQIs.
In Figure 5a, we can see the behavior of the consensus variable NAi. Before t = 3 (s), the
non-active powers are close to zero since the PQCs are disabled, i.e., the power converters
only provide active power to the MG. In the same time span, Figure 5b–d show the behavior
of hi, ∑hi and δhi, where all the variables also keep zero values. During t ∈ (3, 6), the local
power quality compensation is activated in each PQC. As a result, Figure 5a shows that
each cluster of PQCs achieves non-active power consensus after a small transient of around
1 (s). However, the values of NAi in Cluster #1 exceed their maximum disposable capacity Si^disp (there is overloading). This condition might damage the converters unless hardware protection is provided (which would shut down the converters of Cluster #1). Figure 5b shows that the dynamics of the h coefficients follow the same bandwidth as the non-active powers. Also, different steady-state values are reached. It can be seen from Figure 5c that the sum of the h coefficients inside a cluster is equal to one at almost all times.
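The overloading of Cluster #1 can be checked numerically; assuming, hypothetically (the definition of Si^disp appears in the paper's earlier sections), that the disposable capacity is the VA headroom Si^disp = sqrt((Si^max)² − (Pi^ref)²), the Table 3 ratings give:

```python
import math

# Cluster #1 ratings from Table 3, in VA and W.
ratings = {1: (1500.0, 300.0), 2: (1500.0, 200.0), 3: (1000.0, 200.0)}

# Assumed headroom formula: S_disp = sqrt(S_max^2 - P_ref^2).
s_disp = {i: math.sqrt(s**2 - p**2) for i, (s, p) in ratings.items()}
for i, v in sorted(s_disp.items()):
    print(f"PQC #{i}: disposable capacity = {v:.0f} VA")
```

Under this assumption, any sustained non-active power much above roughly 1.0–1.5 (kVA) per unit exceeds the headroom, which matches the overload seen before the saturations are enabled.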
**Figure 5. Measured variables during simulation of Case 1. (a) Non-active powers of PQCs. (b) Contribution of PQCs (hi). (c) Cumulative contribution of PQC clusters (fulfilment of (12)). (d) Power limit compensation (fulfilment of (16)).**
This inconvenient operation exceeding the maximum capacity of the converters is corrected after t = 6 (s), where saturation is imposed by (17) and (18). Figure 5a depicts that the non-active powers of Cluster #1 stabilize at their maximum disposable capacity, whereas the other clusters remain unchanged. This means there is a reduction in the overall power quality compensation of the system during t ∈ (6, 9). Figure 5b shows small changes in
the PQC contributions, whereas Figure 5d illustrates an increase in the deviations in Cluster #1 caused by the power limit constraint. It is worth noting that the effects caused by δh are not reflected in Figure 5c, consistent with the information routing described in Figure 2, which ensures the decoupling between (13) and (18).
After t = 9 (s), the cooperative grid-side compensation is activated, where Cluster #2
and Cluster #3 receive measurements from the PCC. The non-active powers of Cluster #2
and Cluster #3 depicted in Figure 5a increase accordingly, and the consensus inside all the
clusters holds during the transient (around 2 (s) for Cluster #2 and 0.5 (s) for Cluster #3).
Figure 5b shows a negligible variation in the PQC contribution whereas Figure 5d shows a
pronounced increment in the compensation for power saturation of PQC #4 (yellow line).
This increment of δh4 is concordant with the saturation of Cluster #3 seen in Figure 5a.
In the final stage of the test, the adaptive weightings are activated at t = 12 (s). As
expected, in Figure 5a, a reduction can be seen in the amount of non-active power that Cluster #2 and Cluster #3 provide to compensate for the power quality at the PCC. Figure 5b,c
remain unchanged, which means that the dynamics of the adjustable parameters based on
PQIs are decoupled from the control loops of Constraints (12) and (16). Also, the power
limit correction of Cluster #3 is reduced to zero, as shown in Figure 5d. This is expected
since the non-active power contribution of Cluster #3 is reduced by adjustable parameters.
The behavior of the currents is shown in Figure 6a,c. It can be seen that after the
activation of the PQCs, at t = 3 (s), the grid side current is almost immediately free from
imbalance and distortion. Also, the currents in distribution line Line 1–2 shown in Figure 6c
exhibit an appropriate dynamic (no distortions are introduced into the distribution lines).
Figure 6b,d depict the variations in power quality during the simulation test transitions;
these figures show that PQIs improve after the activation of PQCs (t ∈ (3, 6)). However, after t = 6 (s), the control loop of (18) drives Cluster #1 to saturation, provoking a lack of
compensation at the PCC (Node #1); imbalance and distortion can be seen in Figure 6a
during this time span. Also, Figure 6c shows an expected unchanged behavior of Line 1–2,
where there are no additional current injections/consumptions. Overall, during t ∈ (3, 9), an appropriate behavior of the proposed controller is depicted in Figure 6, achieving all its
control goals while abiding by the power constraints.
After t = 9 (s), Clusters #2 and #3 start compensating the PCC, incidentally distorting
the distribution lines, as shown in current waveforms of Figure 6a,c and their corresponding
PQIs in Figure 6b,d. It can be seen that the combined PQC clusters almost compensate
the same amount that the initial local cluster (PQC Cluster #1) did during t ∈ (3, 6). It is worth noting that after t = 9 (s), the grid side noticeably reduces its current imbalance and
distortion, whereas other distribution lines (like Line 1–2) slightly decrease their power
quality; the proportion of power quality compensation of PQC clusters greatly depends
on the adjustment of the CPT weights of (27) and (28) and on the PQC cluster location. After
the application of the PQI restrictions, at t = 12 (s), it can be seen in Figure 6d that the
distortion factor in Line 1–2 drops to the maximum allowable value λ^max_D,l = 0.2 after a transient of nearly 3 (s), which inevitably deteriorates the power quality on the grid side. Therefore, the selection of Line l, as well as its associated PQI values (λQ,l, λU,l, λD,l), is sensitive with respect to the grid-side power quality.
**Figure 6. Measured variables during simulation of Case 1. (a) Currents in grid side. (b) PQIs in grid side. (c) Currents in Line 1–2. (d) PQIs in Line 1–2.**
_6.2. Case 2_
6.2.1. Communication Link Failure inside PQC Cluster
In this case, Figure 7 shows the performance of (11) under the disconnection of PQC #3 of Cluster #1. Seamless operation during the communication failure of PQC #3 (t ∈ (5, 6)) can be seen (a negligible error at this point). After the disturbance at t = 6 (s), as the power redistributes according to the maximum power limits, the PQCs stabilize their values in close proximity. In particular, before the reconnection, there is an error of 1.88%; this error vanishes rapidly (in around 0.6 (s)) after the reconnection at t = 8 (s), as can be seen in the highlighted zoom. From the simulation data, it can be inferred that the outdated value of N does not have a significant impact on power quality. Also, there is no disturbance in the
consensus final value as long as a spanning tree is guaranteed in the communication graph
of the cluster.
**Figure 7. Non-active powers of Cluster #1 during simulation of communication failure in Case 2.**
6.2.2. Communication Delay inside PQC Cluster
The results showing the effect of time delays in Case 2 are presented in Figure 8. The activation time of (18) is changed to t = 8 (s) for this test. Also, for a clear contrast between tests, only the dynamics of PQC #2 are shown. Similarly, only λD is shown, since it is the index with the greatest variation.
Before t = 8 (s), it can be seen in Figure 8a–c that the delay produces steady-state errors proportional to it. This is mainly due to the observer of (15), whose initial conditions hj = 0 (needed for convergence [32]) cause hi to keep integrating until the measurement from the neighbouring units arrives. This problem is detailed and reported in [39]. Fortunately, there are solutions reported in the literature, such as using a selective anti-windup [40], reducing cz while slowing down the dynamics of the observer, or using a robust feedback consensus loop [32].
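As an illustration of the first class of remedies, a minimal PI step with clamping (conditional-integration) anti-windup can be sketched; the gains and limits below are arbitrary placeholders, not the selective anti-windup scheme of [40].

```python
def make_pi(kp, ki, u_min, u_max, dt):
    """PI controller step with clamping (conditional-integration) anti-windup."""
    state = {"integral": 0.0}

    def step(error):
        u_unsat = kp * error + ki * state["integral"]
        u = min(max(u_unsat, u_min), u_max)
        # Integrate only when unsaturated, or when the error drives the
        # output back out of saturation.
        pushing_further = (u_unsat > u_max and error > 0) or \
                          (u_unsat < u_min and error < 0)
        if not pushing_further:
            state["integral"] += error * dt
        return u

    return step

pi = make_pi(kp=2.0, ki=24.66, u_min=-1.0, u_max=1.0, dt=0.001)
outputs = [pi(5.0) for _ in range(1000)]  # large sustained error: output clamps
```

Freezing the integrator while the output is saturated in the direction of the error prevents the unbounded integration that the delayed neighbour measurements would otherwise cause.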
After t = 8 (s), the saturation loop in (18) is activated, which limits the contribution NAi of each PQC. In this particular case, since NA^req is greater than the combined power of Cluster #1, the error of (15) does not affect the MG, since the cluster is not able to “overcompensate”. Additionally, it is important to highlight that, internally, the δhi compensations are applied after the calculation of hi, so each PQC is able to reach non-active power consensus and equilibrium in (16) despite (12) not being met by the miscalculated hi.
In Figure 8d–f, the results are presented implementing the anti-windup with reset scheme proposed in [40]. The steady-state error with respect to the τ = 0 curves in Figure 8e is reduced to below 2%, which guarantees that the proposed strategy produces accurate measurements of the individual contribution of PQCs. In Figure 8d,f, appropriate power quality compensation is shown; in particular, for τ = 0.2, the anti-windup produces a 58% reduction in transient overshoot but a slight increase in settling time. Overall, the steady-state values in Figure 8d,f remain the same as those in Figure 8a,c after t = 8 (s). Therefore, the application of a time-delay-robust method is optional and only necessary when the contributions hi are required as measurements for a decision-making process.
_6.3. Case 3_
The results are shown in Figures 9 and 10. In Figure 9, the non-active powers of the PQCs are shown when subjected to a delay of 0.2 (s) in the data received for executing (26)–(28). It can be seen, after the activation of the adjustable PQIs at t = 12 (s), that the non-active power contributions of Cluster #2 and Cluster #3 suffer minor changes in their waveforms (transient states) when compared with the results presented in Figure 5a for Case 1. In particular, the system damping is decreased and the convergence is slightly slower (≈0.15 (s) slower for an error band of 2%).
**Figure 8. Effect of delays in Cluster #1 during simulation of Case 2. (a) Non-active power of PQCs. (b) Cumulative contribution of PQC cluster (fulfilment of (12)). (c) Harmonic distortion factor at grid side. (d) Non-active power using anti-windup. (e) Cumulative contribution of the PQC cluster (fulfilment of (12)) using anti-windup. (f) Harmonic distortion factor at grid side using anti-windup.**
**Figure 9. Non-active powers of PQCs during simulation of Case 3.**
The change in PQIs regarding time delays in the dynamics of (21) is shown in detail in
Figure 10. Here, the simulation time is extended by 1.5 (s) and the PQI control is enabled at
_t = 14 (s), i.e., two seconds later, for a better visualization of the delay phenomenon. As the_
harmonic distortion is the only variable exceeding the defined PQI limits for Line 1–2, it
should be enough to analyze only this waveform. However, for completeness, the other
PQIs are displayed. From Figure 10a, it can be seen that the delay in the communication of the PQIs causes a minimal deterioration in the transient state. Figure 10b,c present the unchanged status of the reactivity and unbalance factors, as expected, since they are inside the defined threshold. Because the updating of Loops (26)–(28) is slow (i.e., small values of k^cl_kr,l, k^cl_ku,l and k^cl_kv,l), control coupling (and hence oscillations) is avoided; thus, the system can operate satisfactorily in the face of relatively large transport delays.
**Figure 10. Effect of delays in Line 1–2 during simulation of Case 3. (a) Harmonic distortion factor. (b) Reactivity factor. (c) Unbalance factor.**
**7. Conclusions**
The proposed methodology deals with the coordination of multiple converters in an MG to act as PQCs that improve the power quality at selected buses and the PCC. On the one hand, the proposed distributed controller provides a flexible approach to synchronising the actions of several PQCs without prior knowledge of initial conditions. On the other hand, the proposed CPT weights handle the trade-offs between local and grid-side compensation by means of PQIs.
Results show that the distributed control scheme proposed in this work is able to
satisfactorily coordinate PQCs in a cluster whilst permitting other clusters of PQCs to
support the power quality compensation at the PCC, where convergences in the range of
0.5–1.0 (s) within clusters and 1.0–3.0 (s) for multi-cluster cooperations are seen according
to the designed control bandwidths. It can also be seen from simulations that the control
effort is distributed and that the proposed consensus dynamics do not adversely affect
the transient-state operation when communication issues exist. For communication losses inside a cluster, there is a steady-state error of <2% if no action is taken on the control parameters. For time delays inside a cluster, settling times increase by over 1.5 times for a delay of 200 (ms), whereas steady-state values remain unchanged. In the case of delays between line measurements and a cluster of PQCs, the settling times increase by 7%, with the steady-state value unaffected, when 200 (ms) of time delay is considered. Therefore, the proposed strategy is reliable for common communication issue scenarios.
The results allow concurrent analysis and design of the distributed compensation
system and a cooperative operation of multiple compensators acting in the same MG.
Future work on the proposed method will address the resiliency of
communications. In this regard, it will be relevant to study cyber attacks on the PQCs, since they would likely deteriorate the power quality of the MG. Also, the extension to isolated and hybrid MG topologies will bring new insights into the full potential of the proposed methods.
**Author Contributions: Conceptualization, C.B.-M., H.K.M.-P. and M.M.-G.; methodology, M.M.-G.**
and C.B.-M.; software, C.B.-M. and M.M.-G.; validation, M.M.-G. and A.K.V.; formal analysis,
M.M.-G.; investigation, C.B.-M. and M.M.-G.; resources, C.B.-M., M.M.-G. and J.S.G.; data curation, M.M.-G. and A.K.V.; writing—original draft preparation, C.B.-M., H.K.M.-P. and J.S.G.;
writing—review and editing, C.B.-M., H.K.M.-P., M.M.-G., J.S.G., A.K.V. and J.P.B.; visualization,
M.M.-G. and J.S.G.; supervision, C.B.-M., H.K.M.-P. and J.P.B.; project administration, C.B.-M. and
J.S.G.; funding acquisition, J.S.G., C.B.-M., H.K.M.-P. and M.M.-G. All authors have read and agreed
to the published version of the manuscript.
**Funding: This research was funded in part by the National Agency for Research and Development**
(ANID) under grant ANID PIA ACT192013, in part by the National Council for Scientific and
Technological Development (CNPq) under grant 309297/2021-4; and in part by the Sao Paulo Research
Foundation (FAPESP) under grant 2022/15423-3. J.S. Gómez acknowledges the support of UNAB
Regular funds (project DI-02-23/REG). M. Martínez-Gómez acknowledges the support of ANID
under the grant ANID-Becas/Doctorado Nacional 2019-21191757. J.P. Bonaldo acknowledges the
support of Mato Grosso Research Foundation under grant FAPEMAT.0001047/2022.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: No new data were created or analyzed in this study. Data sharing is**
not applicable to this article.
**Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design**
of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or
in the decision to publish the results.
**Abbreviations**
The following abbreviations are used in this manuscript:
MG Microgrid
NCRE Non-conventional renewable energy
LV Low voltage
DFACTS Distributed flexible AC transmission systems
SAPF Shunt active power filter
STATCOM Static synchronous compensator
PQC Power quality compensator
PCC Point of common coupling
CPT Conservative power theory
VA Volt-ampere
DSC Delayed Signal Cancellation
MPPT Maximum power point tracking
PQI Power quality index
**References**
1. Schwaegerl, C.; Tao, L. The Microgrids Concept. In Microgrids; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2013; Chapter 1;
[pp. 1–24. [CrossRef]](http://doi.org/10.1002/9781118720677.ch01)
2. Espina, E.; Llanos, J.; Burgos-Mellado, C.; Cárdenas, R.; Martínez-Gómez, M.; Sáez, D. Distributed control strategies for
[microgrids: An overview. IEEE Access 2020, 8, 193412–193448. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.3032378)
3. Olivares, D.E.; Mehrizi-Sani, A.; Etemadi, A.H.; Cañizares, C.A.; Iravani, R.; Kazerani, M.; Hajimiragha, A.H.; Gomis-Bellmunt,
[O.; Saeedifard, M.; Palma-Behnke, R.; et al. Trends in microgrid control. IEEE Trans. Smart Grid 2014, 5, 1905–1919. [CrossRef]](http://dx.doi.org/10.1109/TSG.2013.2295514)
4. Hernández-Mayoral, E.; Madrigal-Martínez, M.; Mina-Antonio, J.D.; Iracheta-Cortez, R.; Enríquez-Santiago, J.A.; Rodríguez-Rivera, O.; Martínez-Reyes, G.; Mendoza-Santos, E. A Comprehensive Review on Power-Quality Issues, Optimization Techniques,
[and Control Strategies of Microgrid Based on Renewable Energy Sources. Sustainability 2023, 15, 9847. [CrossRef]](http://dx.doi.org/10.3390/su15129847)
5. Afonso, J.L.; Tanta, M.; Pinto, J.G.O.; Monteiro, L.F.C.; Machado, L.; Sousa, T.J.C.; Monteiro, V. A Review on Power Electronics
[Technologies for Power Quality Improvement. Energies 2021, 14, 8585. [CrossRef]](http://dx.doi.org/10.3390/en14248585)
6. Vineeth, G.; Aishwarya, J.; Sowmya, B.; Rani, B.; Narasimha, M. Power Quality Enhancement in Grid-Connected Renewable
Energy Sources Using MC-UPQC. In Proceedings of the 2023 International Conference on Power, Instrumentation, Energy and
[Control (PIECON), Aligarh, India, 10–12 February 2023; pp. 1–6. [CrossRef]](http://dx.doi.org/10.1109/PIECON56912.2023.10085824)
7. Wang, N.; Zheng, S.; Gao, W. Microgrid Harmonic Mitigation Strategy Based on the Optimal Allocation of Active Power and
[Harmonic Mitigation Capacities of Multi-Functional Grid-Connected Inverters. Energies 2022, 15, 6109. [CrossRef]](http://dx.doi.org/10.3390/en15176109)
8. Chawda, G.S.; Shaik, A.G.; Mahela, O.P.; Padmanaban, S.; Holm-Nielsen, J.B. Comprehensive Review of Distributed FACTS
Control Algorithms for Power Quality Enhancement in Utility Grid With Renewable Energy Penetration. IEEE Access 2020,
_[8, 107614–107634. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.3000931)_
9. Guerrero, J.M.; Loh, P.C.; Lee, T.L.; Chandorkar, M. Advanced Control Architectures for Intelligent Microgrids—Part II: Power
[Quality, Energy Storage, and AC/DC Microgrids. IEEE Trans. Ind. Electron. 2013, 60, 1263–1270. [CrossRef]](http://dx.doi.org/10.1109/TIE.2012.2196889)
10. Pattery, J.M.; Jayaprakasan, S.; Cheriyan, E.P.; Ramchand, R. A Composite Strategy for Improved Power Quality Using Micro
[Compensators in Secondary Distribution Systems. IEEE Trans. Power Deliv. 2022, 37, 1027–1035. [CrossRef]](http://dx.doi.org/10.1109/TPWRD.2021.3075602)
11. Yang, J.; Qi, R.; Liu, Y.; Ding, Y. Coordinated Control Strategy for Harmonic Compensation of Multiple Active Power Filters.
_[Energy Eng. 2022, 119, 609–620. [CrossRef]](http://dx.doi.org/10.32604/ee.2022.016969)_
12. Soto, D.; Green, T. A comparison of high-power converter topologies for the implementation of FACTS controllers. IEEE Trans.
_[Ind. Electron. 2002, 49, 1072–1080. [CrossRef]](http://dx.doi.org/10.1109/TIE.2002.803217)_
13. Perez, M.A.; Ceballos, S.; Konstantinou, G.; Pou, J.; Aguilera, R.P. Modular Multilevel Converters: Recent Achievements and
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/su152215698?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/su152215698, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": "CCBY",
"status": "GOLD",
"url": "https://www.mdpi.com/2071-1050/15/22/15698/pdf?version=1699362035"
}
| 2,023
|
[
"JournalArticle"
] | true
| 2023-11-07T00:00:00
|
[
{
"paperId": "09a9358ad668950780aa62f6a4d67447bd13f0bd",
"title": "Dynamic Average Consensus With Anti-Windup Applied to Interlinking Converters in AC/DC Microgrids Under Economic Dispatch and Delays"
},
{
"paperId": "5f3bc0e37f45525f8caa20c703edc0d22a5f9dac",
"title": "A Comprehensive Review on Power-Quality Issues, Optimization Techniques, and Control Strategies of Microgrid Based on Renewable Energy Sources"
},
{
"paperId": "fef4c9fd5d7744363b9573cdb738a0bb0ca1e67c",
"title": "Microgrid Harmonic Mitigation Strategy Based on the Optimal Allocation of Active Power and Harmonic Mitigation Capacities of Multi-Functional Grid-Connected Inverters"
},
{
"paperId": "e7a2ce63f692af0c64cc27ff92c42996040f323c",
"title": "A Composite Strategy for Improved Power Quality Using Micro Compensators in Secondary Distribution Systems"
},
{
"paperId": "3759ca8f528ddf30e3e602d61c31c43417932f4a",
"title": "A Review on Power Electronics Technologies for Power Quality Improvement"
},
{
"paperId": "0ecad3e890b02d5bd76b87af77699a61b73f46cf",
"title": "A harmonic power market framework for compensation management of DER based active power filters in microgrids"
},
{
"paperId": "7637d684a7f692e15d0551c2980117e90f87ba16",
"title": "Control of Distributed Generators and Direct Harmonic Voltage Controlled Active Power Filters for Accurate Current Sharing and Power Quality Improvement in Islanded Microgrids"
},
{
"paperId": "6881efd57c0fc215dcb93f0f234317cd033312ac",
"title": "A Novel Distributed Control Strategy for Optimal Dispatch of Isolated Microgrids Considering Congestion"
},
{
"paperId": "f20928a966779e4520e5ed738f184dff4459f9f5",
"title": "Intelligent Expert System for Power Quality Improvement Under Distorted and Unbalanced Conditions in Three-Phase AC Microgrids"
},
{
"paperId": "d3c09d9dab9c9d2e439495429197d5d52c8c8816",
"title": "On Robustness Analysis of a Dynamic Average Consensus Algorithm to Communication Delay"
},
{
"paperId": "ea9fe49bfde1c476b06e5b2d13b191e796549b5f",
"title": "High Order Voltage and Current Harmonic Mitigation Using the Modular Multilevel Converter STATCOM"
},
{
"paperId": "ecaecd8f133888edcd66225ebe3e4c541ffe1922",
"title": "Optimal Compensation of Harmonic Propagation in a Multi-Bus Microgrid"
},
{
"paperId": "76e193d14b6a1cb4a151e50e1eb82861a451f107",
"title": "Cooperative Control Strategy of Multifunctional Inverters for Power Quality Enhancement in Smart Microgrid"
},
{
"paperId": "74c14af9e57ca51d026492d89d0f679bba6b1f40",
"title": "Communication-less harmonic compensation in a multi-bus microgrid through autonomous control of distributed generation grid-interfacing converters"
},
{
"paperId": "251f7112f083a3072611eb2cedfa769166b3b759",
"title": "Secondary Frequency and Voltage Control of Islanded Microgrids via Distributed Averaging"
},
{
"paperId": "2b83b7f80f97192012ec324ae31e4f7487504a75",
"title": "Trends in Microgrid Control"
},
{
"paperId": "186d4227bc1780014951e21e520a8be5b4dbd9b0",
"title": "Advanced Control Architectures for Intelligent Microgrids—Part II: Power Quality, Energy Storage, and AC/DC Microgrids"
},
{
"paperId": "130516713dce62a25cd7fbe508f3916a89a7d56b",
"title": "Grid Synchronization PLL Based on Cascaded Delayed Signal Cancellation"
},
{
"paperId": "1c0d7b3391c078df60bddd175a06278d7258173d",
"title": "Optimal Design for Synchronization of Cooperative Systems: State Feedback, Observer and Output Feedback"
},
{
"paperId": "19eec642d7f844cf6b3f9a5f2506e8d87f20654a",
"title": "Conservative Power Theory, a Framework to Approach Control and Accountability Issues in Smart Microgrids"
},
{
"paperId": "29921512a3a6560830d376566d45b53acbcb61e8",
"title": "A Generalized Delayed Signal Cancellation Method for Detecting Fundamental-Frequency Positive-Sequence Three-Phase Signals"
},
{
"paperId": "b0f9f5bb2a898e7322198778ca665e2d23e1f2ca",
"title": "A comparison of high-power converter topologies for the implementation of FACTS controllers"
},
{
"paperId": "73d13fca13b7ffa09b4996d441da60db424cf7d5",
"title": "Considerations of a shunt active filter based on voltage detection for installation on a long distribution feeder"
},
{
"paperId": "eab71be96e0b36b90cfcb2845d5f5f0a85e4584c",
"title": "A survey of gossiping and broadcasting in communication networks"
},
{
"paperId": "6a590233acbf7a5a634988e0d9468b70639538ba",
"title": "Coordinated Control Strategy for Harmonic Compensation of Multiple Active Power Filters"
},
{
"paperId": "440083d062cdbe5e01a826f2326cf036d8d9d2a2",
"title": "CPT-Based Multi-Objective Strategy for Power Quality Enhancement in Three-Phase Three-Wire Systems Under Distorted and Unbalanced Voltage Conditions"
},
{
"paperId": "e16a537afd9c528804b6d2c93c91b1a5627a177b",
"title": "Modular Multilevel Converters: Recent Achievements and Challenges"
},
{
"paperId": "c4b9305898f8d633cb92aedced5b50855e382db6",
"title": "Distributed Control Strategies for Microgrids: An Overview"
},
{
"paperId": "2a9671f4bd11729926df66a089fdf0817d57f75e",
"title": "Comprehensive Review of Distributed FACTS Control Algorithms for Power Quality Enhancement in Utility Grid With Renewable Energy Penetration"
},
{
"paperId": "9251220ebdb28b893e8b066b85582536ed485956",
"title": "Cooperative Control of Multi-Master-Slave Islanded Microgrid with Power Quality Enhancement Based on Conservative Power Theory"
},
{
"paperId": "10637738e3c1630d204ddfc6bf53791f62cb4ac6",
"title": "Practical Implementation of Delayed Signal Cancellation Method for Phase-Sequence Separation"
}
] | 25,040
|
en
|
[
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Mathematics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/033b28a67b747f3f394cb8a81046d4470ca3f98a
|
[
"Computer Science"
] | 0.862732
|
Amortized efficient zk-SNARK from linear-only RLWE encodings
|
033b28a67b747f3f394cb8a81046d4470ca3f98a
|
J. Commun. Networks
|
[
{
"authorId": "2110621264",
"name": "Heewon Chung"
},
{
"authorId": "2145138757",
"name": "Dongwoo Kim"
},
{
"authorId": "2194565322",
"name": "Jeong Han Kim"
},
{
"authorId": "2277991001",
"name": "Jiseung Kim"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
## Amortized Efficient zk-SNARK from Linear-Only RLWE Encodings
### Heewon Chung, Dongwoo Kim, Jeong Han Kim, and Jiseung Kim
**_Abstract—This paper addresses a new lattice-based designated-verifier zk-SNARK having the smallest proof size in the amortized sense, built from linear-only ring learning with errors (RLWE) encodings. We first generalize quadratic arithmetic programs (QAPs) over a finite field to a ring variant over a polynomial ring Z_p[X]/(X^N + 1) with a power-of-two N. Then, we propose a zk-SNARK over this ring under a linear-only encoding assumption on RLWE encodings. From the ring isomorphism Z_p[X]/(X^N + 1) ≅ Z_p^N, the proposed scheme packs multiple messages from Z_p, resulting in a much smaller amortized proof size compared to previous works._**

**_In addition, we present a refined analysis of the noise flooding technique based on the Hellinger distance instead of the conventional statistical distance, which reduces the size of a proof. In particular, our proof size is 276.5 KB and the amortized proof size is only 156 bytes, since our protocol allows batching N proofs into a single proof. Therefore, we achieve the smallest amortized proof size among lattice-based zk-SNARKs and a proof size comparable to (pre-quantum) zk-SNARKs._**

**_Index Terms—Post-quantum cryptography, RLWE, SNARK, zero-knowledge proofs._**
I. INTRODUCTION

A zero-knowledge proof is a protocol that enables a prover to convince a verifier of knowledge of a witness without any unnecessary leakage of the witness [1]. Specifically, a zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) is, literally, a one-round zero-knowledge proof whose proof size is small. Since their introduction, zk-SNARKs have drawn vast attention due to their versatility and diverse applications, including cryptocurrency [2]–[4], deep learning [5], and database queries [6]. In addition, there is an effort to standardize zero-knowledge proofs, named ZKProof [7], to bring them to industry, and many well-known companies such as Google and Microsoft take part in this workshop.
Manuscript received August 19, 2021; revised January 27, 2023; approved for publication by Jung Hee Cheon, Division 1 Editor, March 19, 2023.

H. Chung is with DESILO Inc., Seoul, Republic of Korea, email: heewon.chung@desilo.ai.
D. Kim is with the Department of AI·SW Convergence, Dongguk University, Seoul, Republic of Korea, email: Dongwoo.Kim@dgu.edu.
J.-H. Kim is with the School of Computational Sciences, Korea Institute for Advanced Study (KIAS), Seoul, Republic of Korea, email: jhkim@kias.re.kr.
J. Kim is with the Department of Computer Science and Artificial Intelligence/CAIIT, Jeonbuk National University, Jeonju, Republic of Korea, email: jiseungkim@jbnu.ac.kr.
H. Chung and D. Kim contributed equally to this paper.
J. Kim is the corresponding author.
Digital Object Identifier: 10.23919/JCN.2023.000012

Most constructions proposed so far mainly depend on _pre-quantum_ primitives and hardness assumptions such as pairings [8], [9], hidden-order groups [10], and the discrete logarithm problem [11], [12]. There exist hash-based constructions [13]–[15] which remain secure against quantum computers; however, they incur relatively high verification and storage costs compared to group-based constructions, since they contain many hash function iterations. On the other hand, lattice-based constructions have been proposed as promising candidates for post-quantum zk-SNARKs. However, the proposed lattice-based constructions are inefficient compared to group-based constructions [8]–[12] in all aspects, especially in proof size: while the group-based scheme [9] has a 131-byte proof (with the BN-128 curve), the best lattice-based (designated-verifier) SNARK [16] requires 270 kilobytes of proof.
Here, we note that the best construction [16] and all other zk-SNARKs from lattices (except [17], [18]) exploit encoding schemes based on the learning with errors (LWE) problem. However, several lattice-based constructions [19], [20] have shown that lattice problems with algebraic structure, such as Ring-LWE (RLWE) [21] or NTRU [22], have mathematical structure with which one can improve the efficiency of schemes. Therefore, replacing LWE with RLWE (in scheme construction) is one of the widely used techniques for improving the efficiency of lattice-based encryption schemes, which raises the following natural and meaningful question:

_Is it possible to enhance the efficiency of lattice-based zk-SNARKs with hard problems from algebraic lattices?_
**Related Work.** Coming to the quantum revolution, building post-quantum zk-SNARKs from lattice-based cryptographic primitives has been highlighted as one of the challenging problems in cryptography and security. We review previous work [16]–[18], [23], [24] proposing designated-verifier zk-SNARKs based on lattices.

Boneh et al. [17], [18] proposed the first lattice-based SNARG from a linear-only encoding assumption on an encryption scheme based on the (R)LWE problem. Specifically, the latter achieves quasi-optimality in prover cost via a linear-only vector encryption scheme over rings and linear PCPs [25] with multiple provers. While those works provide the best asymptotic cost among others, the authors left the construction of a zk-SNARK from lattices as an open problem.
Creative Commons Attribution-NonCommercial (CC BY-NC). This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided that the original work is properly cited.

On the other hand, Gennaro et al. [23] proposed a zk-SNARK with square span programs (SSPs) [26], assuming that an encoding scheme from the LWE problem also satisfies analogues of classical hardness assumptions — q-power knowledge of exponent (PKE) and q-power Diffie–Hellman (PDH) — from finite groups (previously, those assumptions were exploited to construct zk-SNARKs from pairing groups, e.g., [27], [28]). Similarly, Nitulescu [16] presented a lattice-based zk-SNARG with square arithmetic programs (SAPs), assuming that an encoding scheme from the LWE problem satisfies the 'linear-targeted malleability' assumption, which is slightly weaker than the linear-only assumption. It also has the advantage that its proof is smaller than in the aforementioned lattice-based zk-SNARKs, with the proof π consisting of only two LWE encodings. Recently, Naganuma et al. [24] also proposed, via a similar approach, a lattice-based zk-SNARK from quadratic arithmetic programs (QAPs) and compared their result to the previous work [16], [23] with an implementation. As expected in theory, while the SAP-based zk-SNARK [16] has a smaller proof size and lower verification cost, the QAP-based one [24] is better in other aspects: setup time, prover cost, and the size of the common reference string. A concurrent and independent work [29] proposed a ring variant of Pinocchio, named Rinocchio, based on quadratic ring programs (QRPs), similar to our ring-QAP. However, their construction is focused on a SNARK which does not provide the zero-knowledge property. For more details about the differences, we refer to Section III-B.

All of those works, including ours, provide zk-SNARKs with designated verifiers only (i.e., the verification requires a private verification key), and constructing a publicly verifiable zk-SNARK from lattices is still an open problem.
_A. Our Approach_

We propose a new lattice-based zk-SNARK from the RLWE problem, a linear-only encoding assumption over this ring, and the notion of ring-QAPs. Moreover, we provide a tight analysis of the conventional noise flooding technique, based on the Hellinger distance, to reduce the size of RLWE encodings; thanks to this analysis, we can reduce not only the amortized proof size but also the size of a single encoding.

Previously, only LWE-based encoding schemes were exploited [16], [24] to construct zk-SNARKs from lattices. To enhance efficiency by leveraging ring structures, we extend QAPs over Z_p to ring-QAPs over a polynomial ring R_p = Z_p[X]/(X^N + 1) with a generalized Schwartz–Zippel lemma over R_p, and then employ an RLWE-based encoding scheme whose message is an element of the ring R_p. This gives a zk-SNARK for arithmetic circuits over the ring R_p, to which one can apply the traditional message packing method, significantly reducing the amortized proof size. More precisely, when N is a power of 2 and p ≡ 1 (mod 2N), R_p is isomorphic to Z_p^N, and a single ring element has a one-to-one correspondence with an N-dimensional vector over Z_p, which enables packing multiple field elements into one ring element. Then, we can outsource N computations to an untrusted prover and reduce the computational complexity of the prover and the verifier, as well as the proof size, in the amortized sense.
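The packing isomorphism can be illustrated with a toy example. The sketch below uses hypothetical toy parameters N = 4, p = 17 (chosen only so that p ≡ 1 (mod 2N); real instantiations use, e.g., N = 2048 and a much larger p). When p ≡ 1 (mod 2N), X^N + 1 splits into N linear factors modulo p, so evaluating a ring element at the N roots gives its "slot" vector in Z_p^N, and multiplication in R_p becomes slot-wise multiplication:

```python
p, N = 17, 4            # toy parameters with p ≡ 1 (mod 2N); not the paper's actual choice
assert p % (2 * N) == 1

zeta = 9                # an element of multiplicative order 2N = 8 modulo 17
roots = [pow(zeta, k, p) for k in range(1, 2 * N, 2)]   # odd powers: the roots of X^N + 1
assert all((pow(r, N, p) + 1) % p == 0 for r in roots)

def evaluate(poly, x):
    """Evaluate a coefficient list (lowest degree first) at x modulo p."""
    return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

def ring_mul(a, b):
    """Negacyclic convolution: multiplication in R_p = Z_p[X]/(X^N + 1)."""
    prod = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            sign = -1 if i + j >= N else 1       # X^N wraps around to -1
            prod[(i + j) % N] = (prod[(i + j) % N] + sign * ai * bj) % p
    return prod

# The evaluation map R_p -> Z_p^N sends one ring multiplication to
# N independent Z_p multiplications, one per slot.
a, b = [1, 2, 3, 4], [5, 6, 7, 8]
c = ring_mul(a, b)
for r in roots:
    assert evaluate(c, r) == evaluate(a, r) * evaluate(b, r) % p
```

Since each of the N slots behaves as an independent copy of Z_p, a single RLWE encoding of one ring element carries N field messages, which is the source of the amortized savings.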
In addition, to shorten the proof size, we provide a new analysis of the parameters of zk-SNARKs using the Hellinger distance rather than the statistical distance of previous constructions. For zero-knowledge, all conventional lattice-based zk-SNARKs [16], [23], [24] from square span programs (SSPs), SAPs, and QAPs must exploit a noise flooding technique to hide the error term in the final encodings, which will be disclosed to a verifier. In other words, for the error term e given in the final encodings, we must guarantee that no one can distinguish e + D from D, where D is a certain distribution. To this end, previous work chose D as a uniform distribution on a large interval and employed the statistical distance as the measure of closeness of the above two distributions.

Unfortunately, the previous analysis with the statistical distance — providing a rough upper bound on the adversary's success probability — requires κ ≈ λ, where λ is the security parameter and κ is the negative logarithm of the statistical distance between the two distributions. In contrast, the closeness derived from the Hellinger distance provides a tighter analysis of the adversary's success probability in the (decision) security game, thereby allowing the relaxed requirement κ′ ≈ λ/2, where κ′ is the negative logarithm of the Hellinger distance.

As a result, our protocol can also reduce the size of a single encoding in both asymptotic and concrete settings. Specifically, the size of a single proof is about 276.5 KB when λ = 110, which is much smaller than that of previous lattice-based zk-SNARKs [16], [23], [24]. In addition, under a 128-bit security parameter, our amortized proof size is 156 bytes, which is comparable to the 138 bytes of Groth [9], the shortest proof among all zk-SNARKs.
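The gap between the two measures can be seen concretely on the flooding distributions themselves. The sketch below is an illustrative calculation (not the paper's parameter derivation): taking D uniform on {0, …, B−1} and comparing it with e + D, the statistical distance is e/B while the Hellinger distance is √(e/B), so reaching a target closeness level costs only half as many bits of flooding when measured in Hellinger distance, matching κ′ ≈ λ/2 versus κ ≈ λ.

```python
from math import sqrt

def flooding_distances(B, e):
    """Statistical and Hellinger distance between U{0..B-1} and e + U{0..B-1}."""
    support = range(B + e)
    P = [1 / B if x < B else 0.0 for x in support]    # D
    Q = [1 / B if x >= e else 0.0 for x in support]   # e + D
    sd = 0.5 * sum(abs(px - qx) for px, qx in zip(P, Q))
    hd = sqrt(1.0 - sum(sqrt(px * qx) for px, qx in zip(P, Q)))
    return sd, hd

sd, hd = flooding_distances(B=2**16, e=1)
# sd = e/B = 2^-16, while hd = sqrt(e/B) = 2^-8: the -log of the
# distance doubles, which is why kappa' ≈ lambda/2 suffices.
```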
**Concurrent Works.** There are two independent and concurrent works[1] that improve lattice-based SNARKs.

Ganesh et al. [29] propose a new SNARK (without the ZK property) called Rinocchio for general ring arithmetic computations. To satisfy soundness in the ring setting, they also employ the generalized Schwartz–Zippel lemma (Lemma 4) and RLWE encodings against quantum adversaries. On the other hand, Rinocchio is slightly different from our ring-QAP-based zk-SNARK, similarly to how Pinocchio [27] differs from Groth's work [9]. For example, Rinocchio requires 9 RLWE encodings to describe the proof of arguments, but the proof of our zk-SNARK consists of only 3 RLWE encodings. In addition, [29] uses q-PDH and q-PKE assumptions over rings, which are weaker than the 'linear-only' encoding assumption that we use. While their work is focused on generality, we focus on better efficiency, exploiting the specific case of a ring of the form Z_p^n. Furthermore, we provide a tighter analysis of the noise flooding technique for zero-knowledge (while [29] does not), as described in the following subsection. Our analysis could be applicable to Rinocchio for building a lattice-based zk-SNARK with a shorter proof.

Another work, by Yuval Ishai et al. [30], also proposes zk-SNARKs with reduced proof size. Their construction is built on the compiler of Bitansky et al. [25] with the linear-only vector encryption suggested by Boneh et al. [17]. To minimize the proof size, they employ several methods, including modulus switching[2] on the proof encoding and exploiting linear PCPs and vector encryptions over a quadratic extension field (of a base finite field Z_p), etc.

1 The first version of our draft was submitted on Feb 9, 2021, while Rinocchio was published on ePrint on 10 Mar 2021; [30] was published during the review period.

2 A widely employed technique in fully homomorphic encryption to reduce the modulus of a ciphertext without modifying the underlying messages.

Fig. 1. Verifying ML training phase with zk-SNARK.

While they can achieve the smallest proof
size among all lattice-based zk-SNARKs, their construction does not support batching multiple proofs, in contrast to ours. As a quick comparison, our construction utilizes an extension ring R_p = Z_p[X]/(X^N + 1) with high degree N, targeting the smallest amortized proof size, while their optimization utilizes the quadratic extension field Z_p[X]/(X^2 + 1) to reduce the single proof size. Thus, our proposal remains the lattice-based zk-SNARK with the smallest amortized proof size.
_B. Application — Verifiable Machine Learning Training_

As an interesting application scenario of our proposed zk-SNARK, we present the verification of machine learning (ML) training. The ML training phase is composed of many computation steps in which a portion of the input data is used to update the model parameters. Assume that a client outsources to a server the training phase of an ML model together with the data to be trained on. However, since the training phase is composed of many computation steps on large data, the server may skip some portion of the training steps and/or the training data. Therefore, both the client and the server have an incentive to verify and prove that the final output model is trained correctly with the given data. This is possible with zk-SNARKs, where the client and the server act as a verifier and a prover, respectively, by verifying and generating the proof of the training computation; see Fig. 1.

While one can use any zk-SNARK in this scenario, our zk-SNARK — with reduced amortized proof size — can provide a smaller overall proof size than previous work. In detail, assume that the training phase is composed of many training steps, each of which can be described as

f_i(W⃗_i, D_i) = W⃗_{i+1},

where W⃗_i and W⃗_{i+1} are the model parameters before and after the i-th step f_i, respectively, and D_i is the (portion of) data used in each step.

Fig. 2. A toy example of verifying Merkle proofs.

Then, the entire training phase can be verified by verifying a zk-SNARK proof for each step f_i,[3] given that the prover sends all intermediate W⃗_i's along
with the proof to the verifier; hence, it requires a CRS for the computation of each f_i only (and the f_i are identical in most cases). In contrast, if we consider proving the entire training phase with the initial and final parameters only, it requires a problematically large CRS due to the huge size of the entire computation. In the former case, with an affordable CRS size, our zk-SNARK, whose proof is capable of proving and verifying many computations simultaneously, provides a much smaller proof size than previous zk-SNARKs. Specifically, if there are n training steps, previous zk-SNARKs require n proofs while ours requires only ⌈n/N⌉ proofs, where N is the maximum number of computations covered by one proof encoding in our scheme. Note that the verifier has all intermediate W⃗_i's, arranges them correctly as inputs and outputs of parallel circuits, and then verifies the correct computation of all f_i's with the zk-SNARK proof.

Note, in addition, that the number n of training steps is usually bigger than or comparable to the amortization capability N = 2,048 in our zk-SNARK construction, realizing the best possible amortized proof size in most cases. On the other hand, the server may not want to disclose some of the hyper-parameters — the external values used to control the learning process, e.g., learning rate, batch size, number of iterations, etc. — since they comprise secret know-how for obtaining good training results. These can be kept secret thanks to the zero-knowledge property of the zk-SNARK.
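The verifier's bookkeeping in this scenario can be sketched as follows. This is a schematic sketch only: `f`, `prove_batch`, and `verify` are hypothetical stand-ins, not the paper's API; only the batching count ⌈n/N⌉ reflects the construction.

```python
from math import ceil

N = 4                      # amortization capability per proof (2,048 in the paper)

def f(w, d):               # stand-in for one training step f_i(W_i, D_i) = W_{i+1}
    return w + d

def prove_batch(steps):    # hypothetical prover: one proof object covers <= N steps
    return [("step-proof", w, d, f(w, d)) for (w, d) in steps]

def verify(weights, data, proofs):
    """Verifier holds all intermediate W_i's and checks the chaining per batch."""
    n = len(data)
    assert len(proofs) == ceil(n / N)       # only ceil(n/N) proofs instead of n
    for i in range(n):
        _, w, d, w_next = proofs[i // N][i % N]
        # each step's input/output must match the disclosed parameter chain
        assert (w, d, w_next) == (weights[i], data[i], weights[i + 1])
    return True

# Prover side: run n = 10 training steps, then batch the step proofs.
data = list(range(10))
weights = [0]
for d in data:
    weights.append(f(weights[-1], d))
steps = list(zip(weights, data))
proofs = [prove_batch(steps[k:k + N]) for k in range(0, len(data), N)]
assert verify(weights, data, proofs)
```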
_C. Application — Merkle Proof with Smaller Proof Size_

A Merkle tree, proposed by Ralph Merkle [31], is a binary tree in which each leaf node is a cryptographic hash of a data block, and each non-leaf node is a cryptographic hash of its child nodes. This technique is used to prove efficiently that some data block received from another party belongs to the tree. Therefore, Merkle trees are widely used in many applications, especially peer-to-peer systems such as Git and BitTorrent; recently, many cryptocurrencies also employ Merkle trees to verify data blocks received from other nodes.

3 In usual ML training the f_i's are almost the same for all steps. Our zk-SNARK can also handle different f_i's given that their circuit sizes are bounded.
For a Merkle proof, the prover provides the sequence of hash values needed to compute the hash of each parent from the leaf node up to the root node. Then, the verifier climbs the Merkle tree and accepts the proof when the computed root hash coincides with the public root value. To be more concrete, given data D belonging to a binary tree of depth ℓ, the Merkle proof for D is π = (π_1, · · ·, π_ℓ), where π_i is the hash value for level i and π_1 is the publicly known Merkle root. For a cryptographic hash function H, h_ℓ = H(D), and for each i ∈ {2, 3, · · ·, ℓ}, the verifier moves from level i to level i − 1 by checking whether h_{i−1} is H(π_i, h_i) or H(h_i, π_i), depending on the Merkle path of D to the root. Lastly, he obtains h_1 and accepts the proof only if h_1 = π_1. In other words, the verification circuit can be represented by multiple evaluations of the same hash function H as follows: for i ∈ {2, 3, · · ·, ℓ},

h_{i−1} = H(π_i, h_i) if the level-i node is connected to a right leaf in level i − 1, and h_{i−1} = H(h_i, π_i) otherwise.
Now, for the application of our zk-SNARK, we assume, similarly to the previous application example, that the prover sends all π_i's, h_i's, and the information of the Merkle path along with a SNARK proof to the verifier. Then, the verifier arranges the inputs of each evaluation appropriately (as π_i, h_i or h_i, π_i) and verifies the above computation with the SNARK proof. Since the verifier has the intermediate hash values, this process enables the verifier to check the dependency between levels. In this case, with our zk-SNARK, the proof size can be reduced considerably, since we need only ⌈ℓ/N⌉ proofs while previous schemes require ℓ proofs. Ours will also be beneficial if one needs to prove/verify many Merkle proofs simultaneously. Moreover, with a zk-SNARK, the computational complexity for the verifier can be lower than that of the original Merkle verification, where she needs to evaluate many hash functions.
We implemented Merkle proof verification to demonstrate this efficiency. Although libsnark provides a gadget for SHA-256, it does not fit the SEAL library, so we instead generated a random circuit with 2^15 multiplicative gates and 2^5 inputs. As far as we know, a circuit representing SHA-256 also has about 2^15 multiplicative gates. Our experimental results are summarized in Table III in Section V. According to our implementation, the verification time is fast enough. Moreover, a proof contains many independent instances, which implies that it is beneficial to prove many instances at the same time, as in Merkle proofs. Additionally, our scheme is designed on top of Groth's zk-SNARK [9], one of whose key features is that the verifier's complexity is independent of the circuit. In this context, we believe that the current experiment indicates that verification using zk-SNARK is useful when the number of hash functions is sufficiently large.
_D. Organization_
Section II provides preliminaries about discrete Gaussian
distributions, definitions of RLWE and zk-SNARK. Section III
provides our main protocol, zk-SNARK from RLWE, which
consists of ring-QAP, zk-SNARK from RLWE, security proofs
and better noise flooding using Rényi divergence. Section IV
provides the size of the proof with various security parameters
and comparison between ours and the previous work.
II. PRELIMINARIES

Let Z, Q, R denote the integers, rationals, and reals, respectively; let Z_q = Z/qZ be the set of integers modulo q, represented as integers from (−q/2, q/2] ∩ Z; and let Z[X] be the set of all polynomials with integer coefficients. Throughout this paper, N denotes a power-of-two integer, so that X^N + 1 is a cyclotomic polynomial, R = Z[X]/(X^N + 1), and R_q = R/qR = Z_q[X]/(X^N + 1). We use left-arrow notation in two cases: for a finite set S, s ← S denotes that s is sampled uniformly from S; for a distribution D, s ← D denotes that s is sampled from D. The statistical distance between two discrete distributions D_1 and D_2, denoted SD(D_1, D_2), is

    SD(D_1, D_2) = (1/2) · Σ_{x∈X} |D_1(x) − D_2(x)|.

We write Pr[A | s ← D] for the probability that an event A occurs when s ← D.
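As a quick illustration of the statistical-distance notation used throughout (the example distributions here are ours, not the paper's):

```python
def statistical_distance(d1: dict, d2: dict) -> float:
    """SD(D1, D2) = (1/2) * sum over x of |D1(x) - D2(x)|."""
    support = set(d1) | set(d2)
    return 0.5 * sum(abs(d1.get(x, 0.0) - d2.get(x, 0.0)) for x in support)

fair = {"H": 0.5, "T": 0.5}
biased = {"H": 0.7, "T": 0.3}
assert abs(statistical_distance(fair, biased) - 0.2) < 1e-12
assert statistical_distance(fair, fair) == 0.0                # identical distributions
assert statistical_distance({"H": 1.0}, {"T": 1.0}) == 1.0    # disjoint supports
```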
A. Lattices and Discrete Gaussian Distribution

A lattice L is defined as an additive discrete subgroup of R^n and is represented by the integral linear combinations of a basis B ∈ R^{n×r}, i.e., L = BZ^r. We first recall the definition and some properties of a discrete Gaussian distribution over a lattice L.

Definition 1 (Discrete Gaussian Distribution): Let L be a lattice contained in R^n. For any positive real number σ, define

    ρ_σ(x) = exp(−π∥x∥²/2σ²).

Then the discrete Gaussian distribution χ_{L,σ} with standard deviation σ is the distribution whose probability mass function is ρ_σ(x)/ρ_σ(L), where ρ_σ(L) is the sum over all points x ∈ L, i.e., ρ_σ(L) = Σ_{x∈L} ρ_σ(x).
Lemma 1 (Tail Bounds): For σ > 0 and T > 0, it holds that

    Pr[e ← χ_σ : |e| > Tσ] ≤ (2/(T√(2π))) · exp(−T²/2).

Lemma 2 ([32], Tail Bound of Inner Products): For σ > 0 and T > 0, it holds that

    Pr[e ← χ_σ^n : ⟨e, c⟩ ≥ Tσ∥c∥] < 2 exp(−πT²).
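A numerical sanity check of the shape of Lemma 1's bound is easy to run. Note the hedges: we approximate a discrete Gaussian by rounding a continuous one, and we use the standard-deviation convention for the density (the paper's normalization of ρ_σ differs by a constant factor), so this is an illustration, not a proof:

```python
import math, random

def tail_bound(T: float) -> float:
    # Lemma 1's bound: Pr[|e| > T*sigma] <= 2/(T*sqrt(2*pi)) * exp(-T^2/2)
    return 2.0 / (T * math.sqrt(2.0 * math.pi)) * math.exp(-T * T / 2.0)

random.seed(0)
sigma, T, trials = 10.0, 2.0, 200_000
# Approximate discrete Gaussian sampling by rounding continuous Gaussian samples.
hits = sum(1 for _ in range(trials)
           if abs(round(random.gauss(0.0, sigma))) > T * sigma)
empirical = hits / trials
assert empirical <= tail_bound(T)   # ~0.045 observed vs. bound ~0.054
```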
B. Linear-Only Encoding Scheme from RLWE

We introduce an encoding scheme that is a building block of our zk-SNARK construction. The encoding scheme has the rings R_p and R_q as the message space and the encoding space, respectively, for some integers p and q.

Definition 2 (RLWE Encoding Scheme): Our RLWE encoding scheme is composed of the following algorithms, where Δ = ⌊q/p⌋ and χ_σ denotes the discrete Gaussian distribution with standard deviation σ:

• KeyGen(1^λ) → sk: Sample s ← R_q. Output sk = s as a secret key.
• Enc(sk, m) → ct: To encrypt m ∈ R_p, sample a ← R_q and e ← χ_σ^N for a discrete Gaussian distribution χ_σ. Then compute b = a·s + e + Δm mod q, and output the ciphertext ct := (a, b).
• Eval(c, d, {ct_i}_{i=1}^I) → ct: For a given vector c ∈ R_p^I and I ciphertexts ct_i = (a_i, b_i), output the ciphertext ct := (c·(a_1, a_2, ..., a_I), c·(b_1, b_2, ..., b_I) + Δd).
• Dec(sk, ct) → m: To decrypt ct = (a, b), compute d = b − a·s mod q and output ⌊(p/q)·d⌉ mod p.
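As a toy illustration of Definition 2 (emphatically not a secure instantiation: the parameters are tiny, and for simplicity the Eval coefficients are plain integers rather than general R_p elements), the scheme can be sketched in Python:

```python
import random

N, p, q = 8, 17, 2**20          # toy parameters, far too small for security
DELTA = q // p

def poly_mul(f, g):
    """Multiply in Z_q[X]/(X^N + 1): negacyclic convolution."""
    res = [0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < N:
                res[i + j] = (res[i + j] + fi * gj) % q
            else:                          # X^N = -1 wraps with a sign flip
                res[i + j - N] = (res[i + j - N] - fi * gj) % q
    return res

def keygen():
    return [random.randrange(q) for _ in range(N)]

def enc(s, m):
    a = [random.randrange(q) for _ in range(N)]
    e = [round(random.gauss(0, 3.2)) for _ in range(N)]   # small Gaussian error
    b = [(as_i + e_i + DELTA * m_i) % q
         for as_i, e_i, m_i in zip(poly_mul(a, s), e, m)]
    return (a, b)

def eval_affine(c, d, cts):
    """Homomorphically compute c_1*m_1 + ... + c_I*m_I + d."""
    a = [0] * N
    b = [(DELTA * d_i) % q for d_i in d]
    for c_i, (a_i, b_i) in zip(c, cts):
        a = [(x + c_i * y) % q for x, y in zip(a, a_i)]
        b = [(x + c_i * y) % q for x, y in zip(b, b_i)]
    return (a, b)

def dec(s, ct):
    a, b = ct
    d = [(b_i - as_i) % q for b_i, as_i in zip(b, poly_mul(a, s))]
    return [round(p * d_i / q) % p for d_i in d]

random.seed(1)
s = keygen()
m1, m2 = [1] * N, [2] * N
ct = eval_affine([3, 4], [5] * N, [enc(s, m1), enc(s, m2)])
assert dec(s, ct) == [(3 * 1 + 4 * 2 + 5) % p] * N   # 3*m1 + 4*m2 + 5
```

Decrypting the evaluated ciphertext recovers the affine combination of the underlying messages, which is exactly the homomorphic property the construction relies on.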
This encoding scheme is in fact a symmetric-key encryption scheme from [33], and it is semantically secure under the RLWE assumption (Definition 4 below). We also remark that addition of, and scalar multiplication on, ciphertexts homomorphically correspond to the same operations on the underlying messages. More precisely, for all (m_i)_{i∈I} ∈ R_p^I, c ∈ R_p^I, and d ∈ R_p,

    Pr[ Dec(sk, Eval(c, d, {ct_i}_{i=1}^I)) = c·(m_1, m_2, ..., m_I) + d | sk ← KeyGen(1^λ); {ct_i ← Enc(sk, m_i)}_{i=1}^I ] ≥ 1 − negl(λ),

given that pI∥e∥ < q/2p. This allows a party (without sk) to output a ciphertext whose underlying message is an affine combination of the underlying messages of the given ciphertexts.
The following lemma is a corollary of [36].

Lemma 3 (Hardness of RLWE [36]): Let N be a power-of-two integer, R = Z[X]/(X^N + 1), and R_q = R/qR, where q ≡ 1 mod 2N. If σ = αq > √N · (Nm/log(Nm))^{1/4} · ω(√(log N)), then the RLWE problem with respect to parameters N, q, m, χ, where χ is a discrete Gaussian with standard deviation σ, is quantumly at least as hard as approximating the shortest vector problem within a factor Õ(N·(Nm/log(Nm))^{1/4}/α) in an ideal of the ring Z[ζ_{2N}], where ζ_{2N} is a primitive 2N-th root of unity.
Remark 2 (SIMD Operation): When p is a prime such that p ≡ 1 mod 2N, the message space R_p of the above encoding scheme is isomorphic as a ring to Z_p^N. Therefore, we can simultaneously encode N messages from Z_p into a single encoding, and a single operation (addition or scalar multiplication) on the ciphertexts or messages of R_p corresponds to the same operation on N messages of Z_p, which is called a single-instruction-multiple-data (SIMD) operation.
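The slot isomorphism R_p ≅ Z_p^N comes from X^N + 1 splitting completely mod p when p ≡ 1 mod 2N: each slot is the evaluation of the polynomial at one root. A toy check with N = 4 and p = 17 (our choice of parameters, for illustration only) shows that ring operations, including multiplication, act slot-wise:

```python
N, p = 4, 17                     # p = 17 ≡ 1 (mod 2N = 8)
# 2 is a primitive 8th (= 2N-th) root of unity mod 17, so the roots of
# X^4 + 1 are its odd powers 2^1, 2^3, 2^5, 2^7 (each satisfies r^4 = -1).
roots = [pow(2, k, p) for k in (1, 3, 5, 7)]
assert all(pow(r, N, p) == p - 1 for r in roots)

def slots(f):
    """R_p -> Z_p^N: evaluate the polynomial at each root of X^N + 1."""
    return [sum(c * pow(r, i, p) for i, c in enumerate(f)) % p for r in roots]

def mul_negacyclic(f, g):
    """Multiply in Z_p[X]/(X^N + 1)."""
    res = [0] * N
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            sign = 1 if i + j < N else -1
            res[(i + j) % N] = (res[(i + j) % N] + sign * fi * gj) % p
    return res

f, g = [1, 2, 3, 4], [5, 6, 7, 8]
# One ring operation = N parallel Z_p operations (SIMD):
assert slots(mul_negacyclic(f, g)) == [a * b % p for a, b in zip(slots(f), slots(g))]
assert slots([(a + b) % p for a, b in zip(f, g)]) == \
       [(a + b) % p for a, b in zip(slots(f), slots(g))]
```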
C. Succinct Non-Interactive Arguments

In this section, we provide the definitions for the notion of a succinct non-interactive zero-knowledge argument of knowledge (zk-SNARK).

Definition 5 (Non-interactive proof system): Let R be a relation comprising pairs (ϕ, ω) ∈ R. We call ϕ the statement and ω the witness. A non-interactive proof system for a relation R comprises three algorithms:

• Setup(R): the setup algorithm outputs a common reference string crs and a simulation trapdoor τ for the relation R.
• Prove(R.crs, ϕ, ω): the prove algorithm outputs a proof π.
• Verify(R.crs, ϕ, π): the verify algorithm outputs 0 (reject) or 1 (accept).

If a verifier can only verify a proof using the verifiable common string vrs, which contains secret information, the proof system is called a 'designated' (designated-verifier) non-interactive proof system.
Completeness roughly says that an honest prover can convince an honest verifier. Formally:

Definition 6 (Completeness): The algorithms (Setup, Prove, Verify, Sim) are said to have completeness if, for all (ϕ, ω) ∈ R,

    Pr[ Verify(R.crs, ϕ, π) = 1 | (crs, τ) ← Setup(R); π ← Prove(R.crs, ϕ, ω) ] ≥ 1 − negl(λ),   (2)

where negl(λ) is a negligible function in λ.
Essential to our construction of zk-SNARK is the assumption that the above encoding scheme is a linear-only encoding scheme. Roughly, this assumes that the only way for a PPT adversary to generate a valid new ciphertext is to linearly combine the given ciphertexts. The formal definition is as follows; a generalized version of this assumption (with messages composed of vectors) was also exploited in [17], [18] to construct SNARGs from (R)LWE assumptions.

Definition 3 (Linear-Only Encoding [17], [25]): Fix a security parameter λ. An encoding scheme E = (KeyGen, Enc, Dec) over a ring R is a linear-only encoding scheme if for any PPT adversary A there exists an efficient extractor X_A such that, for all auxiliary inputs z ∈ {0, 1}^λ and any plaintext generation algorithm M (which outputs elements of R), we have, for sk ← KeyGen(1^λ), (a_1, a_2, ..., a_m) ← M(1^λ), ct_i ← Enc(sk, a_i) for all i ∈ [m], ct′ ← A({ct_i}_{i∈[m]}; z), (π, b) ← X_A({ct_i}_{i∈[m]}; z), and a′ ← (a_1, a_2, ..., a_m)·π + b,

    Pr[Dec(sk, ct′) ≠ a′] = negl(λ).   (1)

Remark 1: The fact that our encoding scheme resembles usual fully homomorphic encryption (FHE) schemes does not contradict our assumption that it is a linear-only encoding. In FHE, correct decryption after a non-scalar multiplication requires either decryption with a specified secret key [33] or a key-switching procedure [34], neither of which is available in our encoding scheme.
We finally describe the ring learning with errors (RLWE) assumption.

Definition 4 (Ring LWE assumption): Let R be Z[X]/(X^N + 1) with a power-of-two integer N, and R_q = R/qR. The decision ring LWE (RLWE) assumption states that it is hard to distinguish the two distributions

• {(a_i, b_i = a_i·s + e mod q) : a_i, s ← R_q and e ← χ_σ^N},
• {(a, u) : a, u ← R_q},

where χ_σ is a discrete Gaussian distribution with standard deviation σ defined on Z.⁴

⁴The formal definition of RLWE differs from Definition 4, but when N is a power of two the two definitions coincide; we refer to [35].
Knowledge soundness says that if a prover can produce a valid proof, then there is an efficient algorithm that extracts a witness for the given statement using the same inputs and random coins. Formally:

Definition 7 (Knowledge soundness): For any polynomial-time adversary A, there exists a polynomial-time extractor X_A such that

    Pr[ Verify(R.crs, ϕ, π) = 1 ∧ (ϕ, ω) ∉ R | (R, z) ← R(1^λ); (crs, τ) ← Setup(R); ((ϕ, π); ω) ← (A ∥ X_A)(R, z, crs) ] ≤ negl(λ),

where negl(λ) is a negligible function in λ.
Zero-knowledge roughly says that a proof leaks no information beyond the truth of the statement. Formally:

Definition 8 (ϵ_zk-Zero-Knowledge): For all (ϕ, ω) ∈ R and every adversary A, the following holds:

    Pr[ A(R, z, crs, π) = 1 | (crs, τ) ← Setup(R); π ← Prove(R, crs, ϕ, ω) ]
    ≈ Pr[ A(R, z, crs, π) = 1 | (crs, τ) ← Setup(R); π ← Sim(R, crs, ϕ, τ) ],

where ≈ means that the difference between the two probabilities is bounded by ϵ_zk ≪ 1.
We now present the definition of succinctness and, finally, of zk-SNARK.

Definition 9 (Succinctness): A non-interactive proof system is succinct if the proof size and verification time are polynomial in the security parameter λ and in |ϕ| + log |w|, where ϕ and w are the input and witness of the relation R, respectively.

Definition 10 (zk-SNARK): A non-interactive proof system satisfying completeness, knowledge soundness, zero-knowledge, and succinctness is called a zk-SNARK. If, in addition, the verifier requires secret information (to run Verify), it is called a designated zk-SNARK.
III. ZK-SNARK FROM RLWE

In this section, we propose a zk-SNARK from RLWE. Here, we use the ring R_p = Z_p[X]/(X^N + 1) as the message space, with a power-of-two N and a prime p such that p ≡ 1 mod 2N, to fully exploit slot-wise computations. Indeed, the ring isomorphism R_p ≅ Z_p^N allows N slot-wise computations. We can therefore simultaneously verify up to N possibly distinct circuits (with the same bound on their size) in a single decryption process. Previously, to verify circuits with N pairs of input and output, a verifier had to perform N decryption processes.
**Intuition of the zk-SNARK construction: Groth [9] and ours.** Since our zk-SNARK construction resembles that of Groth [9], we briefly overview the intuition behind the construction of [9] and the distinguishing aspects of ours. Both constructions are based on the QAP (details below), in which the divisibility of v_{C,i}(x)·w_{C,i}(x) − y_{C,i,o}(x) by t_C(x) is equivalent to the correct evaluation of a circuit C with input i and output o. Divisibility is checked by letting the prover provide the quotient h_{C,i,o}(x) along with (part of) the polynomials v_{C,i}(x), w_{C,i}(x), y_{C,i,o}(x) satisfying the divisibility condition with t_C(x). On the other hand, for efficient verification and a succinct proof, instead of working directly on those polynomials, a verifier (or a trusted party) encodes a random point r (and its corresponding powers) with a linearly homomorphic encoding scheme, so that the prover can generate a proof without knowing r (necessary for soundness). The two significant differences of our construction from Groth [9] are: (i) we use an RLWE encoding scheme for a better amortized proof size, which additionally requires a noise flooding technique for zero-knowledge; and (ii) we exploit a generalized version of the QAP over a finite ring (instead of a field) to handle the message space R_p of the RLWE encoding scheme.
A. Ring-Quadratic Arithmetic Program (Ring-QAP)

Previously, QAPs have been used to verify arithmetic-circuit satisfiability over a finite field F, so every element appearing in the definition is contained in F. However, since the message space of the RLWE-based encoding is not a field but a ring, the existing QAP definition does not capture our setting. The need for a ring-QAP, a generalization of previous QAPs from a finite field F to a ring R, is therefore natural. We first introduce its definition.

Definition 11 (Ring-QAP; adapted from [27]): A QAP Q over a ring R comprises three sets of m + 1 polynomials V = {v_k(x)}, W = {w_k(x)}, Y = {y_k(x)} (over R), for k ∈ {0, 1, ..., m}, and a target polynomial t(x) ∈ R[x]. Suppose C : R^n → R^{n′} is an arithmetic circuit that takes as input n elements of R and outputs n′ elements, for a total of n + n′ I/O elements. We say that Q computes C if (a_1, a_2, ..., a_{n+n′}) ∈ R^{n+n′} is a valid assignment of C's inputs and outputs if and only if there exist coefficients (a_{n+n′+1}, a_{n+n′+2}, ..., a_m) such that t(x) divides p(x), where

    p(x) = ( v_0(x) + Σ_{i=1}^m a_i v_i(x) ) · ( w_0(x) + Σ_{i=1}^m a_i w_i(x) ) − ( y_0(x) + Σ_{i=1}^m a_i y_i(x) ).
Remark 3 (Description of V, W, Y): Recall that, in the original QAP [27] over a finite field, the target polynomial t(x) is defined as Π_g (x − r_g) with distinct roots r_g, each corresponding to a multiplication gate. The polynomials V, W, and Y are then constructed so that their evaluations at r_g, i.e., v_0(r_g) + Σ_{i=1}^m a_i v_i(r_g), w_0(r_g) + Σ_{i=1}^m a_i w_i(r_g), and y_0(r_g) + Σ_{i=1}^m a_i y_i(r_g), are respectively the left input, right input, and output of the multiplication gate corresponding to r_g. In our ring-QAP, the target polynomial t(x), along with V, W, and Y, is defined in the same way as in the original QAP, but with care in choosing the r_g's, due to the Schwartz-Zippel lemma over rings given below.
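To make the construction of t(x) and the interpolation through the gate roots concrete, here is a small Python sketch over Z_p (the ring case restricts the roots to constant-slot elements, which behave exactly like these scalars); the modulus, roots, and gate values are illustrative choices of ours:

```python
p = 97                      # small prime; gate roots r_g are distinct elements of Z_p
roots = [1, 2, 3, 4]        # one root per multiplication gate
d = len(roots)

def t(x):
    """Target polynomial t(x) = prod_g (x - r_g) mod p."""
    out = 1
    for r in roots:
        out = out * (x - r) % p
    return out

def lagrange(j, x):
    """lambda_j(x): the basis polynomial that is 1 at r_j and 0 at every other root."""
    out = 1
    for i, r in enumerate(roots):
        if i != j:
            out = out * (x - r) % p * pow(roots[j] - r, p - 2, p) % p  # Fermat inverse
    return out

def interpolate(values, x):
    """The unique polynomial of degree < d taking `values` at the roots."""
    return sum(v * lagrange(j, x) for j, v in enumerate(values)) % p

assert all(t(r) == 0 for r in roots)
# e.g. a wire polynomial v_k is built so that v_k(r_g) is gate g's left-input value:
left_inputs = [5, 0, 1, 42]
assert [interpolate(left_inputs, r) for r in roots] == left_inputs
```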
For the soundness of zk-SNARKs, the Schwartz-Zippel lemma is required. The original lemma only bounds the probability that a nonzero multivariate polynomial over a field evaluates to zero at a random point from some finite set, so it does not capture the polynomial-ring case; fortunately, Schwartz [37] and Bishnoi et al. [38] give the following ring variant of the Schwartz-Zippel lemma.

Lemma 4 (Generalized Schwartz-Zippel Lemma [37], [38]): Let R be a finite ring, and let S ⊆ R be a set such that for all x, y ∈ S with x ≠ y, x − y is invertible.⁵ Then, for every n-variate nonzero polynomial f : R^n → R of total degree D,

    Pr_{x←S^n}[f(x) = 0] ≤ D/|S|.
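An exhaustive check of the lemma in the simplest case is immediate (our own toy instance: a field Z_p plays the role of the exceptional set S, which is exactly the situation the ring case reduces to once S is restricted to constant vectors):

```python
import random

p, D = 101, 5               # small prime field Z_p; polynomial of total degree D

random.seed(2)
# A random nonzero univariate polynomial of degree exactly D (nonzero leading coeff).
coeffs = [random.randrange(p) for _ in range(D)] + [random.randrange(1, p)]

def f(x):
    return sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p

# Exhaustive count over S = Z_p: Pr[f(x) = 0] = #roots / p <= D / p.
zeros = sum(1 for x in range(p) if f(x) == 0)
assert zeros / p <= D / p
```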
Example 1 (Set S with Maximal Cardinality): For a prime p such that the ring R_p = Z_p[X]/(X^N + 1) is isomorphic to Z_p^N, the set S := {(a, a, ..., a) : a ∈ Z_p} ⊆ R_p satisfies the desired condition of the above lemma with R = R_p. Note that |S| = p and that S has maximal cardinality among all such subsets: if a set S′ had cardinality greater than p, then by the pigeonhole principle there would exist distinct x, y ∈ S′ having the same value in at least one coordinate (hence x − y is not invertible).
To exploit the above lemma in our case, we choose S := {a·1 : a ∈ Z_p}, where 1 ∈ Z_p^N is the all-ones vector. We thus directly obtain Theorem 1, which bounds the probability that f vanishes at a random point.

Theorem 1 (R_p-QAP with maximal cardinality): For a ring R_p ≅ Z_p^N (with prime p) and any arithmetic circuit C : R_p^n → R_p^{n′} of fan-in 2 with m wires and d multiplication gates, if p ≥ d, then there exists a QAP Q = (V = {v_k(x)}_{k=0}^m, W = {w_k(x)}_{k=0}^m, Y = {y_k(x)}_{k=0}^m, t(x)) computing C. More precisely,

    t(x) := Π_{i=0}^{d−1} (x − r_i),

and V, W, Y can be defined by combining the Lagrange basis polynomials {λ_j(x)}_{j=0}^{d−1}, where

    λ_j(x) := Π_{i=0, i≠j}^{d−1} (x − r_i)/(r_j − r_i),

for distinct roots r_0, ..., r_{d−1} ∈ A := {a·1 : a ∈ Z_p} ⊆ R_p.

B. Our Designated zk-SNARK from RLWE

We now describe our zk-SNARK with the RLWE-based linear-only encoding (Section II-B), which is composed of three algorithms (Setup, Prove, Verify). Roughly speaking, the protocol is a natural conversion of a DL-based encoding into the RLWE-based linear-only encoding with the R_p-QAP with maximal cardinality, where R_p = Z_p[x]/(x^N + 1) and N is a power of 2. We assume p, q ≡ 1 mod 2N, which implies that R_p ≅ Z_p^N and R_q ≅ Z_q^N.

Let R be the relation that a prover wants to prove, represented by an arithmetic circuit with n inputs, n′ outputs, and m wires (comprising the inputs and outputs of the circuit, the outputs of multiplication gates, and constant addition and multiplication gates). Let V = {v_k(x)}, W = {w_k(x)}, Y = {y_k(x)}, and t(x) be the ring-QAP (Definition 11) corresponding to this arithmetic circuit. Then, for the valid statements (a_1, ..., a_{n+n′}) and witnesses (a_{n+n′+1}, ..., a_m) of the relation R, it holds that

    ( v_0(x) + Σ_{i=1}^m a_i v_i(x) ) · ( w_0(x) + Σ_{i=1}^m a_i w_i(x) ) − ( y_0(x) + Σ_{i=1}^m a_i y_i(x) ) = h(x) t(x)

for some polynomial h(x) of degree at most the number of multiplication gates.

(crs, vrs) ← Setup(R). This algorithm receives a relation R as input and outputs a common reference string crs. In addition, since our scheme only supports a designated verifier, Setup outputs additional information, called vrs. The trusted third party (TTP) chooses random elements α, β, δ, r ← R_p and generates the master secret key of the RLWE encoding, s ← R_q. Then the TTP computes crs and vrs as follows:

    crs = { Enc(α), Enc(β), Enc(δ), {Enc(0_i)}_{i∈[N log q+2λ]},
            {Enc((βu_i(r) + αv_i(r) + w_i(r))/δ)}_{i=n+n′+1}^m,
            {Enc(r^i), Enc(r^i · t(r)/δ)}_{i=0}^d },
    vrs = {sk, α, β, δ, r}.

Here Enc denotes the encoding algorithm for RLWE as defined in Section II-B, and the Enc(0_j)'s are encodings of zero. The TTP makes crs public, whereas vrs is sent to the designated verifier and must be kept secret.

π ← Prove(crs, a_1, ..., a_m). To generate a proof π, the prover executes the Prove algorithm, which receives crs, the statements, and the witnesses as input. He chooses random elements γ_u, γ_v ← R_p and generates three encodings Enc(A(r)), Enc(B(r)), and Enc(C(r)) through homomorphic summations and scalar multiplications, where

    A(r) = α + Σ_{i=0}^m a_i u_i(r) + γ_u δ,
    B(r) = β + Σ_{i=0}^m a_i v_i(r) + γ_v δ,
    C(r) = ( Σ_{i=n+n′+1}^m a_i (βu_i(r) + αv_i(r) + w_i(r)) + h(r)t(r) ) / δ + γ_v A(r) + γ_u B(r) − γ_u γ_v δ,

and, for r_A, r_B, r_C ← {0, 1}^{N log q + 2λ} and I ∈ {A, B, C}, computes the re-randomized encodings

    Enc(I(r)) ← Enc(I(r)) + (0, e*_I) + Σ_{j=1}^{N log q + 2λ} r_{I,j} · Enc(0_j) mod qR,
where e*_I is sampled from a distribution that outputs large elements to smudge the error terms in the RLWE encodings. We formally describe how to sample e*_I in Section III-C.

1/0 ← Verify(π, vrs, a_1, ..., a_{n+n′}). A (designated) verifier who holds vrs can check the validity of π = (Enc(A(r)), Enc(B(r)), Enc(C(r))). The verifier obtains a tuple (A, B, C) by running the decryption algorithm on π. Then he tests

    A·B = αβ + Cδ + Σ_{i=0}^{n+n′} a_i(βu_i(r) + αv_i(r) + w_i(r)),   (3)

and accepts the proof if the test passes.
C. Noise Flooding with Optimized Parameters

A verifier in our protocol decrypts the RLWE ciphertexts using a secret key to obtain the messages. The decryption process of the RLWE-based encoding gives the verifier the error terms as well as the corresponding message. Due to the construction of the RLWE ciphertexts, the error terms may contain some information about the affine computations conducted on the encrypted data, so information about the error term must be hidden. To overcome this restriction, previous works [16], [23], [24] introduced a noise flooding technique in which one adds large values to hide an existing error term.

**Noise flooding in previous work.** In previous work, the prover injects a sufficiently large error e* into a proof ciphertext (a, b = a·s + e) so that the resulting ciphertext (a, b + e* mod qR) has an error e + e* mod qR that is statistically close to e*. The following lemma then guarantees that no adversary can obtain any significant information about the error term e from the decryption of a proof ciphertext.
Lemma 5 ([23]): Let B1, B2 be positive numbers and x a fixed number in the interval [−B1, B1]. Let Y be the uniform distribution on the interval [−B2, B2]. Then the statistical distance between the distributions Y and Y + x is bounded by B1/B2.

Specifically, the lemma implies that choosing B2 = B1 × 2^κ bounds the statistical distance between the two distributions by 2^{−κ}. Then, by the probability preservation property of the statistical distance, the scheme with noise flooding satisfies the zero-knowledge property (Definition 8) with ϵ_zk = 2^{−κ}.
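Lemma 5 is easy to verify exactly for small discrete instances (a toy check of ours, using an integer interval in place of the continuous one; exact rational arithmetic avoids rounding questions):

```python
from fractions import Fraction

def sd_uniform_shift(B2: int, x: int) -> Fraction:
    """Exact SD between the uniform distribution on {-B2..B2} and its shift by x."""
    mass = Fraction(1, 2 * B2 + 1)
    d1 = {v: mass for v in range(-B2, B2 + 1)}
    d2 = {v + x: mass for v in range(-B2, B2 + 1)}
    support = set(d1) | set(d2)
    return sum(abs(d1.get(v, 0) - d2.get(v, 0)) for v in support) / 2

B1, B2 = 3, 3 * 2**10                         # B2 = B1 * 2^kappa with kappa = 10
for x in range(-B1, B1 + 1):
    assert sd_uniform_shift(B2, x) <= Fraction(B1, B2)   # Lemma 5's bound holds
```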
**Our noise flooding with tighter parameters.** In this paper, we propose to quantify the computational cost of distinguishing two distributions with the notion of Hellinger distance, as recently proposed by [39]. By computing this cost more tightly, we can use better parameters while providing the same zero-knowledge property. More specifically, we compute a tighter lower bound on the computational cost required for an adversary to break zero-knowledge, and set the parameters accordingly.

We remark that the conventional argument based on the statistical distance (as above) requires the new error to be larger than the initial error by a ratio exponential in the statistical parameter. To circumvent this limitation, several approaches (especially in lattice-based cryptography) have been proposed that use closeness measures other than the statistical distance [39]–[46]. Our approach can be seen as an adaptation of [39] to the zero-knowledge property of a lattice-based encoding scheme.

We first introduce the Hellinger distance and its key property, the main ingredient of our improved noise flooding technique.

Definition 12 (Hellinger distance): Let D1, D2 be two discrete distributions over a domain X. The Hellinger distance between D1 and D2 is defined by

    H(D1, D2) = √( 1 − Σ_{x∈X} √(D1(x)·D2(x)) ).

If H(D1, D2) is smaller than 2^{−t}, we say that the pair (D1, D2) is 2^{−t}-Hellinger close.

It was recently shown in [39] that replacing a distribution D1 with another distribution D2 in the security game for a decision problem loses only a few bits of security if (D1, D2) is a 2^{−κ/2}-Hellinger close pair. More formally, they proved the following lemma.
Lemma 6 (Theorem 5 in [39]): Let Π^{D1} be a cryptographic primitive with black-box access to a distribution D1, and let G^{D1} be a decision security game regarding Π^{D1}. Suppose that (D1, D2) is a 2^{−κ/2}-Hellinger close pair. Then, if Π^{D1} achieves κ-bit security, Π^{D2} achieves (κ − 6.847)-bit security.
Now, with this lemma, we can show that adding noise from an appropriate discrete Gaussian distribution achieves the goal of the noise flooding technique, as follows. Let D1 and D2 be discrete Gaussians with standard deviation σ′ centered at zero and at e, respectively. Then, by the above lemma with Π^{D_i} the designated zk-SNARK from RLWE with black-box access to D_i, it suffices to show that H(D1, D2) ≤ 2^{−κ/2}; this yields that G^{D2}, the security game for zero-knowledge (Definition 8), is κ-bit secure, i.e., the advantage of any adversary is less than 2^{−κ} (note that G^{D1} is already ≥ (κ + 6.847)-bit secure, since it contains no information about the error term e). The following lemma gives a sufficient size of σ′ for this argument, given that ∥e∥ ≤ B.
_∥_ _∥≤_
_Lemma 7: Let P and Q be discrete Gaussian distributions_
with the standard deviation σ[′] centered at zero and y, respectively, such that _y_ _B. Then, it satisfies that_
_|_ _| ≤_
_H(P, Q)[2]_ 1 exp(
_≤_ _−_ _−_ _[B][2]_
8σ[′][2][ )][.]
Proof: We regard P and Q as continuous Gaussian distributions, since the σ′ we will use is sufficiently large. Then, from the definitions of the Gaussian distribution and the Hellinger distance,

    H(P, Q)² = 1 − (1/(√(2π)·σ′)) · ∫_R exp( −( (1/4)(x − y)² + (1/4)x² ) / σ′² ) dx.

The integral can be rewritten as

    exp(−y²/(8σ′²)) × ∫_{−∞}^{∞} exp( −(1/(2σ′²)) · (x − y/2)² ) dx.

Using the fact that ∫_R exp(−cu²) du = (c/π)^{−1/2} for all c > 0, we obtain exp(−y²/(8σ′²)) × σ′√(2π). Substituting this into the first equation gives the claim. ∎
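The bound of Lemma 7 can be checked numerically on a truncated discrete Gaussian (our own sanity check; the truncation radius and the standard-deviation convention are illustrative assumptions, and the small additive tolerance absorbs float rounding):

```python
import math

def hellinger_sq(sigma, y, radius):
    """H(P,Q)^2 for discrete Gaussians centered at 0 and y, support truncated."""
    xs = range(-radius, radius + 1)
    w0 = [math.exp(-x * x / (2 * sigma ** 2)) for x in xs]
    wy = [math.exp(-(x - y) ** 2 / (2 * sigma ** 2)) for x in xs]
    z0, zy = sum(w0), sum(wy)
    bc = sum(math.sqrt((a / z0) * (b / zy)) for a, b in zip(w0, wy))
    return 1.0 - bc                        # Bhattacharyya coefficient -> H^2

sigma, B = 200.0, 10
bound = 1.0 - math.exp(-B * B / (8 * sigma ** 2))
for y in range(-B, B + 1):
    # Lemma 7: H(P,Q)^2 <= 1 - exp(-B^2/(8 sigma'^2)) whenever |y| <= B
    assert hellinger_sq(sigma, y, radius=3000) <= bound + 1e-9
```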
Now, to satisfy κ-bit security of zero-knowledge (i.e., ϵ_zk = 2^{−κ} in Definition 8), it is required that

    1 − exp(−B²/(8σ′²)) < 2^{−κ}.

In other words,

    σ′ ≥ B / ( 2√2 · √(−ln(1 − 2^{−κ})) ) = O(2^{κ/2} · B).   (4)

**Parameters.** Let κ be the statistical parameter and B the size of the error term in the final encodings (before noise flooding). To achieve κ-bit security, it suffices to set σ′ as in (4). The remaining task is to choose q such that q/2p is larger than Ω(σ′), to achieve correctness. By contrast, the previous analysis, which used the statistical distance as the measure of closeness of two distributions, sets σ′ to approximately Ω(2^κ·B), which implies q/2p ≥ Ω(2^κ·B). Consequently, in our tighter analysis based on the Hellinger distance, the required flooding noise, and hence q, grows like 2^{κ/2} rather than 2^κ as in the conventional analysis.

More specifically, in Section IV, we present improved concrete parameters resulting from the Hellinger-distance analysis and estimate the proof size based on the improved parameters.
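The gap between the two analyses is easy to make tangible (a back-of-the-envelope comparison of ours; B and κ are arbitrary illustrative values, and `log1p` is used for numerical accuracy near 1):

```python
import math

def flooding_sigma_hellinger(B: float, kappa: int) -> float:
    # Equation (4): sigma' >= B / (2*sqrt(2) * sqrt(-ln(1 - 2^-kappa)))
    return B / (2 * math.sqrt(2) * math.sqrt(-math.log1p(-(2.0 ** -kappa))))

def flooding_sigma_sd(B: float, kappa: int) -> float:
    # Conventional statistical-distance analysis: sigma' ~ 2^kappa * B
    return (2.0 ** kappa) * B

B, kappa = 1000.0, 40
h, s = flooding_sigma_hellinger(B, kappa), flooding_sigma_sd(B, kappa)
# The Hellinger analysis needs roughly 2^(kappa/2) rather than 2^kappa:
# about kappa/2 fewer bits of flooding noise, hence a much smaller modulus q.
assert math.log2(s / h) > kappa / 2 - 2
```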
D. Security Proofs

Theorem 2: Let κ be the statistical security parameter and λ the security parameter. Let N = N(λ), q = q(λ), and σ = σ(λ) be RLWE parameters as in Lemma 3, satisfying B = 8pσ√m + N log q + 2λ + 3, where m is the number of wires in the target circuit C. Assume that our RLWE encoding scheme (Definition 2) is a linear-only encoding scheme (Definition 3). Then, for the circuit C, the scheme described in Section III-B is a designated zk-SNARK (Definition 10).

Clearly, completeness is straightforward to prove. Moreover, our scheme consists of three RLWE encodings, each of size polynomial in λ, and the verification procedure takes time polynomial in λ, so succinctness also holds.

We now introduce a leftover hash lemma [47], which is necessary to prove the zero-knowledge property.

Lemma 8 (Specialized leftover hash lemma): For non-negative integers n, q, t and a real number ϵ, if A ← Z_q^{n×t} and r ← Z_2^t, then

    SD((A, A·r), (A, u)) ≤ (1/2) · √(2^{−t} · q^n),

where A·r is computed in Z_q and u ← Z_q^n. Thus, if t > n log q + 2 log(1/ϵ), then SD((A, A·r), (A, u)) ≤ ϵ.

Lemma 9 (Zero-knowledge): The protocol has zero-knowledge under the parameters in Theorem 2.

Proof: We build a simulated proof π′ that follows the same distribution as a real proof π. The algorithm comprises two steps: constructing the elements and generating RLWE ciphertexts using crs. First, choose A′, B′ ← R_p and compute

    C′ = ( A′B′ − αβ − Σ_{i=0}^{n+n′} a_i(βu_i(r) + αv_i(r) + w_i(r)) ) / δ.

Then, using crs, we can generate three RLWE ciphertexts Enc(A′), Enc(B′), and Enc(C′), and output the proof π′ = (Enc(A′), Enc(B′), Enc(C′)). The simulated proof passes the verification (3).

As the last step, we need to prove that π and π′ are statistically or computationally indistinguishable. Each encoding in π and π′ consists of a pair (a, b mod qR) with b = a·s + e + Δm. By the leftover hash lemma (Lemma 8), the first component of any encoding looks like a random element of R_q ≅ Z_q^N. More precisely, every element of R_q can be regarded as a vector in Z_q^N, so we apply Lemma 8 to randomize the first component of each encoding, given that N log q + 2λ encodings of zero are provided. On the other hand, the noise flooding technique of Lemma 5 shows that e is independent of any witness, since the error term looks like a random element. Therefore, the two proofs are indistinguishable. ∎

Lemma 10 (Knowledge soundness): The protocol has knowledge soundness under the parameters in Theorem 2.

Proof: Suppose that there exists an adversary A that can break knowledge soundness with non-negligible probability. We construct a knowledge extractor X based on A.

Let π = (A(r), B(r), C(r)) be a tuple of RLWE ciphertexts. Then A, which is only allowed affine computations, can obtain a representation of the form

    A = A_α·α + A_β·β + A_δ·δ + A(r) + Σ_{i=n+n′+1}^m A_i · (βu_i(r) + αv_i(r) + w_i(r))/δ + A_h(r) · t(r)/δ,

where A_α, A_β, A_δ, {A_i}_{i=n+n′+1}^m are scalars in R_p and A(r), A_h(r) are polynomials of degree d with coefficients in R_p. Similarly, we obtain representations for B and C.

Our construction allows slot-wise computations by ring operations, and the verification (3) can be considered as slot-wise computations, i.e., independent computations over each Z_p. Note that a verifier in our protocol accepts only when the equation holds for all slots, so it suffices to show slot-wise knowledge soundness.

Since A can break soundness, A can pass the verification equation on each slot. For simplicity, we use a tilde to denote slot-wise values. Then, for each slot, the verification equation reads

    Ã·B̃ = α̃β̃ + C̃δ̃ + Σ_{i=0}^{n+n′} ã_i(β̃ũ_i(r) + α̃ṽ_i(r) + w̃_i(r)).   (5)
Moreover, after A computes affine operations over the rings, A obtains the equations

    Ã = Ã_α·α̃ + Ã_β·β̃ + Ã_δ·δ̃ + Ã(r) + Σ_{i=n+n′+1}^m Ã_i · (β̃ũ_i(r) + α̃ṽ_i(r) + w̃_i(r))/δ̃ + Ã_h(r) · t̃(r)/δ̃,   (6)

where the tilded elements lie in the finite field Z_p for a prime p. Similarly, A obtains representations for B̃ and C̃.

Now we regard the random elements α̃, β̃, δ̃ as formal variables. Then Ã·B̃ contains the formal variables α̃², β̃², and 1/δ̃², none of which appears on the right-hand side of the verification (5) in each slot. Thus, for the verification to pass, the coefficient Ã_αB̃_α of α̃² must be zero, which implies that Ã_α or B̃_α is zero. Without loss of generality, we assume B̃_α = 0.

Similarly, we compare the coefficients of α̃β̃ and of β̃²:

    coeff. of α̃β̃ in LHS of (5) = Ã_αB̃_β + Ã_βB̃_α,
    coeff. of α̃β̃ in RHS of (5) = 1,
    coeff. of β̃² in LHS of (5) = Ã_βB̃_β,
    coeff. of β̃² in RHS of (5) = 0.

Thus it holds that Ã_αB̃_β + Ã_βB̃_α = Ã_αB̃_β = 1 and Ã_βB̃_β = 0. Without loss of generality, we assume Ã_α = B̃_β = 1, which forces Ã_β = 0. For the coefficient of 1/δ̃², since δ̃ is regarded as a formal variable, we observe that

    ( Ã_h(r)t̃(r) + Σ_{i=n+n′+1}^m Ã_i(β̃ũ_i(r) + α̃ṽ_i(r) + w̃_i(r)) ) × ( B̃_h(r)t̃(r) + Σ_{i=n+n′+1}^m B̃_i(β̃ũ_i(r) + α̃ṽ_i(r) + w̃_i(r)) ) = 0,

so each component of the coefficient of 1/δ̃² is zero. Hence Ã and B̃ can be rewritten as

    Ã = α̃ + Ã_δδ̃ + Ã(r),
    B̃ = β̃ + B̃_δδ̃ + B̃(r).

Moreover, it holds that

    Ã·B̃ = (α̃ + Ã_δδ̃ + Ã(r)) · (β̃ + B̃_δδ̃ + B̃(r)) = α̃β̃ + C̃δ̃ + Σ_{i=0}^{n+n′} ã_i(β̃ũ_i(r) + α̃ṽ_i(r) + w̃_i(r)).

Thus, the verification equation (5) implies that

    B̃(r)α̃ + Ã(r)β̃ + Ã(r)B̃(r) + Σ_{i=n+n′+1}^m C̃_i(β̃ũ_i(r) + α̃ṽ_i(r) + w̃_i(r)) + h̃(r)t̃(r) = Σ_{i=0}^{n+n′} ã_i(β̃ũ_i(r) + α̃ṽ_i(r) + w̃_i(r)),

and, since α̃ and β̃ are also formal variables, comparing coefficients yields

    Ã(r) = Σ_{i=0}^{n+n′} ã_iũ_i(r) + Σ_{i=n+n′+1}^m C̃_iũ_i(r),
    B̃(r) = Σ_{i=0}^{n+n′} ã_iṽ_i(r) + Σ_{i=n+n′+1}^m C̃_iṽ_i(r),
    Ã(r)·B̃(r) = Σ_{i=0}^{n+n′} ã_iw̃_i(r) + Σ_{i=n+n′+1}^m C̃_iw̃_i(r) + h̃(r)t̃(r).
�
Moreover, for the coefficients of _α/δ[�] and_ _β/δ[�], we observe_
� [�]
that
� _m_ �
�
_A�i(β�u�i(r) + �αv�i(r) + �wi(r)) �Ah(r)�t(r)_ _×_ _B[�]α_
_i=n+n[′]+1_
= 0,
� _m_
�
_B�i(β�u�i(r) + �αv�i(r) + �wi(r) + �Bh(r)�t(r)_
_i=n+n[′]+1_
We set �ai = Ci for i ∈{n + n[′] + 1, · · ·, m}. Then, it
holds that _B[�](r) =_ [�]i[m]=0 [�][a][i][v][�][i][(][r][)][ and][ �][A][(][r][) =][ �]i[m]=0 [�][a][i][u][�][i][(][r][)][.]
Moreover, for a variable�m _r, we also observe that_ _A[�](r)B[�](r) =_
_i=0_ [�][a][i][ �][w][i][(][r][)+][�][h][(][r][)][�][t][(][r][)][, which implies that for each slot, the]
set {�ai}i[m]=n+n[′]+1 [=][ {][C][ �][i][}]i[m]=n+n[′]+1 [is a witness of the state-]
ment {�ai}i[n]=1[+][n][′] [. The slot-wise knowledge soundness completes]
the knowledge soundness of our construction.
IV. PROOF SIZE ESTIMATION
We now estimate the size of proof π of our designated zkSNARK from RLWE. First, we provide concrete parameters
of our protocol for circuits with 2[16] gates for achieving the
110, 128, and 164-bit security, respectively. Due to the fancy
analysis with respect to the Hellinger distance (equivalently,
Rènyi divergence of order 1/2), concrete parameters improve
considerably. Specifically, we describe the size of the proof of
our scheme and then compare it with that of previous works.
**Concrete parameters. We set the parameters to satisfy the**
following.
= 0,
� _m_
�
_A�i(β�u�i(r) + �αv�i(r) + �wi(r)) �Ah(r)�t(r)_
_i=n+n[′]+1_
�
�
_×_ _A[�]α_
_×_ _B[�]β_
�
_×_ _A[�]β_
= 0,
� _m_
�
_B�i(β�u�i(r) + �αv�i(r) + �wi(r) + �Bh(r)�t(r)_
_i=n+n[′]+1_
= 0.
TABLE I
CONCRETE PARAMETERS OF OUR DESIGNATED ZK-SNARK FROM RLWE. HERE d IS THE NUMBER OF MULTIPLICATIVE GATES. WE FIX T = 8 AND d = 2^15 FOR FAIR COMPARISON.

| λ | N | log α | log q | log p | log σ′ | m |
|---|---|---|---|---|---|---|
| 164 | 2048 | −104 | 260 | 32 | 146 | 2^22 |
| 128 | 2048 | −104 | 208 | 32 | 101 | 2^22 |
| 110 | 2048 | −104 | 180 | 32 | 71 | 2^22 |
• Our designated zk-SNARK has 164-bit security as estimated by Albrecht et al.'s LWE security estimator [48] with the reduction_cost_model=BKZ.sieve cost model.^6 With this model, the parameters of the previous work only satisfy 110-bit security, not the 164-bit security claimed by its authors. Thus, we provide several types of parameter suggestions as follows.
  – For a fair comparison, the bit-size of the message space and the other circuit-related parameters are the same as in previous work [23], [24].
  – We provide new parameters satisfying the 164-bit security that the previous work aimed for.
  – We also provide a parameter set achieving 128-bit security to compare our amortized proof size with the smallest proof size of the group-based zk-SNARK [9].
• To make a fair and easy comparison with previous work, we follow the way of selecting parameters in the previous papers as much as possible.
Let N, q, and σ = αq be the parameters of the RLWE instances, and let p be a 32-bit prime such that p ≡ 1 mod 2N. A tight analysis based on the Hellinger distance instead of the statistical distance loses 6.847 bits of security [39]. In other words, to obtain the same 32-bit statistical security as in previous work, we need to consider parameters that provide 39 bits of statistical robustness.

More precisely, for fair comparisons with previous work, we consider an arithmetic circuit with at most 2^16 gates and d = 2^15 multiplication gates, which can cover many example applications such as the SHA-256 evaluation. Then, setting the tailcut parameter T = 8, B = |e| is 8pσ√(m + t + 3), where m is the number of wires in the ring-QAP and t = N log q + 2λ. Furthermore, σ′ should satisfy

  H(e + χ_{σ′} ∥ χ_{σ′}) = √(1 − exp(−(1/4)(B/σ′)²)) ≤ 1/2^20.

Finally, we set m = 2^22 as in [24] so that σ′ ≈ 2^19 · B and 8σ′ < q/2p for the correctness (of the encoding scheme). Then, it holds that

  8 · (2^19 · 8pσ√(m + t + 3)) < q/2p.
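The correctness condition 8σ′ < q/2p can be checked directly against the Table I values in the log domain; the following sketch (our own sanity check, using log p = 32) confirms it for all three parameter sets:

```python
# Checks log2(8*sigma') < log2(q/(2p)) for each row of Table I.
LOG_P = 32
table_one = [(164, 260, 146), (128, 208, 101), (110, 180, 71)]  # (lambda, log q, log sigma')
for lam, log_q, log_sigma in table_one:
    lhs = 3 + log_sigma          # log2(8 * sigma')
    rhs = log_q - 1 - LOG_P      # log2(q / (2p))
    assert lhs < rhs
    print(f"lambda = {lam}: {lhs} < {rhs}")
```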
For readability, we list the parameters of our zk-SNARK in Table I for various security parameters λ.
Interestingly, the Hellinger distance provides a significant practical improvement independent of our introduction of the RLWE encoding. Moreover, with our RLWE encoding and ring-QAP, our protocol is much more efficient in the amortized sense than previous zk-SNARKs from SSPs and QAPs. For the same circuit satisfiability, Gennaro et al. [23] and Naganuma et al. [24] chose LWE parameters (N, log α, log q) as (1400, −180, 736) for 110-bit security. On the other hand, we choose RLWE parameters (2048, −98, 160) to achieve the same security. Thus, we reduce not only the size of an encoding in the amortized sense but also the size of a single encoding.

6 After we submitted this paper, a new estimator, called lattice-estimator, was published. However, we still use the previous estimator, named LWE-estimator, for the consistency of this paper.

TABLE II
COMPARISON OF PROOF SIZE OF EACH ZK-SNARK.

| Scheme | λ | Proof Size (Total) | Proof Size (Amortized) | PQ | Programs | Computational Assumption |
|---|---|---|---|---|---|---|
| Ours | 110 | 276.5 KB | 135 B | ✓ | Ring QAP | linear-only, RLWE |
| Ours | 128 | 319.5 KB | 156 B | ✓ | Ring QAP | linear-only, RLWE |
| Ours | 164 | 399.4 KB | 195 B | ✓ | Ring QAP | linear-only, RLWE |
| [24] | 110 | 405 KB | - | ✓ | QAP | linear-only, LWE |
| [16] | 110 | 270 KB | - | ✓ | SAP | linear-only^7, LWE |
| [23] | 110 | 640 KB | - | ✓ | SSP^8 | PKE, PDH, LWE |
| [9] | 128 | 138 B^9 | - | ✗ | QAP | PKE, PDH |
**Proof size.** We can now estimate the size of the proof π of our scheme. Our proof π comprises three RLWE encodings, and the size of each encoding is about 2N log q bits because R_q = Z[X]/(X^N + 1) is the encoding space. Then, each encoding has size 2 · 2048 · 260 bits ≈ 133.1 KB, and the proof size is about 399.4 KB under 164-bit security, since the proof π consists of three encodings. Under 110-bit security, our scheme has a proof size of about 276.5 KB, which is smaller than the previous QAP- and SSP-based lattice schemes [23], [24] and comparable to Nitulescu [16], whose 270 KB proof is the smallest among previous lattice-based work.
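The figures above (and the "Ours" rows of Table II) follow from simple arithmetic: three encodings of 2N log q bits each, amortized over N slots. A short recomputation (ours, using KB = 1000 bytes as the text does):

```python
# Recomputes total and amortized proof sizes from 2*N*log q bits per encoding.
N, ENCODINGS = 2048, 3
for lam, log_q in [(110, 180), (128, 208), (164, 260)]:
    total_bytes = ENCODINGS * 2 * N * log_q // 8
    amortized_bytes = total_bytes // N        # N verifications per proof
    print(f"lambda = {lam}: {total_bytes / 1000:.1f} KB total, "
          f"{amortized_bytes} B amortized")
```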
If we consider the amortized proof size, our scheme is even comparable to the best result among previous zk-SNARKs (without post-quantum security). More precisely, since our scheme allows N verifications simultaneously for each proof and our proof size is about 319.5 KB under 128-bit security, the amortized proof size is only 156 bytes with N = 2048, which is almost the same as the 138 bytes of Groth [9], the shortest proof among all zk-SNARKs. The proof size for each scheme is summarized in Table II.
**Size of common reference string.** In a lattice-based zk-SNARK, the common reference string (crs) is composed of encodings for proving the circuit evaluation and for the leftover hash lemma (for zero-knowledge). In our proposal, the number of encodings in the crs is the same as that in [24], which built a lattice-based zk-SNARK from a QAP as we do. One difference is that our encoding from RLWE has 2N log q bits, which can be reduced to N log q = 2048 · 180 bits with a pseudorandom generator, while the one from LWE in [24] has log q′ = 736 bits (when reduced similarly with a pseudorandom generator). When we consider the size of the crs in the amortized sense (with N-fold amortization), however, the size of each encoding in our crs is only log q = 180 bits, which is much smaller than that of [24].

7 In the original proposal, [16] relies on the linear-targeted malleability assumption, a weaker assumption than the linear-only assumption. However, to achieve a zk-SNARK, it also requires the linear-only assumption or a similar one with an efficient extractor.
8 Here, we assume the evaluation circuit of SHA-256, which corresponds to an arithmetic circuit with 2^16 gates or less [23].
9 With the bn-128 curve; https://github.com/zcash/zcash/issues/2465
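For concreteness, the per-encoding sizes being compared can be tabulated as follows (a sketch of ours using the 110-bit parameters; the 736-bit LWE figure is the one quoted for [24]):

```python
# Per-encoding crs sizes, in bits, at 110-bit security.
N, LOG_Q = 2048, 180
rlwe_full = 2 * N * LOG_Q        # a full RLWE encoding
rlwe_prg = N * LOG_Q             # compressed with a pseudorandom generator
rlwe_amortized = rlwe_prg // N   # per slot, with N-fold amortization
lwe_prg = 736                    # the LWE encoding of [24], PRG-compressed
print(rlwe_full, rlwe_prg, rlwe_amortized, lwe_prg)
```

Per encoding, our compressed size (368640 bits) exceeds the 736 bits of [24], but once amortized over the N = 2048 slots it shrinks to 180 bits, matching the claim above.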
**Prover complexity.** While our focus is on reducing the (amortized) proof size of the zk-SNARK, as in other work in the literature, we can also compare the prover/verifier complexity of our work with that of previous works. Note that our SNARK requires ring multiplications over Z_q[X]/(X^N + 1), which may cost Θ(N²) operations over Z_q, while previous SNARKs from LWE require constant multiplications over Z_q^{N′}, which cost only Θ(N′). We remark that this overhead can be mitigated by applying the Number Theoretic Transform to our solution, which reduces the cost to Θ(N log N) (in this case, we must take the ciphertext modulus q ≡ 1 mod 2N so that the ciphertext space R_q ≅ Z_q^N). Then, our prover/verifier complexity is roughly log N in the amortized sense, which improves on the N of the previous work, given that we utilize the full batch of N slots for the proof.
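The isomorphism R_q ≅ Z_q^N used here can be illustrated on a toy ring. In this sketch (our own illustration, with N = 4 and q = 17 chosen so that q ≡ 1 mod 2N), negacyclic multiplication agrees with slot-wise multiplication of evaluations at the odd powers of a 2N-th root of unity:

```python
# Toy illustration of the slot decomposition Z_q[X]/(X^N+1) ≅ Z_q^N:
# with q ≡ 1 (mod 2N), X^N+1 splits into linear factors, so negacyclic
# multiplication becomes slot-wise multiplication of evaluations.
N, q = 4, 17                      # toy sizes; 17 ≡ 1 (mod 8)
w = 2                             # 2 has order 8 mod 17, i.e., a 2N-th root of unity
roots = [pow(w, 2 * i + 1, q) for i in range(N)]   # roots of X^4 + 1 mod 17

def negacyclic_mul(a, b):
    """Schoolbook product in Z_q[X]/(X^N + 1): Theta(N^2) operations."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k, sign = (i + j) % N, (-1) ** ((i + j) // N)
            c[k] = (c[k] + sign * ai * bj) % q
    return c

def evaluate(a):                  # the "NTT": evaluate at each root
    return [sum(ai * pow(r, i, q) for i, ai in enumerate(a)) % q for r in roots]

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
lhs = evaluate(negacyclic_mul(a, b))
rhs = [(x * y) % q for x, y in zip(evaluate(a), evaluate(b))]
assert lhs == rhs                  # ring product == slot-wise product
print(lhs)
```

An NTT computes these evaluations in Θ(N log N) time instead of via the Θ(N²) schoolbook product shown here.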
**Extension to other circuits.** We believe that our conversion and analysis can be applied to previous zk-SNARKs from SSPs and SAPs beyond the QAP. In particular, if one converts a SAP-based zk-SNARK from the LWE to the RLWE assumption to obtain an even smaller proof than our QAP-based zk-SNARK, then, under 128-bit security, one obtains a 213 KB proof size, which corresponds to 104 bytes per verification in the amortized sense. However, as Naganuma et al. [24] mentioned, the scheme might be less efficient than a QAP-based zk-SNARK.
V. EXPERIMENTAL RESULTS

**Experimental setup.** We implement our new lattice-based designated zk-SNARK and present the experimental results for our protocol. For the implementation, we adopted the libsnark library [49] for the zk-SNARK part and the Microsoft SEAL library [50] for the RLWE encoding part, and then integrated them.^10 Our experiments were conducted on Linux Ubuntu 22.04.01 LTS with an AMD EPYC 7502 CPU and 32 GB of memory.

In our experiment, we generated a random circuit with 2^15 multiplicative gates and 2^5 inputs, which can also cover the SHA-256 evaluation. Then, we measured the proof generation and verification times under the various security parameters given in Table I.
**Prover time.** Table III presents the proof generation time for each parameter set. In our implementation, the main operation for the prover is a linear combination between RLWE encodings and ring elements (instead of the multi-exponentiation used in other zk-SNARKs). As Table III shows, it takes about 7 seconds to generate a proof under the parameter set with λ = 128. For simplicity, we measured the time for generating a proof with only one instance, while an RLWE encoding supports batching multiple proofs by nature. More specifically, the RLWE encoding with N = 2048 and log q = 208 can hold 2048 messages simultaneously, and thus the amortized time for generating a proof for one instance is about 3.5 milliseconds.

10 To this end, we made some minor changes in each library; e.g., SEAL only supports a coefficient modulus of at most 54 bits for N = 2048, while we require at least 180 bits.

TABLE III
TIMING RESULTS WITH T = 8, d = 2^15, AND NUMBER OF INPUTS 2^5.

| λ | Key Generation (s) | Prover Time, Total (s) | Prover Time, Amortized (ms) | Verifier Time, Total (s) | Verifier Time, Amortized (ms) |
|---|---|---|---|---|---|
| 110 | 13.46 | 6.65 | 3.2 | 0.011 | 0.005 |
| 128 | 14.02 | 7.19 | 3.5 | 0.017 | 0.008 |
| 164 | 16.95 | 8.43 | 4.1 | 0.023 | 0.012 |
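The amortized columns of Table III follow by dividing the total times by the batch size; a quick recomputation (ours, assuming the full batch of N = 2048 slots is used):

```python
# Divides the measured total times (Table III) by the N = 2048 batch size.
N = 2048
totals = {110: (6.65, 0.011), 128: (7.19, 0.017), 164: (8.43, 0.023)}
for lam, (prove_s, verify_s) in totals.items():
    print(f"lambda = {lam}: prover {prove_s / N * 1000:.1f} ms, "
          f"verifier {verify_s / N * 1000:.3f} ms amortized")
```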
**Verifier time.** Table III also presents the verification time. As expected from equation (3) (in Section III-B), the verifier complexity is independent of the circuit size and depends only on the number of inputs. According to our experiment, it takes about 11 ms to verify a proof with 2^5 inputs, and the amortized time for verifying a proof is about 0.005 ms.
ACKNOWLEDGMENT
Heewon Chung was partly supported by Institute of
Information & Communications Technology Planning &
Evaluation (IITP) grant funded by the Korea government
(MSIT) (No.2020-0-01373, Artificial Intelligence Graduate School Program (Hanyang University)) and the research fund of Hanyang University (HY-202000000790015).
Jeong Han Kim was partially supported by National Research Foundation of Korea (NRF) Grants funded by
the Korean Government (MSIP) (NRF-2016R1A5A1008055
& 2017R1E1A1A0307070114) and by a KIAS Individual
Grant(CG046002) at Korea Institute of Advanced Study.
REFERENCES
[1] S. Goldwasser, S. Micali, and C. Rackoff, “The knowledge complexity of
interactive proof systems,” SIAM J. Comput., vol. 18, no. 1, pp. 186–208,
1989.
[2] E. B. Sasson et al., “Zerocash: Decentralized anonymous payments from
bitcoin,” in Proc. IEEE Symposium on Security and Privacy (S&P),
2014.
[3] E. B. Sasson et al., “Zerocash: Decentralized anonymous payments from
bitcoin,” in Proc. IEEE Symposium on Security and Privacy (S&P),
2014.
[4] J. Bonneau, I. Meckler, V. Rao, and E. Shapiro, “Coda: Decentralized
cryptocurrency at scale,” Cryptology ePrint Archive, Report 2020/352,
2020.
[5] Z. Ghodsi, T. Gu, and S. Garg, “Safetynets: Verifiable execution of deep
neural networks on an untrusted cloud,” in Proc. NIPS, 2017.
[6] Y. Zhang, D. Genkin, J. Katz, D. Papadopoulos, and C. Papamanthou, "vSQL: Verifying arbitrary SQL queries over dynamic outsourced databases," in Proc. IEEE Symposium on Security and Privacy (S&P), 2017.
[7] ZKProof, “https://zkproof.org.”
[8] B. Parno, J. Howell, C. Gentry, and M. Raykova, “Pinocchio: Nearly
practical verifiable computation,” in Proc. IEEE Symposium on Security
_and Privacy (S&P), 2013._
[9] J. Groth, “On the size of pairing-based non-interactive arguments,” in
_Proc. EUROCRYPT, 2016._
[10] B. Bünz, B. Fisch, and A. Szepieniec, “Transparent SNARKs from
DARK compilers,” in Proc. EUROCRYPT, 2020.
[11] S. Setty, “Spartan: Efficient and general-purpose zkSNARKs without
trusted setup,” in Proc. CRYPTO, 2020.
[12] J. Lee, “Dory: Efficient, transparent arguments for generalised inner
products and polynomial commitments,” in Proc. TCC, 2021.
[13] E. Ben-Sasson, I. Bentov, Y. Horesh, and M. Riabzev, “Scalable, transparent, and post-quantum secure computational integrity,” Cryptology
ePrint Archive, Report 2018/046, 2018.
[14] J. Zhang, T. Xie, Y. Zhang, and D. Song, “Transparent polynomial
delegation and its applications to zero knowledge proof,” in Proc. IEEE
_Symposium on Security and Privacy (S&P), 2020._
[15] A. Chiesa, D. Ojha, and N. Spooner, “Fractal: Post-quantum and
transparent recursive proofs from holography,” in Proc. EUROCRYPT,
2020.
[16] A. Nitulescu, “Lattice-based zero-knowledge snargs for arithmetic circuits,” in LATINCRYPT, 2019.
[17] D. Boneh, Y. Ishai, A. Sahai, and D. J. Wu, “Lattice-based snargs and
their application to more efficient obfuscation,” in Proc. EUROCRYPT,
2017.
[18] D. Boneh, Y. Ishai, A. Sahai, and D. J. Wu, “Quasi-optimal snargs via
linear multi-prover interactive proofs,” in Proc. EUROCRYPT, 2018.
[19] O. Regev, “On lattices, learning with errors, random linear codes, and
cryptography,” J. ACM, vol. 56, no. 6, pp. 1–40, 2009.
[20] C. Gentry, C. Peikert, and V. Vaikuntanathan, “Trapdoors for hard
lattices and new cryptographic constructions,” in Proc. STOC, 2008.
[21] V. Lyubashevsky, C. Peikert, and O. Regev, “On ideal lattices and
learning with errors over rings,” J. ACM, vol. 60, no. 34, pp. 1–35,
2013.
[22] L. Ducas, V. Lyubashevsky, and T. Prest, “Efficient identity-based
encryption over ntru lattices,” in Proc. ASIACRYPT, 2014.
[23] R. Gennaro, M. Minelli, A. Nitulescu, and M. Orrù, “Lattice-based zkSNARKs from square span programs,” in Proc. ACM CCS, 2018.
[24] K. Naganuma et al., “Post-quantum zk-SNARK for arithmetic circuits
using qaps,” in Proc. IEEE AsiaJCIS, 2020.
[25] N. Bitansky, A. Chiesa, Y. Ishai, O. Paneth, and R. Ostrovsky, “Succinct
non-interactive arguments via linear interactive proofs,” in Proc. TCC,
2013.
[26] G. Danezis, C. Fournet, J. Groth, and M. Kohlweiss, “Square span
programs with applications to succinct NIZK arguments,” in Proc.
_ASIACRYPT, 2014._
[27] R. Gennaro, C. Gentry, B. Parno, and M. Raykova, “Quadratic span
programs and succinct NIZKs without PCPs,” in Proc. EUROCRYPT,
2013.
[28] B. Parno, J. Howell, C. Gentry, and M. Raykova, “Pinocchio: Nearly
practical verifiable computation,” in Proc. IEEE Symposium on Security
_and Privacy (S&P), 2013._
[29] C. Ganesh, A. Nitulescu, and E. Soria-Vazquez, “Rinocchio: SNARKs
for ring arithmetic,” Cryptology ePrint Archive, Report 2021/322, 2021,
https://eprint.iacr.org/2021/322.
[30] Y. Ishai, H. Su, and D. J. Wu, “Shorter and faster post-quantum
designated-verifier zkSNARKs from lattices,” in Proc. ACM CCS, 2021.
[31] R. C. Merkle, “A digital signature based on a conventional encryption
function,” in Proc. CRYPTO, 1987.
[32] W. Banaszczyk, "Inequalities for convex bodies and polar reciprocal lattices in R^n," Discrete & Computational Geometry, vol. 13, no. 2, pp. 217–231, 1995.
[33] Z. Brakerski and V. Vaikuntanathan, “Fully homomorphic encryption
from ring-LWE and security for key dependent messages,” in Proc.
_CRYPTO, 2011._
[34] Z. Brakerski, C. Gentry, and V. Vaikuntanathan, “(Leveled) fully homomorphic encryption without bootstrapping,” Trans. Comput. Theory,
vol. 6, no. 3, pp. 1–36, 2014.
[35] M. Rosca, D. Stehlé, and A. Wallet, “On the ring-LWE and polynomialLWE problems,” in Proc. EUROCRYPT, 2018.
[36] L. Ducas and A. Durmus, “Ring-LWE in polynomial rings,” in Proc.
_PKC, 2012._
[37] J. T. Schwartz, “Fast probabilistic algorithms for verification of polynomial identities,” J. ACM, vol. 27, no. 4, pp. 701–717, 1980.
[38] A. Bishnoi, P. L. Clark, A. Potukuchi, and J. R. Schmitt, “On zeros of a
polynomial in a finite grid,” Combinatorics, Probability and Computing,
vol. 27, pp. 310–333, 2018.
[39] K. Yasunaga, “Replacing probability distributions in security games via
hellinger distance,” in Proc. ITC, 2021.
[40] A. Langlois, D. Stehlé, and R. Steinfeld, “GGHLite: More efficient
multilinear maps from ideal lattices,” in Proc. EUROCRYPT, 2014.
[41] S. Bai et al., “Improved security proofs in lattice-based cryptography:
Using the Rényi divergence rather than the statistical distance,” J.
_Cryptology, vol. 31, no. 2, pp. 610–640, 2018._
[42] A. Bogdanov, S. Guo, D. Masny, S. Richelson, and A. Rosen, “On the
hardness of learning with rounding over small modulus,” in Proc. TCC,
2016.
[43] D. Micciancio and M. Walter, “Gaussian sampling over the integers:
Efficient, generic, constant-time,” in Proc. CRYPTO, 2017.
[44] D. Micciancio and M. Walter, “On the bit security of cryptographic
primitives,” in Proc. EUROCRYPT, 2018.
[45] M. Abboud and T. Prest, “Cryptographic divergences: New techniques
and new applications,” in Proc. SCN, 2020.
[46] S. Watanabe and K. Yasunaga, “Bit security as computational cost for
winning games with high probability,” in Proc. ASIACRYPT, 2021.
[47] J. Håstad, R. Impagliazzo, L. A. Levin, and M. Luby, “A pseudorandom
generator from any one-way function,” SIAM J. Comput., vol. 28, no. 4,
pp. 1364–1396, 1999.
[48] M. R. Albrecht, R. Player, and S. Scott., “On the concrete hardness of
learning with errors.” J. Math. Cryptology, vol. 9, pp. 169–203, 2015.
[49] Scipr-lab, “https://github.com/scipr-lab/libsnark.”
[50] Microsoft, “https://github.com/microsoft/SEAL.”
**Heewon Chung** received the B.S. degree in mathematics from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea, in 2009, and the M.S. and Ph.D. degrees in mathematics from Seoul National University, Seoul, Republic of Korea, in 2017. Since 2022, he has been a Cryptographic Researcher at DESILO Inc. in the Republic of Korea. His recent research interests include solving the scalability problem in blockchain using zero-knowledge proofs (SNARKs, IVC) and practical applications of homomorphic encryption. From 2016 to 2017, he was a Research Assistant with the Agency for Science, Technology and Research (A*STAR) in Singapore. From 2018 to 2019, he was a Manager with Korea Telecom in the Republic of Korea. From 2020 to 2021, he was a Postdoctoral Researcher at Hanyang University, Seoul, Republic of Korea.
**Dongwoo Kim** received the B.S. and Ph.D. degrees in mathematical sciences from Seoul National University, Seoul, South Korea, in 2013 and 2020, respectively. Since 2023, he has been an Assistant Professor in the Department of AI·SW Convergence, Dongguk University, Seoul, Republic of Korea. From 2020 to 2023, he was a Principal Engineer in security and cryptography at Western Digital Research, Milpitas, CA, USA. Before that, he was a Researcher at the Industrial and Mathematical Data Analytics Research Center, Seoul National University. His research interests include the improvement of homomorphic encryption, verifiable computation, and other cryptographic primitives for practical applications.
**Jeong Han Kim** studied physics and mathematical physics at Yonsei University (Seoul, Korea) and earned his Ph.D. in mathematics at Rutgers University. He was a researcher at AT&T Bell Labs and at Microsoft Research, and was Underwood Chair Professor of Mathematics at Yonsei University. He is currently a Professor in the School of Computational Sciences at the Korea Institute for Advanced Study. His main research fields are combinatorics and discrete mathematics. His best known contribution to the field is his proof that the Ramsey number R(3, t) has asymptotic order of magnitude t²/log t. He received the Fulkerson Prize in 1997 for his contributions to Ramsey theory.
His awards also include the Sloan Dissertation Fellowship (1992), the Sloan Research Fellowship (1997), the Role Model Award for Scientists and Engineers (2007), the Kyung-Ahm Prize (2008), and the Sam-il Cultural Award (natural sciences, 2020).
**Jiseung Kim** earned his B.S. in mathematics from Chonnam National University in 2009, followed by a Ph.D. in mathematical sciences from Seoul National University, Seoul, South Korea, in 2020. From 2020 to 2022, he worked as a Research Scientist in the School of Computational Science at the Korea Institute for Advanced Study. He currently serves as an Assistant Professor in the Department of Computer Science and Artificial Intelligence at Jeonbuk National University, located in Jeonju, Republic of Korea. His research interests focus on the mathematical analysis of algebraic lattice-based cryptography and hard problems.
## Transfer as a Service: Towards a Cost-Effective Model for Multi-Site Cloud Data Management
### Radu Tudoran, Alexandru Costan, Gabriel Antoniu
To cite this version:
#### Radu Tudoran, Alexandru Costan, Gabriel Antoniu. Transfer as a Service: Towards a Cost-Effective Model for Multi-Site Cloud Data Management. Proceedings of the 33rd IEEE Symposium on Reliable Distributed Systems (SRDS 2014), IEEE, Oct 2014, Nara, Japan. hal-01023282
### HAL Id: hal-01023282
https://inria.hal.science/hal-01023282
#### Submitted on 11 Jul 2014
# Transfer as a Service: Towards a Cost-Effective Model for Multi-Site Cloud Data Management
#### Radu Tudoran[∗], Alexandru Costan[†], Gabriel Antoniu[∗]
∗INRIA Rennes - Bretagne Atlantique, France †IRISA / INSA Rennes, France
{radu.tudoran, gabriel.antoniu}@inria.fr alexandru.costan@irisa.fr
Abstract—The global deployment of cloud datacenters is
enabling large web services to deliver fast response to users
worldwide. This unprecedented geographical distribution of the
computation also brings new challenges related to the efficient
data management across sites. High throughput, low latencies,
cost- or energy-related trade-offs are just a few concerns for
both cloud providers and users when it comes to handling data
across datacenters. Existing cloud data management solutions are
limited to cloud-provided storage, which offers low performance
based on rigid cost schemas. Users are therefore forced to
design and deploy custom solutions, achieving performance at the
cost of complex system configurations, maintenance overheads,
reduced reliability and reusability. In this paper, we are proposing
a dedicated cloud data transfer service that supports large-scale data dissemination across geographically distributed sites, advocating for a Transfer as a Service (TaaS) paradigm. The system aggregates the available bandwidth by enabling multi-route transfers across cloud sites. We argue that the adoption
of such a TaaS approach brings several benefits for both users
and the cloud providers who propose it. For users of multi-site or
federated clouds, our proposal is able to decrease the variability of
transfers and increase the throughput up to three times compared
to baseline user options, while benefiting from the well-known
high availability of cloud-provided services. For cloud providers,
such a service can decrease the energy consumption within
a datacenter down to half compared to user-based transfers.
Finally, we propose a dynamic cost model schema for the
service usage, which enables the cloud providers to regulate and
encourage data exchanges via a data transfer market.
I. INTRODUCTION
With their globally distributed datacenters, cloud infrastructures enable the rapid development of large-scale applications.
Examples of such applications running as cloud services across
sites range from office collaborative tools (Microsoft Office
365, Google Drive), search engines (Bing, Google), global
stock market financial analysis tools to entertainment services
(e.g., sport events broadcasting, massively parallel games, news
mining) and scientific applications [1]. Most of the web-based
applications are deployed on multiple sites to leverage proximity to users through content delivery networks. Besides serving
the local client requests, these services need to maintain a
global coherence for mining queries, maintenance or monitoring operations that require large data movements. Studies
show that the inter-datacenter traffic is expected to triple in
the following years [2], [3]. This geographical distribution of
computation becomes increasingly important for scientific discovery as well. Processing the large amounts of data (e.g., 40 PB per year) generated by the CERN LHC exceeds the capacity of a single site or institution, as was the case for the Higgs boson discovery, where the processing was extended to
the Google cloud infrastructure [4]. Accelerating the process of
understanding data by partitioning the computation across sites
has proven effective also in solving bio-informatics problems
[5]. However, the major bottlenecks of these geographically
distributed computations are the data transfers, which incur
high costs and significant latencies [6].
Currently, the cloud providers’ support for data management is limited to the cloud storage (e.g., Azure Blobs, Amazon S3). These storage services, accessed through basic REST
APIs, are highly optimized for availability, enforcing strong
consistency and replication [7]. Clearly, they are not well suited
for end-to-end transfers, as this was not their intended goal:
users need to upload data into the remote persistent storage,
from where it becomes then available for download to the other
party. In case of inter-site data movements, the throughput is
drastically reduced by the high latency of the cloud storage
and the low interconnecting bandwidth. Recent developments
led to alternative transfer tools such as Globus Online [8]
or StorkCloud [2]. Although such tools are more efficient
than the cloud storage, they act as third party middleware,
requiring users to setup and configure complex systems, with
the overhead of dedicating some of the resources (initially
leased for computation) to the data management. Our goal is
to understand to what extent and under which incentives the
inter-datacenter transfers can be externalized from users and
be provided as a service by the cloud vendors.
In our previous work [9], we proposed a user-managed transfer tool that monitored the cloud environment for insights on the underlying infrastructure, which were used to choose the best combination of protocol and transfer parameters. In this
paper, we investigate how such a tool can be ”democratized”
and offered transparently by the cloud provider, using a Transfer as a Service (TaaS) paradigm. This shift of perspective
comes naturally: instead of letting users optimize their transfers
by making (possible false) assumptions about the underlying
network topology and performance through intrusive monitoring, we delegate this task to the cloud provider. Indeed,
the cloud owner has extensive knowledge about its network
resources, which it can exploit to optimize (e.g., by grouping)
user transfers, as long as it provides a service to enable them.
Our working hypothesis is that such a service will offer slightly
lower performances than a highly-optimized dedicated userbased setup (e.g., based on multi-routing through extensive
use of network parallelism) but substantial higher performance
than todays’ state-of-the-art transfer solutions (e.g., using the
cloud storage or GridFTP). In turn, this approach has the
advantage of freeing users from the burden of configuring
and maintaining complex data management systems, while
-----
providing the same availability guarantees as for any cloud
managed service.
We argue that by adopting TaaS, cloud providers achieve
a key milestone towards the new generation of datacenters, expected to provide mixed service models for accommodating
the business need to exchange data [10]. In [11], the authors
emphasize that network and system innovation are
the key dimensions for reducing costs. Cloud providers rent the
interconnecting bandwidth between datacenters from Tier 1
Internet Service Providers and get discounts based on the
committed transfer levels [12]. Coupled with the flexible
pricing schema that we propose, TaaS can regulate the demand
and increase the number of users who move data. By enabling
fast transfers through simple interfaces, as advocated by TaaS,
cloud providers can therefore grow their outbound traffic and
increase the associated revenues.
Our contributions can be summarized as follows:
- We introduce two user-managed options for data transfers
in the cloud (Section II)
- We propose an architecture for a dedicated cloud TaaS,
targeting high-performance inter-site transfers, and we
discuss its variants (Section III)
- We perform a thorough comparison between the user- and the cloud-managed strategies in different scenarios,
considering several factors that can impact the throughput
(concurrency, data size, CPU load, etc.) (Section IV)
- We propose a flexible pricing schema for the service
usage, enabling a "data transfer market" (Sections
V-A, V-B)
- We analyze the energy efficiency of user- versus cloud-managed inter-site transfers (Section V-C)
- We provide an overview of the cloud data management
solutions and their issues (Section VI)
II. CONTEXT OF DATA MANAGEMENT IN THE CLOUD
We first introduce the terminology used throughout this
paper and present the existing user-managed options for data
transfers.
A. The cloud ecosystem
Our proposal relies on the following concepts:
The datacenter (the site) is the largest building block of the
cloud. Public clouds typically have tens of datacenters
distributed across different geographical areas, each datacenter
holding both storage and computation nodes. The compute infrastructure is partitioned into multiple fault domains,
delimited by rack switches. The physical resources of a
node are shared among several VMs, generally belonging
to different users, unless the largest VM type is running,
which is fully mapped to a physical node. Cloud providers
do not own the backbone that interconnects datacenters;
instead, they pay Tier 1 ISPs for the inter-site traffic.
Multiple network links interconnect the physical nodes
within a site with the ISP infrastructure [11], for higher
performance and availability.
The deployment is the virtual space that aggregates several
VMs, in which a user application is executed. The VMs
are placed on different compute nodes in separate fault
Fig. 1. Multi-route user transfers
domains. A load balancer distributes all external requests
among these VMs. A deployment runs within a single
datacenter, and the number of cores that can be leased
within a deployment is often limited (e.g., in Azure this
is 300 cores per deployment). This means that large-scale
applications, using several thousand cores, need to be
distributed across multiple deployments, on multiple sites.
The storage can be used as a high-latency intermediary for
data transfers through some basic store (PUT) or retrieve
(GET) operations. For inter-site transfers, choosing the
best temporary storage location is not trivial. Putting data
at the destination side, at the sender side, or in between
is ambiguous and depends on the access pattern. Adding
this increased sensitivity to each particular transfer to the
high latencies and low throughput, the cloud storage is
clearly an inefficient option for common transfers.
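The storage-mediated transfer pattern described above can be sketched as follows. The `ObjectStore` class is an in-memory stand-in, not a real cloud API such as Azure Blobs or Amazon S3; the point is that every byte crosses the wide-area network twice, once per PUT and once per GET.

```python
# Sketch of the storage-mediated transfer pattern: the sender PUTs data
# into a remote object store and the receiver later GETs it. The
# ObjectStore class below is an in-memory stand-in for a real service.

class ObjectStore:
    """In-memory stand-in for a cloud object store."""

    def __init__(self):
        self._blobs = {}

    def put(self, key, data):
        # Corresponds to an HTTP PUT against the storage service.
        self._blobs[key] = bytes(data)

    def get(self, key):
        # Corresponds to an HTTP GET against the storage service.
        return self._blobs[key]


def relay_transfer(store, key, payload):
    """Two-step relay: upload at the source, download at the destination."""
    store.put(key, payload)   # step 1: sender -> storage (one WAN crossing)
    return store.get(key)     # step 2: storage -> receiver (second crossing)
```

The two sequential WAN crossings, plus the storage service's consistency and replication work, are what make this pattern a poor fit for common transfers.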
B. User-managed inter-site transfer scenarios
Users can set up their own tools to move data between
deployments, through direct communication, without intermediaries, at higher transfer rates. They can adhere to two
transfer strategies, depending on their cost and performance
requirements:
Endpoint to endpoint solutions leverage the basic transfers
from source to destination, regardless of the technology
used (e.g., GridFTP, scp, etc.). This baseline option is
relatively simple to put in place, using the public endpoint
provided for each deployment. The major drawback in
this scenario is the low bandwidth between sites, which
drastically limits the throughput that can be achieved.
Multi-route transfers. Building on the observations that a
user application typically runs on multiple VMs and
that communication between datacenters follows different
physical routes, we proposed in [9] a multi-route
transfer strategy, illustrated in Figure 1. Such a schema
exploits the low-latency intra-site bandwidth to copy data
to intermediate nodes within the deployment. Next, this
data is forwarded towards the destination across multiple
routes, aggregating additional bandwidth between sites.
This approach is better suited for managing Big Data, but
comes at an increased cost: the performance speedup is
sub-linear with respect to the leased resources (i.e., N
times more intermediate nodes do not provide N times
faster throughput). The factors which limit the speedup
are the (small) overhead of intra-deployment transfers
and the congestion in the ISP infrastructures.
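The scatter and gather steps of the multi-route strategy can be sketched as follows; function names are illustrative and not taken from the tool in [9]. The sender splits its data into fixed-size chunks and assigns them round-robin to the intermediate nodes, each of which forwards its share over a distinct inter-site route; the receiver interleaves the chunks back in order.

```python
# Minimal sketch of the multi-route strategy from Figure 1: chunking at
# the sender, round-robin assignment to routes, reassembly at the
# receiver. Actual network transport is omitted.

def scatter_chunks(data, n_routes, chunk_size):
    """Split data into chunks and assign them round-robin to routes."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    routes = [[] for _ in range(n_routes)]
    for i, chunk in enumerate(chunks):
        routes[i % n_routes].append(chunk)
    return routes


def gather_chunks(routes):
    """Receiver side: interleave per-route chunks back into one buffer."""
    out = bytearray()
    depth = max(len(r) for r in routes)
    for i in range(depth):
        for r in routes:
            if i < len(r):
                out += r[i]
    return bytes(out)
```

Round-robin assignment keeps the per-route load balanced, which matters because the aggregated throughput is bounded by the slowest route in each interleaving round.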
Fig. 2. An asymmetric Transfer as a Service approach
The main issue with these user-managed solutions is that
they are not available out of the box. For instance, prohibitive
factors for deploying the multi-route strategy range from a lack
of user networking and cloud expertise to budget constraints.
Applications might not tolerate even the low intrusiveness levels
linked to handling data in the intermediate nodes. Finally,
scaling up VMs for short time periods to handle the transfer
is currently strongly penalized by the VM startup times.
From the cloud provider perspective, having multiple users
deploy multi-route systems can lead to an uncontrolled
boost of expensive Layer 1 ports towards the ISP [11].
Bandwidth saturation or congestion at the outer datacenter
switches is likely to appear. The bandwidth capacity towards
the Tier 1 ISP backbones, with a ratio of 1:40 or 1:100
compared to the bandwidth between nodes and Tier 2 switches,
can rapidly be overwhelmed by the number of user VMs
staging out data. Moreover, activating many rack switches for
such communication increases the energy consumption, as
demonstrated in Section V-C. Our goal is to find the right
trade-off between the (typically conflicting) cloud provider's
economic constraints and the users' needs.
III. ZOOM ON THE TRANSFER AS A SERVICE
We argue that a cloud-managed transfer service could
substitute the user-based mechanisms without significant performance degradation. At the core of such a service lies a
set of dedicated nodes within each datacenter, used by the
cloud provider to distribute the transferred data and to further
forward it towards the destination. As opposed to our previous
approach, the dedicated nodes are owned and managed by
the cloud provider; they no longer consume resources from
the users' deployments. Building on elasticity, the service can
accommodate fluctuating user demands. Multiple parallel paths
are then used for all chunks of data, leveraging the fact that
the cloud routes packets through different switches, racks
and network links. This approach increases the aggregated
inter-datacenter throughput and is based on the empirical
observation that intra-site transfers are at least 10x faster than
wide-area transfers.
The proposed architecture makes the service locally available to all applications within a datacenter, as depicted in
Figure 2. The usage scenario consists of: 1) applications
transferring data through the intra-site low-latency links to
this service; and 2) the service forwarding the data across
multiple routes towards the destination. The transfer process
becomes transparent to users, as the configuration, operation
Fig. 3. A symmetric Transfer as a Service approach
and management are all handed over to the cloud provider (cloudified), making it resilient to administrative errors.
When the TaaS approach is available at only one endpoint
of the transfer, it can be viewed as an asymmetric service. This
is often the case within federated clouds, where some providers
may not offer TaaS. Users can still benefit from the service
when migrating their data to computation instances located in
different infrastructures. Such an option is particularly interesting for scientific applications which rely on hybrid clouds
(e.g., scaling up the local infrastructure to public clouds).
The main advantage of this architecture is the minimal
number of hops added between the source deployment and the
destination, which translates into smaller overheads and lower
latencies. However, situations can arise when the network
bandwidth between datacenters might still not be used at its
maximum capacity. For instance, applications which exchange
data in real time can have temporarily lower rates of transferred
packets. Taking also into account that the connection to the
user destination is direct, multiplexing data from several users
is not possible. In fact, as only one end of the transmission over
the expensive inter-site link is controlled by the cloud vendor,
communication optimizations are not feasible. To enable them,
the cloud provider should manage both ends of the inter-site
connection.
We therefore advocate the use of the symmetric solution,
in which TaaS is available at both transfer ends. This approach
makes better use of the inter-datacenter bandwidth, and is
particularly suited for transfers between sites of the same
cloud provider. With this architecture, TaaS is deployed in
every datacenter, and when an inter-site transfer is performed,
the local service forwards the data to the destination service,
which further delivers it to the destination node, as depicted
in Figure 3. This approach enables many optimizations which
only require some simple pairwise encode/decode operations:
multiplexing data from different users, compression, deduplication, etc. Such optimizations, which were not possible with
the asymmetric solution, can decrease the outbound traffic, to
the benefit of both users and cloud providers. Moreover, the
topology of the datacenter can now be taken into account by
the cloud provider when partitioning the nodes of the service,
such that load is balanced across the Tier 2 switches. Enabling
this informed resource allocation has been shown to provide
significant performance gains [13]. Despite the potentially lower
performance compared to the asymmetric solution, due to the
additional dissemination step at the destination, this approach has
the potential of bringing several operational benefits to the
cloud provider.
Fig. 4. Aggregated throughput from multiple routes towards different types
of destination VMs
The service is accessed through a simple API, which currently
implements send and receive functions. Users only need to
provide a pointer to their data and the destination node to
perform a high-performance, resilient data movement. The API
can be further enhanced to allow experienced users to configure
several transfer parameters (e.g., chunk size, number of routes).
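A client-side view of the send/receive API described above might look as follows. The `TransferService` class and its optional parameters are hypothetical; the text only specifies that users pass a pointer to their data and the destination node, so the defaults here merely mirror the experimental setup (64 MB chunks, 5 routes).

```python
# Hypothetical client-side sketch of the TaaS send/receive API. The
# per-destination queue is an in-memory stand-in for the service's own
# buffering and multi-route forwarding, which are omitted here.

class TransferService:
    def __init__(self, chunk_size=64 * 2**20, n_routes=5):
        # Defaults mirror the experimental setup; experienced users could
        # override them, as the text suggests.
        self.chunk_size = chunk_size
        self.n_routes = n_routes
        self._queues = {}  # destination -> pending payloads (stand-in)

    def send(self, data, destination):
        """Hand data off to the service for multi-route forwarding."""
        self._queues.setdefault(destination, []).append(bytes(data))

    def receive(self, destination):
        """Deliver the next transferred payload at the destination node."""
        return self._queues[destination].pop(0)
```

The deliberately small surface (two calls, no topology knowledge required) is what lets the provider change routing, multiplexing, or compression behind the API without affecting clients.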
IV. EVALUATION
In this section we analyze the performance of our proposal
and compare it to user-managed schemas through experiments
focusing on realistic usage scenarios. The working hypothesis
is that user-based transfers are slightly more efficient, but
a cloud service can deliver comparable performance with
less administrative overhead, lower costs and more reliability
guarantees.
A. Experimental setup
The experiments were performed on the Microsoft Azure
cloud, using two datacenters: North Central US, located in
Chicago, and North Europe, located in Dublin, with data being
transferred from US towards EU. These distant sites were
selected in order to ensure a wide geographical setup across
continents, with high-latency interconnecting links crossing
the Atlantic ocean and communication paths across the infrastructures belonging to multiple ISPs. Considering the time
zone differences between sites, the experiments are relevant
both for typical user transfers and for cloud maintenance
operations (e.g., bulk backups, inter-site replication). The latter
are regularly performed by cloud providers and allow the TaaS
approach to be further tuned in order to take into account the
hourly loads of datacenters, as discussed in [3].
The cloud is used at the Platform as a Service (PaaS)
level with Azure Web Roles running Small and xLarge VMs.
The Small VM type is the elementary resource unit in Azure,
offering 1 virtual CPU (mapped to a physical CPU), 1.75 GB
of memory and 225 GB of local ephemeral storage. The xLarge VM
type spans an entire physical node, offering 8 virtual
CPUs, 14 GB of memory and a 2 TB ephemeral local disk. From
the network point of view, a physical node in Azure is
connected through a 1 Gbps Ethernet card, meaning that an
xLarge VM benefits from it entirely, while a Small VM
might get only one eighth of the network bandwidth when other
user VMs are deployed on the same physical node.
The experiments are performed by repeatedly transferring
data chunks of 64 MB each from memory. The intermediate
nodes handle the data entirely in memory, both for the user and
the cloud transfer configurations. For all experiments scaling up
Fig. 5. The throughput of multiple routes with respect to different combinations of VM types
the number of resources, the amount of transferred data is
increased proportionally, always handling a constant amount
of data per intermediate node. The throughput is computed
at the receiver side by measuring the time to transfer a fixed
amount of data. Each sample is the average of at least 100
independent measurements.
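The receiver-side measurement methodology above can be sketched as follows; function names are illustrative and not taken from the paper's tooling.

```python
# Receiver-side throughput estimation as described in the text: measure
# the time to receive a fixed amount of data, repeat the run many times,
# and report the mean over the independent samples.

def throughput_mb_s(total_bytes, seconds):
    """Throughput of a single run, in MB/s."""
    return total_bytes / seconds / 2**20


def mean_throughput(total_bytes, run_times, min_samples=100):
    """Average throughput over independent runs (at least min_samples)."""
    if len(run_times) < min_samples:
        raise ValueError("not enough independent measurements")
    samples = [throughput_mb_s(total_bytes, t) for t in run_times]
    return sum(samples) / len(samples)
```

Measuring at the receiver, rather than the sender, ensures the reported figure reflects end-to-end delivery rather than send-buffer fill rates.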
B. User-managed multi-route transfers
We first discuss the throughput gains which can be achieved
by users with multi-route transfer strategies. The performance
shift is represented in Figure 4, based on the overall cumulative
throughput of Small VMs when increasing the number of
intermediate nodes. We notice that the gain obtained when
scaling up to more nodes is asymptotically bounded, as first
discussed in [9]. However, our previous results do not conclusively show whether the performance bound is caused by
a bottleneck at the destination side. To answer this question,
we devised a new experiment in which the same sender setup
is kept and the destination node is replaced by an xLarge VM,
which has eight times more resources than the previously used
Small instance. The results show a similar throughput, despite
the extra resources, which means that the performance bound
is not due to a bottleneck at the destination. This observation
prevents wasting resources and increasing costs by using larger
VMs when trying to increase the performance.
This finding raises a new question: is the performance then
bounded by the sender setup, by the bandwidth between the
datacenters, or by both? To answer it, we continue by changing the
sender VM type too and evaluate all resulting combinations:
Small to Small, Small to xLarge, xLarge to Small and xLarge
to xLarge. The results are presented in Figure 5. The number
of intermediate nodes is 3, which is the upper limit of
our resource subscription (i.e., 4 xLarge VMs = 32 cores).
Nevertheless, at this point there is no need to go beyond
that limit, as we have already determined the performance
trend in the experiment shown in Figure 4. Contrary to
expectations, using xLarge VMs at the sender does not improve
the aggregated throughput. This shows that the interconnecting
bandwidth within the sender datacenter has low impact on the
overall transfer. However, the topology of the virtual network
between the sender and intermediate nodes, scattered based
on their size across different physical nodes, fault domains or
racks, can increase the overhead of intra-site communication.
Using Small VMs is therefore sufficient to aggregate the
bandwidth between datacenters. We can conclude that the
transfer performance is mainly determined by the number of
distinct physical paths through which the packets are routed
across the ISP infrastructures connecting the datacenters.
Fig. 6. The coefficient of variation for an increasing number of routes
Next, we focus on the variability with respect to multi-route transfers. Figure 6 shows the coefficient of variation (i.e.,
standard deviation / average, in %) for the throughput measurements
in Figure 4. Surprisingly, using multiple paths decreases the
otherwise high data transfer variability. This result is explained
by the fact that with multiple routes the drops in performance
on some links are compensated by bursts on others. The
overall cumulative throughput perceived by an application
in this case tends to be more stable. This observation is
particularly important for scientific applications which build
on predictability and stability of performance.
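The variability metric used in Figure 6 can be computed exactly as defined in the text, standard deviation over average expressed as a percentage:

```python
# Coefficient of variation as used in Figure 6: the population standard
# deviation of the throughput samples divided by their mean, in percent.

import statistics

def coefficient_of_variation(samples):
    """Coefficient of variation (%) of a list of throughput samples."""
    return 100.0 * statistics.pstdev(samples) / statistics.mean(samples)
```

A lower value for a larger number of routes is what supports the observation that multi-path transfers smooth out per-link performance drops.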
C. Evaluating the inter-site transfer options
We present in Figure 7 the comparison between the average
throughput of the cloud transfer service and the user-based
multi-route strategies. The experimental setup consists of 5
nodes per transfer service dedicated to data handling.
The asymmetric solution delivers slightly lower performance (∼16%) than a user-based multi-route schema. The first
factor causing this performance degradation is the overhead
introduced by the load balancer that distributes the incoming
requests (i.e., from the application to the cloud service) between the nodes. The second factor is the placement of the
VMs in the datacenter. For user-based transfers, the sender
node and the intermediate nodes are closer rack-wise, some
of them being even in the same fault domain. This translates
into less congestion in the switches in the first phase of
the transfer, when data is sent to the intermediate nodes.
For the cloud-managed transfers, the user source nodes and
the cloud's dedicated transfer nodes clearly belong to distinct
deployments, meaning that they are farther apart, with no
proximity guarantees.
The symmetric solution is able to compensate for the
previous performance degradation with the extra nodes at the
destination site. The overhead of the additional hop with this
approach is therefore neutralized when additional resources are
provisioned by the cloud provider. This observation opens the
possibility for differentiated cloud-managed transfer services
in which different QoS guarantees are proposed and charged
differently.
D. Dealing with concurrency
Fig. 7. The average throughput and the standard deviation with different
transfer options with 5 intermediate nodes used to multi-route the packets.
Fig. 8. The average throughput and the corresponding standard deviation
for an increasing number of applications using the cloud transfer service
concurrently
The experiment presented in Figure 8 depicts the throughput of an increasing number of applications using the transfer
service in a configuration with 5 intermediate nodes. The
goal of this experiment is to assess whether a sustainable
quality of service can be provided to the user applications in a
highly concurrent context. Not surprisingly, an increase in the
number of parallel applications decreases the average transfer
performance per application by 25%. This is generated by
the congestion in the transfers to the cloud service nodes and
by the limit in the inter-site bandwidth that can be
aggregated by these nodes. While this might seem a bottleneck
for providing TaaS at large scale, it is worth zooming in on the
insights of the experiment to learn how such a performance
degradation can be alleviated. We have scaled the number of
clients up to the point where their number matches the number
of nodes used for the transfer service. Hypothetically, we can
consider having 1 node from the transfer service per client
application. At this point the transfer performance delivered
by the service per application is reduced, but asymptotically
bounded, by less than 25% compared to the situation where
only one application was accessing the service and all
5 nodes were serving it. This shows that by maintaining a
number of VMs proportional to the number of applications
accessing the service, TaaS can be a viable solution and that
it can in fact provide high performance for many applications
in parallel.
We further notice that under increased concurrency, the
performance of the symmetric solution drops more than in
the case of the asymmetric one. This demonstrates that the
congestion in handling data packets in the service nodes is the
main cause of the performance degradation, since its effects
are doubled in the case of the symmetric solution. At the same
time, achieving the aggregated throughput obtained by the applications
through the transfer service would require 3 dedicated nodes from
each of them (i.e., 15 nodes in total), compared to 5 or 10 nodes
with the asymmetric or the symmetric solution, respectively.
Deploying such services would make the inter-site transfers
more energy efficient and the datacenters greener.
Fig. 9. Comparing the throughput of the cloud service against user-based
multi-route transfers, using 4 extra nodes. The measurements depict the
performance while the intermediate nodes are handling different CPU loads
E. Towards a cloud service for inter-site data transfers
Not all applications can afford to fully dedicate several nodes
just for performing transfers. It is interesting to analyze to what
extent the computation load on the intermediate nodes can
impact the performance of user-based transfers. We present in
Figure 9 the evolution of the throughput when the computation
done on the intermediate nodes has different CPU loads and
execution priorities. All 100% CPU loads were induced using
the standardized HeavyLoad tool [14], while the 40%-50%
load was generated using system background threads which
only access the memory.
Two main observations can be made based on the results
shown in Figure 9. First, the throughput is reduced by
20% to 50% when the intermediate nodes are performing
other computation in parallel with the transfers. This illustrates
that the inter-site I/O throughput is highly sensitive to the
CPU usage levels. This observation complements the findings
related to the I/O behavior discussed in [15] for streaming
strategies, in [16] for storing data in the context of HPC, and in
[17] for the TCP throughput with CPUs shared between several
VMs. Second, the performance obtained by users under CPU
load is similar, or even worse, compared to the one delivered by the
transfer service under increased concurrency (see Figure 8). This gives
a strong argument for many applications running in the cloud
to migrate towards such a TaaS offered by the cloud provider.
By doing so, applications are able to perform high-performance
transfers while discharging their VMs from secondary tasks
other than the computation for which they were rented.
F. Inter-site transfers for Big Data
In the next experiment, larger data sets ranging from
30 GB to 120 GB are transferred between sites, using the
cloud- and the user-managed options (grouped into 4 scenarios).
The goal of this experiment is to understand the viability of
the cloud services in the context of geographically distributed
Big Data applications. The results are displayed in Figures 10
and 11. The experiment is relevant both to users and cloud
providers, since it offers concrete incentives about the costs
(e.g., money, time) of performing large data movements in the
cloud. To the best of our knowledge, there are no previous
performance studies about the data management capabilities
of cloud infrastructures across datacenters.
Figure 10 presents the transfer times for the 4 scenarios.
The baseline user endpoint-to-endpoint transfer gives very
poor performance due to the low bandwidth between the
datacenters. In fact, the resulting times can be considered as the
Fig. 10. The time to transfer large data sets using the available options.
The user default Endpoint to Endpoint (E2E) option gives the upper bound,
while the User-Based Multi Route offers the fastest transfer time. The cloud
services, each based on 5-node deployments, give intermediate performance,
closer to the lower bounds
upper bounds of user-based transfers (i.e., we do not consider
here even slower options like using the cloud storage). On
the other hand, the user-based multi-route option is the fastest,
and it can be considered as the lower bound for the transfer
times. In between, the cloud transfer service variants are
up to 20% slower than user-based multi-route but two times
faster than the user baseline option.
In Figure 11 we depict the corresponding costs of these
transfers. The costs can be divided into two components: the
compute cost, paid for leasing a certain number of VMs for
the transfer period, and the outbound cost, which is charged
based on the amount of data exiting the datacenter. Despite
taking a longer time for the transfer, the compute cost of the
user-based endpoint-to-endpoint option is the smallest, as it only
uses 2 VMs (i.e., sender and destination). On the other hand,
user-based multi-route transfers are faster but at higher costs
resulting from the extra VMs, as explained in Section II-B
and detailed in [9]. The outbound cost only depends on the
data volume and the cost plan. As the inter-site infrastructure
is not the property of the cloud provider, part of this cost
represents the ISP fees, while the difference is accounted for by the
cloud provider. The real cost (i.e., the one charged by the ISP)
is not publicly known and depends on business agreements
between the companies. However, we can assume that it is
lower than the price charged to the cloud customers, giving
thus a range within which the price can potentially be adjusted.
Combining the observations about the current pricing margins
for transferring data with the performance of the cloud transfer
service, we argue that cloud providers should propose TaaS as
an efficient transfer mechanism with flexible prices. Cloud
vendors can use this approach to regulate the outbound traffic
of datacenters, reduce their operating costs, and minimize
idle bandwidth.
V. DISCUSSION
This section analyses the potential advantages brought
by a cloud service for inter-site data transfers. From the
users' perspective, TaaS can offer a transparent and easy-to-use method to handle large amounts of data. The service can
sustain high throughput, close to the one achieved by users
when renting, and dedicating to data handling alone, at least
4-5 extra VMs. Besides avoiding the burden of configuring
and managing extra nodes or complex transfer tools, the
performance-cost ratio can be significantly increased.
From the cloud provider's point of view, such a service
Fig. 11. The cost components corresponding to the transfer of 4 large data
sets. a) The cost of the compute resources which perform the transfers, given
by their lower and upper bounds. b) The cost for the outbound traffic computed
based on the available cost plans offered by the cloud provider.
would give an incentive to increase customer demand and bring
competitive economic and energy advantages. TaaS extends
the rather limited cloud data management ecosystem with a
flexibly priced service that supports a data transfer market, as
explained in Sections V-A and V-B, and makes the datacenter
greener, as shown in Section V-C.
A. Defining the cost margins for TaaS
In our quest for a viable pricing schema, we start by
defining the cost structure of the transfer options. The price
is based on the outbound traffic and the computation. The
outbound cost structure is identical for all transfer strategies,
while the computational cost is particular to each option:
Outbound Cost:
Size ∗ Cost_outbound, where Size is the volume of transferred data and Cost_outbound is the price charged by
the cloud provider for the traffic exiting the datacenter.
Computational Cost:
User-managed Endpoint to Endpoint
time_E2E ∗ 2 ∗ Cost_VM, where time_E2E is the time
to transfer data between the sender and the destination
VMs, and Cost_VM is the renting price of a VM.
User-managed Multi-Route
time_UMR ∗ (2 + N_extraVMs) ∗ Cost_VM, where
time_UMR is the time to transfer data from the sender to
the destination using N_extraVMs extra VMs.
TaaS
time_CTS ∗ 2 ∗ Cost_VM + time_CTS ∗ service_compute_cost,
where time_CTS is the transfer time and
service_compute_cost is the price charged by the
cloud provider for using the transfer service. Hence,
this cost is defined as the price for leasing the sender
and destination VMs plus the price for using the
service for the period of the transfer.
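The three computation-cost formulas above can be written out directly. Here `cost_vm` is the rental price of one VM per unit of time, the `time_*` arguments are transfer durations in that same unit, and `service_cost` stands for the service_compute_cost rate set by the provider:

```python
# The three computation-cost formulas from the cost structure above.

def cost_e2e(time_e2e, cost_vm):
    # Sender and destination VMs, leased for the whole transfer.
    return time_e2e * 2 * cost_vm


def cost_multi_route(time_umr, n_extra_vms, cost_vm):
    # Sender and destination plus the extra intermediate VMs.
    return time_umr * (2 + n_extra_vms) * cost_vm


def cost_taas(time_cts, cost_vm, service_cost):
    # Sender and destination VMs, plus the service fee for the same period.
    return time_cts * 2 * cost_vm + time_cts * service_cost
```

Since all three are linear in the transfer time, comparing them only requires the relative speed of each option, which is exactly what the gain parameters introduced next capture.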
The computation cost paid by users ranges from the cheapest Endpoint to Endpoint option to the more performant, but
more expensive, User-managed Multi-Route transfers. These
costs can be used as lower and upper margins for defining
a flexible pricing schema, to be charged for the time the
cloud transfer service is used (i.e., service_compute_cost). Defining the service cost within these limits correlates with the
delivered performance, which lies between the same limits of
the user-based options. To represent service_compute_cost as
a function within these bounds, we introduce the following
gain parameters, which describe the performance proportionality
between transfer options: time_E2E = a ∗ time_UMR =
b ∗ time_CTS and time_CTS = c ∗ time_UMR. Based on the
empirical observations shown in Section IV, we can concretize
the parameters with the following values: a = 3, b = 2.5 and
c = 1.2. Rewriting the previous computation cost equations and
simplifying terms, we obtain in Equation 1 the margins
for service_compute_cost.
2 * Cost_VM * (b − 1) ≤ service_compute_cost ≤ Cost_VM * (2 + N_extraVMs − 2 * c) / c    (1)
Equation 1 shows that a flexible cost schema is indeed possible. By varying the cost within these margins, a data transfer market for inter-site data movements can be created, giving the cloud provider mechanisms to regulate the outbound traffic and the demand, as discussed next.
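The margins of Equation 1 follow from equating the TaaS computational cost with the Endpoint to Endpoint cost (lower bound) and with the Multi-Route cost (upper bound). A numerical sketch, with illustrative values of our choosing:

```python
# Sketch (ours) of the service_compute_cost margins from Equation 1.

def service_cost_margins(cost_vm, b, c, n_extra_vms):
    lower = 2 * cost_vm * (b - 1)                    # TaaS no cheaper than Endpoint to Endpoint
    upper = cost_vm * (2 + n_extra_vms - 2 * c) / c  # TaaS no dearer than Multi-Route
    return lower, upper

# Empirical parameters b = 2.5, c = 1.2, 5 extra VMs, VM price 0.1 (illustrative).
lo, hi = service_cost_margins(cost_vm=0.1, b=2.5, c=1.2, n_extra_vms=5)
print(round(lo, 3), round(hi, 3))  # 0.3 0.383
```

Any price chosen in this interval keeps the total TaaS cost between the costs of the two user-managed options.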
B. Proposal for a data transfer market
Offering diversified services to customers in order to increase usage and revenues is among the primary goals of cloud providers. We argue that these objectives can
be fulfilled by creating a data transfer market. This can be
implemented based on the proposed cloud transfer service
offered at SaaS level with reliability, availability, scalability,
on-demand provisioning and pay-as-you-go pricing guarantees.
In Equation 1 we have defined the margins within which
the service cost can be varied. We illustrate in Figure 12
these flexible prices for the two TaaS variants (symmetric
and asymmetric). The values are computed based on the
measurements for transferring the large data sets mentioned
in Section IV-F. The cost is normalized and expressed as the
price charged when using the service (i.e., the compute cost
component) to transfer 1 GB of data. A conversion between
the per-hour and the per-GB usage is possible due to the stable
performance delivered by this approach.
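The per-hour to per-GB conversion mentioned above is a simple division by the (stable) throughput. A minimal sketch, with purely illustrative numbers:

```python
# Converting an hourly service price into a per-GB price; valid only because
# the service delivers stable throughput (values below are illustrative).

def price_per_gb(hourly_price_euro, throughput_gb_per_hour):
    return hourly_price_euro / throughput_gb_per_hour

print(price_per_gb(2.0, 40.0))  # 0.05
```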
The minimal and maximal values in Figure 12 correspond
to the user-managed solutions (i.e., Endpoint to Endpoint and
Multi-Route). Between these margins, the cloud transfer service can set the price at any of a range of discrete values. The two TaaS variants have different pricing schemas due to their performance gap, with the symmetric one being slightly less performant and priced lower. As for the
outbound cost, the assumption we made is that any outbound
cost schema offered today brings profit to the cloud provider.
Hence, we propose to extend the flexible usage pricing to
integrate this cost component, as shown in Figure 13. The
main advantage is that the combined cost gives a wider range
in which the price can be adjusted. Additionally, it allows cloud
providers to propose a unique cost schema instead of charging
users separately for the service usage and for the outbound
traffic.
A key advantage of setting up a data transfer market for
this service is that it enables cloud providers to regulate the
traffic. A simple strategy to encourage users to send data is to
decrease the price towards the lower bounds shown in Figure
13 in order to reduce the idle bandwidth periods. A price
drop would attract users which otherwise would send data by
dedicating 4-5 additional VMs, with equivalent performance.
Building on such costs and complementing the work described
in [3], applications could buffer the less urgent data in VMs
[Figure 12 comprises two bar charts, for the symmetric and the asymmetric cloud service, plotting the outbound cost, the compute cost, and the total price per GB against the cost of the cloud transfer service (in euro).]
Fig. 12. The range in which the price for the cloud services can be varied.
and send it in bulk only during the discounted periods. On the other hand, when many users send data simultaneously, whether independently or using TaaS, the overall performance decreases
due to switch and network bottlenecks. Moreover, the peak
usage of outbound traffic from the cloud towards the ISPs
grows, which leads to lower profit margins and penalty fees for exceeding the SLA quotas [12], [18], [19]. It is in the interest of cloud providers to avoid such situations. With flexible pricing, they have the means to react by simply increasing the usage price. As prices approach those of the user-managed multi-route option, demand can be temporarily decreased; at this point, it becomes more attractive for users to provision their own VMs to handle the data.
Adjusting the price strategy on the fly, following the
demand, produces a win-win situation for users and cloud
providers. Clients have multiple services with different price
options, allowing them to pay the desired cost that matches
their targeted performance. Cloud providers increase their
revenues by outsourcing the inter-site transfers from clients
and by controlling the traffic. Finally, TaaS can act as a
proxy between ISPs and users, protecting the latter from price
fluctuations introduced by the former; after all, cloud providers
are less sensitive to price changes than users are, as discussed
in [20].
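The regulation strategy described above (discounting toward the lower margin to fill idle bandwidth, raising the price toward the upper margin under congestion) could be as simple as an interpolation between the Equation 1 margins. The controller below is purely illustrative and is not a mechanism specified in this paper:

```python
# Illustrative demand-based pricing (ours): interpolate the service price
# between the lower and upper margins according to current link utilization.

def dynamic_service_price(lower, upper, utilization):
    # utilization in [0, 1]: idle links pull the price toward `lower` to
    # attract transfers; congested links push it toward `upper` to damp demand.
    utilization = min(max(utilization, 0.0), 1.0)
    return lower + (upper - lower) * utilization

print(round(dynamic_service_price(0.30, 0.38, 0.0), 2))  # 0.3
print(round(dynamic_service_price(0.30, 0.38, 0.9), 2))  # 0.37
```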
C. The energy efficiency of data transfers
Fig. 13. Aggregating the cost components, outbound traffic cost and
computation cost, into a unified cost schema for inter-site traffic
When breaking down the operating costs of a cloud datacenter, the authors of [11] find that "over half the power used by network equipment is consumed by the top of rack switches". Such a rack switch connects around 24 nodes and draws about 60 W, while a server node draws about 200 W [21]. Our goal is to assess and compare the energy consumed in a datacenter when transferring data using the user-managed multi-route setup (E_UMR in Equation 2) and the cloud transfer service (E_CTS in Equation 3). We consider N_App applications, each using N_extraVMs extra nodes to perform user-based multi-route transfers. For simplicity, we use the average transfer time (time) of the applications; the total energy consumed by the application nodes and switches is then:

E_UMR = (N_App * (2 + N_extraVMs) / 24 * 60 W + N_App * (2 + N_extraVMs) * 200 W) * time    (2)

where the first part of the equation corresponds to the energy used by the rack switches in which the application nodes are deployed and the last part gives the power used by the nodes. For the cloud transfer service:

E_CTS = (N_App * 2 / 24 * 60 W + N_App * 2 * 200 W + Nodes_TaaS / 24 * 60 W + Nodes_TaaS * 200 W) * time * c    (3)

where the total energy is the sum of: 1) the energy used at the application side (i.e., the sender and destination nodes and the rack switches they activate) and 2) the energy consumed at the transfer service side, by the Nodes_TaaS nodes which operate it and the switches connecting them.

Comparing the two scenarios, we obtain in Equation 4 the generic ratio of extra energy used when each user handles their own data:

User-based_energy / TaaS_energy = (N_App * (2 + N_extraVMs)) / ((2 * N_App + Nodes_TaaS) * c)    (4)

Instantiating this for the configurations used in the evaluation section (N_extraVMs = 5, N_App = Nodes_TaaS and c = 1.2), we notice that roughly twice as much energy is consumed if the transfers are done by the users.

D. One more thing: reliability
A cloud-managed transfer service has the advantage of
being always available, in line with the reliability guarantees
of all cloud services. Requests for transfers are carried over
network paths that the cloud provider constantly monitors and
optimizes for both availability and performance. This makes it possible to quickly satisfy peaks in demand with rapid deployments and increased elasticity. Cloud providers ensure that a TaaS
system incorporates service continuity and disaster recovery
assurances. This is achieved by leveraging a highly available
load-balanced dedicated nodes-farm to minimize downtime
and prevent data losses, even in the event of a major unplanned
service failure or disaster. Predictable performance can be
achieved through strict uptime and SLA guarantees.
User-managed solutions typically involve hard-to-maintain scripts and unreliable manual tasks that often lead to discontinuity of service and errors (e.g., incompatibility between new
versions of some building blocks of the transfer framework).
These errors are likely to cause VM failures and, currently, the
period while a VM is stopped or is being rebooted is charged to
users. With a TaaS approach, both the underlying infrastructure
failures and the user errors are isolated from the transfer itself:
they are transparent to users and are not charged to them.
This makes it possible to automate file transfer processes and provides a
predictable operating cost per user over a long period.
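The energy model of Equations 2-4 can be checked numerically. The sketch below is ours; the node counts are illustrative, and it reproduces the roughly 2x ratio reported in Section V-C:

```python
# Numerical check (ours) of the energy model: 60 W per rack switch (24 nodes
# each) and 200 W per node, times in hours, energy in Wh.
SWITCH_W, NODE_W, NODES_PER_SWITCH = 60.0, 200.0, 24

def energy_user_managed(n_app, n_extra_vms, time_h):
    # Equation 2: every application rents 2 + n_extra_vms nodes itself.
    nodes = n_app * (2 + n_extra_vms)
    return (nodes / NODES_PER_SWITCH * SWITCH_W + nodes * NODE_W) * time_h

def energy_taas(n_app, nodes_taas, c, time_h):
    # Equation 3: 2 endpoint nodes per application plus the shared service
    # nodes; the factor c accounts for the slightly longer transfer time.
    nodes = 2 * n_app + nodes_taas
    return (nodes / NODES_PER_SWITCH * SWITCH_W + nodes * NODE_W) * time_h * c

# Evaluation configuration: 5 extra VMs per application, n_app == nodes_taas, c = 1.2.
n = 10
ratio = energy_user_managed(n, 5, 1.0) / energy_taas(n, n, 1.2, 1.0)
print(round(ratio, 2))  # 1.94: roughly twice the energy with user-managed transfers
```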
VI. RELATED WORK
The landscape of cloud data transfers is rather rich in user-managed solutions, spanning from basic tools for end-to-end communication (e.g., scp, ftp, GridFTP) to complex systems that support large-scale data movements for workflows
and scientific applications (e.g., GlobusOnline, Stork, Frugal).
The common denominator of these solutions is their need to
be deployed, fully configured and managed by users, with
potentially little networking knowledge. Meanwhile, the only
viable cloud provided alternative is the use of the cloud
storage, which incurs large latencies and is subject to additional
costs. To the best of our knowledge, our proposal is the first
attempt to delegate the intra- and inter- cloud data transfers
from users to the cloud providers, following a Transfer as a
Service paradigm.
The handiest option for handling data distributed across
several datacenters is to rely on the existing cloud storage services. This approach allows data to be transferred between arbitrary
endpoints via the cloud storage and it is adopted by several
systems in order to manage data movements over wide-area
networks [22], [23]. There is a rich storage ecosystem around
public clouds. Cloud providers typically offer their own object
storage solutions (e.g., Amazon S3 [24], Azure Blobs [7]),
which are quite heterogeneous, with neither a clearly defined
set of capabilities nor any single architecture. They offer
binary large objects (BLOBs) storage with different interfaces
(such as key-value stores, queues or flat linear address spaces)
and persistence guarantees, usually alongside with traditional
remote access protocols or virtual or physical server hosting.
They are optimized for high availability, under the assumption that data is frequently read and only seldom updated. Most of these services focus primarily on data storage and support other functionalities essentially as a "side effect". Typically, they are not concerned with achieving high throughput, nor with potential optimizations, let alone with the ability to support different data services (e.g., geographically distributed transfers). Our work aims to specifically address these issues.
Besides storage, there are few cloud-provided services that
focus on data handling. Some of them use the geographical
distribution of data to reduce latencies of data transfers. Amazon’s CloudFront [25], for instance, uses a network of edge
locations around the world to cache copies of static content close
to users. The goal here is different from ours: this approach
is meaningful when delivering large popular objects to many
end users. It lowers the latency and allows high, sustained
transfer rates. However, this comes at the cost and overhead
of replication, which is considerable for large datasets, making
it inappropriate for simple end-to-end data transfers. Instead,
we do not use multiple copies of the data, but rather exploit network parallelism to allow per-transfer optimizations.
The alternatives to the cloud offerings are the transfer systems that users can choose and deploy on their own, which we will generically call user-managed solutions. A number
of such systems emerged in the context of the GridFTP [26]
transfer tool, initially developed for grids. Among these, the
work most comparable to ours is Globus Online [27], which
provides high performance file transfers through intuitive web
2.0 interfaces, with support for automatic fault recovery.
However, Globus Online only performs file transfers between
GridFTP instances, remains unaware of the environment and
therefore its transfer optimizations are mostly done statically.
Several extensions brought to GridFTP allow users to enhance
transfer performance by tuning some key parameters: threading
in [28] or overlays in [29]. Still, these works only focus on
optimizing some specific constraints and ignore others (e.g.,
TCP buffer size, number of outbound requests). This leaves
the burden of applying the most appropriate settings effectively
to users. In contrast, we propose a shift of paradigm and
demonstrate the advantages of having an optimized transfer
service provided by the cloud provider, through a simple and
transparent interface.
Other approaches aim at improving the throughput by exploiting network parallelism, end-system parallelism, or a hybrid approach between them. Building on network parallelism, the transfer performance can be enhanced by routing data via
intermediate nodes chosen to increase aggregate bandwidth.
Multi-hop path splitting solutions [29] replace a direct TCP connection between the source and destination by a multi-hop chain through some intermediate nodes. Multi-pathing
[30] employs multiple independent routes to simultaneously
transfer disjoint chunks of a file to its destination. These
solutions come at some cost: under heavy load, per-packet latency may increase due to timeouts, while more memory is needed for the receive buffers. On the other hand, end-system parallelism can be exploited to improve the utilization of a
single path. This can be achieved by means of parallel streams
[31] or concurrent transfers [32]. Although using parallelism
may improve throughput in certain cases, one should also
consider system configuration since specific local constraints
(e.g., low disk I/O speeds or over-tasked CPUs) may introduce
bottlenecks. More recently, a hybrid approach was proposed [33] to alleviate these issues. It provides the best parameter
combination (i.e., parallel stream, disk, and CPU numbers)
to achieve the highest end-to-end throughput between two
end-systems. One issue with all these techniques is that they
cannot be ported to the clouds, since they strongly rely on the
underlying network topology, unknown at the user-level (but
instead exploitable by the cloud provider).
Finally, one simple alternative for data management involves dedicated tools run on the end-systems. Rsync, scp, ftp
are used to move data between a client and a remote location.
However, they are not optimized for large numbers of transfers and require some networking knowledge for configuring,
operating and updating them. BitTorrent based solutions are
good at distributing a relatively stable set of large files, but they neither address scientists' need for many frequently updated files nor provide predictable performance.
VII. CONCLUSION
This paper introduces a new paradigm, Transfer as a
Service, for handling large scale data movements in federated
cloud environments. The idea is to delegate the burden of
data transfers from users to the cloud providers, who are
able to optimize them through their extensive knowledge on
the underlying topologies and infrastructures. We propose a
prototype that validates these design principles through the use
of a set of dedicated transfer VMs that further aggregate the
available bandwidth and enable multi-route transfers across geographically distributed cloud sites. We show that this solution
is able to effectively use the high-speed networks connecting
the cloud datacenters and bring a transfer speed-up of up to a
factor of 3 compared to state-of-the-art user tools. At the same
time, it halves the energy footprint for the cloud providers, while setting the grounds for a data transfer market that allows them to regulate the data movements.
Thanks to these encouraging results, we plan to further
investigate the benefits of TaaS approaches both for users and
cloud providers. In particular, we plan to study new cost models that allow users to bid on idle bandwidth and use it when
their bid exceeds the current price, which varies in real-time
based on supply and demand. We also see a good potential to
use our prototype to study the performance of inter-datacenter
or inter-cloud transfers. We believe that cloud providers could
leverage this tool as a metric to describe the performance of
network resources. As a further evolution, they could provide
Introspection as a Service to reveal information about the cost
of internal cloud operations to relevant applications.
REFERENCES
[1] “Azure Succesful Stories,” http://www.windowsazure.com/en-us/casestudies/archive/.
[2] T. Kosar, E. Arslan, B. Ross, and B. Zhang, “Storkcloud: Data transfer
scheduling and optimization as a service,” in Proceedings of the 4th
ACM Workshop on Scientific Cloud Computing, ser. Science Cloud ’13.
New York, NY, USA: ACM, 2013, pp. 29–36.
[3] N. Laoutaris, M. Sirivianos, X. Yang, and P. Rodriguez, “Interdatacenter bulk transfers with netstitcher,” in Proceedings of the ACM
SIGCOMM 2011 Conference, ser. SIGCOMM ’11. New York, NY,
USA: ACM, 2011, pp. 74–85.
[4] “Cloud Computing and High-Energy Particle Physics:
How ATLAS Experiment at CERN Uses Google Compute Engine in the Search for New Physics at LHC,”
https://developers.google.com/events/io/sessions/333315382.
[5] A. Costan, R. Tudoran, G. Antoniu, and G. Brasche, “TomusBlobs: Scalable Data-intensive Processing on Azure Clouds,” Concurrency and Computation: Practice and Experience, 2013.
[6] B. Cho and I. Gupta, “Budget-constrained bulk data transfer via internet
and shipping networks,” in Proceedings of the 8th ACM International
Conference on Autonomic Computing, ser. ICAC ’11. New York, NY,
USA: ACM, 2011, pp. 71–80.
[7] B. Calder and et al, “Windows azure storage: a highly available cloud
storage service with strong consistency,” in Proceedings of the 23rd
ACM Symposium on Operating Systems Principles, 2011.
[8] I. Foster, R. Kettimuthu, S. Martin, S. Tuecke, D. Milroy, B. Palen,
T. Hauser, and J. Braden, “Campus bridging made easy via globus
services,” in Proceedings of the 1st Conference of the Extreme Science
and Engineering Discovery Environment: Bridging from the eXtreme
to the Campus and Beyond, ser. XSEDE ’12. New York, NY, USA:
ACM, 2012, pp. 50:1–50:8.
[9] R. Tudoran, A. Costan, R. Wang, L. Bougé, and G. Antoniu, “Bridging
data in the clouds: An environment-aware system for geographically
distributed data transfers,” in Proceedings of the 2014 14th IEEE/ACM
International Symposium on Cluster, Cloud and Grid Computing
(Ccgrid 2014), ser. CCGRID ’14. IEEE Computer Society, 2014.
[Online]. Available: http://hal.inria.fr/hal-00978153
[10] T. Bishop, “Data center 2.0 a roadmap for data center transformation,” in
White Paper. http://www.io.com/white-papers/data-center-2-roadmapfor-data-center-transformation/, 2013.
[11] A. Greenberg, J. Hamilton, D. A. Maltz, and P. Patel, “The cost of a
cloud: Research problems in data center networks,” SIGCOMM Comput.
Commun. Rev., vol. 39, no. 1, pp. 68–73, Dec. 2008.
[12] V. Valancius, C. Lumezanu, N. Feamster, R. Johari, and V. V. Vazirani,
“How many tiers?: Pricing in the internet transit market,” SIGCOMM
Comput. Commun. Rev., vol. 41, no. 4, pp. 194–205, Aug. 2011.
[13] R. Tudoran, A. Costan, and G. Antoniu, “Datasteward: Using dedicated compute nodes for scalable data management on public clouds,”
in Proceedings of the 2013 12th IEEE International Conference on
Trust, Security and Privacy in Computing and Communications, ser.
TRUSTCOM ’13. Washington, DC, USA: IEEE Computer Society,
2013, pp. 1057–1064.
[14] “HeavyLoad,” http://www.jam-software.com/heavyload/.
[15] R. Tudoran, K. Keahey, P. Riteau, S. Panitkin, and G. Antoniu, “Evaluating streaming strategies for event processing across infrastructure
clouds,” in Proceedings of the 2014 14th IEEE/ACM International
Symposium on Cluster, Cloud and Grid Computing (Ccgrid 2014), ser.
CCGRID ’14. IEEE Computer Society, 2014.
[16] M. Dorier, G. Antoniu, F. Cappello, M. Snir, and L. Orf, “Damaris: How
to Efficiently Leverage Multicore Parallelism to Achieve Scalable, Jitterfree I/O,” in CLUSTER - IEEE International Conference on Cluster
Computing, 2012.
[17] S. Gamage, R. R. Kompella, D. Xu, and A. Kangarlou, “Protocol
responsibility offloading to improve tcp throughput in virtualized environments,” ACM Trans. Comput. Syst., vol. 31, no. 3, pp. 7:1–7:34,
Aug. 2013.
[18] P. Hande, M. Chiang, R. Calderbank, and S. Rangan, “Network pricing
and rate allocation with content provider participation,” in INFOCOM
2009, IEEE, April 2009, pp. 990–998.
[19] S. Shakkottai and R. Srikant, “Economics of network pricing with
multiple isps,” IEEE/ACM Trans. Netw., vol. 14, no. 6, pp. 1233–1245,
Dec. 2006.
[20] V. Valancius, C. Lumezanu, N. Feamster, R. Johari, and V. V. Vazirani,
“How many tiers?: Pricing in the internet transit market,” in Proceedings
of the ACM SIGCOMM 2011 Conference, ser. SIGCOMM ’11. New
York, NY, USA: ACM, 2011, pp. 194–205.
[21] G. Ananthanarayanan and R. H. Katz, “Greening the switch,” in
Proceedings of the 2008 Conference on Power Aware Computing and
Systems, ser. HotPower’08. Berkeley, CA, USA: USENIX Association.
[22] T. Kosar and M. Livny, “A framework for reliable and efficient data placement in distributed computing systems,” Journal of Parallel and Distributed Computing, vol. 65, no. 10, pp. 1146–1157, Oct. 2005.
[23] P. Rizk, C. Kiddle, and R. Simmonds, “Catch: A cloud-based adaptive data-transfer service for HPC,” in Proceedings of the 25th IEEE International Parallel & Distributed Processing Symposium, 2011, pp. 1242–1253.
[24] “Amazon S3,” http://aws.amazon.com/s3/.
[25] “Amazon CloudFront,” http://aws.amazon.com/cloudfront/.
[26] W. Allcock, “GridFTP: Protocol Extensions to FTP for the Grid,” Global Grid Forum GFD-RP 20, 2003.
[27] W. Allcock, J. Bresnahan, R. Kettimuthu, M. Link, C. Dumitrescu,
I. Raicu, and I. Foster, “The globus striped gridftp framework and
server,” in Proceedings of the 2005 ACM/IEEE conference on Supercomputing, ser. SC ’05. Washington, DC, USA: IEEE Computer
Society, 2005.
[28] W. Liu, B. Tieman, R. Kettimuthu, and I. Foster, “A data transfer
framework for large-scale science experiments,” in Proceedings of the
19th ACM International Symposium on High Performance Distributed
Computing, ser. HPDC ’10. New York, NY, USA: ACM, 2010, pp.
717–724.
[29] G. Khanna, U. Catalyurek, T. Kurc, R. Kettimuthu, P. Sadayappan,
I. Foster, and J. Saltz, “Using overlays for efficient data transfer over
shared wide-area networks,” in Proceedings of the 2008 ACM/IEEE
conference on Supercomputing, ser. SC ’08. Piscataway, NJ, USA:
IEEE Press, 2008, pp. 47:1–47:12.
[30] C. Raiciu, C. Pluntke, S. Barre, A. Greenhalgh, D. Wischik, and
M. Handley, “Data center networking with multipath tcp,” in Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks,
ser. Hotnets-IX. New York, NY, USA: ACM, 2010, pp. 10:1–10:6.
[31] T. J. Hacker, B. D. Noble, and B. D. Athey, “Adaptive data block
scheduling for parallel tcp streams,” in Proceedings of the High Performance Distributed Computing, 2005. HPDC-14. Proceedings. 14th
IEEE International Symposium, ser. HPDC ’05. Washington, DC,
USA: IEEE Computer Society, 2005, pp. 265–275.
[32] W. Liu, B. Tieman, R. Kettimuthu, and I. Foster, “A data transfer
framework for large-scale science experiments,” in Proceedings of the
19th ACM International Symposium on High Performance Distributed
Computing, ser. HPDC ’10. New York, NY, USA: ACM, 2010, pp.
717–724.
[33] E. Yildirim and T. Kosar, “Network-aware end-to-end data throughput
optimization,” in Proceedings of the first international workshop on
Network-aware data management, ser. NDM ’11. New York, NY,
USA: ACM, 2011, pp. 21–30.
# applied sciences
_Article_
## FAIDM for Medical Privacy Protection in 5G Telemedicine Systems
**Tzu-Wei Lin** **[1]** **and Chien-Lung Hsu** **[1,2,3,4,5,]***
1 Graduate Institute of Business and Management, Chang Gung University, Taoyuan 333, Taiwan;
d0640001@cgu.edu.tw
2 Department of Information Management, Chang Gung University, Taoyuan 333, Taiwan
3 Healthy Aging Research Center, Chang Gung University, Taoyuan 333, Taiwan
4 Department of Visual Communication Design, Ming-Chi University of Technology, Taoyuan 243, Taiwan
5 Administration, Chang Gung Memorial Hospital, Taoyuan 333, Taiwan
***** Correspondence: clhsu@mail.cgu.edu.tw
**Featured Application: This work can be applied in 5G telemedicine systems, which can remotely monitor the health condition of patients and provide medical-related data to medical professionals. The devices on patients, which are IoT devices, should be managed properly, and the proposed scheme achieves this purpose while preserving privacy.**
**Citation: Lin, T.-W.; Hsu, C.-L. FAIDM for Medical Privacy Protection in 5G Telemedicine Systems. Appl. Sci. 2021, 11, 1155. https://doi.org/10.3390/app11031155**

Academic Editor: José Luis Rojo-Álvarez

Received: 9 December 2020; Accepted: 24 January 2021; Published: 27 January 2021
**Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.**

**Copyright: © 2021 by the authors.** Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
**Abstract: 5G networks have an efficient effect on energy consumption and provide a quality experience to many communication devices. Device-to-device communication is one of the key technologies**
of 5G networks. Internet of Things (IoT) applying 5G infrastructure changes the application scenario
in many fields especially real-time communication between machines, data, and people. The 5G
network has expanded rapidly around the world including in healthcare. Telemedicine provides
long-distance medical communication and services. Patients can get help with ambulatory care or
other medical services in remote areas. 5G and IoT will become important parts of next generation
smart medical healthcare. Telemedicine is a technology of electronic messaging and telecommunication related to healthcare, implemented over public networks. The privacy of transmitted
information in telemedicine is important because the information is sensitive and private. In this
paper, 5G-based federated anonymous identity management for medical privacy protection is proposed; it provides a secure way to protect medical privacy. It has the following properties.
(i) The proposed scheme provides federated identity management which can manage identity of
devices in a hierarchical structure efficiently. (ii) Identity authentication will be achieved by mutual
authentication. (iii) The proposed scheme provides session key to secure transmitted data which is
related to privacy of patients. (iv) The proposed scheme provides anonymous identities for devices
in order to reduce the possibility of leaking transmitted medical data and real information of device
and its owner. (v) If one of the devices transmits abnormal data, the proposed scheme provides traceability for the servers of the medical institute. (vi) The proposed scheme provides a signature for non-repudiation.
**Keywords: telemedicine; 5G; anonymity; identity management; medical privacy preservation**
**1. Introduction**
5G (the fifth generation) networks, the successor to 4G, are the newest mobile telecommunication standard from 3GPP currently being deployed, providing high-speed networks, large capacity, and scalability [1,2]. 5G networks have an efficient effect on energy consumption and provide a quality experience to a large number of communication devices [3]. End-point devices transmit data and request services through a small base
station (SBS) and major base station (MBS) [1,4,5]. A device connects with SBS by using a
high-band spectrum (5G mmWave) technology and device-to-device (D2D) communication,
which is one of the key technologies of 5G networks [1,4,5]. Moreover, 5G combines and
-----
_Appl. Sci. 2021, 11, 1155_ 2 of 21
connects virtual systems to cloud environments through artificial intelligence and helps
derive different calculating models [6]. 5G will totally change connected services and
devices through higher reliability, connectivity, and cloud storage [6]. Because 5G network
is a multi-server environment, conventional schemes for single server structure are not
suitable [3]. Many reasons lead to multi-server environment requirements including load
balance, expanded coverage, and security [3].
IoT becomes a focus because of being predicted to be an important component of 5G
networks [1]. IoT applying 5G infrastructure changes application scenario in many fields
especially real-time communication between machines, data, and people [7]. Moreover,
5G network can work with amount of IoT devices [7]. We can see a form of 5G-based
IoT networks which assembles smart phone, virtual reality, sensors, and other numerous
wireless communication devices [3]. As the result, IoT with 5G technology influences social
life largely [3].
Nowadays, medical healthcare systems face many challenges, such as infrastructure,
connections, professional requirements, data management, and real-time monitoring [8].
According to global survey data from 2005 to 2015, about 40% of countries have less than one doctor per one thousand people and less
than 18 hospital beds per ten thousand people [8,9]. 5G networks have expanded rapidly around the world, including in healthcare [5]. The Internet of Things (IoT) in a 5G environment provides solutions for network
layers, including enhancing quality of service, routing and jamming control, and resource
optimization, to solve some challenges of smart medical healthcare [1]. Lloret et al. utilized
a smart phone to continuously monitor chronic patients in IoT with a 5G environment [10].
Chen et al. proposed a mobile medical system based on IoT with a 5G environment to
continuously evaluate and monitor diabetes patients [11]. This augurs a new and reliable
business model of medical health with 5G technology. 5G and IoT will become important
parts of the next generation of smart medical healthcare [1].
Medical privacy is of the utmost importance. Once leaked, it not only brings huge
economic losses and loss of credibility to hospitals and other related institutions, but also
does potential harm to patients; more importantly, it can even endanger the lives of
patients, which would seriously damage the healthy development of the medical industry [12].
Unfortunately, the healthcare industry has lagged in meeting users' expectations. Health
data stored in conventional systems are very difficult to share due to varying standards and data formats, i.e., the current healthcare ecosystem is ill-suited to the instantaneous
needs of the modern user. Maintaining the privacy of user data is very important, and failure to
do so will result in financial as well as legal implications [13]. If a person's
medical information is the key to finding clinical treatment, how to maintain the privacy of
health records is a central issue that determines the success of medical practice. Increasingly, people interact with healthcare providers using digital media technologies [14–16].
Accompanying the acceleration of medical data collection are rapid advancements in algorithmic computing capacities to aggregate, analyze, and draw sensitive inferences about
individuals from their health data [15,17–19].
Based on the above, federated anonymous identity management (FAIDM) for
medical privacy protection in telemedicine systems is proposed in this paper, which
provides a secure way to protect medical privacy. The proposed scheme has the following properties. (i) It
provides federated identity management, which can efficiently manage the identities of
devices in a hierarchical structure. (ii) Identity authentication is achieved
by mutual authentication. (iii) It provides a session key to secure
transmitted data related to the privacy of patients. (iv) It provides
anonymous identities for devices in order to reduce the possibility of leaking transmitted
medical data and the real information of a device and its owner. (v) If one of the devices transmits
abnormal data, the proposed scheme provides traceability for the servers of the medical institute.
(vi) The proposed scheme provides signatures for non-repudiation.
The rest of this paper is organized as follows. We introduce related works in Section 2,
including telemedicine, healthcare certificates, the ID-based cryptosystem, definitions of Chebyshev chaotic maps, and the chaotic maps-based signature which we apply in our scheme.
In Section 3, we describe the proposed scheme. We discuss the security and performance
analyses of the proposed scheme in Sections 4 and 5, respectively. Finally, some concluding
remarks are presented.
**2. Related Works**
In this section, we introduce telemedicine, Chebyshev chaotic maps, healthcare certificates, chaotic maps-based signatures, and some related works.
_2.1. Telemedicine_
Telemedicine is a technology of electronic messaging and telecommunication related to
healthcare [20,21]. When using telemedicine technology, the patient sends healthcare-related information, which is important, sensitive, and private, to healthcare services through public networks [21]. Medical professionals can know users' health condition if
they are able to view the information immediately [21]. Data transmission security must
therefore be considered, against threats such as eavesdropping, man-in-the-middle attacks, data tampering attacks,
message modification attacks, and data interception attacks [22]. Technical support is still
insufficient even though the Health Insurance Portability and Accountability Act (HIPAA), the General
Data Protection Regulation (GDPR), and Safe Harbor Laws, which provide
personal information privacy, have been enacted [22–24].
A general telemedicine system can be divided into three levels [25]. Level 1 (primary
healthcare unit) consists of users with webcams, smart phones, or wearable devices; Level 2
(city or district hospital) is the clinic or local hospital which a patient may visit before being
transferred to a large hospital or medical center; Level 3 (specialty center) takes part in
telemedicine in cases of rare or incurable diseases [25]. Figure 1 illustrates a remote
patient monitoring system in a 5G IoT architecture, which can assist medical professionals in
monitoring a remote patient's biodata through specific devices [2,25,26]. Mobile health plays
an important role in medical healthcare monitoring and alarm systems and in clinical data
storage and maintenance systems. In remote patient monitoring systems, wearable devices
and mobile phones belong to a sensor layer which is responsible for gathering measured
data. Measured data are transmitted to the network layer, an IoT gateway for example, through
small base station (SBS) communication. After that, data are transmitted out of the local
area network to a major base station (MBS), such as a 5G link, through MBS communication.
Both the network layer and the communication layer are responsible for data processing.
Finally, data are transmitted to the medical services servers of a clinic or local hospital,
called the architecture layer, such as an electronic health records (EHR) system, cloud storage,
and analytics. Authorized medical professionals in the main hospital can access the medical
services servers to monitor a patient. Authorized medical professionals in a specialty center
are involved and observe the measured data in cases of rare or incurable diseases.

In this paper, we introduce a cryptographic protocol which can be applied in asynchronous and synchronous telemedicine and which provides communication security
and user anonymity to protect a patient's privacy.
_2.2. Healthcare Certificate_
Medical and healthcare devices nowadays are required certifications under safety
and functional requirements [27]. Meanwhile, healthcare service providers should go
through a certification procedure based on ISO 27000 and 20000 series in order to process
healthcare data [27]. However, different kinds of healthcare devices have different safety
and privacy requirements [27]. Establishing a general certification for all healthcare sectors
is difficult [27]. One of the solutions to the problem would be to design segregated schemes
with links between them, such as a healthcare certificate issued by a trusted institution [27].
All medical and healthcare devices should be issued certificates to proof that they are qualified in safety and functional requirements [27]. In other words, using certified components
can be a requirement in medical and healthcare field.
**Figure 1. A general telemedicine system with asynchronous telemedicine and synchronous telemedicine scenario.**
_2.3. ID-Based Cryptosystem_
In 1985, Shamir introduced the concept of the identity-based (ID-based) cryptosystem [28].
The main difference from a traditional public key cryptosystem is that it derives the user's
public key from public information that uniquely identifies the user. Since this is meaningful
information, no certificate is needed to prove the validity of the corresponding
public key. In 2002, Gentry et al. proposed hierarchical ID-based cryptography, also called
HIDC [29]. The major purpose of Gentry et al.'s scheme is to reduce the load on the private
key generator (called the PKG) and the risk of key escrow [29]. In the structure of HIDC,
there is a key generation center at each level, and the one at the top level is the root PKG.
The root PKG is a trusted third party, and there are legitimate sub-level key generation
centers with which users under the same domain register. In 2009, Yan et al. discovered that
HIDC was suitable for cloud computing and improved the registration phase in order to achieve
federated identity management, because as more and more cloud service providers provide
various cloud services via different interfaces, federated identity management becomes a
rising issue [30]. The cloud service providers in Yan et al.'s scheme can compose an alliance,
and users can sign on with one account and use various cloud services [30]. However,
Yan et al.'s scheme [30] only proposed mutual authentication for security, apart from rules
for the identity authentication code, and it did not mention the possible security problems of
cloud computing. Moreover, Yan et al.'s scheme [30] does not provide user anonymity.
Park et al. [31] proposed an HIDC scheme for VANETs which provides vehicle user
anonymity, but it is not suitable for cloud computing. Shen et al. introduced an HIDC
scheme with time-bound key management for multicast systems [32]. Fremantle
and Aziz [33] proposed a cloud-based federated identity management mechanism for IoT,
and Maria et al. [34] proposed a lightweight federated identity management mechanism
for IoT. However, federated identity management in a 5G IoT environment still lacks
discussion, not to mention telemedicine in a 5G IoT environment.
_2.4. Chebyshev Chaotic Maps_
Chaotic systems are characterized by a sensitive dependence on initial conditions,
pseudo-randomness, and ergodicity [35–37]. These features yield the excellent properties
of diffusion and confusion, which are important in cryptography [35,36]. Researchers have
proposed image encryption using chaotic maps [38,39]. Definitions of Chebyshev chaotic maps
are introduced below.
**Definition 1.** _The Chebyshev polynomial Tn(x) : [−1, 1] → [−1, 1] is a polynomial in x of degree n, defined as Tn(x) = cos(n·cos−1(x))._

**Definition 2.** _The recurrence relation of Tn(x) is defined as Tn(x) = 2xTn−1(x) − Tn−2(x) for any n ≥ 2, with T0(x) = 1 and T1(x) = x._

**Definition 3.** _One of the most important properties of Chebyshev polynomials is the semi-group property, which establishes Tr(Ts(x)) = Trs(x) = Ts(Tr(x)) for any (s, r) ∈ Z and x ∈ [−1, 1]. The interval [−1, 1] is invariant under the action of the map Tn(x) : [−1, 1] → [−1, 1]. Therefore, the Chebyshev polynomial restricted to the interval [−1, 1] is a well-known chaotic map for all n > 1. It has a unique continuous invariant measure with positive Lyapunov exponent ln n. For n = 2, the Chebyshev map reduces to the well-known logistic map._
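As a quick sanity check on Definitions 1 and 2 (our own illustration, not part of the paper), the trigonometric form and the recurrence can be compared numerically in Python:

```python
import math

def cheby_trig(n: int, x: float) -> float:
    """T_n(x) = cos(n * arccos(x)) for x in [-1, 1] (Definition 1)."""
    return math.cos(n * math.acos(x))

def cheby_rec(n: int, x: float) -> float:
    """Recurrence T_n(x) = 2x*T_{n-1}(x) - T_{n-2}(x),
    with T_0(x) = 1 and T_1(x) = x (Definition 2)."""
    t_prev, t_cur = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

# The two definitions agree up to floating-point error.
for n in range(8):
    assert abs(cheby_trig(n, 0.3) - cheby_rec(n, 0.3)) < 1e-9
```

For n = 2 this gives T2(x) = 2x² − 1, which is the logistic-map form mentioned in Definition 3.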
**Definition 4.** _In order to enhance the properties of Chebyshev chaotic maps, Zhang [40] proved that the semi-group property holds for Chebyshev polynomials defined on the interval (−∞, +∞). This paper utilizes the following enhanced Chebyshev polynomials:_

_Tn(x) = (2xTn−1(x) − Tn−2(x)) mod N_

_where n ≥ 2, x ∈ (−∞, +∞), and N is a large prime number. According to the equations, the semi-group property still holds, and the enhanced Chebyshev polynomials also commute:_

_Tr(Ts(x)) mod N = Trs(x) mod N = Ts(Tr(x)) mod N_
**Definition 5.** _Chaotic maps-based discrete logarithm problem (CMDLP). Given two elements x and y, it is computationally infeasible to find the integer n such that Tn(x) mod N = y._

**Definition 6.** _Chaotic maps-based Diffie–Hellman problem (CMDHP). Given three elements x, Tr(x) mod N, and Ts(x) mod N, it is computationally infeasible to compute Trs(x) mod N._
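A minimal Python sketch of the enhanced Chebyshev polynomial of Definition 4, checking the semi-group/commutativity property on which the CMDLP and CMDHP rest. The parameters are toy values of our own choosing; a real scheme uses a large prime N:

```python
def T(n: int, x: int, N: int) -> int:
    """Enhanced Chebyshev polynomial T_n(x) mod N (Definition 4),
    evaluated iteratively from T_0(x) = 1 and T_1(x) = x."""
    t_prev, t_cur = 1 % N, x % N
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, (2 * x * t_cur - t_prev) % N
    return t_cur

# Semi-group property: T_r(T_s(x)) = T_{rs}(x) = T_s(T_r(x)) (mod N).
N, x, r, s = 1009, 123, 7, 11
assert T(r, T(s, x, N), N) == T(r * s, x, N) == T(s, T(r, x, N), N)
```

The CMDLP (Definition 5) then asserts that recovering n from Tn(x) mod N is infeasible, and the CMDHP (Definition 6) that Trs(x) mod N cannot be computed from Tr(x) and Ts(x) alone.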
_2.5. Chaotic Maps-Based Signature_
Chebyshev chaotic maps have been utilized not only in authentication and key agreement
schemes but also in signature schemes. Chain and Kuo first proposed a digital signature scheme
based on chaotic maps [41]. Several signature schemes based on chaotic maps have been
proposed recently. For example, Tahat and Hijazi [42] proposed an enhanced signature
scheme improving Chain and Kuo's [41]; Tahat et al. proposed an ID-based cryptographic
model for Chebyshev chaotic maps to demonstrate the transformation model of ID-based
schemes [43]; and Tahat et al. proposed an ID-based blind signature based on chaotic maps [44].
Meshram et al. focused on online/offline short signature schemes and proposed schemes
using chaotic maps [45], such as an ID-based short signature scheme and a subtree-based short
signature scheme for wireless sensor networks [46]. In this paper, we apply Meshram et al.'s
ID-based online short signature scheme [45].
**3. Proposed Scheme**
In this paper, we propose a FAIDM for medical privacy protection in 5G telemedicine
systems. The notations of the scheme are shown in Table 1. The system structure of the
proposed scheme includes a remote server node, gateway nodes (GWi), and constrained
nodes (CNij). A constrained node is in the sensor layer of the proposed 5G IoT remote patient
monitoring system structure and can be a device which gathers measured data, such as a
sensor or wearable device that can be carried by a human. The role of these devices
consists of monitoring or sensing the environment, so they collect and transmit data to
gateway nodes. For example, in a healthcare application, sensors can be planted in or
on a human's body in order to collect health-related data. Gateway nodes, which are
SBSs/MBSs, are in the network or communication layers of the proposed 5G IoT remote
patient monitoring system structure, and it can be assumed that the gateway nodes have
sufficient energy resources, processing performance, and memory. Gateway nodes process
the received data collected by the different constrained nodes and forward them to the remote
server node. The remote server node is in the architecture layer of the proposed 5G IoT remote patient
monitoring system structure, and it can be assumed that the remote server node has no
computing resource limitations. Medical professionals in the architecture layer can continuously
follow a patient's health status based on the data received. Note that the interaction between
the communication and the architecture layer should be secure, which may be guaranteed by
functions in the core network, such as the authentication server function (AUSF), authentication
credential repository and processing function (ARPF), subscription identifier de-concealing
function (SIDF), and security anchor function (SEAF) [47,48], but secure communication
between these two layers is not discussed in our scheme. The remote server node takes
part in system initialization and generates the system parameters. A constrained node
has to register with a legitimate gateway node to become legitimate. A gateway node
has to register with the remote server node to become legitimate. When a patient wears a
wearable healthcare device and goes home from the hospital, the device transmits measured
data through an IoT gateway (GWt) at home which is in a different domain from the hospital.
The system structure is shown in Figure 2.
**Table 1. Notations of the proposed scheme.**
**Notations** **Definitions**
_s0_ The secret value of the remote server node S
_si_ The secret value of the ith gateway node (GWi)
_sij_ The secret value of the ijth constrained node (CNij)
_Si_ Private key of GWi after registering with the remote server node
_Sij_ Private key of CNij after registering with GWi
_aSij_ _CNij_'s anonymous private key issued by GWi
_IDi, IDij, aIDij_ Identities of GWi and CNij, and CNij's anonymous identity
_Q0, Qi, Qij, QaIDij_ Public parameters generated from the secret values
_skGWi↔CNij_ The session key of CNij and GWi
_H1(.), H2(.), H3(.)_ Collision-resistant one-way hash functions
_hK(.)_ Collision-resistant secure one-way chaotic hash function using K as the key
_Ek(.), Dk(.)_ Symmetric encryption and decryption using k as the key
_MACGWi, MACCNij_ The message authentication code algorithms of GWi and CNij
_CertHCA→S_ The certificate issued by the healthcare certification authority to the remote server node S
_CertS→GWi_ The certificate issued by the remote server node S to GWi, which is generated from CertHCA→S
_CertGWi→CNij_ The certificate issued by GWi to CNij, which is generated from CertS→GWi
The proposed scheme consists of six phases: the system initialization phase, gateway
node registration phase, constrained node registration phase, mutual authentication and
key agreement phase, anonymous identity distribution phase, and anonymous signature
and verification phase.
Before the system initialization phase, the healthcare services provider needs to apply for
the certificate CertHCA→S from a healthcare certification authority before providing healthcare
services. The healthcare certification authority should be a credible and independent institution,
such as the National Health Service Business Services Authority (NHSBSA) of the United Kingdom [49], the European Federation Gateway Service (EFGS) of the European Commission [50],
the American Hospital Association Certification Center (AHA-CC) of the USA [51], the Pharmaceuticals and Medical Devices Agency (PMDA) of the Ministry of Health, Labour and Welfare,
Japan [52], or the Healthcare Certification Authority (HCA) of the Ministry of Health and Welfare,
Taiwan [53]. The certificate CertHCA→S is regarded as the root certificate in the system,
and only a certified healthcare services provider can obtain CertHCA→S.
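The paper does not specify how one certificate is "generated from" its parent. Purely as an illustrative sketch (all keys, names, and the MAC construction below are our own assumptions, not the scheme's), the three-level chain CertHCA→S, CertS→GWi, CertGWi→CNij can be pictured as each issuer authenticating the pair (parent certificate, subject):

```python
import hashlib
import hmac

def issue(issuer_key: bytes, parent_cert: bytes, subject: str) -> bytes:
    """Toy certificate: an HMAC over (parent certificate || subject name).
    A stand-in only; the paper leaves the actual derivation unspecified."""
    return hmac.new(issuer_key, parent_cert + subject.encode(), hashlib.sha256).digest()

k_hca, k_s, k_gw = b"hca-key", b"server-key", b"gateway-key"   # hypothetical keys

cert_root = issue(k_hca, b"", "S")         # Cert_HCA->S, the root certificate
cert_gw = issue(k_s, cert_root, "GW-1")    # Cert_S->GW_i, derived from the root
cert_cn = issue(k_gw, cert_gw, "CN-1-1")   # Cert_GW_i->CN_ij, derived in turn

# A verifier holding the issuer keys re-derives each link to check the chain.
assert cert_cn == issue(k_gw, issue(k_s, cert_root, "GW-1"), "CN-1-1")
```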
**Figure 2. System structure of proposed scheme.**
_3.1. System Initialization Phase_
In the remote server node initial phase, the remote server node S, which provides
telemedicine services and is certified by healthcare certification authority, sets up parameters by performing following steps.
Step 1: The healthcare certification authority issues a certificate CertHCA→S to remote server node S which provides telemedicine services and is certified by healthcare
certification authority.
Step 2: The remote server node S generates a secret value s0, a big prime p, and random
number x ∈ (−∞, +∞) and computes Q0 = Ts0 (x) mod p.
Step 3: The remote server node S choses a symmetric encryption algorithm Ek(.),
a symmetric decryption algorithm Dk(.), collision-resistance one-way hash functions
(H1(.), H2(.), H3(.)), and a collision-resistance secure one-way chaotic hash function hk (.).
Step 4: The remote server node S outputs public parameters {Q0, p, x, H1(.), H2(.),
_H3(.), hk(.), Ek(.), Dk(.)} and private parameters s0._
Step 5: The gateway node GWi generates two large random primes (pi, qi) and computes Ni = piqi and ϕi = (pi − 1)(qi − 1). Then, the gateway node GWi selects a random integer ei, where 1 < ei < ϕi
and gcd(ei, ϕi) = 1, and makes it public. After that, the gateway node GWi computes di,
where 1 < di < ϕi and eidi ≡ 1 (mod ϕi), and keeps it secret.
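Step 5 mirrors an RSA-style key setup. Assuming the standard relations Ni = piqi and ϕi = (pi − 1)(qi − 1) (implied but not written out in the paper), the consistency of (ei, di) can be checked in Python with deliberately tiny toy primes:

```python
import math

# Toy primes for illustration only; a real gateway uses large random primes.
p_i, q_i = 61, 53
N_i = p_i * q_i                      # N_i = p_i * q_i
phi_i = (p_i - 1) * (q_i - 1)        # Euler totient of N_i: 3120

e_i = 17                             # public: 1 < e_i < phi_i, gcd(e_i, phi_i) = 1
assert math.gcd(e_i, phi_i) == 1
d_i = pow(e_i, -1, phi_i)            # secret: modular inverse (Python 3.8+)
assert (e_i * d_i) % phi_i == 1      # e_i * d_i = 1 (mod phi_i)
```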
_3.2. Gateway Node Registration Phase_
In this phase, the gateway node GWi interacts with the remote server node S for registration.
To deal with the registration request submitted by the gateway node GWi, the remote
server node S validates the gateway node GWi's legitimacy and then issues the private key Si
and certificate CertS→GWi via a secure channel. Note that the remote server node S computes the
private key from the gateway node GWi's registration information. Figure 3 illustrates the process
of the gateway node registration phase. Detailed descriptions are as follows:
**Figure 3. Gateway node registration phase of proposed scheme.**
Step 1: The gateway node GWi chooses an identifier IDi and submits it to the remote server
node S.

Step 2: Upon receiving IDi from the gateway node GWi, the remote server node checks the
format of IDi. If IDi is valid, the remote server node S computes Si corresponding to IDi, generates
CertS→GWi from CertHCA→S, and sends (Si, CertS→GWi) via a secure channel to the gateway
node GWi.

_Pi = H1(IDi)_ (1)

_Si = Ts0(Pi) mod p_ (2)

Step 3: The gateway node GWi chooses a random number si as its secret value,
computes Qi, and stores CertS→GWi to complete the gateway node registration phase.

_Qi = Tsi(x) mod p_ (3)
_3.3. Constrained Node Registration Phase_
The constrained node CNij submits registration information to the gateway node GWi
in this phase. The gateway node GWi verifies the constrained node CNij's legitimacy
and then issues the private key Sij and certificate CertGWi→CNij to complete this phase. Note that
the gateway node GWi computes the private key Sij from the constrained node CNij's registration
information. Figure 4 illustrates the process of the constrained node registration phase. Detailed
descriptions are as follows:

**Figure 4. Constrained node registration phase of proposed scheme.**

Step 1: Constrained node CNij chooses an identifier IDij and a random number sij as
its own secret, computes Qij, and sends (IDij, Qij) to the gateway node GWi.
_Qij = Tsij_ (x) mod p (4)
Step 2: Upon receiving IDij from the constrained node CNij, the gateway node GWi checks the
format of IDij. If IDij is valid, the gateway node GWi computes the private key Sij corresponding to
IDij, generates CertGWi→CNij from CertS→GWi, and sends (Sij, CertGWi→CNij) to the constrained
node CNij via a secure channel.

_Pij = H2(Qij, IDi)_ (5)

_Sij = SiTsi(Pij) mod p_ (6)

Step 3: The constrained node CNij stores (Sij, CertGWi→CNij) to complete the constrained node registration phase.
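Equations (1), (2), and (4)–(6) form a two-level key hierarchy: the remote server derives Si from its master secret s0, and the gateway in turn derives Sij from Si. The following Python sketch traces the derivation with toy parameters of our own choosing (H is a SHA-256 stand-in for H1/H2, and p is far too small for real use):

```python
import hashlib

def T(n: int, x: int, N: int) -> int:
    """Chebyshev polynomial T_n(x) mod N via the recurrence of Section 2.4."""
    t_prev, t_cur = 1 % N, x % N
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, (2 * x * t_cur - t_prev) % N
    return t_cur

def H(*parts: str) -> int:
    """Hash strings to an integer; a stand-in for the scheme's H1/H2."""
    return int(hashlib.sha256("|".join(parts).encode()).hexdigest(), 16)

p = 1009                                  # toy prime; the scheme uses a big prime
x, s0, s_i, s_ij = 123, 31, 47, 59        # public point and the three secrets

# Gateway node registration, Eqs. (1)-(3): P_i = H1(ID_i), S_i = T_{s0}(P_i).
P_i = H("GW-1") % p
S_i = T(s0, P_i, p)
Q_i = T(s_i, x, p)

# Constrained node registration, Eqs. (4)-(6): S_ij chains off S_i.
Q_ij = T(s_ij, x, p)
P_ij = H(str(Q_ij), "GW-1") % p
S_ij = (S_i * T(s_i, P_ij, p)) % p
```

Each level's key is thus bound to the level above it, which reflects the load- and escrow-reduction motivation of HIDC noted in Section 2.3.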
_3.4. Mutual Authentication and Key Agreement Phase_
After the constrained node joins the remote server node alliance as a member, it can use
not only the services provided by the service provider with which it registered but also those
of other service providers in the same alliance. When the constrained node applies for remote
server node services, the gateway node and constrained node execute mutual authentication to
ensure that further interaction between them is secure and validated. Figure 5 illustrates the
process of the mutual authentication and key agreement phase. Detailed descriptions are as follows:
**Figure 5. Mutual authentication and key agreement phase of proposed scheme.**
Step 1: Constrained node CNij chooses a random number aij, computes µij and Ct,
and sends (Ct, IDij) to the gateway node GWt.

_µij = Tsij(aij) mod p_ (7)

_Ct = (Tet(µij||aij||CertGWi→CNij) mod p)Pt_ (8)
Step 2: Upon receiving (Ct, IDij), the gateway node GWt obtains (µij||aij||CertGWi→CNij)
by decrypting Ct and verifies that CertGWi→CNij is valid. If CertGWi→CNij is valid, the gateway node
GWt proceeds with the steps below; otherwise, the gateway node GWt abandons the request.

_(µij||aij||CertGWi→CNij) = (Tdt(Ct) mod p)/Pt_ (9)
Step 3: Gateway node GWt computes (ωt, skGWt↔CNij, Pi, Pij, Pt, K, MACGWt) and
sends (MACGWt, ωt) to the constrained node CNij.

_ωt = Tst(aij) mod p_ (10)

_skGWt↔CNij = H3(Tst(µij) mod p)_ (11)

_Pi = H2(IDi)_ (12)

_Pij = H2(Qij, IDi)_ (13)

_Pt = H1(IDt)_ (14)

_K = (Pi||Q0) ⊕ (Pij||Qi) ⊕ (Pt||Qij) ⊕ (skGWt↔CNij||ωt)_ (15)

_MACGWt = hK(Pt, Pij, µij)_ (16)
Step 4: Upon receiving (MACGWt, ωt), constrained node CNij computes (sk′GWt↔CNij,
K′) and verifies MACGWt. If the result of verification is true, constrained node CNij computes
MACCNij and sends MACCNij to the gateway node GWt.

_sk′GWt↔CNij = H3(Tsij(ωt) mod p)_ (17)

_K′ = (Pi||Q0) ⊕ (Pij||Qi) ⊕ (Pt||Qij) ⊕ (sk′GWt↔CNij||ωt)_ (18)

_hK′(Pt, Pij, µij) ?= MACGWt_ (19)

_MACCNij = hsk′GWt↔CNij(Pij, Pt, ωt)_ (20)

Step 5: Upon receiving MACCNij, the gateway node GWt verifies MACCNij. If the result of
verification is true, mutual authentication and key agreement is completed.

_hskGWt↔CNij(Pij, Pt, ωt) ?= MACCNij_ (21)
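Equations (7), (10), (11), and (17) amount to a chaotic-maps Diffie–Hellman exchange: both sides hash T computed at degree s_t·s_ij of a_ij mod p, so the keys agree by the semi-group property of Section 2.4. A toy Python check (parameters and the H3 stand-in are ours):

```python
import hashlib

def T(n: int, x: int, N: int) -> int:
    """Chebyshev polynomial T_n(x) mod N via the recurrence of Section 2.4."""
    t_prev, t_cur = 1 % N, x % N
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_cur = t_cur, (2 * x * t_cur - t_prev) % N
    return t_cur

def H3(v: int) -> str:
    """SHA-256 stand-in for the scheme's hash H3."""
    return hashlib.sha256(str(v).encode()).hexdigest()

p = 1009                 # toy prime
a_ij = 123               # CN_ij's random value
s_ij, s_t = 7, 11        # secrets of CN_ij and GW_t

mu_ij = T(s_ij, a_ij, p)           # Eq. (7): sent by CN_ij to GW_t
omega_t = T(s_t, a_ij, p)          # Eq. (10): sent back by GW_t

sk_gw = H3(T(s_t, mu_ij, p))       # Eq. (11): gateway's session key
sk_cn = H3(T(s_ij, omega_t, p))    # Eq. (17): constrained node's session key
assert sk_gw == sk_cn              # equal by the semi-group property
```

An eavesdropper seeing only µij and ωt would have to solve the CMDHP of Definition 6 to recover the key.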
_3.5. Anonymous Identity Distribution Phase_
If the constrained node needs an anonymous identity for some remote server node
services, the gateway node generates an anonymous identity and the corresponding
private key for the constrained node according to the registration information. Note that
the anonymous identity is computed from the constrained node's ID to ensure their linkage.
Figure 6 illustrates the process of the anonymous identity distribution phase. Detailed
descriptions are as follows:

Step 1: Gateway node GWt generates a random number tt, uses the session key skGWt↔CNij
to encrypt IDij and tt, and generates and sends the pseudonym aIDij to the constrained node CNij.

_aIDij = EskGWt↔CNij(IDij||tt)_ (22)

Step 2: Upon receiving aIDij, constrained node CNij computes PaIDij and QaIDij and
sends QaIDij to the gateway node GWt.

_PaIDij = H1(IDt||aIDij)_ (23)
_QaIDij = EskGWt↔CNij(IDt||PaIDij)_ (24)
**Figure 6. Anonymous identity distribution phase of proposed scheme.**
Step 3: After receiving QaIDij, gateway node GWt decrypts QaIDij with skGWt↔CNij and
checks PaIDij using IDt and aIDij. If it holds, gateway node GWt computes aSij and encrypts
aSij with skGWt↔CNij. Then, gateway node GWt encrypts (C, Pt) into MAC′GWt and sends
MAC′GWt to constrained node CNij.

_(IDt||PaIDij) = DskGWt↔CNij(QaIDij)_ (25)

_H1(IDt||aIDij) ?= PaIDij_ (26)

_aSij = StTst(Pt) mod p_ (27)

_C = EskGWt↔CNij(aSij)_ (28)

_MAC′GWt = EskGWt↔CNij(C, Pt)_ (29)

Step 4: Upon receiving MAC′GWt, constrained node CNij verifies MAC′GWt. If the result of
verification is true, constrained node CNij obtains aSij by decrypting C, and the anonymous identity
distribution phase is completed.

_(C, P′t) = DskGWt↔CNij(MAC′GWt)_ (30)

_H1(IDt) ?= P′t_ (31)
_aSij = DskGWt↔CNij(C)_ (32)
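Equation (22) ties anonymity to traceability: aIDij is simply the real identity encrypted under the session key, so outsiders see only a pseudonym while a key holder can decrypt it to trace a misbehaving device (property (v)). A toy Python sketch; the SHA-256 counter keystream below is our own stand-in for the unspecified Ek/Dk, and the identity and key strings are made up:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream; in practice E_k/D_k would be a
    standard symmetric cipher such as AES."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def E(key: bytes, msg: bytes) -> bytes:
    """XOR stream encryption; decryption D is the same operation."""
    return bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))

D = E  # for an XOR stream cipher, D_k = E_k

sk = b"session-key-from-phase-3.4"
aID = E(sk, b"CN-42|nonce-t")           # Eq. (22): aID_ij = E_sk(ID_ij || t_t)

# Traceability: only a holder of sk recovers the real identity from aID_ij.
assert D(sk, aID) == b"CN-42|nonce-t"
```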
_3.6. Anonymous Signature and Verification Phase_
The gateway node GWt (verifier) receives and verifies a message with a signature generated
by the anonymous private key aSij using the verification function. Figure 7 illustrates the process of
the anonymous signature and verification phase. Detailed descriptions are as follows:
**Figure 7. Anonymous signature and verification phase of proposed scheme.**
Step 1: Constrained node CNij chooses a random number Rij ∈ Z*q and computes (Wij,
Vij, tij) for further computation.

_Wij = TRij(x) mod p_ (33)

_Vij = H1(Wij||aIDij||aSij)_ (34)

_tij = RijVij mod p_ (35)
Step 2: Constrained node CNij chooses a random number Lij ∈ Z*p, so that Lij_g is the
gth bit of Lij. Then, constrained node CNij computes (Oij, bij) to obtain Yij and ηij, generates
the signature σij, and sends σij to the gateway node GWt.

_Oij = ∏g=1 Q′g−1_ (36)

_bij = H1(Q, Wij, M)_ (37)

_Yij = Lijbijtij mod p_ (38)

_ηij = TYij(x) mod p_ (39)

_σij = (Q, Wij, Yij)_ (40)
Step 3: Upon receiving the signature σij, the gateway node GWt verifies the signature σij. If
η′ij = ηij holds, the signature is accepted.

_η′ij = OijTbij(Wij)TbijVij(aSij) mod p_ (41)
**4. Security Analysis**
We present formal verification using BAN logic [54] and theoretical analyses to prove
that the proposed scheme achieves the security properties and resists potential common attacks.
_4.1. Formal Verification Using BAN Logic_
BAN logic has become a widely accepted and well-known logical methodology for
analyzing the security of schemes [54–65]. The goal of BAN logic is to verify the exchanged
information and the belief relationships among communicating parties, and to analyze protocols by deriving beliefs in order to prove that honest and legitimate parties can correctly execute
and complete a protocol [54,66–68]. We apply BAN logic [54] to prove the authenticity of
our scheme. The notations used in the BAN logic [54] analysis are defined as follows. P and Q
are principals, X and Y are statements, C is a channel, r and w are the sets of readers and writers,
respectively, and K is an encryption key. P |≡ X denotes that P believes X; P |~ X denotes that
P once said X; C(X) means that X is transmitted via channel C; r(C) and w(C) denote the
sets of readers and writers of C, respectively. P ◁ C(X) means that P sees C(X): X is transmitted
via C and can be observed by P, and P must be a reader of C to read X. P ◁ X|C means that
P sees X via C. (X)K denotes that X is encrypted with the key K. P ←K→ Q means that P and Q
can establish a secure communication channel by using K. The logical postulates in BAN
logic [54] are described using the rules below, each written as (premises) / (conclusion).
Rule 1. (P ◁ C(X), P ∈ r(C)) / (P |≡ (P ◁ X|C), P ◁ X): If P receives and reads X via C, then P believes that X has arrived on C and P sees X.

Rule 2. (P ◁ C(X, Y)) / (P ◁ X, P ◁ Y): If P sees a hybrid message (X, Y), then P sees X and Y separately.

Rule 3. (P |≡ (w(C) = {P, Q})) / (P |≡ ((P ◁ X|C) → Q |~ X)): If P believes that C can only be written by P and Q, then P believes that if P receives X via C, then Q said X.

Rule 4. (P |≡ (Q |~ (X, Y))) / (P |≡ (Q |~ X), P |≡ (Q |~ Y)): If P believes that Q said a hybrid message (X, Y), then P believes that Q has said X and Y separately.

Rule 5. (P |≡ (sij →ECMDH(secret) P), P |≡ (ωt →ECMDH(public) Q)) / (P |≡ (P ←skGWt↔CNij→ Q)): If P believes that sij is its extended chaotic maps-based Diffie–Hellman secret and that ωt is the extended chaotic maps-based Diffie–Hellman component from Q, then P believes that skGWt↔CNij is the symmetric key shared between P and Q.

Rule 6. (P |≡ (Q |~ X), P |≡ #(X)) / (P |≡ (Q |~ X)): If P believes that Q said X and P also believes that X is fresh, then P believes that Q has recently said X.

Rule 7. (P |≡ #(X)) / (P |≡ #(X, Y)): If P believes that a part of a mixed message X is fresh, then it believes that the whole message (X, Y) is fresh.

Rule 8. (P |≡ (Φ1 → Φ2), P |≡ Φ1) / (P |≡ Φ2): If P believes that Φ1 implies Φ2 and P believes that Φ1 is true, then P believes that Φ2 is true.
The proposed scheme is described in logic as below.
_µij_
Step 1. GWt ◁ (→ECMDH(public)CNij, CGWt, CNij((Pt, Pij, µij)K)
Step 2. CNij ◁ (ω→tECMDH(public)GWt, CCNij, GWt (Pij, Pt, ωt)skGWt _↔CNij_, ωt)
Table 2 lists the assumptions used, where A and B stand for CNij and GWt, with A ≠ B.

**Table 2. Assumptions of the logic of the proposed scheme.**

A1. A ∈ r(CA,B): A can read from the channel CA,B.

A2. A |≡ (w(CA,B) = {A, B}): A believes that only A and B can write on CA,B.

A3. A |≡ (B |∼ Φ → Φ): A believes that B only says what it believes.

A4. A |≡ #(NA): A believes that NA is fresh.

A5. A |≡ (sij →ECMDH(secret) A): A believes that sij is its extended chaotic maps-based Diffie–Hellman secret.
_Appl. Sci. 2021, 11, 1155_ 14 of 21
Based on the assumptions and the logical analysis, the proposed scheme must realize the goals in Table 3.

**Table 3. Goals of the proposed scheme.**

G1. CNij |≡ (CNij ←skGWt↔CNij→ GWt): Constrained node CNij believes that skGWt↔CNij = H3(Tst(µij) mod p) is a symmetric key shared between participants CNij and GWt.

G2. GWt |≡ (CNij ←skGWt↔CNij→ GWt): Gateway node GWt believes that skGWt↔CNij = H3(Tst(µij) mod p) is a symmetric key shared between participants CNij and GWt.

G3. CNij |≡ GWt |≡ (CNij ←skGWt↔CNij→ GWt): Constrained node CNij believes that GWt is convinced that skGWt↔CNij = H3(Tst(µij) mod p) is a symmetric key shared between CNij and GWt.

G4. GWt |≡ CNij |≡ (CNij ←skGWt↔CNij→ GWt): Gateway node GWt believes that CNij is convinced that skGWt↔CNij = H3(Tst(µij) mod p) is a symmetric key shared between CNij and GWt.
To accomplish Goal G1, we have Equations (42) and (43), which must hold because of Rule 5 and A5.

CNij |≡ (sij →ECMDH(secret) CNij) (42)

CNij |≡ (ωt →ECMDH(public) GWt) (43)

Next, Equations (44) and (45) must hold because of A3 and Rule 8 to accomplish Equation (43).

CNij |≡ ((ωt →ECMDH(public) GWt, CCNij,GWt, (Pij, Pt, ωt)skGWt↔CNij, ωt) → (ωt →ECMDH(public) GWt)) (44)

CNij |≡ (GWt |∼ (ωt →ECMDH(public) GWt)) (45)

Equation (46) must hold because of Rules 6 and 7 and A4 to accomplish Equation (45).

CNij |≡ #(ωt →ECMDH(public) GWt) (46)

Equations (47)–(49) must hold because of Rules 1, 2, and 3, and A1 and A2, to accomplish Equation (46).

CNij ∈ r(CGWt,CNij) (47)

CNij |≡ (w(CGWt,CNij) = {CNij, GWt}) (48)

CNij ◁ CGWt,CNij(ωt →ECMDH(public) GWt) (49)
Hence, the proposed scheme achieves G1 by Rule 5, and by the same argument it achieves G2.

Equations (50) and (51) must hold because of Rule 3 and A3 to accomplish G3.

CNij |≡ ((GWt |∼ (CNij ←skGWt↔CNij→ GWt)) → GWt |≡ (CNij ←skGWt↔CNij→ GWt)) (50)

CNij |≡ (GWt |∼ (CNij ←skGWt↔CNij→ GWt)) (51)

Equation (52) must hold because of Rules 6 and 7 and A4 to accomplish Equation (51).

CNij |≡ #(CNij ←skGWt↔CNij→ GWt) (52)

Equations (47), (48), and (53) must hold because of Rules 1, 2, and 3, and A1 and A2, to accomplish Equation (52).

CNij ◁ CCNij,GWt(CNij ←skGWt↔CNij→ GWt) (53)

Thus, the proposed scheme achieves G3, and by the same argument it achieves G4. Therefore, the proposed scheme realizes G1, G2, G3, and G4.
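The shape of the derivation above can be mechanized as a toy forward-chaining search over belief strings. This is only an illustration of the proof structure under hand-encoded, simplified rules, not a formalization of BAN logic; all fact names below are invented for this sketch.

```python
# Toy sketch only: checks that a G1-style conclusion follows from
# A1-A5 via simplified versions of Rules 1, 3, 5, 6/7, and 8.
# Facts are plain strings; rules are (premises, conclusion) pairs.

assumptions = {
    "CN reads channel C(GW,CN)",                      # A1
    "CN believes only CN and GW write C(GW,CN)",      # A2
    "CN believes GW says only what GW believes",      # A3
    "CN believes nonce is fresh",                     # A4
    "CN believes its ECMDH secret s_ij",              # A5
    "CN sees wt on channel C(GW,CN)",                 # Step 2 message
}

rules = [
    # Rules 1 + 3: message seen on a two-writer channel => GW once said it
    ({"CN sees wt on channel C(GW,CN)",
      "CN reads channel C(GW,CN)",
      "CN believes only CN and GW write C(GW,CN)"},
     "CN believes GW once said wt"),
    # Rules 6 + 7 with A4: said + fresh => recently said (Eqs. 45/46)
    ({"CN believes GW once said wt",
      "CN believes nonce is fresh"},
     "CN believes GW recently said wt"),
    # Rule 8 with A3: recent utterance by an honest party => belief (Eq. 43)
    ({"CN believes GW recently said wt",
      "CN believes GW says only what GW believes"},
     "CN believes wt is GW's ECMDH public component"),
    # Rule 5 with A5: own secret + peer component => session key belief (G1)
    ({"CN believes its ECMDH secret s_ij",
      "CN believes wt is GW's ECMDH public component"},
     "CN believes sk(GW,CN) is shared with GW"),
]

def forward_close(facts, rules):
    """Apply the rules until no new belief can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_close(assumptions, rules)
assert "CN believes sk(GW,CN) is shared with GW" in derived  # G1 analogue
```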
_4.2. Theoretical Analyses_

We present theoretical analyses to show that the proposed scheme achieves the stated security properties and resists common attacks.
4.2.1. Security of Secret Key

We assume that an adversary wants to obtain the master secret keys held by the remote server node, gateway node GWi, and constrained node CNij, such as Q0 = Ts0(x) mod p and Qi = Tsi(x) mod p. The adversary must then solve a problem based on CMDLP. Likewise, if an adversary wants to obtain gateway node GWi's secret key by computing Si = Ts0(Pi) mod p = Ts0(H1(IDi)) mod p, the adversary needs to solve CMDLP. On the other hand, gateway node GWi generates the secret key for constrained node CNij by performing Sij = SiTsi(Pij) mod p = Ts0(H1(IDi))Tsi(H2(Qij, IDi)) mod p. Since gateway node GWi uses its private key Si and its own secret si in this computation, only GWi can know constrained node CNij's secret key.
4.2.2. Session Key Confirmation and Security of Session Key

We provide session key confirmation, which guarantees the correctness of the encryption key used in the session, through the message authentication codes MACGWt and MACCNij. If an adversary wants to obtain the session key skGWt↔CNij, the adversary has to solve CMDHP even with knowledge of ωt. Moreover, the session key skGWt↔CNij differs in every session because of the random number aij.
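The session key skGWt↔CNij = H3(Tst(µij) mod p) rests on the semigroup property of extended Chebyshev maps, Ta(Tb(x)) ≡ Tab(x) (mod p). The Python sketch below illustrates the ECMDH exchange and the final hashing step; the prime p, base point x, private exponents, and the use of SHA-256 as a stand-in for H3 are illustrative assumptions, not the paper's actual parameters.

```python
import hashlib

def chebyshev(n: int, x: int, p: int) -> int:
    """T_n(x) mod p via the doubling identities
    T_{2k} = 2*T_k^2 - 1 and T_{2k+1} = 2*T_k*T_{k+1} - x."""
    if n == 0:
        return 1 % p
    a, b = 1 % p, x % p                 # invariant: (a, b) = (T_k, T_{k+1})
    for bit in bin(n)[2:]:              # most-significant bit first
        if bit == "1":
            a, b = (2 * a * b - x) % p, (2 * b * b - 1) % p
        else:
            a, b = (2 * a * a - 1) % p, (2 * a * b - x) % p
    return a

p, x = 1_000_003, 42                    # toy public parameters (assumed)
s_cn, s_gw = 271_828, 314_159          # toy private exponents (assumed)

mu = chebyshev(s_cn, x, p)             # CN's public component (mu_ij analogue)
omega = chebyshev(s_gw, x, p)          # GW's public component (omega_t analogue)

# Semigroup property T_a(T_b(x)) = T_{ab}(x) gives both sides the same value.
shared_cn = chebyshev(s_cn, omega, p)
shared_gw = chebyshev(s_gw, mu, p)
assert shared_cn == shared_gw

# SHA-256 stands in for the paper's H3 when deriving the session key.
sk = hashlib.sha256(shared_cn.to_bytes(8, "big")).digest()
```

Because the constrained node draws a fresh random exponent each session, the derived key differs per session, matching the freshness argument above.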
4.2.3. Mutual Authentication

In the authentication process, constrained node CNij and gateway node GWt compute their session key K from the public parameters (IDi, IDij, Qij, X, IDt, Y). In addition, each party generates a message authentication code, MACGWt and MACCNij, using K and skGWt↔CNij, respectively, to verify the other's validity. Moreover, owing to the features of HIDC, gateway node GWt can determine which cloud service provider constrained node CNij comes from via the public parameter IDij.
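As a rough illustration of how MACGWt and MACCNij confirm the session key, the sketch below uses HMAC-SHA256 as a stand-in MAC; the key and transcript contents are placeholders, not the paper's exact message formats.

```python
import hashlib
import hmac

def make_mac(key: bytes, transcript: bytes) -> bytes:
    # HMAC-SHA256 stands in for the paper's MAC construction.
    return hmac.new(key, transcript, hashlib.sha256).digest()

def verify_mac(key: bytes, transcript: bytes, tag: bytes) -> bool:
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(make_mac(key, transcript), tag)

sk = hashlib.sha256(b"toy shared secret").digest()   # sk(GW<->CN) stand-in
transcript = b"ID_i|ID_ij|Q_ij|X|ID_t|Y"             # public parameters (toy)

mac_gw = make_mac(sk, b"GW|" + transcript)           # MAC_GWt analogue
mac_cn = make_mac(sk, b"CN|" + transcript)           # MAC_CNij analogue

# Each party verifies the other's tag; success confirms both hold sk.
assert verify_mac(sk, b"GW|" + transcript, mac_gw)
assert verify_mac(sk, b"CN|" + transcript, mac_cn)
# A party without sk cannot produce or verify a valid tag.
assert not verify_mac(b"\x00" * 32, b"GW|" + transcript, mac_gw)
```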
4.2.4. Device Anonymity

After the mutual authentication phase, constrained node CNij can obtain the pseudonym private key aSij corresponding to the pseudonym identity aIDij from the supplier gateway node GWt. The pseudonym identity aIDij involves not only constrained node CNij's IDij but also the timestamp ts, ensuring that the constrained node obtains a different pseudonym identity each time; this removes the linkage between the real identity and the pseudonym identity and thus thwarts linking attacks. Besides, aIDij is computed by a supplier with its own secret; that is, only the supplier who issued aIDij to constrained node CNij can recover the node's real identity.
4.2.5. Traceability of Anonymity

Server node S can audit the transmission history by recovering the anonymous identity aIDij. The gateway node GWt decrypts aIDij with its secret si to recover the real identity behind the pseudonym by performing (IDij || ts) = DskGWt↔CNij(aIDij).
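The issue-and-trace flow of Sections 4.2.4 and 4.2.5 can be sketched as follows. The cipher here is a toy SHA-256 counter keystream standing in for the paper's encryption, and the identifier and timestamp formats are invented; the point is only that a fresh ts yields an unlinkable pseudonym that the issuer alone can open. Do not use this construction in production.

```python
import hashlib

def _keystream(key: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode (illustrative only).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def issue_pseudonym(secret: bytes, real_id: bytes, ts: bytes) -> bytes:
    """aID = ts || Enc_secret(ID || ts): a fresh ts yields a fresh,
    unlinkable pseudonym each time."""
    msg = real_id + b"|" + ts
    ks = _keystream(secret + ts, len(msg))
    return ts + bytes(m ^ k for m, k in zip(msg, ks))

def trace_identity(secret: bytes, aid: bytes, ts_len: int = 8) -> bytes:
    """Only the issuing gateway (holder of `secret`) can recover ID || ts."""
    ts, ct = aid[:ts_len], aid[ts_len:]
    ks = _keystream(secret + ts, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))

secret = b"gateway secret s_i"               # issuer's secret (toy)
aid1 = issue_pseudonym(secret, b"CN-0042", (1000).to_bytes(8, "big"))
aid2 = issue_pseudonym(secret, b"CN-0042", (2000).to_bytes(8, "big"))

assert aid1 != aid2                                         # unlinkable
assert trace_identity(secret, aid1).startswith(b"CN-0042")  # traceable
```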
4.2.6. Unforgeability

If an adversary wants to forge a valid anonymous identity, the adversary has to acquire gateway node GWi's secret si and private key Si. To compute si and Si from the public parameter Qi, the adversary has to solve CMDLP.
4.2.7. Without Assistance of a Registration Center

Ying and Nayak [4] and Ul Haq et al. [5] proposed schemes for multi-server 5G networks whose system structures include a registration center (RC). The RC is a third party for both sides of the communication, and both parties have to complete a registration phase with the RC before communicating. If the adversary is inside the RC, a privilege attack or malicious insider attack may occur, with a risk of message leakage. If such an attack happens in a telemedicine system, patient privacy may be compromised. Moreover, a 5G system structure that includes an RC is no different from one in conventional networks. In the proposed scheme, we introduce a hierarchical system structure that suits 5G networks without an RC or any trusted third party.
4.2.8. Non-Repudiation and Security of Signature

Constrained node CNij executes the signature function based on Chebyshev chaotic maps with the anonymous private key aSij to generate the signature σij, and gateway node GWt can verify ηij; as a result, non-repudiation is achieved. We apply Meshram et al.'s ID-based online short signature scheme [45] in the anonymous signature and verification phase, and the security of the signature has been proven using Bellare et al.'s method [69].
4.2.9. Resistance to Bergamo et al.'s Attack

Bergamo et al.'s attack [70] relies on two conditions: the attacker can obtain the related elements (x, aij, µij, ωt), or several Chebyshev polynomials pass through the same point due to the periodicity of the cosine function. In the authenticated key exchange phase of the proposed scheme, attackers cannot obtain any of the related elements (x, aij, µij, ωt) because they are encrypted in the transmitted messages and only the user and server can retrieve the decryption key. Moreover, the proposed protocol utilizes the extended Chebyshev polynomials, in which the periodicity of the cosine function is avoided by extending the interval of x to (−∞, +∞) [40]. As a result, our scheme can resist the attack proposed by Bergamo et al. [70].
**5. Performance Analysis**

We compare Yan et al.'s [30], Hu et al.'s [71], Ying and Nayak's [4], Ul Haq et al.'s [5], and the proposed schemes with respect to security requirements and computational complexity.
_5.1. Security Requirements Comparison_

As shown in Table 4, the proposed scheme provides all listed security requirements. Yan et al.'s [30], Hu et al.'s [71], and the proposed schemes utilize a hierarchical system structure. Yan et al.'s [30] and Ying and Nayak's [4] schemes achieve only one security requirement each. Hu et al.'s [71] scheme achieves mutual authentication and anonymity, and Ul Haq et al.'s [5] scheme achieves mutual authentication, session key confirmation, and anonymity. None of the previous schemes achieves traceability of anonymity, unforgeability, or non-repudiation; only the proposed scheme does.
_5.2. Computational Complexity Comparison_

We present a computational complexity comparison of our scheme with Yan et al.'s [30], Hu et al.'s [71], Ying and Nayak's [4], and Ul Haq et al.'s [5] schemes in Table 5. We ignore the time taken by XOR operations because it is too small to influence the result. Hu et al.'s [71], Ying and Nayak's [4], and Ul Haq et al.'s [5] schemes incur a higher computational cost than Yan et al.'s [30] and ours. Hu et al.'s scheme [71] is the most expensive, likely because it is the only scheme that performs exponentiation operations. Ying and Nayak's [4] and Ul Haq et al.'s [5] schemes cost more than Yan et al.'s [30] and ours because they perform not only more one-way hash function operations but also elliptic curve point multiplications. The results confirm that an elliptic curve point multiplication takes more time than a Chebyshev chaotic-map operation and that, compared to RSA and ECC, Chebyshev polynomials offer smaller key sizes and faster computation [42,43,72–74]. However, Yan et al.'s scheme [30] performs only two elliptic curve point multiplications in total, while our scheme performs six Chebyshev chaotic-map operations; for this reason, Yan et al.'s scheme [30] takes slightly less time than ours. Although Yan et al.'s scheme [30] is more efficient than ours by a narrow margin, it cannot provide key confirmation because it lacks session key agreement, and neither can Ying and Nayak's scheme [4]. Moreover, Yan et al.'s scheme [30] cannot provide mutual authentication, anonymity, traceability of anonymity, unforgeability, or non-repudiation. Figure 8 illustrates the computational complexity of the receiver/gateway node with a varying number of devices.
**Table 4. Security requirements comparison.**

| Security Requirements | Yan et al. [30] | Hu et al. [71] | Ying and Nayak [4] | Ul Haq et al. [5] | Ours |
|---|---|---|---|---|---|
| Mutual authentication | X | O | X | O | O |
| Session key confirmation | X | X | X | O | O |
| Anonymity | X | O | O | O | O |
| Traceability of anonymity | X | X | X | X | O |
| Unforgeability | X | X | X | X | O |
| Non-repudiation | X | X | X | X | O |
| Without RC | O | O | X | X | O |
**Table 5. Computational complexity comparison.**

| Role | Yan et al. [30] | Hu et al. [71] | Ying and Nayak [4] | Ul Haq et al. [5] | Ours |
|---|---|---|---|---|---|
| Sender/constrained node | 2Th + Tecc = 128.18Th | 2Te + Tecc = 2214.16Th | 8Th + 5Tecc = 638.8Th | 6Th + 5Tecc = 636.8Th | 3Tch + 3Th = 129.12Th |
| Receiver/gateway node | 2Th + Tecc = 128.18Th | 2Tecc = 252.32Th | 4Th + 5Tecc = 634.8Th | 4Th + 5Tecc = 634.8Th | 3Tch + 6Th = 132.12Th |
| Both ends | 4Th + 2Tecc = 256.36Th | 2Te + 3Tecc = 2466.48Th | 12Th + 10Tecc = 1273.6Th | 10Th + 10Tecc = 1271.6Th | 6Tch + 9Th = 261.24Th |

Tch: time for performing a Chebyshev chaotic-map operation; Tecc: time for performing an elliptic curve point multiplication; Tsym: time for performing a symmetric encryption operation; Te: time for performing an exponentiation operation; Th: time for performing a one-way hash function operation. Tch = 42.04Th; Tecc = 126.16Th; Tsym = 17.4Th; Te = 1044Th; Th = 0.006 ms.
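The Th-normalized totals in Table 5 follow mechanically from the unit costs in the footnote, as the small script below reproduces (operation counts copied from the table). One caveat: with the footnote's Tecc = 126.16, Yan et al.'s entries evaluate to 128.16Th rather than the printed 128.18Th, a small rounding inconsistency in the published table; all other entries match exactly.

```python
# Reproduce Table 5's totals from the footnote's unit costs
# (Tch = 42.04 Th, Tecc = 126.16 Th, Te = 1044 Th, Th = 1).
Th, Tch, Tecc, Te = 1.0, 42.04, 126.16, 1044.0

sender = {
    "Yan et al. [30]":    2 * Th + Tecc,       # evaluates to 128.16Th
    "Hu et al. [71]":     2 * Te + Tecc,       # 2214.16Th
    "Ying and Nayak [4]": 8 * Th + 5 * Tecc,   # 638.8Th
    "Ul Haq et al. [5]":  6 * Th + 5 * Tecc,   # 636.8Th
    "Ours":               3 * Tch + 3 * Th,    # 129.12Th
}
receiver = {
    "Yan et al. [30]":    2 * Th + Tecc,
    "Hu et al. [71]":     2 * Tecc,            # 252.32Th
    "Ying and Nayak [4]": 4 * Th + 5 * Tecc,   # 634.8Th
    "Ul Haq et al. [5]":  4 * Th + 5 * Tecc,   # 634.8Th
    "Ours":               3 * Tch + 6 * Th,    # 132.12Th
}
both_ends = {k: sender[k] + receiver[k] for k in sender}

# Spot checks against the published totals:
assert round(sender["Ours"], 2) == 129.12
assert round(receiver["Ours"], 2) == 132.12
assert round(both_ends["Hu et al. [71]"], 2) == 2466.48

# Convert to milliseconds with Th = 0.006 ms, as in the footnote.
ms = {k: round(v * 0.006, 4) for k, v in both_ends.items()}
```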
**Figure 8. Computational complexity of receiver/gateway node with varying number of devices.**
**6. Conclusions**

5G networks improve energy consumption, quality of experience, and the number of devices that can communicate, and 5G will change connected services and devices through higher reliability, connectivity, and cloud storage. IoT over 5G infrastructure changes application scenarios in many fields, especially real-time communication among machines, data, and people. IoT in a 5G environment provides network-layer solutions, including enhanced quality of service, routing and jamming control, and resource optimization, to address the challenges of smart medical healthcare. Medical privacy is important in smart medical healthcare because data leakage brings potential harm to patients and hospitals. We propose FAIDM, a federated identity management scheme for medical privacy protection in 5G telemedicine systems, which provides a secure way to protect medical privacy. To achieve privacy preservation, we give constrained nodes anonymous identities to reduce the exposure of personal private data. Our scheme provides the following features: (i) federated identity management that can efficiently manage device identities in a hierarchical structure; (ii) identity authentication achieved by mutual authentication between devices and SBSs/MBSs; (iii) session keys that secure transmitted data related to patient privacy; (iv) anonymous identities for devices, reducing the possibility of leaking transmitted medical data and the real information of a device and its owner; (v) traceability of anonymous identities, so that the servers of a medical institute can check a specific device if it transmits abnormal data; and (vi) anonymous signatures for device non-repudiation, whose records can be used for periodic audits by the medical institute.
**Author Contributions: Conceptualization, C.-L.H. and T.-W.L.; methodology, C.-L.H. and T.-W.L.;**
security analysis, T.-W.L.; writing—original draft preparation, T.-W.L.; writing—review and editing, T.-W.L.; supervision, C.-L.H. All authors have read and agreed to the published version of
the manuscript.
**Funding: This research was funded by Ministry of Science and Technology, Taiwan, grant num-**
ber MOST 108-2221-E-182-011, Healthy Aging Research Center, Chang Gung University, Taiwan,
grant number EMRPD1K0461 and EMRPD1K0481, and Chang Gung University, Taiwan, grant number PARPD3K0011.
**Institutional Review Board Statement: Not applicable.**
**Informed Consent Statement: Not applicable.**
**Data Availability Statement: This study did not report any data.**
**Conflicts of Interest: The authors declare no conflict of interest.**
**References**
1. Ahad, A.; Tahir, M.; Yau, K.L.A. 5G-based smart healthcare network: Architecture, taxonomy, challenges and future research
[directions. IEEE Access 2019, 7, 100747–100762. [CrossRef]](http://doi.org/10.1109/ACCESS.2019.2930628)
2. Chettri, L.; Bera, R. A Comprehensive Survey on Internet of Things (IoT) Toward 5G Wireless Systems. IEEE Internet Things J.
**[2020, 7, 16–32. [CrossRef]](http://doi.org/10.1109/JIOT.2019.2948888)**
3. Kakkar, A. A survey on secure communication techniques for 5G wireless heterogeneous networks. Inf. Fusion 2020, 62, 89–109.
[[CrossRef]](http://doi.org/10.1016/j.inffus.2020.04.009)
4. Ying, B.; Nayak, A. Lightweight remote user authentication protocol for multi-server 5G networks using self-certified public key
[cryptography. J. Netw. Comput. Appl. 2019, 131, 66–74. [CrossRef]](http://doi.org/10.1016/j.jnca.2019.01.017)
5. Haq, I.U.; Wang, J.; Zhu, Y. Secure two-factor lightweight authentication protocol using self-certified public key cryptography for
[multi-server 5G networks. J. Netw. Comput. Appl. 2020, 161, 102660. [CrossRef]](http://doi.org/10.1016/j.jnca.2020.102660)
6. Anwar, S.; Prasad, R. Framework for Future Telemedicine Planning and Infrastructure using 5G Technology. Wirel. Pers. Commun.
**[2018, 100, 193–208. [CrossRef]](http://doi.org/10.1007/s11277-018-5622-8)**
7. Mistry, I.; Tanwar, S.; Tyagi, S.; Kumar, N. Blockchain for 5G-enabled IoT for industrial automation: A systematic review, solutions,
[and challenges. Mech. Syst. Signal Process. 2020, 135. [CrossRef]](http://doi.org/10.1016/j.ymssp.2019.106382)
8. [Rao, K. The Path to 5G for Health Care. IEEE Future Netw. 2018. Available online: https://futurenetworks.ieee.org/images/files/](https://futurenetworks.ieee.org/images/files/pdf/applications/5G--Health-Care030518.pdf)
[pdf/applications/5G--Health-Care030518.pdf (accessed on 20 July 2020).](https://futurenetworks.ieee.org/images/files/pdf/applications/5G--Health-Care030518.pdf)
9. _World Health Statistics 2017: Monitoring Health for the SDGs (Sustainable Development Goals); World Health Organization: Geneva,_
Switzerland, 2017.
10. Lloret, J.; Parra, L.; Taha, M.; Tomás, J. An architecture and protocol for smart continuous eHealth monitoring using 5G. Comput.
_[Netw. 2017, 129, 340–351. [CrossRef]](http://doi.org/10.1016/j.comnet.2017.05.018)_
11. Chen, M.; Yang, J.; Zhou, J.; Hao, Y.; Zhang, J.; Youn, C.H. 5G-Smart Diabetes: Toward Personalized Diabetes Diagnosis with
[Healthcare Big Data Clouds. IEEE Commun. Mag. 2018, 56, 16–23. [CrossRef]](http://doi.org/10.1109/MCOM.2018.1700788)
12. Fan, K.; Jiang, W.; Li, H.; Yang, Y. Lightweight RFID Protocol for Medical Privacy Protection in IoT. IEEE Trans. Ind. Inform. 2018,
_[14, 1656–1665. [CrossRef]](http://doi.org/10.1109/TII.2018.2794996)_
13. Murugan, A.; Chechare, T.; Muruganantham, B.; Ganesh Kumar, S. Healthcare information exchange using blockchain technology.
_[Int. J. Electr. Comput. Eng. 2020, 10, 421–426. [CrossRef]](http://doi.org/10.11591/ijece.v10i1.pp421-426)_
14. Lin, W.-Y.; Zhang, X.; Song, H.; Omori, K. Health information seeking in the Web 2.0 age: Trust in social media, uncertainty
[reduction, and self-disclosure. Comput. Hum. Behav. 2016, 56, 289–294. [CrossRef]](http://doi.org/10.1016/j.chb.2015.11.055)
15. Park, Y.J.; Chung, J.E.; Shin, D.H. The Structuration of Digital Ecosystem, Privacy, and Big Data Intelligence. Am. Behav. Sci. 2018,
_[62, 1319–1337. [CrossRef]](http://doi.org/10.1177/0002764218787863)_
16. Lupton, D. The thing-power of the human-app health assemblage: Thinking with vital materialism. Soc. Theory Health 2019, 17,
[125–139. [CrossRef]](http://doi.org/10.1057/s41285-019-00096-y)
17. [Libert, T. Privacy implications of health information seeking on the web. Commun. ACM 2015, 58, 68–77. [CrossRef]](http://doi.org/10.1145/2658983)
18. Gandy, O.H., Jr.; Nemorin, S. Toward a political economy of nudge: Smart city variations. Inf. Commun. Soc. 2019, 22, 2112–2126.
[[CrossRef]](http://doi.org/10.1080/1369118X.2018.1477969)
19. Park, Y.J.; Shin, D.D. Contextualizing privacy on health-related use of information technology. Comput. Hum. Behav. 2020, 105,
[106204. [CrossRef]](http://doi.org/10.1016/j.chb.2019.106204)
20. [Marciniak, R. Role of new IT solutions in the future of shared service model. Pollack Period. 2013, 8, 187–194. [CrossRef]](http://doi.org/10.1556/Pollack.8.2013.2.20)
21. Garai, Á.; Péntek, I.; Adamkó, A. Revolutionizing healthcare with IoT and cognitive, cloud-based telemedicine. Acta Polytech.
_[Hung. 2019, 16, 163–181. [CrossRef]](http://doi.org/10.12700/APH.16.2.2019.2.10)_
22. Zriqat, E.; Altamimi, A.M. Security and Privacy Issues in Ehealthcare Systems: Towards Trusted Services. Int. J. Adv. Comput. Sci.
_Appl. 2016, 7, 229–236._
23. Health Insurance Portability and Accountability Act of 1996. Public Law 104-191. In United States Statutes at Large; Office of the
Federal Register: Washington, DC, USA, 1996; Volume 110, pp. 1936–2103.
24. _Proposal for a Regulation of the European Parliament and of the Council on the Protection of Individuals with Regard to the Processing of_
_Personal Data and on the Free Movement of Such Data (General Data Protection Regulation): 2012/0011 (COD); Council of the European_
Union: Brussels, Belgium, 2013.
25. Pramanik, P.K.D.; Pareek, G.; Nayyar, A. Security and privacy in remote healthcare: Issues, solutions, and standards.
In Telemedicine Technologies: Big Data, Deep Learning, Robotics, Mobile and Remote Applications for Global Healthcare; Elsevier:
[Amsterdam, The Netherlands, 2019; pp. 201–225. [CrossRef]](http://doi.org/10.1016/B978-0-12-816948-3.00014-3)
26. Devaraj, S.J. Emerging paradigms in transform-based medical image compression for telemedicine environment. In Telemedicine
_Technologies: Big Data, Deep Learning, Robotics, Mobile and Remote Applications for Global Healthcare; Elsevier: Amsterdam, The Nether-_
[lands, 2019; pp. 15–29. [CrossRef]](http://doi.org/10.1016/B978-0-12-816948-3.00002-7)
27. The European Union Agency for Cybersecurity. ICT Security Certification Opportunities in the Healthcare Sector; European Union
Agency For Network and Information Security: Attiki, Greece, 2019.
28. Shamir, A. Identity-Based Cryptosystems and Signature Schemes. In Lecture Notes in Computer Science (Including Subseries Lecture
_Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 1985; Volume 196,_
pp. 47–53.
29. Gentry, C.; Silverberg, A. Hierarchical id-based cryptography. In Lecture Notes in Computer Science (Including Subseries Lecture Notes
_in Artificial Intelligence and Lecture Notes in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2002; Volume 2501, pp. 548–566._
30. Yan, L.; Rong, C.; Zhao, G. Strengthen cloud computing security with federal identity management using hierarchical identity-based cryptography. In Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes
_in Bioinformatics); Springer: Berlin/Heidelberg, Germany, 2009; Volume 5931, pp. 167–177._
31. Park, Y.; Sur, C.; Rhee, K.-H. A Privacy-Preserving Location Assurance Protocol for Location-Aware Services in VANETs. Wirel.
_[Pers. Commun. 2011, 61, 779–791. [CrossRef]](http://doi.org/10.1007/s11277-011-0432-2)_
32. Shen, V.R.L.; Huang, W.C. A Time-Bound and Hierarchical Key Management Scheme for Secure Multicast Systems. Wirel. Pers.
_[Commun. 2015, 85, 1741–1764. [CrossRef]](http://doi.org/10.1007/s11277-015-2865-5)_
33. Fremantle, P.; Aziz, B. Cloud-based federated identity for the Internet of Things. Ann. Des Telecommun. Ann. Telecommun. 2018, 73,
[415–427. [CrossRef]](http://doi.org/10.1007/s12243-018-0641-8)
34. Santos, M.L.B.A.; Carneiro, J.C.; Franco, A.M.R.; Teixeira, F.A.; Henriques, M.A.A.; Oliveira, L.B. FLAT: Federated lightweight
[authentication for the Internet of Things. Ad Hoc Netw. 2020, 107. [CrossRef]](http://doi.org/10.1016/j.adhoc.2020.102253)
35. Mishkovski, I.; Kocarev, L. Chaos-Based Public-Key Cryptography. In Chaos-Based Cryptography: Theory, Algorithms and Applications;
[Kocarev, L., Lian, S., Eds.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 27–65. [CrossRef]](http://doi.org/10.1007/978-3-642-20542-2_2)
36. Yoon, E.J.; Jeon, I.S. An Efficient and Secure Diffie–Hellman Key Agreement Protocol Based on Chebyshev Chaotic Map. Commun.
_[Nonlinear Sci. Numer. Simul. 2011, 16, 2383–2389. [CrossRef]](http://doi.org/10.1016/j.cnsns.2010.09.021)_
37. Yoon, E.J.; Yoo, K.Y. Cryptanalysis of Group Key Agreement Protocol Based on Chaotic Hash Function. Ieice Trans. Inf. Syst. 2011,
_[94, 2167–2170. [CrossRef]](http://doi.org/10.1587/transinf.E94.D.2167)_
38. [Broumandnia, A. Image encryption algorithm based on the finite fields in chaotic maps. J. Inf. Secur. Appl. 2020, 54. [CrossRef]](http://doi.org/10.1016/j.jisa.2020.102553)
39. Musanna, F.; Kumar, S. Image encryption using quantum 3-D Baker map and generalized gray code coupled with fractional
[Chen’s chaotic system. Quantum Inf. Process. 2020, 19. [CrossRef]](http://doi.org/10.1007/s11128-020-02724-3)
40. Zhang, L. Cryptanalysis of the Public Key Encryption Based on Multiple Chaotic Systems. ChaosSolitons Fractals 2008, 37, 669–674.
[[CrossRef]](http://doi.org/10.1016/j.chaos.2006.09.047)
41. [Chain, K.; Kuo, W.-C. A new digital signature scheme based on chaotic maps. Nonlinear Dyn. 2013, 74, 1003–1012. [CrossRef]](http://doi.org/10.1007/s11071-013-1018-1)
42. Tahat, N.; Hijazi, M.S. A new digital signature scheme based on chaotic maps and quadratic residue problems. Appl. Math. Inf.
_[Sci. 2019, 13, 115–120. [CrossRef]](http://doi.org/10.18576/amis/130115)_
43. Tahat, N.; Alomari, A.K.; Al–Freedi, A.; Al-Hazaimeh, O.M.; Al–Jamal, M.F. An Efficient Identity-Based Cryptographic Model for
[Chebyhev Chaotic Map and Integer Factoring Based Cryptosystem. J. Appl. Secur. Res. 2019, 14, 257–269. [CrossRef]](http://doi.org/10.1080/19361610.2019.1621513)
44. Tahat, N.; Tahat, A.A.; Albadarneh, R.B.; Edwan, T.A. Design of identity-based blind signature scheme upon chaotic maps. Int. J.
_[Online Biomed. Eng. 2020, 16, 104–118. [CrossRef]](http://doi.org/10.3991/ijoe.v16i05.13809)_
45. Meshram, C.; Li, C.T.; Meshram, S.G. An efficient online/offline ID-based short signature procedure using extended chaotic maps.
_[Soft Comput. 2019, 23, 747–753. [CrossRef]](http://doi.org/10.1007/s00500-018-3112-2)_
46. Meshram, C.; Lee, C.; Meshram, S.G.; Meshram, A. OOS-SSS: An Efficient Online/Offline Subtree-Based Short Signature Scheme
[Using Chebyshev Chaotic Maps for Wireless Sensor Network. IEEE Access 2020, 8, 80063–80073. [CrossRef]](http://doi.org/10.1109/ACCESS.2020.2991348)
47. 3rd Generation Partnership Project. Technical Specification Group Services and System Aspects. In Security Architecture and
_Procedures for 5G System; (Release 17); The 3rd Generation Partnership Project (3GPP): Sophia Antipolis CEDEX, France, 2020._
48. 3rd Generation Partnership Project. Technical Specification Group Services and System Aspects. In System Architecture for the 5G
_System (5GS); Stage 2 (Release 16); The 3rd Generation Partnership Project (3GPP): Sophia Antipolis CEDEX, France, 2020._
49. [National Health Service Business Services Authority. Available online: https://www.nhsbsa.nhs.uk/exemption-certificates/](https://www.nhsbsa.nhs.uk/exemption-certificates/medical-exemption-certificates)
[medical-exemption-certificates (accessed on 8 September 2020).](https://www.nhsbsa.nhs.uk/exemption-certificates/medical-exemption-certificates)
50. European Commission. European Interoperability Certificate Governance: A Security Architecture for Contact Tracing and Warning Apps,
1st ed.; European Commission: Brussels, Belgium, 2020.
51. [American Hospital Association Certification Center. Available online: https://www.aha.org/career-resources/certification-center](https://www.aha.org/career-resources/certification-center)
(accessed on 8 September 2020).
52. [Pharmaceuticals and Medical Devices Agency. Available online: https://www.pmda.go.jp/english/ (accessed on 8 September](https://www.pmda.go.jp/english/)
2020).
53. [Healthcare Certification Authority. Available online: https://hca.nat.gov.tw/Default.aspx (accessed on 8 September 2020).](https://hca.nat.gov.tw/Default.aspx)
54. [Burrows, M.; Abadi, M.; Needham, R. A logic of Authentication. ACM Trans. Comput. Syst. (TOCS) 1990, 8, 18–36. [CrossRef]](http://doi.org/10.1145/77648.77649)
55. Ali, R.; Pal, A.K.; Kumari, S.; Karuppiah, M.; Conti, M. A secure user authentication and key-agreement scheme using wireless
[sensor networks for agriculture monitoring. Future Gener. Comput. Syst. 2018, 84, 200–215. [CrossRef]](http://doi.org/10.1016/j.future.2017.06.018)
56. Barman, S.; Das, A.K.; Samanta, D.; Chattopadhyay, S.; Rodrigues, J.J.P.C.; Park, Y. Provably Secure Multi-Server Authentication
[Protocol Using Fuzzy Commitment. IEEE Access 2018, 6, 38578–38594. [CrossRef]](http://doi.org/10.1109/ACCESS.2018.2854798)
57. Challa, S.; Das, A.K.; Odelu, V.; Kumar, N.; Kumari, S.; Khan, M.K.; Vasilakos, A.V. An efficient ECC-based provably secure
three-factor user authentication and key agreement protocol for wireless healthcare sensor networks. Comput. Electr. Eng. 2018,
_[69, 534–554. [CrossRef]](http://doi.org/10.1016/j.compeleceng.2017.08.003)_
58. Chatterjee, S.; Roy, S.; Das, A.K.; Chattopadhyay, S.; Kumar, N.; Vasilakos, A.V. Secure Biometric-Based Authentication Scheme
[Using Chebyshev Chaotic Map for Multi-Server Environment. IEEE Trans. Dependable Secur. Comput. 2018, 15, 824–839. [CrossRef]](http://doi.org/10.1109/TDSC.2016.2616876)
59. Dodangeh, P.; Jahangir, A.H. A biometric security scheme for wireless body area networks. J. Inf. Secur. Appl. 2018, 41, 62–74.
[[CrossRef]](http://doi.org/10.1016/j.jisa.2018.06.001)
60. Li, C.T.; Lee, C.C.; Weng, C.Y. Security and efficiency enhancement of robust ID based mutual authentication and key agreement
[scheme preserving user anonymity in mobile networks. J. Inf. Sci. Eng. 2018, 34, 155–170. [CrossRef]](http://doi.org/10.6688/JISE.2018.34.1.10)
61. Liu, X.; Li, Y.; Qu, J.; Lu, L. ELAKA: Energy-Efficient and Lightweight Multi-Server Authentication and Key Agreement Protocol
[Based on Dynamic Biometrics. Wirel. Pers. Commun. 2018, 100, 767–785. [CrossRef]](http://doi.org/10.1007/s11277-018-5348-7)
Journal of Electronics, Computer Networking and Applied Mathematics
**ISSN: 2799-1156**
Vol: 02, No. 04, June – July 2022
[http://journal.hmjournals.com/index.php/JECNAM](http://journal.hmjournals.com/index.php/JECNAM)
**[DOI: https://doi.org/10.55529/jecnam.24.31.53](https://doi.org/10.55529/jecnam.24.31.53)**
# A Proposed Secured Health Monitoring System for the Elderly using Blockchain Technology in Nigeria
**Maimunatu Ya’u Ibrahim[1*], Kabiru Ibrahim Musa[2], Yakubu Abdullahi Yarima[3],**
**Aminu Ahmad[4 ]**
_1*,2,3,4Department of Management and Information Technology, Abubakar Tafawa Balewa_
_University Bauchi, Nigeria_
_[Email: [2]imkabir@atbu.edu.ng, [3]yakubuyerima318@gmail.com,](mailto:imkabir@atbu.edu.ng)_
_[4ahmad.aminu@commtech.gov.ng](mailto:ahmad.aminu@commtech.gov.ng)_
_[Corresponding Email: [1*]yimaimunatu.pg@atbu.edu.ng](mailto:yimaimunatu.pg@atbu.edu.ng)_
**Received: 27 February 2022 Accepted: 22 May 2022 Published: 25 June 2022**
**_Abstract: A large number of connected smart objects and sensors, as well as the_**
**_establishment of seamless data exchange between them, have been made possible by the_**
**_Internet of Things (IoT) technology's recent rapid development. As a result, there is a high_**
**_demand for platforms for data analysis and data storage, such as cloud computing and fog_**
**_computing. IoT makes it possible to customize apps for older persons as well as for rapidly_**
**_expanding markets that must modify their products to match the preferences of their_**
**_customers. This study suggests a framework for a decision-support system and a protected_**
**_health monitoring system using IoT data collected from senior residents' homes. The study_**
**_intends to provide security to the users’ data from the point of acquiring of the data to the_**
**_relaying of the data to cloud and to the alert generation using blockchain technology._**
**_Further research should focus on key management and security, as well as the capability to_**
**_easily replace lost or compromised keys._**
**_Keywords: Internet of Things (Iot); Blockchain; Cloud Computing, Health Monitoring,_**
**_Behavioural Reasoning Theory (BRT)_**
**1.** **INTRODUCTION**
The Internet of Things (IoT) is a new technology that has emerged as a result of the recent
proliferation of embedded systems and information and communication technology (ICT). IoT makes
it possible for physical items and people to communicate with data and virtual worlds. IoT is
positioned to play a significant role in all facets of health management due to the quick
development of IoT device deployment and growing need to make healthcare more affordable,
customized, and proactive [1]. The Internet of Things (IoT) is gaining traction as a disruptive
Copyright The Author(s) 2022. This is an Open Access Article distributed under the CC BY license.
paradigm to offer new capabilities and services in a variety of industries, including smart cities,
industry 4.0, smart energy, and connected cars [2]. One of the most well-liked technology
innovations in healthcare right now is the Internet of Things (IoT), which is used for Ambient-Assisted Living (AAL), remote health monitoring, chronic illness management, senior care,
and fitness programs [3].
The need for effective healthcare solutions, particularly for the elderly, is driven by aging
populations and the rise in chronic illnesses worldwide. Concentrating on remote health
monitoring systems powered by IoT technology is one technique that has attracted a lot of
academic attention. In particular for patients with chronic illnesses and the elderly, this notion
can help lessen the burden on hospital systems and healthcare personnel, lower healthcare
expenditures, and enhance homecare [4]. Older adults can now receive real-time healthcare by
sending information about their physical conditions to a medical facility via wireless sensor
networks thanks to the development of a variety of smart wearable systems, which provide
immediate feedback on vital signs like heart rate and blood pressure [5].
IoT usage and the advancement of wireless communication technologies enable the
real-time streaming of patient health status to caregivers [6]. Sensors, actuators, and smart
textiles can be used to construct a smart wearable system that is backed by technologies like
wireless sensor networks and electronic care surveillance devices for health assessment and
decision assistance. The major functions of modern smart wearable devices are fall prevention,
location tracking, body movement monitoring, and monitoring of vital indicators [5].
Additionally, a number of readily accessible sensors and portable devices may instantly
monitor some human physiological parameters including blood pressure (BP), respiration rate
(RR), and heart rate (HR) with a single touch. Although it is still in the early stages of
development, businesses and industries have swiftly incorporated the potential of IoT into their
current systems and seen gains in both user experiences and production [7].
All health and medical data are saved on the central server computer according to the
usual centralized storage pattern. Each hospital department's computers have the ability to
store, gather, and query health information. In this instance, a hacker attack on the main server
compromises health data. In reality, figures show that the healthcare industry saw almost
millions of patient records exposed in hundreds of breaches between 2010 and 2017 [8]. IoT
technology integration in healthcare does present certain difficulties, including those related to
data management, storage, and sharing between devices as well as security and privacy.
Blockchain and cloud computing technologies are two potential responses to these problems
[9].
Due to extra regulatory needs to secure patients' medical information, the healthcare sector has
specific security and privacy requirements. With cloud storage and the use of mobile health
devices, the exchange of records and data is becoming more common in the Internet age, but
so is the possibility of hostile assaults and the chance of private information being
compromised. The sharing and privacy of this information are issues when people visit several
providers and access to health information via smart devices increases. Authentication,
interoperability, data sharing, the transfer of medical information, and considerations for
mobile health are the specific criteria that the healthcare sector must meet [10]. As a result,
blockchain technology may be used by IoT and cloud providers to communicate data in a
decentralized fashion that is both safe and private.
As more people are eager to participate in healthcare decision-making, IoT technology is
becoming more commonplace in the industry. Additionally, patients are more eager to
personalize their treatment by acting more proactively. Smart gadgets and smart sensors that
capture and deliver crucial health data to their doctor to remotely monitor chronic illnesses can
help to personalize healthcare and treatment [11]. Smart watches, contact lenses, fitness bands,
chips embedded in the skin, and wireless sensors are a few examples of IOT healthcare
applications that help seniors make better health decisions. However, these wireless systems
don't always give security the same kind of consideration that other, more sensitive systems,
such as databases and databanks, do. IoT systems are susceptible to several threats and
vulnerabilities that might jeopardize security [12].
Symmetric key cryptography is the security approach used by a Cloud-Centric IoT-based
healthcare system by [13] to secure and protect the patient's medical data. The user password
is encrypted using a "private key" as the foundation of the security system. Despite the many
advantages of symmetric encryption, there is one significant drawback: the difficulty of
sending the keys needed to encrypt and decode data. These keys are susceptible to being
intercepted by nefarious third parties when they are transmitted over an insecure connection.
The security of any data encrypted with a specific symmetric key is jeopardized if an
unauthorized user obtains that key. As a result, the asymmetric cryptographic method can
aid in resolving the trust issue in health data management [14], as only secured and decision-supporting data can be trusted. The asymmetric cryptographic technique used by the
blockchain-based healthcare system effectively solves the authentication challenge. Two
cryptographic keys, a public key and a private key, are stored on each node. Anyone who
wishes to communicate encrypted material to the owner of the private key can receive the
public key.
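The key-distribution weakness of symmetric schemes described above can be seen in a toy sketch. This is an illustration only — a deliberately insecure XOR construction, not a real cipher — and the key and record contents are hypothetical placeholders:

```python
import hashlib
import os

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Toy symmetric 'cipher' for illustration only -- NOT secure.
    Encryption and decryption are the same operation with the same key."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        # Expand the key into a keystream, one SHA-256 block at a time.
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

shared_key = os.urandom(32)  # must somehow be delivered to BOTH parties
record = b"patient-042: BP=120/80, HR=72"
ciphertext = xor_cipher(shared_key, record)

# The receiver needs the identical key to recover the record ...
assert xor_cipher(shared_key, ciphertext) == record
# ... so anyone who intercepts shared_key in transit can read everything,
# which is exactly the exposure an asymmetric (public/private key) scheme
# avoids: only the public key ever travels.
```

In the asymmetric setting each node keeps its private key local and publishes only the public key, so interception of the travelling key does not expose the data.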
Data privacy and security issues for important stakeholders have unavoidably increased as the
healthcare system becomes more complicated. Even though the vast majority of medical
records have been converted to digital format, they are frequently scattered throughout many
medical facilities around the world in storage towers. This has repercussions for the healthcare
sector, which depends on organizations providing accurate and comprehensive information. It
is claimed that blockchain technology can solve the problems with information exchange in the
healthcare industry.
Blockchain technology has become increasingly popular in recent years thanks to its many
attractive features, including chronological and time-stamped data records, auditable and
cryptographically sealed information blocks, consensus-based transactions, and policy-based
access to help with data protection, fault-tolerant distributed ledger, and many more.
Blockchains link parties directly without the need for middlemen; they are economical and
distributed ledgers that improve information accessibility [15]. These characteristics make
blockchain technology a profitable choice for the healthcare sector. Due to its intrinsic
properties, blockchain can offer a suitable solution to privacy and security issues, making it a
viable paradigm for fields where privacy and security are highly valued [16].
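The "chronological, time-stamped, cryptographically sealed" structure mentioned above can be sketched minimally in Python. The field names and record digest are hypothetical; real blockchains add consensus, digital signatures, and Merkle trees of transactions:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash every field except the hash itself, in a stable order."""
    fields = {k: block[k] for k in ("index", "timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(fields, sort_keys=True).encode()).hexdigest()

def make_block(index: int, data: str, prev_hash: str) -> dict:
    """A time-stamped block linked to its predecessor by hash."""
    block = {"index": index, "timestamp": time.time(),
             "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

genesis = make_block(0, "genesis", "0" * 64)
b1 = make_block(1, "sha256-digest-of-a-health-record", genesis["hash"])

assert b1["prev_hash"] == genesis["hash"]      # the chain link
genesis["data"] = "forged record"              # attempt to tamper
assert block_hash(genesis) != b1["prev_hash"]  # tampering breaks the link
```

Because each block stores the hash of its predecessor, altering any earlier record invalidates every later link, which is what makes the ledger auditable.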
The aim of this study is to propose a cloud-centric IoT based framework for health monitoring
with blockchain capability.
**A. Overview of internet of things**
The name "Internet of Things" is a combination of two concepts: the first is "Internet," which
is described as networks of networks that may link billions of people using common internet
protocols [17]. IoT is also known as an open, vast network of intelligent devices that may self-organize, exchange information, data, and resources, and respond and act in response to
circumstances and environmental changes [18].
Fig. 1. Basics of Internet of Things
Source: Adopted from [19]
**_B. History of IoT_**
The Internet of Things (IoT) sector ushers in a new era of technology and communication where
devices can converse, compute, and change data as needed. This communication situation has
already been tried, but it wasn't well received. In 1999, Kevin Ashton, the Executive Director
of MIT's Auto-ID Labs, created the phrase "Internet of Things." Through the Auto-ID Centre
in 2003, as well as in linked market analyses and its publications, the idea of IoT initially gained
significant popularity [20]. When the idea of this type of communication first emerged, various
businesses concentrated on it, tried to understand its significance, and started to pinpoint its
function and related future aspects. Following this, these businesses began investing in the IoT
sector at irregular but consistent intervals of time [21].
**C. IoT architecture**
IoT system technical standards and reference designs are still in need of completion and
standardization [22]. Typical IoT communication topologies allow IoT devices to interact with
one another independently in addition to connecting to the Internet, which serves as the
network's infrastructure [23]. IoT architecture does not currently have a recommended
standard, but [17] presents five architects or models created by academics, writers, and
practitioners. An IoT system typically consists of three layers, though there may be slight
variations in the architectural models: a physical perception layer that perceives the physical
environment and human social life; a network layer that transforms and processes the data from
the perceived environment; and an application layer that provides pervasive, context-aware
intelligent services [24].
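The three-layer split described above can be sketched as a simple pipeline; the function names, device identifiers, and threshold below are hypothetical placeholders, not part of any standard:

```python
def perception_layer() -> dict:
    """Physical perception layer: read raw values from (simulated) sensors."""
    return {"heart_rate": 72, "systolic_bp": 120}

def network_layer(raw: dict) -> dict:
    """Network layer: package and transform readings for transport."""
    return {"device_id": "wearable-01", "payload": raw}

def application_layer(message: dict) -> str:
    """Application layer: turn transported data into a context-aware service
    decision (here, a naive heart-rate alert)."""
    return "alert" if message["payload"]["heart_rate"] > 100 else "normal"

# Data flows upward through the three layers:
status = application_layer(network_layer(perception_layer()))
print(status)  # -> normal
```

Each function stands in for a whole layer of real hardware and protocols, but the pipeline shape mirrors how sensed data rises from perception through the network to an application-level decision.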
The mixing of various hardware and software components in an IoT multilayer stack
made up of three fundamental layers—the item or device layer, the connection layer, and the
IoT cloud layer—is also described in [25]. [26] provide an explanation of each layer, indicating
that the IoT is architecturally divided into three layers: the device layer, which is the foundation
of the IoT and uses technologies like RFID, NFC, wireless sensor networks, and embedded
intelligence; the connection layer, which is made up of gateways and the core network; and the
application layer, which is made up of objects with sensors. The study of [27] notes that sensors
collect, analyse, and measure data. IoT is made possible by the tethering of equipment and
sensors. Furthermore, the key to exploiting the collected data is cloud-based apps.
Without cloud-based apps to analyse and send the data flowing from numerous sensors,
the IoT cannot work. A simplified IoT architecture model was created using the
findings of this research (see Figure 2). IoT development teams need to know the architecture
in order to build and maintain the IoT, but academics and practitioners are more likely to be
interested in IoT applications.
Fig. 2. IoT architectural model
Source: Adopted from [28]
**_D. IoT applications_**
IoT applications may be used in a wide range of contexts, from "big" company to personal.
This idea is supported by [29], who claim that the IoT makes it possible to create a wide range
of industry- and user-specific IoT applications. IoT apps enable device-to-device and human-to-device interactions in a reliable and resilient way, whilst devices and networks offer physical
connectivity. The study of [30] divided IoT applications into four major categories: personal
and social domain, healthcare, smart environments (home, workplace, and plant), and
transportation and logistics. Manufacturing, retail trade, information services, banking, and
insurance were ranked by [29] as the top four industries in terms of IoT value. The breadth of
the relevance of IoT applications inside sections of their enterprises was shown by the findings
of a survey involving 500 senior executives from across the world who were in charge of IoT
activities [31].
A study conducted by [32] evaluated three things in a research on IoT applications: what people
search for on Google, what people tweet about, and what people post on LinkedIn. The top 10
application category rankings followed. According to this report, the top three categories are
"smart home," "wearables," and "smart city." The study of [33] classified the top 10 IoT
application areas after verifying 640 genuine enterprise-IoT projects. On the list of actual IoT
project areas, connected industry, smart cities, and smart energy were at the top. Despite being
a young technology, the IoT has a wide range of applications and has a substantial influence
on a sizable portion of society. This is supported by [25], which shows that the IoT
technologies' domains of use are as varied as they are numerous, touching almost every aspect
of daily life.
**_E. IoT in healthcare_**
IoT healthcare solutions are appealing in this new environment because they allow clinical
healthcare to be personalized, resulting not just in considerable cost savings but also in better outcomes due to increased responsiveness, customisation, and efficient use of aggregated
data. IoT healthcare can speed up the diagnosis of medical conditions [34], deliver effective,
high-quality treatment, and lower the cost of hospitalization [35] and likelihood of readmission
for the same medical problem [36: 37]. Connected IoT healthcare enables people to track their
own development and provide clinicians ongoing input, increasing patient engagement and
happiness. The introduction of IoT in healthcare also creates new opportunities for enhancing
doctors' current standard diagnostic methods by allowing for rich longitudinal data collection
from many sources. In particular, data analytics can automatically identify physiological
abnormalities for additional inquiry, and visualization tools can highlight important patterns
without taxing doctors' cognitive resources or obstructing their interactions with patients in the
clinic [38].
In order to satisfy the demands of healthcare follow-up connected to the enormous growth in the senior population and provide e-health services, health monitoring systems are an excellent option. Medical treatments that were previously exclusively offered in hospitals can
now be provided at home thanks to this approach. The gathering and analysis of patient-related
data through wearable sensors is a key component of these services. This unprocessed data is
insufficient for e-health services and may be misinterpreted by health monitoring systems. By
analysing the observed patient's Activities of Daily Living, we may evaluate the context,
increase our knowledge of the subject, and interpret the patient more accurately [19]. Health
professionals can now deliver quicker, more effective, and better healthcare services, which
improves the patient experience. This is made possible by the integration of IoT and cloud
computing into the healthcare industry. Better healthcare services, a better patient experience,
and less paperwork for healthcare personnel are all benefits of this approach.
Health monitoring may be used to create Internet of Things (IoT) healthcare apps that
offer clinical decision assistance to patients in an efficient manner. Clinical participation will
be reduced via change prediction and decision support. Patients can receive feedback such as
suggestions for medication, a healthy diet, and exercise without the involvement of a doctor
[4].
**F. IoT and cloud computing for healthcare**
Before the Internet of Things and cloud computing eras, the only ways that doctors and patients could communicate were in-person visits, phone calls, and texts. Doctors were unable to
remotely check on their patients' health in order to administer prompt therapy. But recently,
IoT and cloud computing-based healthcare systems have opened up the possibility of real-time
applications in the healthcare industry, unleashed the full potential of IoT and cloud computing in healthcare, and assisted doctors in providing top-notch medical treatment [9]. Because
patient and doctor communications have become more accessible and effective, IoT and cloud
computing have enhanced patient participation and satisfaction. Additionally, remote
monitoring shortens hospital stays and prevents readmissions. These new technologies
consequently have a big influence on lowering healthcare costs and improving patient outcomes.
By fostering the development of a new range of IoT-connected medical equipment and
enhancing patient engagement in healthcare systems, IoT and cloud computing technologies
are enhancing the healthcare sector. Applications for IoT and cloud computing in healthcare
are being created at an increasing rate to benefit patients, families, doctors, hospitals, and
insurance providers [9].
The cloud computing paradigm has emerged as one of the most popular subjects in
information technology in recent years. By offering consumers on-demand computing
resources (such as storage, services, networks, servers, apps, and hardware), it provides
advantages in terms of scalability, mobility, and security. Research by [39] indicates that cloud
computing has lately become the foundation of IoT healthcare systems. The capacity to share
information amongst medical staff, carers, and patients in a more structured and organized
manner, hence reducing the likelihood of lost medical data, is another significant benefit of
cloud computing [40]. As a result, the development of technologies like IoT and Cloud
Computing has helped healthcare services and applications [41]. Because of this, healthcare
organizations are depending on the adoption of IoT and Cloud computing to improve the way
elderly patients and staff/health care professionals receive healthcare services [42]. These
technologies have the potential to support the medical services offered for the optimal health management of the elderly in a comfortable setting that improves their quality of life.
**I. Blockchain Technology**
A blockchain is an immutable, traceable distributed ledger, also known as a database, of
transactions [43]. There are three main blockchain designs: public, which is not controlled by
any one entity and is open source, allowing any actor to participate without restriction, private,
which is permission-based and accessible only by authorized actors, and finally hybrid, which
offers flexibility and the option to designate specific data subsets to be available publicly or
privately [44]. Bitcoin, which Satoshi Nakamoto introduced in 2008 [45], was supported by
the first blockchain. Although a thorough technical description is outside the scope of this
paper, it is important to note that the system was designed as a shared, distributed ledger that
uses encryption and is independent of a central authority that validates transactions [45]. By
establishing a system where two strangers may transfer value to one other without prior
established trust in a secure, irrefutable fashion, this idea effectively eliminates middlemen.
According to the research, this technique might be used to safeguard healthcare data more
effectively while facilitating system interoperability [46]. According to the literature,
blockchain is a suitable option when there are several stakeholders, there is a lack of
confidence, and accurate and accountable tracking is needed [15].
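The hash-linked ledger structure described above can be sketched in a few lines of Python (a toy illustration, not a production blockchain; the block fields and the fixed timestamp are assumptions made for determinism):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents (excluding its own hash)."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str, index: int) -> dict:
    # Timestamp fixed at 0 so the example stays reproducible.
    block = {"index": index, "timestamp": 0, "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def build_chain(records):
    """Start from a genesis block and link each record to its predecessor."""
    chain = [make_block("genesis", "0" * 64, 0)]
    for i, rec in enumerate(records, start=1):
        chain.append(make_block(rec, chain[-1]["hash"], i))
    return chain

def chain_is_valid(chain) -> bool:
    """Recompute every hash and check each block links to its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Because each block embeds the hash of its predecessor, changing any stored record invalidates every subsequent link, which is the tamper-evidence property the cited literature relies on.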
Enhancing data sharing, EMR administration, and access control are at the forefront of
this technology's application in healthcare, according to recent systematic literature reviews by
[15] and [47]. With Guardtime being used to administer the EMRs for more than 1 million
residents, Estonia, for instance, was the first nation to utilize blockchain technology on a
national scale [15; 48; 49]. Each individual has the option to grant or deny permission for their
healthcare records to be accessed and utilized by third parties thanks to blockchain technology
[50].
The CDC in America, inviting suggestions on how blockchain technology may be used in US healthcare, has also started to investigate how distributed ledgers can assist public health (PH) practitioners in responding more promptly to crises [51; 52]. However, little study has been done to date on possible obstacles to blockchain adoption in the healthcare industry. According to some
study, the difficulties with implementation are related to scalability, cyberattacks, and
excessive energy use [53; 54]. Additionally, it has been acknowledged that while the initial
costs of using blockchain technology are high, they may be reduced over time [55].
**_J. The need for blockchain in healthcare_**
Blockchain is characterized as a growing body of information, commonly referred to as blocks
that are connected through cryptography in a way that prevents alterations [56] and encourages
security and transparency. By removing expenses and privacy problems, improving coverage and quality, and enabling user provision of healthcare, the technology can help to enhance health service delivery and quality-of-care support [57]. Every area of information and
communications technology (ICT) has been impacted by blockchain technology, and usage has
increased significantly in recent years. Healthcare is one of the industries where blockchain is
thought to have a lot of promise [58].
The management of data that might profit from the capacity to link disparate systems
and improve the accuracy of EHRs should be the focus of efforts to reform healthcare. Access
control, data sharing, and monitoring an audit trail of medical activities are all possible with
the use of blockchain technology. It can also be used to support medicine prescriptions and
supply chain management, pregnancy and any risk data management, as well as access control.
Provider credentials, medical billing, contracts, medical record interchange, clinical trials, and
anti-counterfeiting medications are all sectors of healthcare that stand to gain from blockchain
technology. One advantage of adopting blockchain, which is based on peer-to-peer networks,
is that it updates in real-time, eliminating the need for middlemen and their expenses [59].
Blockchain provides a transparent environment where both patients and healthcare providers
may access records easily and without further expenses because it is resistant to alterations [59;
60]. By lowering the likelihood of lost records and mistakes, it also improves system security
[61].
The provision of healthcare services is evolving to support a patient-centric philosophy.
Since people will have authority over their medical information, blockchain-based healthcare
solutions might improve the security and dependability of patient data. These technologies
could also aid in the consolidation of patient data, facilitating the sharing of medical records
between various healthcare facilities. In the healthcare industry, it is crucial to store patients'
medical information [62]. Due to their extreme sensitivity, these data make for a lucrative target
for online assaults. Consequently, it is crucial to protect any critical data. Control of data is
another issue, which should ideally be handled by the patient. Therefore, obtaining and
exchanging patient health data is another use case that can profit from cutting-edge
contemporary technology. Blockchain technology offers a variety of access control strategies
and is very resistant to assaults and failures. Blockchain therefore offers a solid platform for
healthcare data [62].
The healthcare sector is described as a conventional one that is resistant to novel techniques and extremely inflexible in the face of change. Healthcare-related issues, such as privacy, treatment quality, and information security, have gained
attention in recent years on a global scale. Blockchain technologies are being recognized more
and more as a solution to solve current problems with information dissemination. It could
enhance the provision of immediate healthcare services and support for excellent treatment, for
instance [10]. Civilian health records have an inherent difficulty with data sharing and access
in addition to a security issue. Sharing medical records can be challenging at times since an
individual's complete data can be kept in many different places. In the same way that healthcare
practitioners do not have access to the most recent patient data if the records are kept elsewhere,
patients do not have a unified picture of these dispersed records [63].
Blockchain technology can protect clinical trial results, health information, and
regulatory compliance. Blockchain technology is utilized to show how smart contracts might
facilitate real-time patient monitoring and medical treatments [64]. Such systems guarantee
record security while enabling Health Insurance Portability and Accountability Act (HIPAA)-compliant access for patients and medical providers. By encoding data in a block, the
blockchain's security is accomplished utilizing cryptographic keys, a distributed network, and
a network servicing protocol. Once information (such as a transaction request) has been
verified, meta-data is stored in a block and cannot be deleted without the networks and the
record's creators' knowledge and consent. Once a block is included in a chain of other blocks, it cannot be altered.
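The grant-and-revoke consent logic that such a smart contract could encode can be sketched as follows (the class, event-replay rule, and names are illustrative assumptions, not the cited systems' actual design):

```python
class ConsentContract:
    """Toy stand-in for an on-chain consent contract: grants and revocations
    are appended to an immutable event log, and access is decided by
    replaying that log (the most recent matching event wins)."""

    def __init__(self):
        self.events = []  # append-only, mirroring a ledger's history

    def grant(self, patient: str, provider: str) -> None:
        self.events.append(("grant", patient, provider))

    def revoke(self, patient: str, provider: str) -> None:
        self.events.append(("revoke", patient, provider))

    def can_access(self, patient: str, provider: str) -> bool:
        allowed = False
        for action, p, q in self.events:
            if (p, q) == (patient, provider):
                allowed = (action == "grant")
        return allowed
```

Keeping the full event history rather than overwriting state is what lets any party audit who was authorized at any point, which matches the auditability argument made above.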
**H. Empirical Review**
Many researchers concentrate their efforts in this field of study. A people-centric sensing
paradigm for the aged and handicapped was presented by [65]. The methodology's goal is to
offer a service-oriented emergency response in the event that the patient's state is aberrant. To
reduce vulnerabilities in a healthcare setting based on the Internet of Things, [40] presented an
intelligent collaborative security architecture in 2015. They also looked at developments in
IoT healthcare technology. Additionally, a focus is placed on examining the most advanced
network architecture/platform, applications, and commercial advances for IoT-based
healthcare solutions. A Smart Hospital System (SHS) was proposed by [66] in 2015 employing
technology improvements, primarily RFID, WSN, and smart phones. These technologies communicate with one another through an architecture based on IPv6 over low-power wireless personal area networks (6LoWPAN).
Healthcare Industrial IoT (HealthIoT), a real-time health monitoring system, was presented in [67]. This approach has a lot of promise for analysing patient healthcare data to uncover causes of mortality. The medical devices and sensors used in this IoT framework for
healthcare are used to gather patient data. Additionally, this framework incorporates security
practices like watermarking and signal augmentation to prevent identity theft and clinical
mistakes made by medical personnel.
The integration recommendations for remote health monitoring in medical practice
have been given by [68]. In IoT infrastructure, cell phones have been employed as
concentrators, while clouds or cloudlets have been used for data aggregation. It is also
understood that employing clouds for data processing would be more effective than combining
cloudlets and wearable sensors for data collection. Using these sensors, authors conducted a
two- to three-day period of continuous physiological monitoring and gathered key
physiological information to update the pertinent health database.
The study of [69] described a new technology known as a body sensor network (BSN) that is built on improvements in IoT medical devices. Using various low-power and lightweight
sensor networks, the patient may be observed inside this framework. Additionally, this
architecture took into account the security needs for constructing BSN-healthcare systems. In
2015, [70] explored the history of IoT and its use from the standpoint of healthcare. The authors
developed an IoT for u-healthcare ideological framework.
A framework for the human vital sign monitoring system was established by [71] in
2015. From a distance, the device measures the body temperature and pulse rate. Additionally,
in the event of anomalies in health measures, an IoT enabled network infrastructure and
computational processor are employed to create emergency alerts. Using context motion
tracking, [72] created an emergency scenario monitoring system for patients with chronic
diseases in 2015. The system uses contextual information to diagnose the patient's present
condition and then uses patient life patterns to offer the necessary information. To identify and
stop the spread of the chikungunya virus, [73] created a fog-assisted cloud-based healthcare
system in 2017. By combining proximity data and temporal network analysis at the cloud layer,
the status of the chikungunya virus outbreak is ascertained.
In their assessment, [48] mention a few instances of the use of blockchain technology in healthcare. These include the MedRec project, which was developed to make the
management of permissions, authorization, and data sharing between healthcare entities easier,
and the Guardtime company, which runs a blockchain-based healthcare platform for the
validation of patients' identities for Estonian citizens. Similar to this, [15] lists a number of
"notable" instances of blockchain technology businesses in the healthcare industry. These
businesses are categorized under three main healthcare use cases: the dentistry sector, patient-centred medical records, and prescription medication fraud detection. This assessment is also
comparable to the one by [49], in which he lists some instances of blockchain-based enterprises
and applications in the fields of managing public health, conducting medical research, and
combating medication fraud in the pharmaceutical sector.
The primary advantages of blockchain over conventional databases for healthcare
applications are published by [74] on their end. They go on to describe how these advantages
might be used to develop healthcare data ledgers, boost clinical research, and streamline
insurance claim procedures. In their study, [75] also discusses the current and future uses of
blockchain in a variety of medical disciplines, including legal medicine, health data analytics,
biomedical research, electronic medical records, meaningful use, and the payment of medical
services, among others.
The research of [1] provided a high-level system architecture to illustrate the
integration of HIoT into clinical healthcare. Data acquisition and sensing technologies will
profit from future VLSI technologies that require less battery power for their operation,
according to research on HIoT clinical applications. Meanwhile, communication standards will
continue to advance to provide higher communication throughput with lower power
consumption requirements. The study's framework is shown in Fig. 3.
Fig. 3. High-level system architecture illustrating HIoT integration into clinical healthcare
Source: Adopted from [1]
Another study by [4] proposed an IoT Tiered Architecture (IoTTA) as a means of producing
real-time clinical feedback from sensor data. According to the study's findings, the next wave
of IoT applications for healthcare should concentrate on self-care, data mining, and machine
learning. The study's architecture is shown in Fig. 4.
Fig. 4. IoT Tiered Architecture (IoTTA) for transforming clinical feedback
Source: Adopted from [4]
An IoT-based approach for pain evaluation and treatment was suggested in a research by [76].
The study's findings demonstrate how IoT-enabled solutions may increase pain assessment
accuracy while also achieving high levels of usability and compliance. They find that the IoT
mind-set has led to the adoption of technologies that enable the IoT in isolation rather than in
combination, and that further implementations and study are required to assure the viability
and acceptability of suggested solutions. The study's suggested framework is shown in Fig. 5.
Fig. 5. IoT-based system for pain assessment and management
Source: Adopted from [76]
IoT healthcare solutions give healthcare applications the capacity to fully exploit the IoT
backbone and handle different communication protocols for smart devices, according to a
research on internet of things and cloud computing for healthcare by [9]. Figure 6 depicts the
study's conceptual structure.
Fig. 6. IoT and cloud computing-based healthcare system
Source: Adopted from [9].
The study of [19] suggested a fog IoT cloud-based health monitoring system. The study's
findings demonstrate that patient data privacy and data anonymization are honoured during all
communications across the sensor, fog, and cloud layers. Their technique makes it possible for
medical professionals to monitor elderly or alone patients' health conditions and behavioural
changes. Additionally, the technology offers a way to track patients' rehabilitation and recovery
processes. A local gateway for storing data locally and fast, a wireless sensor network, and a
Lambda cloud architecture make up their Fog-IoT design. Figure 7 illustrates the study's
conceptual structure.
Fig. 7. Fog IoT cloud-based health monitoring system
Source: Adopted [19]
Table 1: Related IoT Based Healthcare Frameworks

|S/No|Author(s)|Method Used|Outcome|Recommendation|
|---|---|---|---|---|
|1|[1]|Data analytics and inference algorithms|VLSI technologies that require less battery power for operation will be advantageous for data acquisition and sensing. Advances in communication standards will allow for higher communication throughput while reducing the power requirements placed on sensing networks, according to researchers at Texas A&M University's Energy Institute (EAT).|1. Limited public datasets are available for training ML algorithms, and HCOs rely on their individual databases. 2. Large datasets that are required to train sophisticated algorithms (such as ones that use Deep Learning) are not freely available.|
|2|[4]|1. Cloud computing 2. Machine learning 3. Classification and regression algorithms 4. Bluetooth, RFID, NFC, UWB 5. Wearable devices (sensors)|The review's findings indicate that the next wave of IoT applications for healthcare should concentrate on self-care, data mining, and machine learning.|Future work will focus on data collection and analysis for the development of the falls detection and prevention system based on the IoTTA approach.|
|3|[76]|1. Search strategy 2. Identification and selection of relevant studies|IoT-enabled solutions aid in achieving high levels of usability and compliance while also enhancing the precision of pain assessment.|Further development of this field depends on effective collaboration between engineers and healthcare providers.|
|4|[9]|1. Algorithm 2. Smart phone 3. RFID|The framework enables healthcare applications to take full advantage of the Internet of Things and cloud computing. It also offers protocols to facilitate the transmission of unprocessed medical signals from a variety of sensors and intelligent devices to a network of fog nodes for communication and dissemination.|Future work will focus on data collection and analysis for the development of the falls detection and prevention system based on the IoTTA approach.|
|5|[19]|Use of physiological and environmental signals to provide contextual information in terms of Activities of Daily Living.|The privacy of patient data and data anonymization are upheld during all communications between the sensor, fog, and cloud layers.|In future, complete traceability of patient treatment data must be implemented.|

Source: Generated by researcher, 2022.
**2.** **METHODS**
The approaches appropriate for the investigation are presented in this section. A Behavioral
Reasoning Theory (BRT) methodology and an optimized blockchain model for IoT-based
healthcare applications were explored in the study on wearables based on the internet of things
for geriatric healthcare.
**Internet of things (IoT) based wearables for elderly healthcare: a behavioural reasoning**
**theory (BRT) approach**
The adoption of IoT-based wearables is being studied using BRT theory, which broadens the
scope of the literature on innovation dissemination. The adoption of healthcare technology has
been the subject of extensive past study [77]. The "reasons for" and "reasons against" adoption
are not offered in a unified framework, though. The application of BRT to IoT-based wearables
is extended in this one-of-a-kind study, which emphasizes the context-specific factors that
affect senior citizens' cognitive processing of innovation adoption in a developing nation like
India. The elderly have a long-standing practice of seeing physicians for medical health checks;
they find it challenging to employ IoT-based wearables; and they also worry that anybody with
internet access may access their healthcare data in the cloud. The user barrier, conventional barrier, and risk barrier constitute the "reasons against" adoption of IoT-based healthcare wearables. From the perspective of older persons, ease, relative benefit, ubiquitousness, and
compatibility are the main "reasons for" adoption of IoT wearables. According to research,
IoT-based wearables make it easier for seniors to monitor their health condition since they save
them the time and hassle of making routine clinic visits [78].
**Optimized blockchain model for IoT based healthcare applications**
In the work of [79], they presented a unique blockchain paradigm that is tailored for IoT devices. They cited the example of remote patient monitoring (RPM) to demonstrate their point.
RPM allows a medical facility to interact with the patient outside of the typical clinical setting
(in the home as an example). Wearable Internet of Things (IoT) devices are worn by patients,
and they may provide data to medical professionals about a patient's blood sugar level, blood
pressure, breathing pattern, and other things. In their proposal, they attempt to install a
lightweight blockchain while retaining the fundamental privacy and security advantages of
blockchain technology. In their simplistic approach, they do away with the idea of Proof of
Work (PoW). As a result, the distributed feature of the model is what the proposed framework's
security and privacy rely on. The Blockchain Network, Cloud Storage, Healthcare Providers,
Smart Contracts, and Patients using Healthcare Wearable IoT Devices make up the five main
components of their platform. They separated their blockchain into clusters using an overlay
network to reduce network cost and latency. They employ clusters rather than a single
blockchain, where each cluster is a collection of nodes with one node acting as the Cluster
Head [79].
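The cluster formation described above can be sketched as follows (the hash-based assignment and the head-election rule are assumptions made for illustration; [79] does not prescribe these exact mechanisms):

```python
import hashlib

def cluster_of(node_id: str, n_clusters: int) -> int:
    """Deterministically map a node to a cluster by hashing its identifier."""
    digest = hashlib.sha256(node_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_clusters

def form_clusters(node_ids, n_clusters):
    """Partition the overlay network into clusters and elect a Cluster Head
    per cluster (here the lexicographically smallest member, as a stand-in
    election rule)."""
    clusters = {i: [] for i in range(n_clusters)}
    for node in node_ids:
        clusters[cluster_of(node, n_clusters)].append(node)
    heads = {i: min(members) for i, members in clusters.items() if members}
    return clusters, heads
```

Since only the Cluster Heads coordinate across clusters, most traffic stays local, which is the latency and overhead reduction the model aims for.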
**3.** **RESULTS AND DISCUSSION**
Three phases make up the conceptual structure of the IoT-based m-Health Monitoring system.
Users' health information is gathered in phase 1 through sensors and medical equipment. Using
a gateway or local processing unit (LPU), the obtained data is sent to the cloud subsystem. In
phase 2, the medical diagnosis system uses the medical measures to inform a cognitive choice
about an individual's health. In phase 3, a warning regarding people's health is sent to the
parents or guardians. Additionally, if an emergency occurs, a warning is sent to the local
hospital to manage the medical problem.
Fig. 8. A framework for IoT based m-health disease diagnosing system
Source: Adopted from [13]
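The three phases above can be sketched as a simple pipeline (the heart-rate example, thresholds, and escalation rule are illustrative assumptions, not clinical guidance):

```python
def phase1_collect(sensor_readings):
    """Phase 1: gather raw measurements via the gateway/LPU, dropping
    missing samples."""
    return [r for r in sensor_readings if r is not None]

def phase2_diagnose(readings, low=60, high=100):
    """Phase 2: toy cloud-side diagnosis -- flag heart-rate values outside
    an assumed normal range."""
    return [r for r in readings if r < low or r > high]

def phase3_alert(anomalies):
    """Phase 3: notify guardians; escalate to the local hospital when a
    reading crosses an assumed emergency threshold."""
    if not anomalies:
        return []
    alerts = ["notify guardian"]
    if any(r < 40 or r > 140 for r in anomalies):
        alerts.append("notify hospital")
    return alerts

def monitor(sensor_readings):
    """Run the three phases end to end."""
    return phase3_alert(phase2_diagnose(phase1_collect(sensor_readings)))
```

For example, `monitor([72, 80, None, 95])` produces no alert, while an out-of-range reading triggers the guardian notification and, if severe enough, the hospital escalation as well.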
They use symmetric key cryptography and a role-based access mechanism (RBAM) as the foundation of their Cloud-Centric IoT (CCIoT) security system. The security method of their suggested system is based on encrypting the user password with a "private key" provided by a trusted third party (TTP). Furthermore, the TTP is a body that carries out the security procedure in their proposed system. Additionally, it only grants access to authorized individuals that have
registered with CCIoT. Following the authentication stage, authorisation is dependent on each
Copyright The Author(s) 2022.This is an Open Access Article distributed under the CC BY
-----
**ISSN: 2799-1156**
Vol: 02, No. 04, June – July 2022
[http://journal.hmjournals.com/index.php/JECNAM](http://journal.hmjournals.com/index.php/JECNAM)
**[DOI: https://doi.org/10.55529/jecnam.24.31.53](https://doi.org/10.55529/jecnam.24.31.53)**
user's role. Given that the owner of the data has the authority to place restrictions on a number
of accessible partners and provide them varying levels of authorisation,
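A hedged sketch of this authentication-then-authorisation flow is given below. The role names and the keyed-hash password scheme are illustrative assumptions, not the exact design of [13].

```python
import hashlib
import hmac
import os

# The TTP holds a private key used to protect user passwords; after
# authentication, access is decided by the user's registered role.
ROLE_PERMISSIONS = {
    "patient": {"read_own"},
    "guardian": {"read_own", "receive_alerts"},
    "doctor": {"read_own", "read_patients", "write_diagnosis"},
}

class TTP:
    def __init__(self):
        self._key = os.urandom(32)   # the TTP's private key
        self._users = {}             # username -> (password digest, role)

    def register(self, username, password, role):
        digest = hmac.new(self._key, password.encode(), hashlib.sha256).hexdigest()
        self._users[username] = (digest, role)

    def authenticate(self, username, password):
        rec = self._users.get(username)
        if rec is None:
            return None
        digest, role = rec
        candidate = hmac.new(self._key, password.encode(), hashlib.sha256).hexdigest()
        # Constant-time comparison avoids leaking digest prefixes.
        return role if hmac.compare_digest(digest, candidate) else None

def authorize(role, action):
    return action in ROLE_PERMISSIONS.get(role, set())

ttp = TTP()
ttp.register("alice", "s3cret", "doctor")
role = ttp.authenticate("alice", "s3cret")
print(role, authorize(role, "write_diagnosis"))  # doctor True
```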
Fig. 9: Flow diagram of Cloud-Centric IoT (CCIoT) diagnosis security system
Source: Adapted from [13]
This study aims to strengthen the security of users' data, from the point of data acquisition
through relay to the cloud and the generation of alerts, by utilizing blockchain technology.
To that end, this project will employ asymmetric key cryptography in conjunction with
blockchain technology. Blockchain will be used to safeguard the sharing of medical records,
since medical records are sensitive data and thus subject to attack. Security has been a
major concern in the IoT because hackers or attackers may easily access the data. Blockchain
offers a number of built-in capabilities, including distributed ledgers, decentralized
storage, authentication, security, and immutability, and it has progressed past hype to find
actual use cases in industries such as healthcare. This study therefore adapts the framework
for IoT based m-health of [13].
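To make the "asymmetric key cryptography plus blockchain" direction concrete, the sketch below hashes a medical record and signs the hash with a private key, as one might do before anchoring it on a ledger. Textbook RSA with tiny hard-coded primes is used purely for illustration; it is NOT secure, and a real system would use a vetted library scheme such as RSA-PSS or Ed25519.

```python
import hashlib

# Tiny textbook-RSA key pair (p=61, q=53 -> n=3233, e=17, d=2753).
N, E, D = 3233, 17, 2753

def record_digest(record: bytes) -> int:
    # Reduce the SHA-256 digest modulo n so the toy key can sign it.
    return int.from_bytes(hashlib.sha256(record).digest(), "big") % N

def sign(record: bytes) -> int:
    return pow(record_digest(record), D, N)        # private-key operation

def verify(record: bytes, signature: int) -> bool:
    return pow(signature, E, N) == record_digest(record)  # public-key operation

rec = b'{"patient": "p42", "bp": "120/80"}'
sig = sign(rec)
print(verify(rec, sig), verify(rec, (sig + 1) % N))  # True False
```

Because the RSA map x -> x^e mod n is a bijection here, any altered signature fails verification, which is the tamper-evidence property the proposed framework needs.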
Fig. 10 is the proposed conceptual framework for the elderly health monitoring system and
alert generation to the elderly caretakers or guardians.
Fig. 10: Proposed conceptual framework
**4.** **CONCLUSION**
All around the world, the share of the population that is elderly is rising considerably. For
elderly people who want to keep their independence, health monitoring systems in smart
settings aim to replace conventional healthcare solutions by lowering financial expenditure
and reducing the need for hospitalization in healthcare facilities (nursing homes or
hospitals). The elderly make up a significant portion of our society and require particular
care. To guarantee a smooth transition as IoT and cloud computing transform the healthcare
industry, governments, organizations, and research groups from all over the world are
collaborating closely. In this work, we proposed a cloud-centric IoT-based framework for
health monitoring with blockchain capability, and we also propose decision-support-level
encryption for this framework.
**Recommendation**
IoT and Blockchain are still relatively new technologies in the healthcare industry, and new
applications are continually being developed and investigated. The suggestions for the research
are listed below:
1. The scalability of blockchain-enabled healthcare requires research. Scalability is a
   significant problem since the healthcare sector is expanding, especially as our population
   ages; as more users or patients join the system, it will become increasingly difficult
   to operate blockchain-enabled services.
2. Key management, security, and the capability to quickly replace lost or compromised keys
should all be the subject of further study.
3. Identity verification also needs to be a major area of research. In particular, what
   fall-back plans or emergency protocols can be used to give a doctor access to the
   information when prior authorization is impossible? Many trials have focused on having
   the patient authorize access to their records beforehand.
**Dedication**
I dedicate this paper to my late mom (Maryam Bint Halima). May her soul find eternal rest,
amen.
**5.** **REFERENCES**
1. H. Habibzadeh., K. Dinesh., O. Rajabi Shishvan., A. Boggio-Dandry., G. Sharma. And
T. Soyata. “A Survey of Healthcare Internet of Things (HIoT): A Clinical Perspective”.
In IEEE Internet of Things Journal, Vol. 7, Issue 1, pp. 53–71. IEEE.
[https://doi.org/10.1109/JIOT.2019.2946359 [2020]](https://doi.org/10.1109/JIOT.2019.2946359)
2. F. Firouzi., B. Farahani., M. Weinberger., G. DePace. And F.S. Aliee. “IoT fundamentals:
definitions, architectures, challenges, and promises”. in F. Firouzi, K. Chakrabarty, S.
Nassif (Eds.), Intelligent Internet of things. 2020
3. R. Watson. “Top 8 Healthcare Technology Trends to Watch Out for in 2020”.
https://www.datadriveninvestor.com/2019/11/30/top-8-healthcare-technology-trends-to-watch-out-for-in-2020/, 2019. Accessed 19 November 2020.
4. H.H. Nguyen., F. Mirza., M.A. Naeem. And M. Nguyen. “A review on IoT healthcare
monitoring applications and a vision for transforming sensor data into real-time
clinical feedback”. Proceedings of the 2017 IEEE 21st International Conference on
Computer Supported Cooperative Work in Design, CSCWD 2017, 257–262.
https://doi.org/10.1109/CSCWD.2017.8066704
5. J. Li., Q. Ma., A.H. Chan. And S.S. Man. “Health monitoring through wearable
technologies for older adults: Smart wearables acceptance model”. Applied Ergonomics,
75, 162–169. doi:10.1016/j.apergo. 2019. 10.006
6. B. Abidi., A. Jilbab. And M.E. Haziti. “Wireless sensor networks in biomedical: Wireless
body area networks in Europe and MENA Cooperation Advances in Information and
Communication Technologies”. Springer: Berlin/Heidelberg, Germany, pp. 321–329.
2017.
7. V. Scuotto., A. Ferraris. And S. Bresciani. “Internet of Things: Applications and
challenges in smart cities: A case study of IBM smart city projects”. Bus. Process Manag.
J., 22, 357–367. 2016.
8. M. Chernyshev., S. Zeadally. And Z. Baig. “Healthcare data breaches: Implications for
digital forensic readiness”. Journal of Medical Systems, 43(1): 7. 2019.
9. L. Minh Dang., M.J. Piran., D. Han., K. Min. and H. Moon. “A survey on internet of
things and cloud computing for healthcare”. Electronics (Switzerland), 8(7), 1–49.
[https://doi.org/10.3390/electronics8070768. 2019.](https://doi.org/10.3390/electronics8070768)
10. M. Prokofieva., S.J. Miah., C. Agbo., Q. Mahmoud., J. Eklund., T. McGhin., K.K.R.
Choo., C.Z. Liu., D. He., C. Pirtle. And J. Ehrenfeld. “Blockchain Technology in
Healthcare: A Systematic Review”. Australasian Journal of Information Systems,
[135(2), 1–3. https://doi.org/10.3127/ajis.v23i0.2203. 2019.](https://doi.org/10.3127/ajis.v23i0.2203)
11. Z. Zheng. et al. “An overview of blockchain technology: architecture, consensus, and
future trends. In: Big Data”. (BigData Congress), 2017 IEEE International Congress.
12. A. Stanciu. “Blockchain based distributed control system for edge computing. In: Control
Systems and Computer Science”. (CSCS), 2017 21st International Conference on.
https://doi.org/10.1109/CSCS.2017.102. IEEE.
13. P. Verma. And S.K. Sood, “Cloud-centric IoT based disease diagnosis healthcare
framework”. J. Parallel Distrib. Comput. doi.org/10.1016/j.jpdc.2017.11.018
14. V. Babich. And G. Hilary. “Distributed ledgers and operations: What operations
management researchers should know about blockchain technology”. Manufacturing and
Service Operations Management, 22(2): 223–240. 2020.
15. M.A. Engelhardt. “Hitching Healthcare to the Chain: An Introduction to Blockchain
Technology in the Healthcare Sector”. Technol. Innov. Manag. Rev., 7, 22–34. 2017.
16. J. Gong. And L. Zhao. “Blockchain application in healthcare service mode based on
[Health Data Bank”. Front. Eng. Manag. https://doi.org/10.1007/s42524-020-0138-9.](https://doi.org/10.1007/s42524-020-0138-9)
2020.
17. S. Madakam., R. Ramaswamy. and S. Tripathi. “Internet of Things (IoT): A literature
review”. Journal of Computer and Communications, 3 (5), 164–173.
https://doi.org/10.4236/jcc.2015.35021
18. N. Kamdar., V. Sharma. And S. Nayak. “A Survey paper on RFID Technology, its
Applications and Classification of Security/Privacy Attacks and Solutions”. IRACST –
International Journal of Computer Science and Information Technology & Security
(IJCSITS), ISSN: 2249-9555 Vol.6, No4, July-August 2016
19. C. Debauche., S. Mahmoudi., P. Manneback. And A. Assila. “Fog iot for health: A new
architecture for patients and elderly monitoring”. Procedia Computer Science, 160, 289–
[297. https://doi.org/10.1016/j.procs. 2019.](https://doi.org/10.1016/j.procs)
20. M. Al-Fuqaha., Guizani,. M. Mohammadi., M. Aledhari. and M. Ayyash. “Internet of
things: A survey on enabling technologies, protocols, and applications”. IEEE
Communications Surveys & Tutorials, vol. 17, pp. 2347-2376, 2015.
21. A. Luigi., I. Antonio. and M. Giacomo. “The Internet of Things: A survey”. Science
Direct journal of Computer Networks, Volume 54, Pages: 2787–2805. 2010.
22. R. Nicolescu., M. Huth., P. Radanliev. and D.D. Roure, “Mapping the values of IoT”.
Journal of Information Technology. https://doi.org/10.1057/s41265- 018- 0054- 1. 2018.
23. H. Petersen., E. Baccelli., M. Wählisch., T.C. Schmidt. And J. Schiller. “The role of the
Internet of Things in network resilience. International Internet of Things Summit,
283–296. 2014.
24. Z. Yan., P. Zhang. And A.V. Vasilakos. A survey on trust management for Internet of
Things. Journal of Network and Computer Applications, 42, 120–134.
https://doi.org/10.1016/j.jnca.2014.01.014. 2014.
25. F. Wortmann. and K. Flüchter. “Internet of Things”. Business & Information Systems
[Engineering, 57 (3), 221–224. https://doi.org/10.1007/s12599-015-0383-3. 2015.](https://doi.org/10.1007/s12599-015-0383-3)
26. S. Bandyopadhyay., P. Balamuralidhar. and A. Pal. “Interoperations among IoT
[standards”. Journal of ICT Standardization, 1, 253–270. https://doi.org/10.13052/](https://doi.org/10.13052/)
jicts2245-800X.12a9. 2013.
27. D. Burrus. “The Internet of Things is Far Bigger than Anyone Realizes”. Retrieved from
[https://www.wired.com/insights/2014/11/the- internet- of- things-bigger/, (Accessed 11](https://www.wired.com/insights/2014/11/the-%20internet-%20of-%20things-bigger/)
November 2018).
28. J.H. Nord., A. Koohang. And J. Paliszkiewicz. “The Internet of Things: Review and
theoretical framework”. In Expert Systems with Applications (Vol. 133, pp. 97–108).
Elsevier Ltd. https://doi.org/10.1016/j.eswa.2019.05.014
29. I. Lee. And K. Lee. “The Internet of Things (IoT): Applications, investments, and
challenges for enterprises”. Business Horizons, 58 (4), 431–440.
https://doi.org/10.1016/j.bushor.2015.03.008.
30. L. Atzori., A. Iera. And G. Morabito. “The Internet of Things: A survey”. Computer
Networks, 54 (15), 2787–2805. https://doi.org/10.1016/j.comnet.2010.05.010.
31. Team, Insights. “Why collaboration is essential for successful IoT implementation”.
Forbes Insights. Retrieved from https://www.forbes.com/sites/insights-hitachi/2017/12/18/why-collaboration-is-essential-for-successful-iot-implementation/#27effdde10e0. Accessed 01 February 2019.
32. K.L. Lueth. “The 10 most popular Internet of Things applications right now”. IoT
Analytics. Retrieved from https://iot-analytics.com/10-internet-of-things-applications/.
Accessed 12 December 2018.
33. J. Bartje. “The top 10 application areas – Based on real IoT projects”. IoT Analytics.
Retrieved from https://iot-analytics.com/top-10-iot-project-application-areas-q3-2016/.
Accessed 11 November 2018.
34. S. Reddy. “Can tech speed up emergency room care”. Wall Street Journal. Accessed on:
[Oct. 4, 2019. [Online]. Available: https://www.wsj.com/articles/can-techspeed-up-](https://www.wsj.com/articles/can-techspeed-up-emergency-room-care-1490629118)
[emergency-room-care-1490629118](https://www.wsj.com/articles/can-techspeed-up-emergency-room-care-1490629118)
35. D.D. Maeng., A.E. Starr., J.F. Tomcavage., J. Sciandra., D. Salek. and D. Griffith. “Can
telemonitoring reduce hospitalization and cost of care? a health plan’s experience in
managing patients with heart failure,” Population Health Management, Vol. 17, Issue 6,
pp. 340–344. 2014.
36. J. Couturier., D. Sola. G.S. Borioli. and C. Raiciu, C. How can the internet of things help
to overcome current healthcare challenges, Communications & Strategies, No. 87, 3rd
Quarter. 2012.
37. B.I. Rosner., M. Gottlieb. and W.N. Anderson, W. N. “Effectiveness of an automated
digital remote guidance and telemonitoring platform on costs, readmissions, and
complications after hip and knee arthroplasties,” The Journal of Arthroplasty, vol. 33, no.
4, pp. 988 – 996. 2018.
38. H. Bolhasani., M. Mohseni. And A.M. Rahmani. “Deep learning applications for IoT in
health care: A systematic review”. Informatics in Medicine Unlocked, 23 [Dec, 2020],
100550. https://doi.org/10.1016/j.imu.2021.1005500
39. N. Sultan. “Making use of cloud computing for healthcare provision: Opportunities and
challenges”. Int. J. Inf. Manag. 34, 177–184. 2014.
40. S.R. Islam., D. Kwak., M.H. Kabir., M. Hossain. And K.S. Kwak, K.S. “The internet of
things for health care: A comprehensive survey”. IEEE Access 3, 678–708. 2015.
41. A. Darwish., A.E. Hassanien., M. Elhoseny., A.K. Sangaiah. And K. Muhammad. “The
impact of the hybrid platform of internet of things and cloud computing on healthcare
systems: Opportunities, challenges, and open problems”. Journal of ambient Intelligence
Humanization and Computation. 1–16. 2017.
42. A. Alexandru., D. Coardos. and E. Tudora. “IoT-Based Healthcare Remote Monitoring
Platform for Elderly with Fog and Cloud Computing”. 22nd International Conference on
Control Systems and Computer Science (CSCS). doi:10.1109/cscs.2019.00034. 2019.
43. A. Antonopoulos. “Mastering Bitcoin: Programming the Open Blockchain”. 2nd ed.,
O’Reilly. 2017.
44. B. Shah., N. Shah., S. Shakhla. and V. Sawant. “Remodelling the Healthcare Industry by
employing Blockchain Technology”. Int Conf Circuits Syst Digit Enterp Technol
(ICCSDET). 2018; 2018:1–5. https://doi.org/10.1109/ICCSDET.2018.8821113.
45. S. Nakamoto. “Bitcoin: A Peer-to-Peer Electronic Cash System”. 2008.
https://bitcoin.org/en/bitcoin-paper Accessed 11 Apr 2018.
46. M.H. Kassab., J. DeFranco., T. Malas., P. Laplante., G. Destefanis. And V.V.G. Neto.
“Exploring Research in Blockchain for Healthcare and a Roadmap for the Future”. IEEE
Trans Emerg Top Comput. 2019. 1–1. https://doi.org/10.1109/TETC.2019.2936881.
47. C.C. Agbo., Q.H. Mahmoud. And J.M. Eklund. “Blockchain Technology in Healthcare:
A Systematic Review”. Healthcare (Basel); 7. https://doi.org/10.3390/healthcare7020056.
2019.
48. S. Angraal., H. Krumholz. and W.L. Schulz. “Blockchain Technology: Applications in
Health Care, Circulation: Cardiovascular Quality and Outcomes”.10:e003800.
https://doi.org/10.1161/. 2017.
49. M. Mettler. “Blockchain Technology in Healthcare the Revolution Starts Here”. In
Proceedings of the 2016 IEEE 18th International Conference on E-Health Networking,
Applications and Services (Healthcom), Munich, Germany, 14–17; pp. 520–522
[September 2016].
50. T. Heston. “A case study in blockchain health care innovation”. International Journal of
Current Research; 9:60587–60588. https://doi.org/10.22541/au.151060471.10755953.
2017.
51. J.K. Cohen. “15 best papers on blockchain in health IT, as chosen by HHS”.
https://www.beckershospitalreview.com/healthcare-information-technology/15-best-papers-on-blockchain-in-health-it-as-chosen-by-hhs.html Accessed 15 Apr 2018.
52. M. Orcutt. “Why the CDC thinks blockchain can save lives, MIT Technology Review”.
https://www.technologyreview.com/s/608959/why-the-cdc-wants-in-on-blockchain/
Accessed 31 Mar 2018.
53. A. Clim. And R.D. Zota. “Constantinescu R. Data exchanges based on blockchain in m
Health applications”. Procedia Computer Science; 160:281–8.
https://doi.org/10.1016/j.procs. 2019.11.088.
54. JMIR. “Towards a Stakeholder-Oriented Blockchain-Based Architecture for Electronic
Health Records: Design Science Research Study” | Beinke | Journal of Medical Internet
Research, (n.d.). https://www.jmir.org/2019/10/e13585 Accessed 6 Feb 2021.
55. H. Chen. And X. Huang. “Will Blockchain Technology Transform Healthcare and
Biomedical Sciences?” EC Pharmacol Toxicol. 2018; 6:910–1.
56. Deloitte. “Breaking blockchain open-Deloitte’s 2018 global blockchain survey”. Deloitte
[Insights, 48. https://doi.org/10.1002/ejoc.201200111](https://doi.org/10.1002/ejoc.201200111)
57. W.J. Gordon. And C. Catalini. “Blockchain technology for healthcare: Facilitating the
transition to patient-driven interoperability”. Computational and Structural
[Biotechnology Journal, 16, 224–230. https://doi.org/10.1016/j.csbj.2018.06.003 2018](https://doi.org/10.1016/j.csbj.2018.06.003)
58. European Coordination Committee of the Radiological (ECCR). “Blockchain in
Healthcare; Technical report; European Coordination Committee of the Radiological:
Brussels, Belgium. 2017.
59. P. Zhang., D.C. Schmidt., J. White. and G. Lenz. “Blockchain technology use cases in
healthcare”. In Blockchain technology: Platforms, tools and use cases, Vol. 111, pp. 1–
42. 2018.
60. P. Zhang., J. White., D.C. Schmidt., G. Lenz. and S.T. Rosenbloom. “FHIRChain:
Applying blockchain to securely and scalably share clinical data”. Computational and
Structural Biotechnology Journal, 16, 267–278.
[https://doi.org/10.1016/j.csbj.2018.07.004 2018](https://doi.org/10.1016/j.csbj.2018.07.004)
61. M. Benchoufi. “Blockchain technology for improving clinical research quality, 1–5.
[https://doi.org/10.1186/s13063-017-2035-zt 2017.](https://doi.org/10.1186/s13063-017-2035-zt)
62. M. Hölbl., M. Kompara., A. Kamišalić. And L.N. Zlatolas. “A Systematic Review of the
Use of Blockchain in Healthcare”. Symmetry. 10:470.
https://doi.org/10.3390/sym10100470. 2018.
63. A. Roehrs., C.A. da Costa., da Rosa. And R. Righi. “OmniPHR: a distributed architecture
[model to integrate personal health records”. J. Biomed. Inf. https://doi.org/10.1016/ j.jbi.](https://doi.org/10.1016/)
2017. 05.012.
64. K.N. Griggs., O. Ossipova., C.P. Kohlios., A.N. Baccarini., E.A. Howson. And T.
Hayajneh. “Healthcare blockchain system using smart contracts for secure automated
remote patient monitoring”. Journal of Medical Systems, 42(7), 1–7.
[https://doi.org/10.1007/s10916-018-0982-x. 2018](https://doi.org/10.1007/s10916-018-0982-x)
65. A. Hussain., R. Wenbi., A. Lopes., M. Nadher. And M. Mudhish. “Health and
emergency-care platform for the elderly and disabled people in the smart city”. J. Syst.
Softw. 110 253-263. https://doi.org/10.1016/j.jss.2015.08.041. 2015.
66. L. Catarinucci., D. De Donno., L. Mainetti., L. Palano., L. Patrono., M.L. Stefanizzi. And
L. Tarricone. “An IoT-aware architecture for smart healthcare systems”. IEEE Internet
Things J. 2 (6) 515-526. DOI: 480 10.1109/JIOT. 2417684. 2015.
67. M.S. Hossain. And G. Muhammad. “Cloud-assisted Industrial Internet of Things (IIoT)
enabled framework for health monitoring”. Comput. Networks. 101 192-202.
[https://doi.org/10.1016/j.comnet.01.009. 2016.](https://doi.org/10.1016/j.comnet.01.009)
68. M. Hassanalieragh., A. Page., T. Soyata., G. Sharma., M. Aktas., G. Mateos., B. Kantarci.
and S. Andreescu. “Health monitoring and management using internet-of-things (IoT)
sensing with cloud-based processing: Opportunities and challenges”. In: Proceedings of
IEEE International Conference on Services Computing (SCC), New York, USA, pp 285–
292. 2015.
69. P. Gope. and T. Hwang. “BSN-Care: A secure IoT-based modern healthcare system using
body sensor network”. IEEE Sensor J.16(5)1368-1376.
DOI:10.1109/JSEN.2015.2502401. 2016.
70. Y.E. Gelogo., H.J. Hwang. and H. Kim. “Internet of Things (IoT) framework for u
healthcare System”. Int. J. Smart Home. 9 323-330.
[http://dx.doi.org/10.14257/ijsh.9.11.31. 2015.](http://dx.doi.org/10.14257/ijsh.9.11.31)
71. P. Kakria., N.K. Tripathi. and P. Kitipawang. “A real-time health monitoring system for
remote cardiac patients using smartphone and wearable sensors”. Int. J. Telemed. Appl.
Article ID 373474 DOI:10.1155/2015/373474. 2015.
72. S.H. Kim. and K. Chung. “Emergency situation monitoring service using context motion
tracking of chronic disease patients”. Cluster Comput. 18 (2) 747-759. DOI
10.1007/s10586-015-0440-1. 2015.
73. S.K. Sood. and I. Mahajan. “A Fog based healthcare framework for chikungunya”. IEEE
Internet Things J. pp. 1-8. DoI 10.1109/JIOT.2017.2768407. 2017.
74. T.T. Kuo., H.E. Kim. and L. Ohno-Machado. “Blockchain Distributed Ledger
Technologies for Biomedical and Health Care Applications”. J. Am. Med. Inform. Assoc,
24, 1211–1220. 2017.
75. J.M. Roman-Belmonte., H. De la Corte-Rodriguez., E.C.C. Rodriguez-Merchan., H. la
Corte-Rodriguez. and E. Carlos Rodriguez-Merchan. “How Blockchain Technology Can
Change Medicine. Postgrad. Med. 130, 420–427. 2018.
76. E.J.A. Prada. “The Internet of Things (IoT) in pain assessment and management: An
overview”. Journal of Informatics in Medicine Unlocked. Volume 18, 100298.
[https://doi.org/10.1016/j.imu.2020.100298 2020.](https://doi.org/10.1016/j.imu.2020.100298)
77. A.Y.L. Chong., M.J. Liu., J. Luo. And O. Keng-Boon. “Predicting RFID adoption in
healthcare supply chain from the perspectives of users”. International Journal of
Production Economics, Vol. 159 No. 1, pp. 66-75. 2015.
78. B. Sivathanu. “Adoption of internet of things (IOT) based wearables for elderly
healthcare – a behavioural reasoning theory (BRT) approach”. Journal of Enabling
Technologies. doi:10.1108/jet-12-2017-0048. 2018.
79. A.D. Dwivedi., L. Malina., P. Dzurenda. And G. Srivastava. “Optimized Blockchain
Model for Internet of Things based Healthcare Applications”. 42nd International
Conference on Telecommunications and Signal Processing
(TSP). doi:10.1109/tsp.2019.8769060. 2019.
International Conference on Control, Decision and Information Technologies (CoDIT)
# Distributed Real-Time Sentiment Analysis for Big
Data Social Streams
### Amir Hossein Akhavan Rahnama
Department of Mathematical Information Technology
University of Jyväskylä
Jyväskylä, Finland
amirrahnama@gmail.com
**_Abstract—_** **Big data trend has enforced the data-centric systems**
**to have continuous fast data streams. In recent years, real-time**
**analytics on stream data has formed into a new research field,**
**which aims to answer queries about “what-is-happening-now”**
**with a negligible delay. The real challenge with real-time stream**
**data processing is that it is impossible to store instances of data,**
**and therefore online analytical algorithms are utilized. To**
**perform real-time analytics, pre-processing of data should be**
**performed in a way that only a short summary of stream is**
**stored in main memory. In addition, due to high speed of arrival,**
**average processing time for each instance of data should be in**
**such a way that incoming instances are not lost without being**
**captured. Lastly, the learner needs to provide high analytical**
**accuracy measures. Sentinel is a distributed system written in**
**Java that aims to solve this challenge by enforcing both the**
**processing and learning process to be done in distributed form.**
**Sentinel is built on top of Apache Storm, a distributed computing**
**platform. Sentinel’s learner, Vertical Hoeffding Tree, is a parallel**
**decision tree-learning algorithm based on the VFDT, with ability**
**of enabling parallel classification in distributed environments.**
**Sentinel also uses SpaceSaving to keep a summary of the data**
**stream and stores its summary in a synopsis data structure.**
**Application of Sentinel on Twitter Public Stream API is shown**
**and the results are discussed.**
**_Keywords— real-time analytics, machine learning, distributed_**
**_systems, vertical hoeffding tree, distributed data mining systems,_**
**_sentiment analysis, social media mining, Twitter_**
I. INTRODUCTION
In recent years, stream data is generated at an increasing
rate. The main sources of stream data are mobile applications,
sensor applications, measurements in network monitoring and
traffic management, log records or click-streams in web
exploring, manufacturing processes, call detail records, email,
blogging, twitter posts, Facebook statuses, search queries,
finance data, credit card transactions, news, emails, Wikipedia
updates [5]. On the other hand, with growing availability of
opinion-rich resources such as personal blogs and micro
blogging platforms challenges arise as people now use such
systems to express their opinions. The knowledge of real-time
sentiment analysis of social streams helps to understand what
social media users think or express “right now”. Application
978-1-4799-6773-5/14/$31.00 ©2014 IEEE
of real-time sentiment analysis of social stream brings a lot of
opportunities in data-driven marketing (customer’s immediate
response to a campaign), prevention of disasters immediately,
business disasters such as Toyota’s crisis in 2010 or Swine Flu
epidemics in 2009 and debates in social media. Real-time
sentiment analysis can be applied in almost all domains of
business and industry.
Data stream mining is the extraction of informational structure, as models and patterns,
from continuous and evolving data streams. Traditional methods of data analysis require the
data to be stored and then processed off-line using complex algorithms that make several
passes over the data. In principle, however, data streams are infinite and data is generated
at high rates, so it cannot be stored in main memory.
Different challenges arise in this context: storage, querying and
mining. The latter is mainly related to the computational
resources to analyze such volume of data, so it has been widely
studied in the literature, which introduces several approaches in
order to provide accurate and efficient algorithms [1], [3], [4].
In real-time data stream mining, data streams are processed in
an online manner (i.e. real-time processing) so as to guarantee
that results are up-to-date and that queries can be answered in
real-time with negligible delay [1], [5]. Current solutions and studies in data stream
sentiment analysis are limited to performing sentiment analysis off-line on a sample of
stored stream data. While this approach can work in some cases, it is not applicable in the
real-time setting. Real-time sentiment analysis tools such as MOA [5] and RapidMiner [3] do
exist, but they are uniprocessor solutions and cannot be scaled for efficient use across a
network or a cluster. In big data scenarios, the volume of data rises drastically after some
period of analysis, causing uniprocessor solutions to slow over time; processing time per
data instance grows and instances are lost from the stream. This degrades the learning curve
and accuracy measures, since less data is available for training, and can introduce high
costs. Sentinel relies on a distributed architecture and a distributed learner to overcome
this shortcoming of available solutions for real-time sentiment analysis in social media.
This paper is organized as follows. Section 2 discusses stream data processing, and Section 3 stream data classification. Section 4 covers distributed data mining systems, followed by Section 5 on distributed learning algorithms. Section 6 describes Sentinel's architecture and components. Finally, we present the Twitter public stream case study in Section 7, and Section 8 concludes with a brief summary of this paper.
II. DATA STREAM PROCESSING
The stream data processing problem can be generally described as follows. A sequence of transactions arrives online to be processed using a memory-resident data structure called a _synopsis_ [1] and an algorithm that dynamically adjusts the structure's storage to reflect the evolution of the transactions. Each transaction is either an insertion of a new data item, a deletion of an existing data item, or any allowed type of query. The synopsis data structure, as well as the algorithm, is designed to minimize response time, maximize the accuracy and confidence of approximate answers, and minimize the time and space needed to maintain the synopsis [4].
The data stream environment differs significantly from batch settings. Therefore, each stream data processing method must satisfy the following four requirements:
- Requirement 1: Process an example at a time, and
inspect it only once (at most)
- Requirement 2: Use a limited amount of memory
- Requirement 3: Work in a limited amount of time
- Requirement 4: Be ready to predict at any time
The algorithm is passed the next available example from the stream (Requirement 1). The algorithm processes the example, updating its data structures. It does so without exceeding the memory bounds set on it (Requirement 2), and as quickly as possible (Requirement 3). The algorithm is then ready to accept the next example. On request, it is able to predict the class of unseen examples (Requirement 4) [5].
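These four requirements can be made concrete with a minimal sketch (our own illustration, not code from any of the cited tools); the class name `OnlineLearner` and its capped-dictionary statistics are illustrative assumptions only:

```python
class OnlineLearner:
    """Sketch of a stream learner satisfying the four requirements."""

    def __init__(self, max_counts=1000):
        self.max_counts = max_counts  # Requirement 2: bounded memory
        self.counts = {}              # capped table of (example, label) counts

    def learn_one(self, example, label):
        # Requirement 1: each example is seen once, then discarded.
        key = (example, label)
        if key in self.counts or len(self.counts) < self.max_counts:
            self.counts[key] = self.counts.get(key, 0) + 1
        # Requirement 3: constant work per example.

    def predict_one(self, example):
        # Requirement 4: a prediction is available at any time.
        best, best_count = None, -1
        for (x, y), c in self.counts.items():
            if x == example and c > best_count:
                best, best_count = y, c
        return best
```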
III. STREAM DATA CLASSIFICATION
In this study, sentiment analysis is formulated as a classification problem. Several predefined categories are created, each of which a sentiment can be expressed as. The classifier decides whether a sentiment in an evolving data stream is expressed in a positive, negative, neutral, etc. category.
We define the _sequential supervised learning_ (i.e. _data stream classification_) problem as follows: let {(x_i, y_i)}_{i=1}^{N} be a set of N training examples. Each example is a pair of sequences (x_i, y_i), where x_i = ⟨x_{i,1}, x_{i,2}, …, x_{i,T_i}⟩ and y_i = ⟨y_{i,1}, y_{i,2}, …, y_{i,T_i}⟩. For example, in part-of-speech tagging, one (x_i, y_i) pair might consist of x_i = ⟨do, you, want, fries, with, that⟩ and y_i = ⟨verb, pronoun, verb, noun, prep, pronoun⟩. The goal is to construct a classifier h that can correctly predict a new label sequence y = h(x), given an input sequence x [13].
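As a toy illustration of learning such an h (our own sketch, not the method used by Sentinel), a per-token majority-vote tagger maps an input token sequence to a label sequence:

```python
from collections import Counter, defaultdict

def train_token_tagger(examples):
    """Toy h: learn a per-token majority label from (x_i, y_i) sequence
    pairs, then tag a new token sequence element-wise."""
    votes = defaultdict(Counter)
    for xs, ys in examples:
        for token, label in zip(xs, ys):
            votes[token][label] += 1

    def h(xs):
        # Unknown tokens get no label (None) in this toy version.
        return [votes[t].most_common(1)[0][0] if t in votes else None
                for t in xs]
    return h
```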
IV. DISTRIBUTED DATA MINING SYSTEMS
The successful use of distributed systems in many data mining cases was shown in [9], [10], [12], and [15]. Distributed systems increase performance by forming a cluster of low-end computers, and they guarantee reliability by having no single point of failure. Distributed systems are scalable, in contrast with monolithic uniprocessor systems. Such features make distributed data mining systems a sound candidate for data stream mining in real time. In general, distributed data mining systems perform distributed learning algorithms on top of distributed computing platforms, in which components are located on networked computers and communicate their actions by passing messages. Distributed systems function according to a topology. A _topology_ is a collection of connected processing items and streams. It represents a network of components that process incoming data streams. A _processor_ is a unit of computation that executes parts of an algorithm on a specific Stream Processing Engine (SPE). Processors contain the logic of the algorithms. _Processing items_ (PIs) are the different internal concrete implementations of processors.
Distributed systems pass content events through streams. A _stream_ is an unbounded sequence of tuples. A _tuple_ is a list of values, and each value can be of any type as long as it is serializable, i.e. dynamically typed. In other words, a stream is a connection between a PI and its corresponding destination PIs. A stream can be seen as a connector between PIs and a medium for sending content events between them. A _content event_ wraps the data transmitted from one PI to another via a stream. A source PI is called a spout. A _spout_ reads from external sources and sends content events through a stream. Each stream has one spout. A _bolt_ is a consumer of one or more input streams. Bolts can perform several functions on the input stream, such as filtering tuples, aggregating tuples, joining multiple streams, and communicating with external entities (caches or databases). Bolts need grouping mechanisms, which determine how the stream routes content events. In _shuffle grouping_, the stream routes content events in a round-robin way to the corresponding runtime PIs, meaning that each runtime PI is assigned the same number of content events from the stream. In _all grouping_, the stream replicates the content events and routes them to all corresponding runtime PIs. In _key grouping_, the stream routes each content event based on its key, meaning that content events with the same key value are always routed to the same runtime PI.
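The three grouping mechanisms can be sketched as routing functions from a content event to a list of runtime-PI indices (a hypothetical simplification of what a stream processing engine does internally; function names are ours):

```python
import itertools

def make_shuffle_grouping(n):
    """Round-robin: each runtime PI receives the same number of events."""
    rr = itertools.cycle(range(n))
    return lambda event: [next(rr)]

def make_all_grouping(n):
    """Replicate: every runtime PI receives every event."""
    return lambda event: list(range(n))

def make_key_grouping(n, key_fn):
    """Same key -> same runtime PI (stable hash of the key)."""
    return lambda event: [hash(key_fn(event)) % n]
```

A key grouping built on a user field, for example, always routes two events from the same user to the same runtime PI within a run.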
1 Discussed in more detail in Section 6.
Fig 1. Instantiation of a Stream and Examples of Groupings [2]
The transformation of a stream between spouts and bolts follows the _pull model_: each bolt pulls tuples from its source components (which can be bolts or spouts). This implies that tuple loss happens at the spouts when they are unable to keep up with external event rates.
There are two types of nodes in a Storm cluster: master nodes and worker nodes. The master node runs the Nimbus daemon, which is responsible for assigning tasks to the worker nodes. Nimbus acts as the entry point for submitting topologies for execution on the cluster and distributes the code around the cluster via supervisors. A supervisor runs on a slave node and coordinates with ZooKeeper to manage the workers. A worker corresponds to a JVM process that executes a part of the topology and comprises several executors and tasks. An executor corresponds to a thread spawned by a worker and consists of one or more tasks from the same bolt or spout. A task performs the actual data processing based on the spout/bolt implementation.
V. DISTRIBUTED LEARNING ALGORITHMS
Parallelism type refers to the way a distributed learning algorithm performs its parallelization. There are three types of parallelism: horizontal, vertical, and task parallelism. In order to go deeper into each type, we need to define a few concepts. A _processing item_ (PI) is a unit of computation (a node, a thread, or a process) that executes some part of the algorithm. A _user_ is a client (a human being or a software component such as a machine learning framework) who executes the algorithm. An _instance_ is defined as a datum in the training data; that is, the training data consists of a set of instances that arrive one at a time at the algorithm. In _horizontal parallelism_, the source PI distributes instances across replicas of the algorithm, each of which keeps a local model. The advantage of horizontal parallelism is that it is suited for very high arrival rates of instances; the algorithm is also flexible in that it allows the user to add more processing power. In _vertical parallelism_, the difference is that local-statistic PIs do not hold a local model as in horizontal parallelism; each of them stores only the sufficient statistics for the attributes assigned to it and computes the information-theoretic criteria based on those statistics. _Task parallelism_ consists of a sorter processing item and an updater-splitter processing item, which distributes the model across the available processing items.
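Vertical parallelism, for instance, amounts to partitioning an instance's attributes across local-statistic PIs; a minimal sketch (function names and the modulo assignment are our illustrative assumptions):

```python
def partition_attributes(n_attrs, n_pis):
    """Vertical parallelism: assign each attribute index to a
    local-statistic PI (here simply by modulo)."""
    return {a: a % n_pis for a in range(n_attrs)}

def split_instance(instance, assignment, n_pis):
    """Route each (attribute, value) pair of one instance to its PI's shard."""
    shards = [[] for _ in range(n_pis)]
    for a, v in enumerate(instance):
        shards[assignment[a]].append((a, v))
    return shards
```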
VI. SENTINEL: ARCHITECTURE & COMPONENTS
_A._ _Programming model_
While map-reduce is the most popular programming model for big data scenarios, it is inapplicable to data stream processing. Map-reduce operations are not I/O efficient: since map and reduce are blocking operations, a transition to the next stage cannot happen until all tasks of the current stage have finished. Consequently, pipeline parallelism cannot be achieved. Another key drawback is the poor performance of map-reduce due to the fact that all input must be prepared in advance for a map-reduce job, which causes high latency when working with map-reduce algorithms [3].
Sentinel runs on top of Apache Storm. _Apache Storm_[2] is a free and open-source distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data. Storm has many use cases: real-time analytics, online machine learning, continuous computation, distributed Remote Procedure Calls (RPC), Extract-Transform-Load (ETL), and more. Storm allows computation to happen in parallel on different nodes, which can be in different clusters. This feature enables a parallel pipeline of data, which makes Storm a perfect framework for data stream mining settings.
_B._ _Overall Architecture_
In Sentinel, the input social stream is read from the source, and instances of the stream are continuously read by an ADWIN node with an adaptive window. The ADWIN node reads the data, checks the source distribution along with the arrival rate of instances, and adapts the window size to the speed and volume of incoming instances. It then passes the instances to the data pipeline node. In the data pipeline node, the adaptive filtering component first filters the instances based on the desired attribute, converts the data into a vector format, and passes it onwards. The feature selector summarizes the text from the incoming instances[3] and passes it to the frequent item miner algorithm. The frequent item miner keeps a summary of the text tokens and the number of their appearances in the document. The resulting hash table is then sent to the Vertical Hoeffding Tree learner's node. The Source Processing Item(s) (SPI) take the input from the passed hash table and pass it to the model and its local-statistic PIs. The evaluator processing item updates the result of learning onto the synopsis. The summary of the data can be stored in the archive database at specific long intervals, e.g. once per day. This is an online real-time process, and the raw instance is never saved in any node or component. On the other end of the system, the user can query the synopsis at any time, and the result will come from the model.
2 http://storm.incubator.apache.org/
3 In a social stream, instances are in the form of documents.
Data reduction techniques are used to obtain a smaller volume of the data. Sentinel considers each document as a list of words. The adaptive filter transforms them into vectors of features, obtaining the most relevant ones. We use an incremental tf-idf weighting scheme:

f_{i,j} = freq_{i,j} / Σ_k freq_{k,j}
Fig 2. Sentinel Architecture
_C._ _ADWIN_
In proposed solution, we use sliding window models in
conjunction with learning and mining algorithms, namely
modified version of ADWIN[4] (Adaptive Windowing)
algorithm that maintains a window of variable size. _ADWIN_
grows the size of sliding window in case of no change and
shrinks it when changes appear in the stream [7].
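A simplified, self-contained version of the adaptive-window idea (our sketch, not the optimized ADWIN2 implementation, which uses exponential histograms to avoid the quadratic scan shown here) might look like:

```python
import math
from collections import deque

def adwin_update(window, x, delta=0.01):
    """Simplified ADWIN step: grow the window when no change is
    detected; drop the older sub-window when two sub-windows'
    means differ by more than a Hoeffding-style cut threshold."""
    window.append(x)
    changed = True
    while changed and len(window) > 2:
        changed = False
        for split in range(1, len(window)):
            w0 = list(window)[:split]
            w1 = list(window)[split:]
            m = 1.0 / (1.0 / len(w0) + 1.0 / len(w1))  # harmonic size
            eps = math.sqrt((1.0 / (2.0 * m)) * math.log(4.0 / delta))
            if abs(sum(w0) / len(w0) - sum(w1) / len(w1)) > eps:
                for _ in range(split):  # shrink: drop the stale sub-window
                    window.popleft()
                changed = True
                break
    return window
```

Feeding a stream whose mean jumps from 0.2 to 0.9 causes the window to shrink at the change point, so the retained window tracks the new distribution.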
_D._ _Synopses_
A synopsis data structure is substantially smaller than its base data, resides in main memory, and can be transmitted remotely at minimal cost [14].
_E._ _Data Pipeline Node_
_1)_ _Frequency and Language Filter_
In this component, after the tweets are filtered by their language[5], they are converted to sparse vectors by a tf-idf filter. This makes the instances ready for feature selection.

tf-idf_{t,d} = tf_{t,d} × idf_t

where tf_{t,d} is the frequency of term t in document d. The inverse document frequency, idf_t, is as follows:

idf(t, D) = log( N / |{d ∈ D : t ∈ d}| )

i.e. the logarithm of N, the total number of documents in the corpus, divided by the number of documents containing the term t.
_2)_ _Feature selector_
4 Called ADWIN2, which improves performance over its older version, ADWIN.

5 The language filter is based on a Naive Bayesian classifier from a language detection library with a guaranteed precision of up to 99%, available at https://code.google.com/p/languagedetection/
idf_i = log( N / n_i )

where f_{i,j} is the frequency of term i in document j, that is, the number of occurrences of term i in document j divided by the sum of the frequencies of all terms in document j, i.e. the size of the document. idf_i is the inverse document frequency of term i, N is the number of documents, and n_i is the number of documents containing term i. This approach improves the performance of using the synopsis by keeping only the most relevant words within a document in the synopsis data structure.
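A minimal incremental tf-idf sketch along these lines (our own illustration; the class name `IncrementalTfIdf` is a hypothetical assumption, not from the paper):

```python
import math
from collections import Counter, defaultdict

class IncrementalTfIdf:
    """Incremental tf-idf: document frequencies n_i are updated per
    document, so weights are computed in one pass over the stream."""

    def __init__(self):
        self.n_docs = 0
        self.doc_freq = defaultdict(int)  # n_i: docs containing term i

    def transform(self, tokens):
        tf = Counter(tokens)
        total = sum(tf.values())          # size of the document
        self.n_docs += 1
        for term in tf:
            self.doc_freq[term] += 1
        return {
            t: (c / total) * math.log(self.n_docs / self.doc_freq[t])
            for t, c in tf.items()
        }
```

A term occurring in every document seen so far gets weight log(N/N) = 0, so ubiquitous words drop out of the synopsis automatically.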
_3)_ _Frequent Item Miner_
Several algorithms have been proposed to balance the unbounded nature of data streams against the finite storage capacity of a computing machine. The core idea behind such algorithms is that only a portion of the stream gets stored. Frequent item miners fall into three categories: _sampling-based_, _counting-based_, and _hashing-based_ algorithms. For our purpose, we store the tokens with their number of appearances in the stream; therefore, the frequent item miner algorithm needs to be from the counting-based category. In this study, based on the precision and recall results of different counting-based algorithms reported in [4], the _Space Saving_ algorithm was chosen, as its recall and precision are close to 92%.
_Space Saving_ was proposed for hot-list queries under some assumptions on the distribution of the input stream data. The Space-Saving algorithm is designed to accurately estimate the frequencies of significant elements and to store these frequencies in an always-sorted structure. The gain in using Space-Saving in our proposed solution is that it not only returns ε-deficient frequent items for queries, but also guarantees and sorts the top-k items for hot-list queries under appropriate assumptions on the distribution of the input.
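The counter-eviction core of Space Saving can be sketched as follows (a simplification: the real algorithm uses a stream-summary structure to find the minimum counter in O(1), rather than the linear `min` used here):

```python
def space_saving(stream, k):
    """Space-Saving sketch: keep at most k counters; an unseen item
    evicts the current minimum counter and inherits its count, which
    bounds the over-estimation of any reported frequency."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            victim = min(counters, key=counters.get)
            counters[item] = counters.pop(victim) + 1
    return counters
```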
_F._ _Vertical Hoeffding Tree Node_
The Vertical Hoeffding Tree (VHT) [2] is our choice of distributed stream mining classifier. VHT is based on VFDT (Very Fast Decision Tree) with vertical parallelization. In social stream settings, the stream mining algorithm is applied to instances in the form of documents. Social streams involve instances with a high number of attributes; therefore, VHT is a suitable candidate due to its vertical parallelism approach. VHT's vertical parallelism brings advantages over the other parallelism types in this context. Also, since the learning algorithm is based on VFDT, it applies well to social streams with a high arrival speed of data instances.
Fig. 3: Vertical Hoeffding Tree: Components & Process [2]
In the VHT algorithm, each node has a corresponding number that represents its level of parallelism. The model-aggregator PI contains the decision tree model and connects to the local-statistic PIs through an _attribute_ stream and a _control_ stream. The model aggregator splits instances by attribute, and each local-statistic PI contains the local statistics for the attributes assigned to it. The model-aggregator PI sends the split instances through the attribute stream and sends control messages via the control stream to ask the local-statistic PIs to perform computation.
The model aggregator receives _instance_ content events from the source PI and extracts the instances from the content events. After that, the model-aggregator PI splits each instance based on attribute and sends it over the attribute stream to update the sufficient statistics for the corresponding leaf. When a local-statistic PI receives an attribute content event, it updates its corresponding local statistic. To perform this, it keeps a data structure that stores local statistics keyed by leaf ID and attribute ID. When it is time to grow the tree, the model aggregator sends a compute content event via the _control_ stream to the local-statistic PIs. Upon receiving a compute content event, each local-statistic PI calculates G(X_i)[6] to determine the best and second-best attributes. The next part of the algorithm updates the model once it has received all computation results from the local statistics.
Whenever the algorithm receives a local-result content event, it retrieves the correct leaf l from the list of splitting leaves. Then it updates the current best attribute X_a and second-best attribute X_b. Once all local results have arrived at the model-aggregator PI, the algorithm computes the Hoeffding bound and decides whether or not to split leaf l. To handle stragglers, the model-aggregator PI has a time-out mechanism for waiting on computation results. If a time-out occurs, the algorithm uses the current X_a and X_b to compute the Hoeffding bound and make the decision[7].
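The split decision itself reduces to comparing the gain difference G(X_a) − G(X_b) against the Hoeffding bound ε = sqrt(R² ln(1/δ) / 2n). A sketch (ours; the tie-breaking threshold is a common VFDT convention assumed here, not stated in the text):

```python
import math

def hoeffding_bound(value_range, delta, n):
    """epsilon = sqrt(R^2 * ln(1/delta) / (2n)) for n observations of a
    statistic with range R, holding with probability 1 - delta."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(g_best, g_second, value_range, delta, n, tie_threshold=0.05):
    """Split leaf l when G(X_a) - G(X_b) exceeds the bound, or when the
    bound is so small that the two attributes are effectively tied."""
    eps = hoeffding_bound(value_range, delta, n)
    return (g_best - g_second) > eps or eps < tie_threshold
```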
6 G is the information gain measure; it can be replaced with other statistical measures such as the Gini index or gain ratio.

7 See [2] for complete pseudocode of the different steps of the algorithm.
VII. CASE STUDY: TWITTER PUBLIC STREAM API
Twitter currently provides a Stream API and two discrete REST APIs. Through the Stream API, users obtain real-time access to tweets. The Twitter public Stream API[8] is the social stream on which we showcase Sentinel. A common problem in unbalanced data streams such as Twitter is that classifiers show high accuracy, close to 90%, because a large portion of the instances falls into one of the classification classes. This is even more apparent in the Twitter stream; however, as mentioned before, in projects such as Sentiment140[9], the data does not constitute a representative sample of the real Twitter stream, because the data has been pre-processed, balanced, and shrunk in size to obtain a balanced sample.
_A._ _Training_
In this study, we converted the raw data into a new vector format as in [6]. For this case study in particular, the data pipeline performs feature reduction and labeling via emoticons during the model's training phase as follows:

- _Feature Reduction: The data pipeline replaces words starting with the @ symbol with the token USER, and URLs within the same tweet with the token URL._
- _Emoticons: The data pipeline uses emoticons to generate class labels during the training phase of the classifier; after that, all emoticons are deleted._
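A rough sketch of this preprocessing (the regexes and the emoticon lists are our illustrative assumptions; the paper does not specify them):

```python
import re

# Hypothetical emoticon lists; the paper does not enumerate them.
POSITIVE = (":)", ":-)", ":D")
NEGATIVE = (":(", ":-(")

def preprocess(tweet):
    """Replace mentions/URLs with tokens, derive a label from emoticons,
    then strip the emoticons so the classifier cannot rely on them."""
    text = re.sub(r"@\w+", "USER", tweet)
    text = re.sub(r"https?://\S+", "URL", text)
    label = None
    for emo in POSITIVE + NEGATIVE:
        if emo in text:
            label = "pos" if emo in POSITIVE else "neg"
            text = text.replace(emo, "")
    return text.strip(), label
```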
To measure the accuracy and performance of the learning algorithm, a forgetting mechanism with a sliding window of the most recent observations can be used [6]. It was shown that prequential evaluation is not a reliable measure for unbalanced data streams, and the Kappa statistic was proposed as an evaluation measure instead. In this study, we show that this issue is addressed by using the Kappa statistic over a sliding window [5].
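The Kappa statistic over a sliding window can be computed directly from recent (prediction, truth) pairs; a sketch (our own illustration):

```python
from collections import deque

def sliding_kappa(pairs, window=10000):
    """Kappa over the most recent `window` (prediction, truth) pairs:
    kappa = (p0 - pc) / (1 - pc), where p0 is observed agreement and
    pc is the agreement expected by chance."""
    recent = deque(pairs, maxlen=window)
    n = len(recent)
    p0 = sum(p == t for p, t in recent) / n
    labels = {l for pair in recent for l in pair}
    pc = sum(
        (sum(p == l for p, _ in recent) / n) *
        (sum(t == l for _, t in recent) / n)
        for l in labels
    )
    return 1.0 if pc == 1.0 else (p0 - pc) / (1.0 - pc)
```

Unlike raw accuracy, a classifier that always predicts the majority class on a 90/10 stream scores kappa ≈ 0, which is why Kappa is preferred for unbalanced streams.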
_B._ _Experiment_
For the experiment, we ran Sentinel on a three-node cluster. Each node had 48 GB of RAM and a quad-core Intel Xeon 2.90 GHz CPU with 8 processors. We ran the experiment with a sample of 1 million tweet instances, filtered to English tweets only. The three learning algorithms used were Multinomial Naïve Bayes, Hoeffding Tree, and Vertical Hoeffding Tree. Had we followed an offline approach, each million tweets would have required 1 GB of disk space; since we follow an online approach, no disk storage is needed. Because the iPhone 6 was released close to the date of this publication, we focused on performing sentiment analysis on the newest iPhone. We trained our three learners with the query "iPad" and tested the
model with the query "iOS 8". It should be noted that, generally, on Twitter and most social networks, users express more positive, or almost positive, sentiment rather than negative sentiment.

8 https://dev.twitter.com/docs/streaming-apis/streams/public

9 http://www.sentiment140.com/
TABLE I. MEASURES OF DIFFERENT CLASSIFIERS IN SENTINEL

|Classifiers|Kappa|Time|
|---|---|---|
|Multinomial Naïve Bayes|57.78%|3123 sec.|
|Hoeffding Tree|66.20%|4017 sec.|
|Vertical Hoeffding Tree|78.57%|1309 sec.|
As Table 1 shows, the Vertical Hoeffding Tree performs significantly better in both accuracy and time compared to the other classifiers. The Multinomial Naïve Bayes classifier is faster than the Hoeffding Tree, but it is less accurate. One of the main reasons is that VHT is based on VFDT. Figure 4 shows the learning curves of the classifiers with a sliding window of 10,000 instances per window. It should be mentioned that, due to memory and speed limitations in labeling stream instances in online approaches, the learners achieve less accurate results compared to offline approaches.
Fig 4. Sliding-window Kappa statistic (%) per million instances
VIII. CONCLUSION
In this study, we presented a distributed system to perform real-time sentiment analysis. After discussing data stream mining and distributed data mining systems, we described the different components of the system. The learning algorithm of the solution is based on the Vertical Hoeffding Tree, a parallel decision tree classifier. We ran the solution against the Multinomial Naïve Bayes and Hoeffding Tree classifiers and compared the results, which showed significant improvements in both accuracy and performance over uniprocessor classifiers.
REFERENCES
[1] D. Mena-Torres and J. S. Aguilar-Ruiz, "A similarity-based approach for data stream classification," _Expert Systems with Applications_ 41.9 (2014): 4224-4234.
[2] A. Murdopo et al., "SAMOA," 2013.
[3] C. Bockermann and H. Blom, "Processing data streams with the RapidMiner Streams plugin," _Proceedings of the RapidMiner Community Meeting and Conference_, 2012.
[4] H. Liu, Y. Lin, and J. Han, "Methods for mining frequent items in data streams: an overview," _Knowledge and Information Systems_ 26.1 (2011): 1-30.
[5] A. Bifet et al., "MOA: Massive online analysis," _The Journal of Machine Learning Research_ 11 (2010): 1601-1604.
[6] J. Gama, R. Sebastião, and P. P. Rodrigues, "Issues in evaluation of stream learning algorithms," _Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, ACM, 2009.
[7] A. Bifet and R. Gavaldà, "Learning from time-changing data with adaptive windowing," _SDM_, Vol. 7, 2007.
[8] A. Metwally, D. Agrawal, and A. El Abbadi, "Efficient computation of frequent and top-k elements in data streams," _Database Theory - ICDT 2005_, Springer Berlin Heidelberg, 2005, 398-412.
[9] C. C. Aggarwal et al., "On demand classification of data streams," _Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, ACM, 2004.
[10] C. C. Aggarwal et al., "A framework for clustering evolving data streams," _Proceedings of the 29th International Conference on Very Large Data Bases_, VLDB Endowment, 2003.
[11] L. O'Callaghan et al., "Streaming-data algorithms for high-quality clustering," _Proceedings of the 18th International Conference on Data Engineering (ICDE)_, IEEE Computer Society, 2002.
[12] T. G. Dietterich, "Machine learning for sequential data: a review," _Structural, Syntactic, and Statistical Pattern Recognition_, Springer Berlin Heidelberg, 2002, 15-30.
[13] S. Guha et al., "Clustering data streams," _Proceedings of the 41st Annual Symposium on Foundations of Computer Science_, IEEE, 2000.
[14] N. Alon et al., "Tracking join and self-join sizes in limited storage," _Proceedings of the 18th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems_, ACM, 1999.
[15] H. Toivonen, "Sampling large databases for association rules," _VLDB_, Vol. 96, 1996.
|Classifiers|Accuracy Measures & Processing Time|Col3|
|---|---|---|
||Kappa|Time|
|Multinomial Naïve Bayes|57.78%|3123 sec.|
|Hoeffding Tree|66.20%|4017 sec.|
|Vertical Hoffding Tree|78.57%|1309 sec.|
-----
|
{
"disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1612.08543, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.",
"license": null,
"status": "GREEN",
"url": "https://arxiv.org/pdf/1612.08543"
}
| 2,014
|
[
"JournalArticle",
"Conference"
] | true
| 2014-11-01T00:00:00
|
[
{
"paperId": "236fc9a2493322a188976f6b9e4a1e365ef45491",
"title": "A similarity-based approach for data stream classification"
},
{
"paperId": "daedbcd23ed448f815694f7032f62ce482cd669f",
"title": "MOA: Massive Online Analysis"
},
{
"paperId": "38eceb35d1518bafe6fa043effd0001cb3021952",
"title": "Methods for mining frequent items in data streams: an overview"
},
{
"paperId": "bf22b3489072b51b5b4c6059088be014a0371cbb",
"title": "Issues in evaluation of stream learning algorithms"
},
{
"paperId": "72f15aba2e67b1cc9cd1fb12c99e101c4c1aae4b",
"title": "Efficient Computation of Frequent and Top-k Elements in Data Streams"
},
{
"paperId": "2d6922f366139ff7680b286f9c0b6fd5be6d5960",
"title": "On demand classification of data streams"
},
{
"paperId": "aa5d8f3af3a93de7506262e0faadb6d4ca90766d",
"title": "Proceedings of the 29th international conference on Very large data bases - Volume 29"
},
{
"paperId": "cbc2b5ada23f785b7897a498a2697c4109be616f",
"title": "A Framework for Clustering Evolving Data Streams"
},
{
"paperId": "dfeb03d7dd83d3b0ba19629a873c13c9e6cb60b8",
"title": "Streaming-data algorithms for high-quality clustering"
},
{
"paperId": "23c3953fb45536c9129e86ac7a23098bd9f1381d",
"title": "Machine Learning for Sequential Data: A Review"
},
{
"paperId": "5eb75ac359fbd32c378a783741d7543186fe58d8",
"title": "Clustering data streams"
},
{
"paperId": "5f0675b2d925a3187017dfa1d1dbf43c55a4b8ff",
"title": "Tracking join and self-join sizes in limited storage"
},
{
"paperId": "e8d88bb7d1141539224cb94d3ab6eda3fd1dfa0b",
"title": "Sampling Large Databases for Association Rules"
},
{
"paperId": "98f2cd2dd9561af3e383ecde3e6da6d3418e4753",
"title": "SAMOA"
},
{
"paperId": "fb152fb31d567ad281bbfdaf70850c394a93c664",
"title": "Processing Data Streams with the RapidMiner Streams Plugin"
},
{
"paperId": "d7f8d7b89593c5333eb174b2411bf004f5a91f7d",
"title": "Learning from Time-Changing Data with Adaptive Windowing"
},
{
"paperId": "10de04b6920c7d9ac76492a138b147e4636a583f",
"title": "Structural, Syntactic, and Statistical Pattern Recognition"
},
{
"paperId": null,
"title": "Foundations of computer science, 2000. proceedings. 41st annual symposium on"
},
{
"paperId": null,
"title": "language filter is based on a Naive Bayesian Classifier from language detection library with a guarantee of precision up to 99%"
}
] | 7,379
|
en
|
[
{
"category": "Physics",
"source": "external"
},
{
"category": "Computer Science",
"source": "external"
},
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Physics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/033f1faa87287e00944fe8b663ef9e6d9922d5c2
|
[
"Physics",
"Computer Science"
] | 0.826483
|
One-Time Universal Hashing Quantum Digital Signatures without Perfect Keys
|
033f1faa87287e00944fe8b663ef9e6d9922d5c2
|
Physical Review Applied
|
[
{
"authorId": "2145728672",
"name": "Bing-Hong Li"
},
{
"authorId": "153432450",
"name": "Yuan-Mei Xie"
},
{
"authorId": "2149214202",
"name": "Xiao-Yu Cao"
},
{
"authorId": "2109407472",
"name": "Chen-Long Li"
},
{
"authorId": "143785074",
"name": "Yao Fu"
},
{
"authorId": "6116902",
"name": "Hua‐Lei Yin"
},
{
"authorId": "2145421885",
"name": "Zenghu Chen"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Phys Rev Appl",
"Phys rev appl",
"Physical review applied"
],
"alternate_urls": [
"https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.7.064018"
],
"id": "8524a479-f8d2-45d7-a5d1-b5285ba750e2",
"issn": "2331-7019",
"name": "Physical Review Applied",
"type": "journal",
"url": "https://journals.aps.org/prapplied"
}
|
Quantum digital signatures (QDS), generating correlated bit strings among three remote parties for signatures through quantum law, can guarantee non-repudiation, authenticity, and integrity of messages. Recently, one-time universal hashing QDS framework, exploiting the quantum asymmetric encryption and universal hash functions, has been proposed to significantly improve the signature rate and ensure unconditional security by directly signing the hash value of long messages. However, similar to quantum key distribution, this framework utilizes keys with perfect secrecy by performing privacy amplification that introduces cumbersome matrix operations, thereby consuming large computational resources, causing delays and increasing failure probability. Here, we prove that, different from private communication, imperfect quantum keys with limited information leakage can be used for digital signatures and authentication without compromising the security while having eight orders of magnitude improvement on signature rate for signing a megabit message compared with conventional single-bit schemes. This study significantly reduces the delay for data postprocessing and is compatible with any quantum key generation protocols. In our simulation, taking two-photon twin-field key generation protocol as an example, QDS can be practically implemented over a fiber distance of 650 km between the signer and receiver. For the first time, this study offers a cryptographic application of quantum keys with imperfect secrecy and paves a way for the practical and agile implementation of digital signatures in a future quantum network.
|
## One-Time Universal Hashing Quantum Digital Signatures without Perfect Keys
Bing-Hong Li,[1] Yuan-Mei Xie,[1] Xiao-Yu Cao,[1] Chen-Long Li,[1] Yao Fu,[2,][ ∗] Hua-Lei Yin,[1,][ †] and Zeng-Bing Chen[1,][ ‡]
1National Laboratory of Solid State Microstructures and School of Physics,
_Collaborative Innovation Center of Advanced Microstructures, Nanjing University, Nanjing 210093, China_
2Beijing National Laboratory for Condensed Matter Physics and Institute of Physics,
_Chinese Academy of Sciences, Beijing 100190, China_
(Dated: October 6, 2023)
Quantum digital signatures (QDS), generating correlated bit strings among three remote parties
for signatures through quantum law, can guarantee non-repudiation, authenticity, and integrity of
messages. Recently, one-time universal hashing QDS framework, exploiting the quantum asymmetric encryption and universal hash functions, has been proposed to significantly improve the signature
rate and ensure unconditional security by directly signing the hash value of long messages. However,
similar to quantum key distribution, this framework utilizes keys with perfect secrecy by performing privacy amplification that introduces cumbersome matrix operations, thereby consuming large
computational resources, causing delays, and increasing failure probability. Here, we prove that,
different from private communication, imperfect quantum keys with partial information leakage can
be used for digital signatures and authentication without compromising the security while having
eight orders of magnitude improvement on signature rate for signing a megabit message compared
with conventional single-bit schemes. This study significantly reduces the delay for data postprocessing and is compatible with any quantum key generation protocols. In our simulation, taking
two-photon twin-field key generation protocol as an example, QDS can be practically implemented
over a fiber distance of 650 km between the signer and receiver. For the first time, this study offers a
cryptographic application of quantum keys with imperfect secrecy and paves a way for the practical
and agile implementation of digital signatures in a future quantum network.
**I.** **INTRODUCTION**
Digital signatures are cryptographic primitives that offer data authenticity and integrity [1], especially the non-repudiation of sensitive information. They have become an indispensable technique on the global internet owing to their wide application, especially in digital financial transactions, email, and digital currency. However, the security of classical digital signatures, guaranteed by public-key infrastructure [2–4], is threatened by
rapidly developing algorithms [5, 6] and quantum computing [7]. Different from classical digital signatures,
quantum digital signatures (QDSs) can provide a higher
level of security, information-theoretic security, by employing the fundamental principles of quantum mechanics. That is, QDS can protect data integrity, authenticity,
and non-repudiation even if the attacker utilizes unlimited computational power. The rudiment of the single-bit
QDS scheme was introduced in 2001 [8], but it could not
be implemented due to some impractical requirements
such as high-dimensional single-photon states and quantum memories. Subsequently, there have been many developments to remove these impractical requirements [9–
11], making QDS closer to real implementation. Furthermore, based on non-orthogonal encoding [12] and orthogonal encoding [13], respectively, two independent single-bit QDS protocols without secure quantum channels were proposed and proved to be secure for the first time.
These two protocols have triggered numerous achievements of single-bit QDS theoretically [14–25] and experimentally [26–36].
Nonetheless, all these schemes still have several limitations. Protocols utilizing orthogonal encoding require additional symmetrization steps, which result in extra secure channels [13]. Therefore, to guarantee
information-theoretic security, quantum key distribution
(QKD) and one-time pad encryption are required between two receivers in orthogonal-type protocols [28, 29].
Single-bit QDS schemes based on non-orthogonal encoding [12, 20, 23] do not require additional QKD channels, but their signature rate is sensitive to the misalignment error of the quantum channel. In addition, all these
schemes can sign only a one-bit message in each round.
If one wants to sign a multi-bit message using single-bit
QDS schemes, he needs to encode it into a new message
string and sign the new string bit by bit [22, 37–41]. However, these solutions have not been completely proved as
information-theoretically secure with the quantified failure probability, and the signature rate is very low and
far from implementation for long messages with a lot of
bits.
Recently, an efficient QDS scheme has been proposed
based on secret sharing, one-time pad, and one-time universal hashing (OTUH) [42]. Different from single-bit QDS protocols that require a long key string to sign a one-bit message, this OTUH-QDS protocol offers a method to directly sign the hash value of multi-bit messages through one key string with information-theoretic security, and thus drastically improves the QDS efficiency.

_∗_ [yfu@iphy.ac.cn](mailto:yfu@iphy.ac.cn)
_[† hlyin@nju.edu.cn](mailto:hlyin@nju.edu.cn)_
_[‡ zbchen@nju.edu.cn](mailto:zbchen@nju.edu.cn)_

However, this framework requires perfect keys
with complete secrecy, which is an expensive resource
guaranteed by the complete procedure of QKD or quantum secret sharing (QSS). Accordingly, privacy amplification steps are required, adding to the complexity of the algorithm and causing intolerable delays.
Here, we point out that quantum keys with imperfect secrecy are adequate for protecting the authenticity and integrity of messages in such a digital signature scheme. Accordingly, we propose a new OTUH-QDS protocol with imperfectly secret keys, utilizing only
asymmetric quantum keys without perfect secrecy to sign
multi-bit messages. We demonstrate that our proposed
scheme provides information-theoretic security for digital signature tasks and simulate the performance of our
protocol. The result reveals that our protocol outperforms other QDS schemes in terms of signature rate and
transmission distance. In a practical case of signing a megabit message, the proposed scheme achieves a signature rate nearly eight orders of magnitude higher than single-bit QDS schemes, owing to its robustness against message size. Moreover, we show that our
scheme can significantly reduce the computational costs
and delays of postprocessing owing to the removal of privacy amplification. Furthermore, the proposed scheme is
a general framework that can be applied to all existing
QKD protocols. When utilizing the idea of two-photon
twin-field QKD [43], one of the most efficient QKD protocols, to execute our work, a transmission distance of 650
km can be achieved with a signature rate of 0.01 times
per second.
To date, almost all quantum communication protocols
such as QKD [44–54], QSS [55–57], and quantum conference key agreement [55, 58] aim at generating quantum
states among the parties and extracting keys with perfect secrecy through complex postprocessing steps. Thereafter, these keys are used to complete the corresponding cryptographic tasks such as private communication,
secret sharing, and group encryption. In contrast, the
proposed protocol offers a new approach to digital signature tasks that only require keys with imperfect secrecy
through quantum optical communication. The troublesome postprocessing steps are thus removed without relaxing the security assumptions. This is the first instance
of applying this kind of keys to cryptographic tasks with
information-theoretic security. We believe that our proposed solution can provide a feasible approach to the
practical application of QDS and enlighten other applications of quantum keys with imperfect secrecy in a future
quantum communication network.
The remainder of this paper is organized as follows.
In Sec. II we review the OTUH-QDS scheme and introduce
the motivation of this work. In Sec. III we propose our
protocol with two approaches of universal hashing. In
Sec. IV we give the security proof of authentication based
on quantum keys with imperfect secrecy and then, the
security analysis of the proposed QDS protocol. In Sec. V
we discuss the performance of the proposed scheme and
compare it with both single-bit QDS and OTUH-QDS
schemes. Finally, we conclude the paper in Sec. VI.
**II.** **PRELIMINARIES**
**A.** **OTUH-QDS protocol**
The schematic of OTUH-QDS [42] is reviewed herein. The protocol can be segmented into a distribution stage and a messaging stage, consistent with the single-bit QDS introduced in Appendix A 1. The length of the message is denoted as m. The schematic of OTUH-QDS is shown in Fig. 1(a).
_1._ _Distribution stage_
Alice, Bob, and Charlie each have two key bit strings {Xa, Xb, Xc} with n bits and {Ya, Yb, Yc} with 2n bits, satisfying the perfect correlations Xa = Xb ⊕ Xc and Ya = Yb ⊕ Yc, respectively. The distribution stage can be realized using quantum communication protocols, such as QKD and QSS. It should be mentioned that single-bit QDS requires only the quantum part of QKD protocols, also referred to as the key generation protocol (KGP). In OTUH-QDS, the users need to perform additional error correction and privacy amplification steps after KGP.
_2._ _Messaging stage_
(i) Signing of Alice—First, Alice uses a local quantum random number, characterized by an n-bit string pa, to randomly generate an irreducible polynomial p(x) of degree n [59]. Second, she uses the initial vector (key bit string Xa) and the irreducible polynomial (quantum random number pa) to generate a random linear feedback shift register-based (LFSR-based) Toeplitz matrix [60] Hnm, with n rows and m columns. Third, she uses a hash operation Hash = Hnm · Doc to acquire an n-bit hash value of the m-bit document. Fourth, she exploits the hash value and the irreducible polynomial to constitute the 2n-bit digest Dig = (Hash||pa). Fifth, she encrypts the digest with her key bit string Ya to obtain the 2n-bit signature Sig = Dig ⊕ Ya using one-time pad (OTP). Finally, she uses the public channel to send the signature and document {Sig, Doc} to Bob. Note that Sig includes the information of the irreducible polynomial chosen for the hashing.
(ii) Verification of Bob—Bob uses the authenticated classical channel to transmit the received {Sig, Doc}, as well as his key bit strings {Xb, Yb}, to Charlie. Thereafter, Charlie uses the same authenticated channel to forward his key bit strings {Xc, Yc} to Bob. Bob obtains two new key bit strings {KXb = Xb ⊕ Xc, KYb = Yb ⊕ Yc} by the XOR operation. Bob exploits KYb to obtain an expected digest and bit string pb via XOR decryption.
FIG. 1. (a) OTUH-QDS [42]. In the distribution stage, Alice, Bob and Charlie share key bit strings with a perfect secret sharing relationship through the key generation protocol (KGP), error correction and privacy amplification. In the messaging stage Alice generates the signature through AXU hashing, and sends the message and signature to Bob. Bob then sends his keys and the received information to Charlie, who later sends his keys to Bob. Ultimately, Bob and Charlie use their own and the received keys to infer Alice's keys and then perform AXU hashing to verify the signature. (b) Schematic of the proposed protocol. In the distribution stage, the users only perform KGP and error correction to share keys with full correctness but some secrecy leakage. Their keys still hold a secret sharing relationship. In the messaging stage the manipulation of classical information is analogous to that in OTUH-QDS.
Bob utilizes the initial vector KXb and irreducible polynomial pb to establish an LFSR-based Toeplitz matrix.
He uses a hash operation to acquire an n-bit hash value
and then constitutes a 2n-bit actual digest. Bob will
accept the signature if the actual digest is equal to the
expected digest. Then, he informs Charlie of the result.
Otherwise, Bob rejects the signature and announces to
abort the protocol.
(iii) Verification of Charlie—If Bob announces that he accepts the signature, Charlie then uses his original keys along with the keys sent to Bob to create two new key bit strings {KXc = Xb ⊕ Xc, KYc = Yb ⊕ Yc}. Charlie employs KYc to acquire an expected digest and bit string pc via XOR decryption. Charlie uses a hash operation to obtain an n-bit hash value and then constitutes a 2n-bit actual digest, where the hash function is an LFSR-based Toeplitz matrix generated by the initial vector KXc and irreducible polynomial pc. Charlie accepts the signature only if the two digests are identical; otherwise, Charlie rejects the signature.
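The three-party data flow above can be sketched in a few lines. Assumptions are labeled explicitly: `toy_hash` is a keyed BLAKE2b stand-in for the LFSR-based Toeplitz (AXU) hash, chosen only to make the flow runnable, and all names are illustrative rather than from the paper.

```python
import hashlib
import secrets

def xor(u: bytes, v: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(u, v))

def toy_hash(key: bytes, msg: bytes) -> bytes:
    # Stand-in for the AXU (LFSR-based Toeplitz) hash: any function fully
    # determined by `key` suffices to illustrate the data flow.
    return hashlib.blake2b(msg, key=key, digest_size=16).digest()

n = 16  # bytes per key string
# Distribution stage (abstracted): X_a = X_b ^ X_c and Y_a = Y_b ^ Y_c.
x_b, x_c = secrets.token_bytes(n), secrets.token_bytes(n)
y_b, y_c = secrets.token_bytes(n), secrets.token_bytes(n)
x_a, y_a = xor(x_b, x_c), xor(y_b, y_c)

# (i) Alice signs: digest keyed by X_a, then one-time pad with Y_a.
doc = b"transfer 100 units to Bob"
sig = xor(toy_hash(x_a, doc), y_a)

# (ii)/(iii) Bob and Charlie exchange their key strings; each reconstructs
# Alice's keys by XOR and checks the signature independently.
def verify(x_mine, y_mine, x_other, y_other, doc, sig) -> bool:
    kx, ky = xor(x_mine, x_other), xor(y_mine, y_other)   # = X_a, Y_a
    return xor(sig, ky) == toy_hash(kx, doc)

assert verify(x_b, y_b, x_c, y_c, doc, sig)   # Bob accepts
assert verify(x_c, y_c, x_b, y_b, doc, sig)   # Charlie accepts
```

Note that neither receiver can evaluate the hash before obtaining the other's shares, which mirrors the forgery protection argued below.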
The core point of this protocol is to realize the perfect bit correlation among the three parties, construct a completely asymmetric key relationship for them, and perform one-time almost XOR universal_2 (AXU) hashing, specifically LFSR-based Toeplitz hashing, to generate the signature. AXU hash functions are a special class of hash functions that map an input value of arbitrary length into an almost random hash value of a preset length [61]. The signature generated in OTUH-QDS is simply the AXU hash value of the long message to be signed, where the AXU hash function is determined using only one string of Alice's keys. After the distribution stage, Alice's, Bob's and Charlie's keys are completely secret and correct, with the relationship of secret sharing. Bob (Charlie) can only obtain Alice's keys after he receives the keys of Charlie (Bob). Thus Bob can obtain no information about Alice's keys, which decide the AXU hash function, before transferring the message and signature to Charlie. Accordingly, Bob's forging attack under this protocol is equivalent to an attack against an authentication scenario where Alice sends an authenticated message to Charlie. It has been proved that such a message authentication scheme based on AXU hashing is information-theoretically secure [60]. Consequently, the forging attack is prevented by one-time AXU hash functions and the key relationship among the three parties. From the perspective of Alice, Bob's and Charlie's keys are totally symmetric when they verify the signature. Thus, Alice's repudiation attack is prevented as well.
**B.** **Motivation of this work**
Different from all single-bit QDS protocols that require a long key string to sign a one-bit message, OTUH-QDS offers a method to sign multi-bit messages through one key string with information-theoretic security, and thereby drastically improves the QDS efficiency. Essentially, this advantage is introduced by AXU hash functions, which had been proved information-theoretically secure only under perfectly secret keys in previous studies. Thus, compared with single-bit QDS, OTUH-QDS requires extra error correction and privacy amplification steps to realize the perfect bit correlation in the distribution stage. These postprocessing steps, especially privacy amplification, involve multiplications on matrices whose dimensions are comparable to the data size, which introduces heavy computational costs and unpleasant delays in practical scenarios. For large-size data, the delays become intolerable and constrain the practicality.
The process of AXU hashing is equivalent to a scenario where the input value decides the function, mapping the initial input keys into almost random output hash values. We notice that partial secrecy leakage of the input value (keys) will be concealed in the AXU hash value because of its randomness. Thus, imperfect keys with partial secrecy leakage will not undermine the authenticity of messages in a QDS scheme like OTUH-QDS. Moreover, the integrity of messages is also not compromised. Based on this concept, in this paper we propose a solution for OTUH-QDS protocols with imperfectly secret keys. In other words, we implement QDS with quantum keys without privacy amplification. As the additional computational cost and delays of OTUH-QDS are primarily introduced by privacy amplification, this concept can effectively remove the weaknesses of OTUH-QDS and lay the groundwork for the future implementation of QDS in a quantum network.
The schematic of the proposed protocol is illustrated in Fig. 1(b). In the distribution stage, users only perform the error correction step after KGP, ensuring that their keys have no mismatches, and build a secret sharing relationship through Alice's XOR operation. The final keys are randomly divided into several n-bit groups for AXU hashing. Each of these groups of keys is fully correct but has some secrecy leakage, with an upper bound that can be estimated through finite-size analysis using experimental data from KGP. In the messaging stage, the rules of information exchange are consistent with those in OTUH-QDS. We prove that the bit strings generated in our distribution stage are sufficient for AXU hashing and quantify the security bound in Sec. IV. In addition, we give two solutions based on different types of AXU hash functions.
**III.** **QDS PROTOCOL**
A schematic of the setup of the proposed QDS protocol is illustrated in Fig. 2.
**A.** **Distribution stage**
Our proposal is a general framework in which the KGP can be derived from any type of QKD protocol. As an example, the proposed scheme is demonstrated based on two-photon twin-field (TP-TF) QKD [43]. In the distribution stage, Alice–Bob and Alice–Charlie independently implement TP-TF KGP (TP-TFKGP for simplicity) to share key bit strings. We remark that in this three-party protocol the processes of Alice–Bob and Alice–Charlie are independent and can be performed separately. The difficulty of the experiment is the same as for two-party QKD protocols. Specifically, TP-TFKGP utilizes the idea of two-photon interference to distribute quantum states. Consequently, the performance is independent of the probability and intensity choices of each user, while having high misalignment error tolerance. The protocol is thus unaffected by the addition or deletion of users (as long as the number of users is no less than three), highly versatile and suitable for future quantum metropolitan networks.
_1. Preparation._ At each time bin i ∈ {1, 2, ..., N}, Alice and Bob (Alice and Charlie) each independently prepare a weak coherent pulse |e^(i(θ_x^i + r_x^i π)) √(k_x^i)⟩ with probability p_{k_x}, where the subscript x ∈ {a, b, c} represents the user Alice, Bob or Charlie, and the phase θ_x^i ∈ [0, 2π), classical bit r_x^i ∈ {0, 1}, and intensity k_x^i ∈ {μ_x, ν_x, o_x, ô_x} (representing the signal, decoy, preserve-vacuum and declare-vacuum intensities, with μ_x > ν_x > o_x = ô_x = 0) are chosen randomly. Then Alice and Bob (Alice and Charlie) transmit the corresponding pulses to the untrusted relay Eve through insecure quantum channels, respectively. In addition, they send a bright reference light to Eve to measure the phase noise difference ϕ_ab^i (ϕ_ac^i).
_2. Measurement._ Eve performs interference measurements on every received pulse pair with a beam splitter and two detectors. If one and only one detector clicks, Eve announces that she obtained a successful detection event and which detector clicked. In the following, we use braces containing the users' intensity choices to distinguish these events. For example, {μ_a, o_b} represents the events in which Alice selects the signal intensity and Bob selects the vacuum intensity.
_3. Sifting._ Here we only list the sifting process between Alice and Bob for simplicity, since Alice–Bob and Alice–Charlie are symmetric. Alice and Charlie sift their successful detection events following the same approach. All successful events are segmented into two parts. The first part consists of events in which neither Alice nor Bob selects the decoy or declare-vacuum intensity, i.e., {μ_a, o_b}, {μ_a, μ_b}, {o_a, μ_b}, and {o_a, o_b}, which will be used for generating data in the Z basis to form the key. The other successful events, i.e., the second part, are used for estimating
FIG. 2. Schematic of the setup of the proposed QDS protocol. The red lines represent quantum optical channels in the distribution stage, and the black arrow lines represent the information exchange through classical authenticated channels in the messaging stage. (i) In the distribution stage, Alice, Bob and Charlie each utilize a narrow-linewidth continuous-wave laser, intensity modulator (IM), phase modulator (PM), arbitrary wave generator (AWG) and variable optical attenuator (VOA) to prepare a phase-randomized weak coherent source with different intensities and phases. The signals from Bob and Charlie both go through an optical switch. An untrusted relay Eve performs an interference measurement on the signals from Alice and the optical switch with a beam splitter (BS) and single-photon detectors (SPDs). After sifting, parameter estimation and error correction, Alice shares bit strings with Bob and Charlie, respectively. (ii) In the messaging stage, Alice transmits the desired message to Bob. Bob sends the message along with his keys to Charlie. Charlie then sends his keys to Bob. Bob verifies the signature with his own and the received keys. If he accepts the signature, he informs Charlie, who also verifies the signature with his own and the received keys. The signature is successfully validated if both Bob and Charlie accept it.
parameters. For the first part of events, Alice randomly matches a time bin i of intensity μ_a with another time bin j of intensity o_a. Thereafter she sets her bit value as 0 (1) if i < j (i > j), and informs Bob of the serial numbers i and j. In the corresponding time bins, if Bob chose intensities k_b^(min{i,j}) = μ_b (o_b) and k_b^(max{i,j}) = o_b (μ_b), he sets his bit value as 0 (1). Bob announces to abort any event where k_b^i = k_b^j = o_b or μ_b. To conclude, the preserved events in the Z basis are sifted as {μ_a o_a, o_b μ_b}, {μ_a o_a, μ_b o_b}, {o_a μ_a, o_b μ_b}, and {o_a μ_a, μ_b o_b}.
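The Z-basis bit assignment just described can be sketched as follows. This is an assumed simplification for illustration: it operates directly on intensity labels per time bin and ignores detection success, decoy rounds, and the X basis; the function names are hypothetical.

```python
# Toy sketch of the Z-basis sifting rule.

def alice_bit(i: int, j: int) -> int:
    """Alice matched bin i (intensity mu_a) with bin j (intensity o_a)."""
    return 0 if i < j else 1

def bob_bit(intensities: dict, i: int, j: int):
    """Bob's sifted Z-basis bit for announced bins (i, j), or None to abort."""
    lo, hi = min(i, j), max(i, j)
    pair = (intensities[lo], intensities[hi])
    if pair == ("mu", "o"):
        raw = 0
    elif pair == ("o", "mu"):
        raw = 1
    else:
        return None              # k_b^i = k_b^j = o_b or mu_b: event aborted
    return raw ^ 1               # Bob always flips his Z-basis bit

# Preserved combination {o_a mu_a, mu_b o_b}: Alice has mu_a at bin 2 and
# o_a at bin 1; Bob has mu_b at bin 1 and o_b at bin 2. Bits agree after
# Bob's flip.
assert alice_bit(2, 1) == 1
assert bob_bit({1: "mu", 2: "o"}, 2, 1) == 1
```
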
For the second part of events, Alice and Bob communicate their intensity and phase information with each other via an authenticated channel. Define the global phase difference at time bin i as θ^i := θ_a^i − θ_b^i + ϕ_ab^i. Alice and Bob keep detection events {ν_a^i, ν_b^i} only if θ^i ∈ [−δ, δ] ∪ [π − δ, π + δ]. They randomly select two retained detection events that satisfy |θ^i − θ^j| = 0 or π, and then match these two events, denoted as {ν_a^i ν_a^j, ν_b^i ν_b^j}. By calculating the classical bits r_a^i ⊕ r_a^j and r_b^i ⊕ r_b^j, Alice and Bob extract a bit value in the X basis, respectively. Subsequently, Bob always flips his bit in the Z basis. In the X basis, Bob flips part of his bits to correctly correlate them with those of Alice. To be specific, when the global phase difference between the two matching time bins is 0 (π) and the two clicking detectors announced by Eve are different (same), Bob flips his bits. Otherwise, Bob directly saves his bits for later use. The other events in the second part are used for decoy analysis.
_4. Parameter estimation._ Alice and Bob (Alice and Charlie) form the n_Z-length raw key bits from the random bits in the Z basis. The remaining bits in the Z basis are used to estimate the bit error rate E^z. Further, they communicate all bit values in the X basis to obtain the total number of errors. The decoy-state method [62, 63] is used to estimate the number of vacuum events in the Z basis s_{0μ_b}^z, the count of single-photon pairs s_{11}^z, and the phase error rate of single-photon pairs ϕ_{11}^z in the Z basis.
_5. Error correction and examination._ Alice and Bob (Alice and Charlie) distill final keys by utilizing an error correction algorithm with ε_cor-correctness [64, 65]. The size of the distilled key remains n_Z, and the unknown information of a possible attacker can be expressed as H. Alice then randomly disturbs the order of the distilled key and publicizes the new order to Bob (Charlie) through the authenticated channel. Subsequently, Alice and Bob (Alice and Charlie) divide the final keys into several n-bit strings, each of which is used to perform a task in the messaging stage. The grouping process can be considered as random sampling. More details are shown in Sec. IV A and Appendix C 1.
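The shuffle-and-group step can be sketched as below. Assumed simplifications: Alice picks the permutation, announces it over the authenticated channel, and Bob (Charlie) applies the same reordering, so the grouping acts as random sampling of any leaked positions; leftover bits that do not fill a group are discarded. All names here are illustrative.

```python
import random
import secrets

def shuffle_and_group(key_bits, n):
    """Randomly permute the distilled key and split it into n-bit groups.

    Returns the public permutation `order` and the list of n-bit groups.
    """
    order = list(range(len(key_bits)))
    random.SystemRandom().shuffle(order)            # `order` is made public
    shuffled = [key_bits[i] for i in order]
    groups = [shuffled[i:i + n] for i in range(0, len(shuffled) - n + 1, n)]
    return order, groups

key = [secrets.randbelow(2) for _ in range(100)]    # distilled key (n_Z bits)
order, groups = shuffle_and_group(key, n=16)        # six full 16-bit groups
```
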
**B.** **Messaging stage**
Various AXU hash functions can be employed in the
messaging stage of the proposed protocol by following
the framework presented in Fig. 1(b). To demonstrate the detailed procedure, we here present two specific approaches to the messaging stage, utilizing LFSR-based Toeplitz hashing and generalized division hashing, respectively. LFSR-based Toeplitz hashing is highly compatible with hardware systems, whereas generalized division hashing is more suitable for realizing software systems. In practice, either hashing method is selected depending on the application environment of the users. The message to be signed is denoted as M. For each M, if using LFSR-based Toeplitz hashing, Alice generates six bit strings XB, XC, YB, YC, ZB, ZC, each of length n. If choosing generalized division hashing, Alice generates only four bit strings XB, XC, YB, YC. The subscripts represent the participants performing KGP with Alice, where B represents Bob and C represents Charlie. Thereafter, Alice generates Xa = XB ⊕ XC, Ya = YB ⊕ YC, and Za = ZB ⊕ ZC as her own key strings. For the scheme with LFSR-based Toeplitz hashing the signature rate is

R_LFSR = n_Z/(3n),  (1)

whereas for generalized division hashing it is

R_GDH = n_Z/(2n).  (2)
_1._ _Utilizing LFSR-based Toeplitz hashing_
**Definition 1. LFSR-based Toeplitz hash functions:**
LFSR-based Toeplitz hash functions can be expressed as h_{p,s}(M) = H_nm M, where p, s determine the function and M = (M_0, M_1, ..., M_{m−1})^T is the message in the form of an m-bit vector. The process of generating an LFSR-based Toeplitz hash function is detailed as follows.

A randomly selected irreducible polynomial of order n in the field GF(2), p(x), determines the construction of the LFSR. p(x) = x^n + p_{n−1} x^{n−1} + ... + p_1 x + p_0 can be characterized by its coefficients of order from 0 to n − 1, i.e., p = (p_{n−1}, p_{n−2}, ..., p_1, p_0). For the initial state s, which is also represented as an n-bit vector s = (a_n, a_{n−1}, ..., a_2, a_1)^T, the LFSR is performed m − 1 times to generate the subsequent columns. Specifically, it shifts down every element in the previous column and adds a new element to the top of the column. For instance, the LFSR transforms s into s_1 = (a_{n+1}, a_n, ..., a_3, a_2)^T, where a_{n+1} = p · s, and likewise transforms s_1 into s_2. The m vectors s, s_1, ..., s_{m−1} together construct the Toeplitz matrix H_nm = (s, s_1, ..., s_{m−1}), and the hash value of the message is H_nm M.
(i) Alice obtains a string of random numbers through a quantum random number generator and uses it to randomly generate a monic irreducible polynomial in GF(2) of order n, denoted as p(x). p(x) can be characterized by its coefficients of order from 0 to n − 1, i.e., an n-bit string, denoted by pa. Details of generating p(x) can be found in Appendix B 1.

(ii) Alice uses her key bit string Ya and p(x) to perform LFSR-based Toeplitz hashing and generates an n-bit hash value Dig = H_{Ya,pa}(M), and encrypts it by Za to obtain the final signature Sig = Dig ⊕ Za. In addition, Alice encrypts pa with the key string Xa to obtain the encrypted string p = pa ⊕ Xa. Here we adopt a different expression from that in OTUH-QDS: we independently list the hash value as Dig and the coefficients of the irreducible polynomial as pa, i.e., Sig does not include the information of the irreducible polynomial, to avoid misunderstanding. She then transmits {Sig, p, M} to Bob through an authenticated classical channel.
(iii) Bob transmits {Sig, p, M} as well as his key bit strings {Xb, Yb, Zb} to Charlie so as to inform Charlie that he has received the signature. Thereafter, Charlie forwards his key bit strings {Xc, Yc, Zc} to Bob. These data are all transmitted through an authenticated channel. Bob obtains three new key bit strings KXb = Xb ⊕ Xc, KYb = Yb ⊕ Yc, and KZb = Zb ⊕ Zc using the XOR operation. He exploits KZb and KXb to obtain the expected digest and the string pb, respectively, via XOR decryption. He utilizes KYb and pb to establish an LFSR-based Toeplitz matrix and derive an actual digest via a hash operation. Bob accepts the signature if the actual digest is equal to the expected digest, and then informs Charlie of the result.

(iv) If Bob announces that he accepts the signature, Charlie creates three new key bit strings KXc = Xb ⊕ Xc, KYc = Yb ⊕ Yc and KZc = Zb ⊕ Zc using his original keys and those received from Bob. He employs KZc and KXc to acquire the expected digest and the variable pc, respectively, via XOR decryption. Charlie obtains an actual digest via a hash operation, where the hash function is an LFSR-based Toeplitz matrix generated by KYc and pc. Charlie accepts the signature if the two digests are identical.
_2._ _Utilizing generalized division hashing_
**Definition 2. Generalized division hash functions:** The generalized division hash functions can be expressed as h_P(M) = M(x) · x^(n/k) mod P(x), where P(x) is a monic irreducible polynomial of order n/k in the field GF(2^k), M is the message and M(x) is the polynomial of order m/k in GF(2^k) with every coefficient corresponding to k bits of M. The calculation is also performed in GF(2^k). The final result is a polynomial of order n/k in the field GF(2^k), and can be transformed into an n-bit string [66].

Commonly, k is set as k = 2^x for simplicity, where x is a positive integer. In the current scheme, we select k = 2^3 = 8.
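Definition 2 can be sketched with a simplified assumption of k = 1, i.e. all arithmetic in GF(2), so that h_P(M) = M(x) · x^n mod P(x). Polynomials are represented as Python ints (bit i is the coefficient of x^i); the paper's scheme uses GF(2^8) coefficients for software speed, but the structure is the same.

```python
def gf2_mod(a: int, p: int) -> int:
    """Remainder of polynomial a modulo p, coefficients in GF(2)."""
    dp = p.bit_length() - 1                  # degree of p
    while a.bit_length() - 1 >= dp:
        a ^= p << (a.bit_length() - 1 - dp)  # cancel the leading term
    return a

def division_hash(message: int, p: int) -> int:
    n = p.bit_length() - 1                   # hash length = deg P(x)
    return gf2_mod(message << n, p)          # M(x) * x^n mod P(x)

P = 0b10011                                  # x^4 + x + 1, irreducible over GF(2)
h = division_hash(0b11010110, P)             # 4-bit hash of an 8-bit message
assert 0 <= h < 16
```

Like the Toeplitz construction, this hash is linear in the message over GF(2) for a fixed P(x).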
(i) In this case, Alice selects Xa = XB ⊕ XC and Ya = YB ⊕ YC as her own key sets. Alice first obtains a string of random numbers through a quantum random number generator and uses it to randomly generate a monic irreducible polynomial in GF(2^8) of order n/8, denoted by P(x). The generation process of P(x) is detailed in Appendix B 1. P(x) can be characterized by its coefficients of order from 0 to n/8 − 1. By encoding each coefficient into an 8-bit string, we can use an n-bit string to express P(x), denoted as Pa. Subsequently, Alice encrypts Pa with the key set Xa to obtain the encrypted string P = Pa ⊕ Xa.

(ii) Alice uses P(x) to perform generalized division hashing [66] to obtain an n-bit hash value Dig = h_{Pa}(M). She encrypts Dig by Ya to derive the signature Sig = Dig ⊕ Ya and transmits the message along with the obtained signature, {Sig, P, M}, to Bob.

Steps (iii) and (iv) are similar to those utilizing LFSR-based Toeplitz hashing. Bob and Charlie exchange their key strings in turn through an authenticated channel and examine their expected and received digests.
This summarizes the entire procedure of the proposed protocol. Note that the TP-TFKGP can be replaced by any other KGP, such as BB84-KGP or sending-or-not-sending (SNS) KGP. Actually, in the distribution stage Alice shares bit strings with Bob and Charlie in the relationship of secret sharing. Thus, the distribution stage can also be performed based on QSS without employing the privacy amplification step.
**IV.** **SECURITY ANALYSIS**
Similar to OTUH-QDS, the core point of the proposed protocol is the security of the authentication based on AXU hashing, which directly protects the security of QDS against forgery [42]. However, the security of our protocol differs because of the information leakage during the distribution stage. In this section we first analyze the success probability of an attacker guessing a key string generated in the distribution stage, thereafter provide a more detailed security analysis of AXU hashing under imperfect keys with partial secrecy leakage, and finally demonstrate the security of our protocol.
**A.** **Guessing probability of the attacker**
Unlike QKD, which generates keys with perfect secrecy, in our protocol the keys are imperfectly secret: a possible attacker may obtain partial information on them. After the distribution stage, users share keys in the form of several n-bit strings. We need to quantify the information leakage and bound the maximum probability of the attacker guessing such a key string. Denote an n-bit key string by X and the attacker's system by B. We consider a general attack scenario where the attacker can execute any entangling operation on the system of any or all states, obtain a system ρ_B^x, and perform any positive operator-valued measure {E_B^x}_x on it. The probability that the attacker correctly guesses X using an optimal strategy is denoted P_guess(X|B). According to the definition of min-entropy in Ref. [67],
P_guess(X|B) = max_{{E_B^x}_x} Σ_x P_x tr(E_B^x ρ_B^x) = 2^{−H_min(X|B)_ρ},   (3)
where H_min(X|B)_ρ is the min-entropy of X conditioned on B. If X is generated in the distribution stage of our protocol, H_min(X|B)_ρ can be estimated by

H_min(X|B)_ρ = H_n.   (4)
Thus, we have the relationship

P_guess(X|B) = 2^{−H_n},   (5)

which means that the attacker can correctly guess X with a probability of no more than 2^{−H_n}. Here H_n is the total unknown information of the n-bit string and can be upper bounded by parameters estimated in the distribution stage:

H_n ≤ s^{zn}_{0μ_b} + s^{zn}_{11} [1 − H(ϕ^{zn}_{11})] − n f H(E^z),   (6)

where f is the error correction efficiency, H(·) is the binary entropy function, s^{zn}_{0μ_b} and s^{zn}_{11} are the lower bounds on the numbers of vacuum events and single-photon-pair events in the n-bit string, respectively, and ϕ^{zn}_{11} is the upper bound on the phase error rate of the single-photon pairs in the n-bit string. More details of the calculation are shown in Appendix C 1.
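As a numerical sketch of Eqs. (5) and (6) — all parameter values below are illustrative assumptions, not estimates from the paper:

```python
from math import log2

def H2(x):
    # binary Shannon entropy H(x)
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

# Illustrative (assumed) estimates for one n-bit key string:
n = 1_000_000        # string length
s0 = 30_000          # lower bound on vacuum events, s^{zn}_{0 mu_b}
s11 = 400_000        # lower bound on single-photon-pair events, s^{zn}_{11}
phi11 = 0.03         # upper bound on their phase error rate
f, Ez = 1.1, 0.02    # error-correction efficiency and Z-basis error rate

# Eq. (6): unknown information of the string; Eq. (5): guessing probability
Hn = s0 + s11 * (1 - H2(phi11)) - n * f * H2(Ez)
print(f"H_n is on the order of {Hn:.0f} bits, so P_guess <= 2^-{Hn:.0f}")
```

With these (assumed) numbers, H_n remains a large positive fraction of n, so the guessing probability 2^{−H_n} is astronomically small even though the keys are not perfectly secret.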
**B.** **Security of authentication based on hashing**
In our QDS schemes, hashing is used to perform the authentication task. We therefore first consider the authentication scenario in which the sender generates a signature Sig = h(M) ⊕ r as a message authentication code and sends {M, Sig} to the recipient. The attacker can intercept and capture {M, Sig}, tamper with them to produce a new message and signature {M′, Sig′}, and send these to the recipient, who will examine whether Sig′ = h(M′) ⊕ r before accepting it. The attacker succeeds if and only if (iff) a combination {m, t} is selected with the relationship h(m) = t, and {M′ = M ⊕ m, Sig′ = Sig ⊕ t} is sent to the recipient. In this case, the recipient will accept the message because h(M′) ⊕ r = h(M) ⊕ h(m) ⊕ r = Sig ⊕ t = Sig′. It should be mentioned that m ≠ 0 is required for a valid forgery.
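The forgery condition above rests on the XOR linearity h(M ⊕ m) = h(M) ⊕ h(m). The sketch below illustrates it with an arbitrary, publicly known linear hash (an assumption for illustration only; in the protocol the hash itself is keyed and encrypted, which is precisely what blocks this attack):

```python
import random

random.seed(1)
n_bits, m_bits = 8, 16                    # illustrative digest/message sizes

# A random binary matrix Hm stands in for the XOR-linear hash h(.)
Hm = [[random.randrange(2) for _ in range(m_bits)] for _ in range(n_bits)]

def h(msg):
    # h(M) = Hm * M over GF(2); linearity gives h(M xor m) = h(M) xor h(m)
    return [sum(a & b for a, b in zip(row, msg)) % 2 for row in Hm]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

r = [random.randrange(2) for _ in range(n_bits)]   # one-time pad on the digest
M = [random.randrange(2) for _ in range(m_bits)]
Sig = xor(h(M), r)                                 # Sig = h(M) xor r

# An attacker who knows h picks any m != 0, sets t = h(m), and forges:
m = [1] + [0] * (m_bits - 1)
M_f, Sig_f = xor(M, m), xor(Sig, h(m))

assert M_f != M and Sig_f == xor(h(M_f), r)        # forgery passes the check
```

The forgery succeeds without ever touching r, which shows that the one-time pad on the digest alone is not enough: the hash function (here, the strings deciding it) must also stay secret.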
Suppose the keys generated in the distribution stage of our protocol, i.e., keys with partial information leakage, are used to perform this authentication task, and define ϵ as the success probability of the attacker in this scenario. Three types of possible attacks should be considered. The first is to randomly generate {m, t}; this is a trivial strategy whose success probability is only

ϵ_1 = 2^{−n}.   (7)
The other two types of attacks are guessing the keys that determine the hash function, and recovering the function from captured signatures.
_1._ _Attack of guessing keys_
The LFSR-based Toeplitz hash function is represented as h_{p,s}(M) = H_nm · M, where H_nm is determined by the two bit strings p and s [60]. Herein we follow the terminology of the messaging stage of the proposed protocol, where p is actually p_a encrypted by X_a, s is Y_a, and the hash value Dig is encrypted by Z_a. We show by a proposition that guessing only X_a — or, in other words, guessing only p_a — is enough to execute an optimal attack.

**Proposition 1.** For the LFSR-based Toeplitz hash function h_{p,s}(M) = H_nm · M, if p(x) | M(x) = M_{m−1}x^{m−1} + ... + M_1 x + M_0, then h_{p,s}(M) = 0.

The proof of this proposition is given in Appendix B 2. It means that the attacker can easily generate a message m satisfying h(m) = 0 if he knows p. In the scenario described above, suppose the attacker obtains a string X_g as his estimate of X_a. He can decrypt it to obtain p_g as his guess of p_a and transform p_g into a polynomial p_g(x). Thereafter the attacker can easily generate a bit string m satisfying p_g(x) | m(x), and h(m) = 0 holds if p_g = p_a (or, equivalently, X_g = X_a) according to Proposition 1. He can then tamper the message into M ⊕ m without changing the signature: {M ⊕ m, Sig} will pass the authentication test if X_g = X_a. As m(x) is of order m and the polynomial p(x) is of order n, the attacker can select no more than m/n polynomials and multiply them to construct his choice of m(x). In other words, he can guess the string X_a no more than m/n times. It must also be considered that the attacker knows that p_a is irreducible, so he will only make guesses for which p_g is irreducible. The success probability of this optimized strategy can be expressed as
P_1 = (m/n) · P(X_a = X_g | p_g ∈ I),   (8)

where P(A|B) represents the probability of event A under the condition that event B occurs, and I denotes the set of all irreducible polynomials of order n over GF(2). The cardinality of I, i.e., the number of all n-order irreducible polynomials over GF(2), is more than 2^{n−1}/n; thus P(p_g ∈ I) ≥ (2^{n−1}/n)/2^n = 1/(2n). It is obvious that P(X_a = X_g, p_g ∈ I) = P(X_a = X_g), because if X_a = X_g then p_g = p_a ∈ I. We can then obtain the upper bound of the success probability of this type of attack, denoted ϵ_LFSR:

P_1 = (m/n) · P(X_a = X_g)/P(p_g ∈ I)
    ≤ (m/n) · 2^{−H_n} · 2n
    = m · 2^{1−H_n} = ϵ_LFSR.

The attacker can also guess both strings X_a and Y_a to obtain p and s, so that he knows the hash function completely and can forge with certainty. Under this circumstance his success probability is no more than ϵ_LFSR:

P_2 = P(X_a = X_g, Y_a = Y_g | p_g ∈ I)
    ≤ P(X_a = X_g | p_g ∈ I)
    ≤ ϵ_LFSR.

The generalized division hash function h_P(M) = m(x) · x^{n/8} mod P(x) is determined only by P. As earlier, we follow the terminology of the proposed protocol: P is P_a encrypted by X_a, and the hash value Dig is encrypted by Y_a. The attacker's strategy is to guess a string X_g such that he can obtain P_g and then forge a message. Analogous to the analysis above, the upper bound of the success probability is

ϵ_GDH = (m/n) · 2^{−H_n} · (n/4) = m · 2^{−2−H_n}.   (9)

The only difference in the calculation is that there are at least 2^{n−1}/(n/8) irreducible polynomials of order n/8 over GF(2^8), so P(P_g ∈ I) ≥ (2^{n−1}/(n/8))/2^n = 4/n.

_2._ _Attack of recovering keys from signature_

The attacker can also attempt to recover the desired keys from the captured signature. In both kinds of hashing, the hash value is encrypted to generate the signature. Thus the attacker must first guess the corresponding key string (Z_a in LFSR-based Toeplitz hashing, or Y_a in generalized division hashing) and then perform the recovering algorithm. The success probability of this strategy is no more than that of guessing the bit string alone (P(Z_a = Z_g) or P(Y_a = Y_g)), and is therefore no more than ϵ_LFSR or ϵ_GDH.

In conclusion, the optimal strategy against both LFSR-based Toeplitz hashing and generalized division hashing (GDH) is to guess the key string that encrypts the polynomial. We can quantify the upper bounds of the failure probability of authentication based on both types of hashing with imperfect keys subject to secrecy leakage:

ϵ_LFSR = m · 2^{1−H_n},   (10)
ϵ_GDH = m · 2^{−2−H_n}.   (11)
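Proposition 1 can be checked numerically on a toy instance (an illustrative sketch with n = 2, not the protocol's parameters): the digest of any message whose polynomial is divisible by p(x) vanishes for every seed string s.

```python
n = 2
p = [1, 1]                       # p(x) = x^2 + x + 1, coefficients (p1, p0)
W = [p, [1, 0]]                  # companion matrix of p(x), as in Eq. (B2)

def mat_vec(A, v):
    # matrix-vector product over GF(2)
    return [sum(a & x for a, x in zip(row, v)) % 2 for row in A]

def mat_mul(A, B):
    # matrix-matrix product over GF(2)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

def lfsr_toeplitz_hash(M, s):
    # h_{p,s}(M) = sum_i M_i (W^i s) over GF(2), cf. Eq. (B4)
    out = [0] * n
    Wi = [[int(i == j) for j in range(n)] for i in range(n)]   # W^0 = I
    for Mi in M:
        col = mat_vec(Wi, s)
        out = [o ^ (Mi & c) for o, c in zip(out, col)]
        Wi = mat_mul(W, Wi)
    return out

# M(x) = (x + 1) p(x) = x^3 + 1  ->  coefficients (M0..M3) = (1, 0, 0, 1)
for s in ([1, 0], [0, 1], [1, 1]):
    assert lfsr_toeplitz_hash([1, 0, 0, 1], s) == [0, 0]   # p(x) divides M(x)
assert lfsr_toeplitz_hash([1, 0, 0, 0], [1, 0]) != [0, 0]  # p(x) does not
```

This is exactly the structure the guessing attack exploits: any multiple of p_g(x) hashes to zero whenever p_g = p_a, independently of the (still secret) string s.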
**C.** **Security of the QDS scheme**

Finally, we analyze the security of the QDS scheme, which involves three parts: robustness, repudiation, and forgery.
_1._ _Robustness._
An honest-run abortion means the protocol is aborted even though all parties are honest. It occurs only when Alice and Bob (or Charlie) share different key bits after the distribution stage. In the proposed protocol, Alice and Bob (Charlie) perform error correction in the distribution stage; thus they share identical final keys, and an honest-run abortion occurs only in the case where errors occur. The robustness bound is ϵ_rob = 2ϵ_cor + 2ϵ′, where ϵ_cor is the failure probability of the error correction protocol in the distribution stage and ϵ′ is the probability that an error occurs in classical message transmission. Remark that we assume ϵ′ = 10^{−11} for simplicity, since it is a parameter of classical communication.
_2._ _Repudiation._
Alice successfully repudiates when Bob accepts the message while Charlie rejects it. For Alice's repudiation attacks, Bob and Charlie are both honest and symmetric and possess the same new key strings, so they will converge on the same decision for the same message and signature: when Bob rejects (accepts) the message, Charlie also rejects (accepts) it. Repudiation attacks succeed only when errors occur in one of the key exchange steps. Thus, the repudiation bound is ϵ_rep = 2ϵ′.
_3._ _Forgery._

Bob forges successfully when Charlie accepts the tampered message forwarded by Bob. According to the proposed protocol, Charlie accepts the message iff he obtains the same result through one-time-pad decryption and the one-time AXU hash function. In principle, this is the same as the authentication scenario of Sec. IV B, with Bob as the attacker attempting to forge the information sent from Alice to Charlie. Therefore, the probability of a successful forgery ϵ_for is determined by the failure probability of hashing, i.e., of choosing two distinct messages with identical hash values. For the scheme utilizing the LFSR-based Toeplitz hash, ϵ_for = m · 2^{1−H_n}; for generalized division hashing, ϵ_for = m · 2^{−2−H_n}.

The total security bound of QDS, i.e., the maximum failure probability of the protocol, is ϵ = max{ϵ_rob, ϵ_rep, ϵ_for}.

FIG. 3. Signature rates of the proposed protocol with TP-TFKGP, BB84-KGP, and SNS-KGP, and of decoy-state BB84-QDS [13], SNS-QDS [21], and SNS-QDS with random pairing [24], for a message size of 1 Kb. In the proposed protocol we use generalized division hashing in the messaging stage. The repetition rate of the laser is 1 GHz. The distances between Alice–Bob and Alice–Charlie are assumed to be the same. The data size N is 10^{13} and the security bound is 10^{−10}.

**V.** **DISCUSSION**

From Eqs. (1), (2), (10), and (11), the two signature rates differ only by a constant factor of 2/3 and the two security parameters by a constant factor of 8, so the difference between the two approaches is trivial. For simplicity, we only discuss the protocol with generalized division hashing in this section.
To demonstrate the advantage of the current proposal, we first build our protocol based on BB84-KGP, SNS-KGP, and TP-TFKGP, and compare them with decoy-state BB84-QDS [13] and SNS-QDS [21], which are single-bit QDS protocols based on BB84-KGP and SNS-KGP. We also compare with SNS-QDS with random pairing [24], which improves the signature rate of SNS-QDS and can be applied to other QDS protocols. More details of the calculation are shown in Appendix C. In the simulations, we consider two common cases where each message to be signed is 10^3 bits (1 Kb) and 10^6 bits (1 Mb), respectively. The repetition rate of the laser is 1 GHz, and the distances between Alice–Bob and Alice–Charlie are assumed to be the same. The unit of the signature rate is times per second (tps). A detailed analysis is given in Appendix A 2. Other simulation parameters are listed in Table I.

TABLE I. Simulation parameters. η_d and p_d denote the detector efficiency and dark count rate, respectively. e_d represents the misalignment error rate. N is the data size. α is the attenuation coefficient of the fiber. f is the error correction efficiency. ϵ is the failure probability of the QDS schemes.

η_d   p_d       e_d    N        α      f    ϵ
70%   10^{−8}   0.02   10^{13}  0.165  1.1  10^{−10}

FIG. 4. Signature rates of the proposed protocol with TP-TFKGP, BB84-KGP, and SNS-KGP, and of decoy-state BB84-QDS [13], SNS-QDS [21], and SNS-QDS with random pairing [24], for a message size of 1 Mb. In the protocol we use generalized division hashing in the messaging stage. The repetition rate of the laser is 1 GHz. The distances between Alice–Bob and Alice–Charlie are assumed to be the same. The data size N is 10^{13} and the security bound is 10^{−10}.
It should be mentioned that all conventional single-bit QDS protocols sign only a one-bit message per round. To sign a multi-bit message, an m-bit message must first be encoded into a new sequence of length h by inserting '0's and adding '1's to the original sequence. The signing efficiency η̂ = m/h is obviously less than 1. For simplicity, we use the upper bound η̂ = 1, i.e., h = m, in our simulation. The key consumption of single-bit QDS thus increases linearly with the message size m; in other words, the signature rate is proportional to 1/m. In our proposed scheme, the signature is generated by hash functions operating on the message, so the signature rate is robust against the length of the message. From Eqs. (10) and (11), ϵ increases linearly as m increases but decreases exponentially as H_n increases. Thus, to guarantee the same ϵ, the quantity H_n, which is proportional to the group size n, need only increase logarithmically with m. Consequently, the signature rate of the proposed scheme is inversely proportional to log_2 m.
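For comparison, the single-bit encoding overhead h = n + [n/x] + 2x + 4 of Appendix A 2 can be sketched as follows (the message bits here are arbitrary):

```python
def encode(msg, x):
    # Coding rule of Appendix A 2: x+1 leading '1's, then a '0', a '0'
    # after every x message bits (and after the final partial group),
    # and x+1 trailing '1's.
    out = [1] * (x + 1) + [0]
    for i in range(0, len(msg), x):
        out += msg[i:i + x] + [0]
    return out + [1] * (x + 1)

M = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]       # n = 10 message bits
n, x = len(M), 3
h = len(encode(M, x))
assert h == n + n // x + 2 * x + 4       # h = n + [n/x] + 2x + 4 (x does not divide n)
print(f"encoded length h = {h}, signing efficiency n/h = {n / h:.2f}")
```

Since every one of the h encoded bits costs a full single-bit QDS round, the key consumption of the conventional schemes scales linearly in the message length, in contrast to the logarithmic group-size growth of the hash-based scheme.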
The simulation results of all the protocols mentioned are presented in Figs. 3 and 4. For a message size of 1 Kb, our protocols show an advantage in signature rate of over five orders of magnitude compared with conventional QDS schemes, a much larger improvement than that of SNS-QDS with random pairing. If the message size becomes 1 Mb, the signature rates of conventional BB84-QDS, SNS-QDS, and SNS-QDS with random pairing decrease by three orders of magnitude, whereas those of our protocols decrease only slightly. The proposed QDS scheme then delivers a signature rate eight orders of magnitude higher than previous schemes. As demonstrated, the proposed protocol shows great robustness to message size. Furthermore, based on TP-TFKGP the proposed scheme can reach a transmission distance of 650 km with a signature rate of approximately 0.01 times per second (tps). This is an immense breakthrough in terms of both distance and signature rate, indicating the considerable potential of the proposed protocol for the practical implementation of QDS. The performance of the proposed protocol under different data sizes 10^{9}, 10^{11}, and 10^{13} is depicted in Fig. 5. The curve for N = 10^{9} stops at 1 tps, i.e., one signature for the whole data block, because signing less than one message with all the data is meaningless. The results show that even with a data size as small as 10^{9}, the proposed protocol can reach a transmission distance of 350 km, and the performance at data size N = 10^{11} is close to that at N = 10^{13}. The influence of finite-size effects caused by small data sizes on our protocol is thus at an acceptable level.

FIG. 5. Signature rates of the proposed protocol with TP-TFKGP under different data sizes N = 10^{9}, 10^{11}, and 10^{13}. The message size is assumed to be 1 Mb, and the repetition rate of the laser is 1 GHz. The security bound is 10^{−10}.
Compared with OTUH-QDS, the proposed protocol does not require perfectly secret keys and thus involves no privacy amplification step. Therefore, the proposed protocol consumes only keys with partial information leakage, an affordable and practical resource compared with the perfect quantum keys generated by quantum secure communication. Error correction of the quantum keys can easily be performed by the classical Cascade protocol [64, 65], in which the bit string is first divided into blocks and then manipulated block by block. Thus the complexity of error correction increases linearly with the data size N, and it can be performed via stream computing. Privacy amplification, however, requires a hash-matrix multiplication step
where the numbers of columns and rows are proportional to N; the computational complexity of privacy amplification is therefore O(N^2). The fast Fourier transform algorithm can reduce the complexity to O(N log N) [68], and one can also block the keys before performing privacy amplification. However, as the minimum blocks must be adequately large to suppress the finite-size effect, the actual computational cost and delay of privacy amplification remain very large.

TABLE II. Time consumption of error correction T_EC and privacy amplification T_PA under data sizes N = 10^{13} and N = 10^{11} at a distance of 400 km. T_1 = T_EC and T_2 = T_EC + T_PA represent the postprocessing times of the proposed scheme and OTUH-QDS, respectively. n_Z is the number of raw bits generated in TP-TFKGP; l is the length of the keys after privacy amplification. For N = 10^{13}, the postprocessing time of OTUH-QDS is 5.85 h, while that of the proposed protocol is only 8.07 min.

N        n_Z            errors  l            T_EC      T_PA    T_1       T_2
10^{13}  1.695 × 10^8   300     4.87 × 10^7  8.07 min  5.71 h  8.07 min  5.85 h
10^{11}  1.267 × 10^6   39830   2.51 × 10^5  3.62 s    2.98 s  3.62 s    6.6 s
The times consumed by error correction and privacy amplification, including the total postprocessing time of both protocols, are listed in Table II for a distance of 400 km and data sizes 10^{13} and 10^{11}. Details of the simulation are introduced in Appendix D. For N = 10^{11}, the postprocessing time of OTUH-QDS is 6.6 s, while that of the proposed protocol is 3.62 s. Moreover, when N = 10^{13}, the time for privacy amplification is 5.71 h, which would introduce a very long delay in an experiment, whereas the time for error correction is only 8.07 min. The proposed scheme, free of privacy amplification, can thus significantly save computational resources and minimize postprocessing delays.
We further compare the signature rate of the proposed protocol with that of OTUH-QDS [42]. Theoretically, the two signature rates should be equal under ideal conditions. In practical cases, two effects influence the performance of the proposed protocol relative to OTUH-QDS. The first is that in our protocol the parameter n is optimized, which improves the signature rate compared with OTUH-QDS; this effect diminishes as the distance increases. The second is that in our protocol we consider the statistical fluctuation of the error rate in the grouping process, which reduces the signature rate compared with OTUH-QDS. At long and short distances this effect is slight, because the group size is small and the error rate is small, respectively.
In Fig. 6 we plot the ratio of the signature rate of the proposed protocol based on TP-TFKGP to that of OTUH-QDS [42] combined with TP-TFQKD, postprocessing time not included, for data sizes of 10^{13} and 10^{11} and a message size of 1 Kb. The ratio is more than 80% for transmission distances of less than 500 km; overall, the signature rates of the two protocols are comparable. In addition, assuming
the repetition rate of the laser to be 1 GHz, the time consumed by postprocessing (2.057 × 10^{4} s) is even longer than the time for data transmission (10^{4} s) when N = 10^{13}. The signature rate of OTUH-QDS will thus be constrained by the efficiency of privacy amplification; that is, in practice the signature rate of OTUH-QDS is lower than the simulation result, whereas the proposed scheme overcomes this shortcoming. Considering that the proposed protocol can reduce the postprocessing time by up to a factor of one hundred, our proposal offers a significant improvement in practical scenarios, especially when digital signature tasks are performed at high frequency and the data size is large.

FIG. 6. Ratio of the signature rate of the proposed protocol with TP-TFKGP to that of OTUH-QDS [42] combined with TP-TFQKD, postprocessing time not included, for data sizes 10^{13} and 10^{11}. The message size is 1 Kb and the repetition rate of the laser is 1 GHz. The ratio is more than 0.8 for transmission distances below 500 km.
**VI.** **CONCLUSION**
In summary, in this paper we have proved that keys with partial secrecy leakage can protect the authenticity and integrity of messages when combined with AXU hash functions. Furthermore, we have theoretically proposed an efficient QDS protocol utilizing imperfect quantum keys without privacy amplification, based on the framework of OTUH-QDS, reducing the computational resources and postprocessing delays without compromising security. The simulation results demonstrate that the proposed protocol outperforms previous single-bit QDS protocols in terms of both signing efficiency and distance. For instance, for a 1-Mb message, the signature rate of the proposed protocol is more than eight orders of magnitude higher than that of single-bit QDS protocols. Specifically, for the protocol based on TP-TFKGP, the transmission distance can reach up to 650 km while still holding a signature rate of 0.01 tps. Moreover, compared with OTUH-QDS, the proposed protocol notably reduces the postprocessing time to an endurable range and therefore significantly improves practicality. Our scheme is a general framework that can be applied to any existing QKD or QSS protocol, is highly compatible with future quantum networks, and is feasible in numerous applications. Additionally, this work, which requires only keys with imperfect secrecy, represents a new approach to quantum communication that differs from other quantum secret communication protocols. We suggest that raw quantum keys can be used directly to complete cryptographic tasks, including message authentication and digital signatures, indicating the enormous potential of this resource and the possibility of removing the classical postprocessing step in a future quantum world. We believe that the proposed scheme and the idea of utilizing imperfect quantum keys provide a solution for the real implementation of practical and commercial QDS, as well as other quantum cryptography tasks, in future quantum networks.
**ACKNOWLEDGMENTS**
This study was supported by the National Natural Science Foundation of China (No. 12274223), the
Natural Science Foundation of Jiangsu Province (No.
BK20211145), the Fundamental Research Funds for the
Central Universities (No. 020414380182), the Key Research and Development Program of Nanjing Jiangbei
New Area (No. ZDYD20210101), the Program for Innovative Talents and Entrepreneurs in Jiangsu (No. JSSCRC2021484), and the Program of Song Shan Laboratory (Included in the management of Major Science and Technology Program of Henan Province) (No.
221100210800-02).
**Appendix A: Single-bit QDS**

**1.** **Schematic of single-bit QDS**

Here we first introduce orthogonal-encoding QDS [13] as an example of a single-bit QDS scheme. Commonly, all single-bit QDS protocols can be segmented into two stages: the distribution stage and the messaging stage. The schematic of orthogonal-encoding QDS is shown in Fig. 7.

Distribution stage:
FIG. 7. Orthogonal-encoding QDS. In the distribution stage, Alice–Bob and Alice–Charlie independently perform KGPs to generate correlated bit strings with limited mismatches; then Bob and Charlie symmetrize their keys by exchanging half of them. In the messaging stage, Alice generates the signature depending on the message bit and sends the message and signature to Bob, who transfers them to Charlie. Bob and Charlie count their mismatches and compare them with thresholds to verify the signed message.
(i) For each possible future message m = 0 or 1, Alice uses the KGP to generate four different keys of length L, A_B^0, A_B^1, A_C^0, A_C^1, where the subscript denotes the participant with whom she performed the KGP and the superscript denotes the future message, to be decided later by Alice. Bob holds the length-L strings K_B^0, K_B^1 and Charlie holds the length-L strings K_C^0, K_C^1.

(ii) For each future message, Bob and Charlie symmetrize their keys by choosing half of the bit values in their K_B^m, K_C^m and sending them (as well as the corresponding positions) to the other participant over the Bob–Charlie secret classical channel. They only use the bits they did not forward and those received from the other participant. Their final symmetrized keys are denoted S_B^m and S_C^m. Bob (and Charlie) keeps a record of whether an element in S_B^m (S_C^m) came directly from Alice or was forwarded by Charlie (or Bob).
Messaging stage:

(i) To send a signed one-bit message m, Alice sends (m, Sig_m) to the desired recipient (say Bob), where Sig_m = (A_B^m, A_C^m).

(ii) Bob checks whether (m, Sig_m) matches his S_B^m and records the number of mismatches he finds. He separately checks the part of his key received directly from Alice and the part received from Charlie. If there are fewer than s_a(L/2) mismatches in both halves of the key, where s_a < 1/2 is a small threshold determined by the parameters and the desired security level of the protocol, then Bob accepts the message.

(iii) To forward the message to Charlie, Bob forwards the pair (m, Sig_m) that he received from Alice.
(iv) Charlie tests for mismatches in the same way, but in order to protect against repudiation by Alice he uses a different threshold: Charlie accepts the forwarded message if the number of mismatches in both halves of his key is below s_v(L/2), where s_v is another threshold with 0 < s_a < s_v < 1/2.
The KGP is actually part of a QKD protocol, minus the error correction and privacy amplification steps. In the distribution stage, A_X^m and K_X^m generated through the KGP are correlated with limited mismatch, and A_X^m contains fewer mismatches with K_X^m than does any string produced by an eavesdropper, where X ∈ {B, C} represents Bob or Charlie and m is the message. After Bob and Charlie's symmetrization step, Bob holds S_B^m and Charlie holds S_C^m, each containing half of K_B^m and K_C^m. From the perspective of Alice, S_B^m and S_C^m are symmetric: Alice has no information on whether it is Bob's S_B^m or Charlie's S_C^m that contains a particular element of the strings (K_B^m, K_C^m). This protects against repudiation. From the perspective of Bob, S_B^m and S_C^m are asymmetric: Bob has access to all of K_B^m and only half of K_C^m, but, even if he is dishonest, he does not know which half of K_C^m Charlie chose to keep. This protects against forging.
The framework of non-orthogonal-encoding QDS is analogous to that of orthogonal encoding. The difference is that it does not require the symmetrization step; however, the signer needs to send the same quantum states to two receivers, and only detection events in which both receivers register clicks are valid.
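Bob's acceptance test in step (ii) can be sketched as follows (key length, thresholds, and mismatch rates are illustrative assumptions, not the protocol's optimized values):

```python
import random

random.seed(3)
L = 1000                      # total key length; each half has L/2 bits
s_a = 0.05                    # authentication threshold, s_a < 1/2

def mismatches(a, b):
    return sum(x != y for x, y in zip(a, b))

def accepts(sig_direct, sig_fwd, key_direct, key_fwd, s):
    # Accept iff BOTH halves show fewer than s * (L/2) mismatches.
    return (mismatches(sig_direct, key_direct) < s * L / 2 and
            mismatches(sig_fwd, key_fwd) < s * L / 2)

key_direct = [random.randrange(2) for _ in range(L // 2)]  # from Alice
key_fwd = [random.randrange(2) for _ in range(L // 2)]     # via Charlie

def flip(bits, q):
    # correlated copy with mismatch rate q
    return [b ^ (random.random() < q) for b in bits]

# Honest Alice: signature correlated with Bob's key (~1% mismatch): accepted.
assert accepts(flip(key_direct, 0.01), flip(key_fwd, 0.01),
               key_direct, key_fwd, s_a)

# A forger guessing at random mismatches on about half the positions: rejected.
guess = lambda: [random.randrange(2) for _ in range(L // 2)]
assert not accepts(guess(), guess(), key_direct, key_fwd, s_a)
```

Checking the two halves separately is what forces a forger to match both the part Bob received directly from Alice and the part forwarded by Charlie, half of which the forger cannot know.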
**2.** **Signing a multi-bit message using single-bit QDS**

The framework above only offers a way to sign a one-bit message. To sign multi-bit messages with these protocols, it is not sufficient to directly iterate the protocol on each bit of the message, which would give an outside or inside attacker a chance to perform forgery attacks [37]. To retain information-theoretic security, one must first re-encode the multi-bit message and then sign it bit by bit. This step makes the new message longer and thus reduces the efficiency. To date, the most efficient coding rule is given in Ref. [39], which can be summarized as follows.

Suppose the signer Alice needs to sign an n-bit message M = m_1||m_2||...||m_n, m_i ∈ {0, 1}, i = 1, 2, ..., n. She will encode M into

M̂ = 1_1||1_2||...||1_{x+1}||0||m_1||m_2||...||m_x||0||m_{x+1}||m_{x+2}||...||m_{2x}||0||...||m_{[n/x]x+1}||m_{[n/x]x+2}||...||m_n||0||1_1||1_2||...||1_{x+1},   (A1)

where x refers to the coding interval and [·] is the round-down function. To conclude, the coding rule is that the encoder adds a '0' at the head of M and another at the tail, inserts a '0' every x bits, and appends x + 1 '1's to both the start and the end.

Denote the length of M̂ as h. An iteration of a conventional QDS protocol for h rounds on M̂ is an information-theoretically secure protocol to sign the multi-bit message M. According to the encoding rule, h = n + [n/x] + 2x + 4. For a given n we can optimize x to obtain the minimal h and the maximum efficiency η = n/h. It is clear that if n is large, the maximum efficiency will be close to, but definitely less than, 1. Thus in our simulations in Sec. V we use the upper bound of the efficiency, i.e., we assume h = n.

**Appendix B: Mathematical details**

**1.** **Generating an irreducible polynomial**

In this section we introduce ways to randomly generate an irreducible polynomial over the Galois field GF(2), which is the first step in the messaging stage of our protocol.

Suppose p(x) is a polynomial of order n over GF(2). That p(x) is irreducible means that no polynomial divides it except the identity element '1' and p(x) itself. The necessary and sufficient condition for p(x) to be irreducible can be expressed as

x^{2^n} ≡ x mod p(x),
gcf(x^{2^{n/d}} − x, p(x)) = 1,   (B1)

where d is any prime factor of n and gcf(f(x), g(x)) represents the greatest common factor (GCF) of f(x) and g(x).

One way to randomly generate an irreducible polynomial is to generate polynomials at random and test for irreducibility through the condition above. However, this is quite time consuming and requires many random bits.

A better solution is proposed in Ref. [66]. We first fix an irreducible polynomial of order n, defining the extension field GF(2^n). Given this, we generate a random element of GF(2^n) and then compute the minimal polynomial of this element, which will be irreducible. This procedure needs only n random bits and consumes less time. The concrete procedure is as follows. Denote the initial irreducible polynomial as f(x) and the polynomial generated by the random element as g(x). We calculate the sequence a_0 = g_0(0), a_1 = g_1(0), ..., a_{2n−1} = g_{2n−1}(0), where g_i(x) = g^i(x) mod f(x). This sequence of 2n elements fully determines the minimal polynomial of g(x), which can be efficiently computed by the Berlekamp–Massey algorithm [69]. The result, i.e., the minimal polynomial of g(x), is the irreducible polynomial we generate.

If generalized division hashing is chosen, we need to generate an irreducible polynomial over GF(2^k). The procedure is the same as that described above; the only difference is that all calculations are performed over GF(2^k).
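The irreducibility condition (B1) can be sketched with integer-encoded polynomials over GF(2), where bit i of an integer is the coefficient of x^i (an illustrative implementation of Rabin's irreducibility test, not the paper's code):

```python
def pmul(a, b):
    # carry-less (GF(2)) polynomial multiplication
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, p):
    # remainder of a divided by p over GF(2)
    dp = p.bit_length()
    while a.bit_length() >= dp:
        a ^= p << (a.bit_length() - dp)
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def is_irreducible(p):
    # Condition (B1): x^(2^n) = x (mod p), and for every prime d | n,
    # gcf(x^(2^(n/d)) - x, p) = 1.
    n = p.bit_length() - 1
    x = 0b10

    def x_pow_2k(k):                       # x^(2^k) mod p by k squarings
        t = pmod(x, p)
        for _ in range(k):
            t = pmod(pmul(t, t), p)
        return t

    if x_pow_2k(n) != pmod(x, p):
        return False
    d, m, primes = 2, n, set()             # prime factors of n
    while d * d <= m:
        while m % d == 0:
            primes.add(d)
            m //= d
        d += 1
    if m > 1:
        primes.add(m)
    return all(pgcd(x_pow_2k(n // q) ^ pmod(x, p), p) == 1 for q in primes)

assert is_irreducible(0b111)        # x^2 + x + 1
assert is_irreducible(0b10011)      # x^4 + x + 1
assert not is_irreducible(0b10101)  # x^4 + x^2 + 1 = (x^2 + x + 1)^2
```

Testing random candidates this way is exactly the "time consuming" baseline mentioned above; the minimal-polynomial construction of Ref. [66] avoids the repeated trials.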
**2.** **Proof of proposition 1**

An LFSR-based Toeplitz hash function can be expressed as h_{p,s}(M) = H_{nm} M. The construction of H_{nm} is introduced in Def. 1. Here we follow the expression in Def. 1 and define an n × n matrix W which is decided only by p:

W =
| p_{n−1} p_{n−2} ... p_1 p_0 |
| 1       0       ... 0   0   |
| 0       1       ... 0   0   |
| ...     ...     ... ... ... |
| 0       0       ... 1   0   |,  (B2)

then we can express s_i through s and W:

s_i = W^i s.  (B3)

Thereafter we rewrite h_{p,s}(M):

h_{p,s}(M) = H_{nm} M = ( s  s_1  ...  s_{m−1} ) ( M_0 ; M_1 ; ... ; M_{m−1} )^T = Σ_{i=0}^{m−1} M_i W^i s = M(W) s,  (B4)

where M(W) = M_{m−1} W^{m−1} + ... + M_1 W + M_0 I is an n × n matrix.

Define f(x) as the characteristic polynomial of the matrix W; we can calculate it as follows:

f(x) = |xI − W| =
| x + p_{n−1}  p_{n−2} ... p_1 p_0 |
| 1            x       ... 0   0   |
| 0            1       ... 0   0   |
| ...          ...     ... ... ... |
| 0            0       ... 1   x   |
= x^n + p_{n−1} x^{n−1} + ... + p_1 x + p_0.  (B5)

It is obvious that f(x) = p(x); in other words, p(x) is the characteristic polynomial of the matrix W. Then, according to the Cayley–Hamilton theorem, p(W) = 0. Thereafter, it is trivial that if p(x) | M(x), then M(W) = 0, and thus h_{p,s}(M) = M(W)s = 0.

**Appendix C: Calculation details**

**1.** **TP-TFQKD and TP-TFKGP in this work**

The calculation of TP-TFKGP in this work is analogous to that in TP-TFQKD [43].

When Alice and Bob send intensities k_a and k_b with phase difference θ, the gain corresponding to only one detector (L or R) clicking is

Q^{Lθ}_{k_a k_b} = y_{k_a k_b} ( e^{ω_{k_a k_b} cos θ} − y_{k_a k_b} ),
Q^{Rθ}_{k_a k_b} = y_{k_a k_b} ( e^{−ω_{k_a k_b} cos θ} − y_{k_a k_b} ),  (C1)

where y_{k_a k_b} = e^{−(η_a k_a + η_b k_b)/2} (1 − p_d) and ω_{k_a k_b} = √(η_a k_a η_b k_b). The overall gain can be expressed as

Q_{k_a k_b} = (1/2π) ∫_0^{2π} ( Q^{Lθ}_{k_a k_b} + Q^{Rθ}_{k_a k_b} ) dθ = 2 y_{k_a k_b} [ I_0(ω_{k_a k_b}) − y_{k_a k_b} ],

where I_0(x) refers to the zero-order modified Bessel function of the first kind.

The total number of {k_a, k_b} events is

x_{k_a k_b} = N p_{k_a} p_{k_b} Q_{k_a k_b}.  (C2)

The valid post-matching events in the Z basis can be divided into two types: correct events {µ_a o_a, o_b µ_b}, {o_a µ_a, µ_b o_b}, and incorrect events {µ_a o_a, µ_b o_b}, {o_a µ_a, o_b µ_b}. The corresponding numbers are denoted as n^z_C and n^z_E, respectively, which can be written as

n^z_C = x_min ( x_{o_a µ_b} / x_0 ) ( x_{µ_a o_b} / x_1 ) = x_{o_a µ_b} x_{µ_a o_b} / x_max,
n^z_E = x_min ( x_{o_a o_b} / x_0 ) ( x_{µ_a µ_b} / x_1 ) = x_{o_a o_b} x_{µ_a µ_b} / x_max,

where x_0 = x_{o_a µ_b} + x_{o_a o_b}, x_1 = x_{µ_a o_b} + x_{µ_a µ_b}, x_min = min{x_0, x_1}, and x_max = max{x_0, x_1}. The overall number of events in the Z basis is

n^z = n^z_C + n^z_E.  (C3)

Considering the misalignment error e^z_d, the number of bit errors in the Z basis is m^z = (1 − e^z_d) n^z_E + e^z_d n^z_C. Thus, the bit error rate in the Z basis is

E^z = m^z / n^z.  (C4)

The overall number of "effective" events in the X basis is

n^x = (1/π) ∫_σ^{σ+δ} x^θ_{ν_a ν_b} dθ = ( N p_{ν_a} p_{ν_b} / π ) ∫_σ^{σ+δ} y_{ν_a ν_b} ( e^{ω_{ν_a ν_b} cos θ} + e^{−ω_{ν_a ν_b} cos θ} − 2 y_{ν_a ν_b} ) dθ.  (C5)

For simplicity, we only consider the case in which all matched events satisfy θ^i − θ^j = 0. In this case, when r^i_a ⊕ r^j_a ⊕ r^i_b ⊕ r^j_b = 0 (1), the {ν^i_a ν^j_a, ν^i_b ν^j_b} event is considered to be an error event when different detectors (the same detector) click at time bins i and j.

The overall error count in the X basis can be given as

m^x = (1/π) ∫_σ^{σ+δ} x^θ_{ν_a ν_b} p_E dθ = ( 2 N p_{ν_a} p_{ν_b} / π ) ∫_σ^{σ+δ} y_{ν_a ν_b} [ 1 − (1 − y_{ν_a ν_b})² / ( e^{ω_{ν_a ν_b} cos θ} + e^{−ω_{ν_a ν_b} cos θ} − 2 y_{ν_a ν_b} ) ] dθ.  (C6)
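The algebra in the proof of Proposition 1 above is easy to check numerically. The sketch below is our own illustration (the variable names are not from the paper): it builds the matrix W of Eq. (B2) over GF(2), evaluates h_{p,s}(M) = M(W)s by accumulating s_i = W^i s, and confirms that the hash vanishes when p(x) divides M(x), exactly as the Cayley–Hamilton argument predicts.

```python
def companion(p):
    """W of Eq. (B2): first row holds p_{n-1} ... p_0, ones on the subdiagonal."""
    n = len(p)
    W = [[0] * n for _ in range(n)]
    W[0] = list(p)
    for i in range(1, n):
        W[i][i - 1] = 1
    return W

def mat_vec(W, v):
    # matrix-vector product over GF(2)
    return [sum(a * b for a, b in zip(row, v)) % 2 for row in W]

def lfsr_toeplitz_hash(p, s, M):
    """h_{p,s}(M) = sum_i M_i W^i s (mod 2), with M = [M_0, ..., M_{m-1}]."""
    W = companion(p)
    h, col = [0] * len(s), list(s)
    for bit in M:
        h = [(hi + bit * ci) % 2 for hi, ci in zip(h, col)]
        col = mat_vec(W, col)          # s_{i+1} = W s_i, Eq. (B3)
    return h

# p(x) = x^3 + x + 1, irreducible over GF(2); coefficients [p_2, p_1, p_0]
p = [0, 1, 1]
# M(x) = p(x) itself, stored low degree first: 1 + x + x^3 -> hash is zero
assert lfsr_toeplitz_hash(p, [1, 0, 1], [1, 1, 0, 1]) == [0, 0, 0]
# a message polynomial NOT divisible by p(x): here M(x) = 1, so h = s
assert lfsr_toeplitz_hash(p, [1, 0, 1], [1, 0, 0, 0]) == [1, 0, 1]
```

The first assertion is exactly the collision condition exploited in the security analysis: any forged message whose polynomial difference is a multiple of p(x) hashes to the same value.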
Here p_E = 2 q^{Lθ}_{ν_a ν_b} q^{Rθ}_{ν_a ν_b} / ( q^θ_{ν_a ν_b} )² is the error probability appearing in Eq. (C6).

We can then calculate the parameters in Eq. (6) to estimate the key rate and the information leaked after the distribution stage. In the following description, let x* denote the expected value of x. We denote the number of {k_a, k_b} events as x_{k_a k_b}, and the number and error number of events {k^i_a k^j_a, k^i_b k^j_b} after post-matching as n_{k^i_a k^j_a, k^i_b k^j_b} and m_{k^i_a k^j_a, k^i_b k^j_b}, respectively. For simplicity, we abbreviate k^i_a k^j_a, k^i_b k^j_b as 2k_a, 2k_b when k^i_a = k^j_a and k^i_b = k^j_b.

(1) s^z_11. s^z_11 corresponds to the number of successful detection events where Alice and Bob emit a single photon in different time bins in the Z basis. Define z_10 (z_01) as the number of events in which Alice (Bob) emits a single photon and Bob (Alice) emits a vacuum state in an {µ_a, o_b} ({o_a, µ_b}) event. The lower bounds of their expected values are z*_10 = N p_{µ_a} p_{o_b} µ_a e^{−µ_a} y*_10 and z*_01 = N p_{o_a} p_{µ_b} µ_b e^{−µ_b} y*_01, respectively, where y*_10 and y*_01 are the corresponding yields. These can be estimated using the decoy-state method:

y*_01 ≥ ( µ_b / ( N ( µ_b ν_b − ν_b² ) ) ) [ e^{ν_b} x*_{o_a ν_b} / ( p_{o_a} p_{ν_b} ) − ( ν_b² / µ_b² ) e^{µ_b} x*_{ô_a µ_b} / ( p_{ô_a} p_{µ_b} ) − ( ( µ_b² − ν_b² ) / µ_b² ) x^{d*}_{oo} / p^d_{o_a o_b} ],  (C7)

y*_10 ≥ ( µ_a / ( N ( µ_a ν_a − ν_a² ) ) ) [ e^{ν_a} x*_{ν_a o_b} / ( p_{ν_a} p_{o_b} ) − ( ν_a² / µ_a² ) e^{µ_a} x*_{µ_a ô_b} / ( p_{µ_a} p_{ô_b} ) − ( ( µ_a² − ν_a² ) / µ_a² ) x^{d*}_{oo} / p^d_{o_a o_b} ],  (C8)

where x^d_{oo} = x_{ô_a ô_b} + x_{ô_a o_b} + x_{o_a ô_b} represents the number of events where at least one user chooses the declare-vacuum state, and p^d_{oo} = p_{ô_a} p_{ô_b} + p_{ô_a} p_{o_b} + p_{o_a} p_{ô_b} refers to the corresponding probability. Thus, the lower bound of s^{z*}_11 is given by

s^{z*}_11 = z*_10 z*_01 / x_max.  (C9)

(2) s^z_{0µ_b}. s^z_{0µ_b} represents the number of events in the Z basis where Alice emits a zero-photon state in the two matched time bins and the total intensity of Bob's pulses is µ_b. We define z_00 (z_{0µ_b}) as the number of detection events where the state sent by Alice collapses to the vacuum state in the {µ_a, o_b} ({µ_a, µ_b}) event. The lower bounds of the expected values are z*_00 = p_{µ_a} p_{o_b} e^{−µ_a} x^{d*}_{oo} / p^d_{o_a o_b} and z*_{0µ_b} = p_{µ_a} p_{µ_b} e^{−µ_a} x*_{o_a µ_b} / ( p_{o_a} p_{µ_b} ), respectively. Here, we employ the relationships between the expected values x*_{o_a µ_b} = p_{o_a} x*_{ô_a µ_b} / p_{ô_a} and x*_{o_a o_b} = p_{o_a} p_{o_b} x^{d*}_{oo} / p^d_{oo}. The lower bound of s^{z*}_{0µ_b} can be written as

s^{z*}_{0µ_b} = x*_{o_a µ_b} z*_00 / x_max + x*_{o_a o_b} z*_{0µ_b} / x_max.  (C10)

(3) s^x_11. We define the phase difference between Alice and Bob as θ = θ_a − θ_b + φ_ab. All valid events in the X basis can be grouped according to the phase difference θ (∈ {−δ, δ} ∪ {π − δ, π + δ}), and the corresponding number in the {k_a, k_b} event is denoted as x^θ_{k_a k_b}. In the post-matching step, two time bins are matched if they have the same phase difference θ. Supposing the global phase difference θ is a randomly and uniformly distributed value, and considering the angle of misalignment in the X basis σ, the expected number of single-photon pairs can be given by

s^{x*}_11 = (1/π) ∫_σ^{σ+δ} x^θ_{ν_a ν_b} × 2 ( ν_a e^{−(ν_a+ν_b)} y*_10 / q^θ_{ν_a ν_b} ) ( ν_b e^{−(ν_a+ν_b)} y*_01 / q^θ_{ν_a ν_b} ) dθ = ( N p_{ν_a} p_{ν_b} / π ) ∫_σ^{σ+δ} 2 ν_a ν_b e^{−2(ν_a+ν_b)} y*_01 y*_10 / q^θ_{ν_a ν_b} dθ,  (C11)

where q^θ_{ν_a ν_b} is the gain when Alice chooses intensity ν_a and Bob chooses intensity ν_b with phase difference θ, and x^θ_{ν_a ν_b} = N p_{ν_a} p_{ν_b} q^θ_{ν_a ν_b}.

(4) e^x_11. For single-photon pairs, the expected value of the phase error rate in the Z basis is equal to the expected value of the bit error rate in the X basis. Therefore, we first calculate the number of errors of the single-photon pairs in the X basis, t^x_11. The upper bound of t^x_11 can be expressed as

t^x_11 ≤ m^x − ( m_{ν_a 0, ν_b 0} + m_{0ν_a, 0ν_b} ) + m_{00,00},  (C12)

where m_{ν_a 0, ν_b 0} (m_{0ν_a, 0ν_b}) is the error count when the states sent by Alice and Bob in time bin i (j) both collapse to the vacuum state in events {2ν_a, 2ν_b}, and m_{00,00} corresponds to the event where the states sent by Alice and Bob collapse to vacuum states in both time bins in events {2ν_a, 2ν_b}. The expected counts ( n_{ν_a 0, ν_b 0} + n_{0ν_a, 0ν_b} )* and n*_{00,00} can be expressed as

( n_{ν_a 0, ν_b 0} + n_{0ν_a, 0ν_b} )* = (2/π) ∫_σ^{σ+δ} x^θ_{ν_a ν_b} e^{−(ν_a+ν_b)} q*_00 / q^θ_{ν_a ν_b} dθ = ( 2δ N p_{ν_a} p_{ν_b} / π ) e^{−(ν_a+ν_b)} q*_00  (C13)

and

n*_{00,00} = (1/π) ∫_σ^{σ+δ} x^θ_{ν_a ν_b} ( e^{−(ν_a+ν_b)} q*_00 / q^θ_{ν_a ν_b} )² dθ = ( N p_{ν_a} p_{ν_b} / π ) ∫_σ^{σ+δ} e^{−2(ν_a+ν_b)} ( q*_00 )² / q^θ_{ν_a ν_b} dθ,  (C14)

respectively. Here q*_00 = x^{d*}_{o_a o_b} / ( N p^d_{oo} ). Using the fact that the error rate of the vacuum state is always 1/2, we have ( m_{ν_a 0, ν_b 0} + m_{0ν_a, 0ν_b} )* = (1/2)( n_{ν_a 0, ν_b 0} + n_{0ν_a, 0ν_b} )* and m*_{00,00} = (1/2) n*_{00,00}. Hence the upper bound of the bit error rate in the X basis can be given by

e^x_11 = t^x_11 / s^x_11.  (C15)
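As a sanity check on the detector model used throughout this appendix, the closed form of the overall gain, Q_{k_a k_b} = 2y[I_0(ω) − y], can be compared against direct numerical integration of the single-detector gains in Eq. (C1). The snippet below is our own illustration (the parameter values are arbitrary); it uses a power series for I_0 so that only the standard library is needed.

```python
import math

def i0(x, terms=40):
    # zero-order modified Bessel function of the first kind, via its power series
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2 for k in range(terms))

def gain_numeric(eta_a, k_a, eta_b, k_b, p_d, steps=4096):
    """(1/2pi) * integral of Q^{L,theta} + Q^{R,theta} over [0, 2pi), Eq. (C1)."""
    y = (1 - p_d) * math.exp(-(eta_a * k_a + eta_b * k_b) / 2)
    w = math.sqrt(eta_a * k_a * eta_b * k_b)
    acc = 0.0
    for i in range(steps):                           # midpoint rule
        th = 2 * math.pi * (i + 0.5) / steps
        qL = y * (math.exp(w * math.cos(th)) - y)
        qR = y * (math.exp(-w * math.cos(th)) - y)
        acc += qL + qR
    return acc / steps

def gain_closed(eta_a, k_a, eta_b, k_b, p_d):
    """Closed form 2y [I_0(w) - y]."""
    y = (1 - p_d) * math.exp(-(eta_a * k_a + eta_b * k_b) / 2)
    w = math.sqrt(eta_a * k_a * eta_b * k_b)
    return 2 * y * (i0(w) - y)

assert abs(gain_numeric(0.1, 0.4, 0.1, 0.6, 1e-6)
           - gain_closed(0.1, 0.4, 0.1, 0.6, 1e-6)) < 1e-12
```

The agreement follows from the integral representation I_0(ω) = (1/2π)∫_0^{2π} e^{ω cos θ} dθ; the midpoint rule converges very rapidly here because the integrand is smooth and periodic.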
(5) φ^z_11. For a failure probability ε, the upper bound of the phase error rate φ^z_11 can be obtained by using random sampling without replacement [70]:

φ^z_11 ≤ e^x_11 + γ^U( s^z_11, s^x_11, e^x_11, ε ),  (C16)

where

γ^U(n, k, λ, ε) = [ (1 − 2λ) A G / (n + k) + √( A² G² / (n + k)² + 4 λ (1 − λ) G ) ] / [ 2 + 2 A² G / (n + k)² ],  (C17)

with A = max{n, k} and G = ( (n + k) / (n k) ) ln( (n + k) / ( 2π n k λ (1 − λ) ε² ) ).

(6) s^{zn}_11, s^{zn}_{0µ_b} and φ^{zn}_11. Finally, we can estimate the parameters in Eq. (6), i.e., the lower bounds of the numbers of vacuum events and single-photon pairs in a selected key group, s^z_{11L} and s^z_{0µ_b L}, and the upper bound of the phase error rate of the n-bit group, φ^z_{11U}. They can be obtained from the parameters above by using random sampling without replacement:

s^{zn}_11 ≥ n ( s^z_11 / n^z − γ^U( n, n^z − n, s^z_11 / n^z, ε ) ),
s^{zn}_{0µ_b} ≥ n ( s^z_{0µ_b} / n^z − γ^U( n, n^z − n, s^z_{0µ_b} / n^z, ε ) ),
φ^{zn}_11 ≤ φ^z_11 + γ^U( s^{zn}_11, s^z_11 − s^{zn}_11, φ^z_11, ε ).  (C18)

(7) l_key. We can also obtain the length of the final keys of TP-TFQKD, which can be used to simulate the performance of OTUH-QDS in Fig. 4:

l_key = s^z_{0µ_b} + s^z_11 ( 1 − H(φ^z_11) ) − n^z f H(E^z) − log_2( 2/ε_cor ) − 2 log_2( 1/(2 ε_PA) ),  (C19)

where ε_PA is the failure probability of privacy amplification.

**2.** **BB84-KGP in BB84-QDS and this work**

Both BB84-QDS and this work utilize decoy-state BB84-KGP to generate correlated bit strings. According to Ref. [71], we can estimate the number of vacuum events and the number of single-photon events in the X basis:

s_{X,0} ≥ τ_0 ( µ_2 n^−_{X,µ_3} − µ_3 n^+_{X,µ_2} ) / ( µ_2 − µ_3 ),  (C20)

s_{X,1} ≥ τ_1 µ_1 [ n^−_{X,µ_2} − n^+_{X,µ_3} − ( ( µ_2² − µ_3² ) / µ_1² ) ( n^+_{X,µ_1} − s_{X,0} / τ_0 ) ] / ( µ_1 ( µ_2 − µ_3 ) − µ_2² + µ_3² ),  (C21)

where τ_n := Σ_{k∈K} e^{−k} k^n p_k / n! is the probability that Alice sends an n-photon state, and

n^±_{X,k} := ( e^k / p_k ) ( n_{X,k} ± √( ( n_X / 2 ) ln( 21/ε_sec ) ) ),  ∀k ∈ K,
m^±_{Z,k} := ( e^k / p_k ) ( m_{Z,k} ± √( ( m_Z / 2 ) ln( 21/ε_sec ) ) ),  ∀k ∈ K.

The total number of events in the X basis is n_X = Σ_{k∈K} n_{X,k} and the number of error events is m_X = Σ_{k∈K} m_{X,k}.

We can also calculate the number of vacuum events, s_{Z,0}, and the number of single-photon events, s_{Z,1}, for Z = ∪_{k∈K} Z_k, i.e., by using Eqs. (C20) and (C21) with statistics from the basis Z. Then we can obtain the phase error rate of the single-photon events in the X basis by

φ_{X,1} := c_{X,1} / s_{X,1} ≤ v_{Z,1} / s_{Z,1} + γ^U( s_{Z,1}, s_{X,1}, v_{Z,1} / s_{Z,1}, ε_sec ),  (C22)

where

v_{Z,1} ≤ τ_1 ( m^+_{Z,µ_2} − m^−_{Z,µ_3} ) / ( µ_2 − µ_3 ).

In BB84-QDS, the information unknown to the attacker is given by

H = s_{X,0} + s_{X,1} ( 1 − h(φ_{X,1}) ).  (C23)

In our protocol based on BB84-KGP, we need to estimate parameters in a selected n-bit group, i.e., the lower bounds of the numbers of vacuum events and single-photon events in the X basis, s^n_{X,0} and s^n_{X,1}, and the upper bound of the phase error rate of the single-photon events in the X basis, φ^n_{X,1}:

s^n_{X,0} ≥ n ( s_{X,0} / n_Z − γ^U( n, n_Z − n, s_{X,0} / n_Z, ε ) ),  (C24)
s^n_{X,1} ≥ n ( s_{X,1} / n_Z − γ^U( n, n_Z − n, s_{X,1} / n_Z, ε ) ),  (C25)
φ^n_{X,1} ≤ φ_{X,1} + γ^U( s^n_{X,1}, s_{X,1} − s^n_{X,1}, φ_{X,1}, ε ).  (C26)

Finally we can obtain

H = s^n_{X,0} + s^n_{X,1} ( 1 − h(φ^n_{X,1}) ) − λ_EC,  (C27)

where λ_EC = n h( m_X / n_X ).
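The bound γ^U of Eq. (C17) is plain arithmetic and is convenient to have as a helper when reproducing the finite-key estimates. The sketch below is our own transcription (a natural logarithm is assumed, as in the text; the sample parameter values are arbitrary).

```python
import math

def gamma_u(n, k, lam, eps):
    """Random-sampling-without-replacement bound, Eq. (C17)."""
    A = max(n, k)
    G = (n + k) / (n * k) * math.log(
        (n + k) / (2 * math.pi * n * k * lam * (1 - lam) * eps ** 2))
    num = ((1 - 2 * lam) * A * G / (n + k)
           + math.sqrt(A ** 2 * G ** 2 / (n + k) ** 2 + 4 * lam * (1 - lam) * G))
    return num / (2 + 2 * A ** 2 * G / (n + k) ** 2)

# the correction is small for large samples and grows as eps shrinks
g1 = gamma_u(1e6, 1e6, 0.01, 1e-10)
g2 = gamma_u(1e6, 1e6, 0.01, 1e-20)
assert 0 < g1 < g2 < 0.1
```

Used as in Eq. (C16), the observed rate λ plus γ^U(n, k, λ, ε) upper-bounds the rate in the unsampled portion except with probability ε.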
**3.** **SNS-KGP and SNS-QDS with random pairing**

We first follow the calculation in Ref. [72]. Alice and Bob obtain N_jk (jk ∈ {00, 01, 02, 10, 20}) instances when Alice sends intensity j and Bob sends intensity k. Here '1' and '2' represent the two intensities used in the KGP. After the sifting step, Alice and Bob obtain n_jk one-detector heralded events. We denote the counting rate of source jk as S_jk = n_jk / N_jk. With all these definitions, we have

N_00 = [ (1 − p_z)² p_0² + 2 (1 − p_z) p_z p_0 p_{z0} ] N,
N_01 = N_10 = [ (1 − p_z)² p_0 p_1 + (1 − p_z) p_z p_{z0} p_1 ] N,
N_02 = N_20 = [ (1 − p_z)² (1 − p_0 − p_1) p_0 + (1 − p_z) p_z p_{z0} (1 − p_0 − p_1) ] N.  (C28)

In addition, we need to define two new subsets of X_1 windows, C_{∆+} and C_{∆−}, to estimate the upper bound of e^{ph}_1. The number of instances in C_{∆±} is

N_{∆±} = ( ∆ / 2π ) (1 − p_z)² p_1² N.  (C29)

We denote the number of effective events with the right detector responding from C_{∆+} as n^R_{∆+}, and the number of effective events with the left detector responding from C_{∆−} as n^L_{∆−}. We then obtain the counting error rate of C_{∆±} as T_∆ = ( n^R_{∆+} + n^L_{∆−} ) / ( 2 N_{∆±} ).

If we denote the expected value of the counting rate of untagged photons as s^Z_1*, the lower bound of s^Z_1* is

s^Z_1* ≥ s̲^Z_1* = ( 1 / ( 2 µ_1 µ_2 ( µ_2 − µ_1 ) ) ) [ µ_2² e^{µ_1} ( S̲*_01 + S̲*_10 ) − µ_1² e^{µ_2} ( S̄*_02 + S̄*_20 ) − 2 ( µ_2² − µ_1² ) S̄*_00 ],  (C30)

where S*_jk is the expected value of S_jk, and S̄*_jk and S̲*_jk are the upper and lower bounds of S*_jk when we estimate the expected value from its observed value.

The expected value of the phase-flip error rate of the untagged photons satisfies

e^{ph}_1* ≤ ē^{ph}_1* = ( T̄*_∆ − (1/2) e^{−2µ_1} S̲*_00 ) / ( 2 µ_1 e^{−2µ_1} s̲^Z_1* ).  (C31)

Here we use the fact that the error rate of the vacuum state is always 1/2.

If the total transmittance of the experimental setup is η, then we have

n_00 = 2 p_d (1 − p_d) N_00,
n_01 = n_10 = 2 [ (1 − p_d) e^{−ηµ_1/2} − (1 − p_d)² e^{−ηµ_1} ] N_01,
n_02 = n_20 = 2 [ (1 − p_d) e^{−ηµ_2/2} − (1 − p_d)² e^{−ηµ_2} ] N_02,
n_t = n_signal + n_error,
E_z = n_error / n_t,
n^R_{∆+} = n^L_{∆−} = [ T_X (1 − 2 e_d) + e_d S_X ] N_{∆±},

where N_00, N_01, N_10, N_02, N_20, and N_{∆±} are defined in Eqs. (C28) and (C29), and

n_signal = 4 N p_z² p_{z0} (1 − p_{z0}) [ (1 − p_d) e^{−ηµ_z/2} − (1 − p_d)² e^{−ηµ_z} ],
n_error = 2 N p_z² (1 − p_{z0})² [ (1 − p_d) e^{−ηµ_z} I_0(ηµ_z) − (1 − p_d)² e^{−2ηµ_z} ] + 2 N p_z² p_{z0}² p_d (1 − p_d),
T_X = (1/∆) ∫_{−∆/2}^{∆/2} (1 − p_d) e^{−2ηµ_1 cos²(δ/2)} dδ − (1 − p_d)² e^{−2ηµ_1},
S_X = (1/∆) ∫_{−∆/2}^{∆/2} (1 − p_d) e^{−2ηµ_1 sin²(δ/2)} dδ − (1 − p_d)² e^{−2ηµ_1} + T_X,

where I_0(x) is the zero-order modified Bessel function of the first kind.

In SNS-QDS, the information unknown to the attacker is given by

H = s^Z_1* ( 1 − h( e^{ph}_1* ) ).  (C32)

In our protocol based on SNS-KGP, there is

s^{Z*}_{1n} = n ( s^Z_1* − γ^U( n, n_Z − n, s^Z_1* / n_Z, ε ) ),
e^{ph*}_{1n} = e^{ph}_1* + γ^U( s^{Z*}_{1n}, s^Z_1* − s^{Z*}_{1n}, e^{ph}_1*, ε ),  (C33)

and

H = s^{Z*}_{1n} ( 1 − h( e^{ph*}_{1n} ) ) − λ_EC,  (C34)

where λ_EC = n h(E_z).

In SNS-QDS with random pairing, we follow the calculation in Ref. [24]. After random pairing there are two different phase error rates,

ẽ'^{ph}_1 = ( e^{ph}_1 )² / [ ( e^{ph}_1 )² + ( 1 − e^{ph}_1 )² ],  (C35)
ẽ'^{ph}_2 = 1/2,  (C36)

and a new bit-flip error rate

E' = 2 E_z ( 1 − E_z ).  (C37)

The proportion of untagged bits after random pairing is

∆'_un = ∆_un² + 2 ∆_un ( 1 − ∆_un ),  (C38)

where ∆_un = N p_z² p_{z0} (1 − p_{z0}) s^Z_1 / n_t is the proportion of untagged bits before random pairing. The information unknown to the attacker is given by

H = ∆'_un − ∆_un² [ p_1 H(ẽ'^{ph}_1) + (1 − p_1) H(ẽ'^{ph}_2) ] − 2 ∆_un (1 − ∆_un) H(e^{ph}_1),  (C39)

where p_1 = ( e^{ph}_1 )² + ( 1 − e^{ph}_1 )².
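The random-pairing bookkeeping of Eqs. (C35)–(C38) is elementary and can be packaged as a small helper. The function below is our own sketch (the names and the chosen sample values are illustrative, not from the paper).

```python
def random_pairing(e_ph, E_z, delta_un):
    """Eqs. (C35)-(C38): error rates and untagged fraction after random pairing."""
    p1 = e_ph ** 2 + (1 - e_ph) ** 2           # weight of the aligned pair outcomes
    e1 = e_ph ** 2 / p1                         # Eq. (C35)
    e2 = 0.5                                    # Eq. (C36)
    E_prime = 2 * E_z * (1 - E_z)               # Eq. (C37)
    d_prime = delta_un ** 2 + 2 * delta_un * (1 - delta_un)   # Eq. (C38)
    return e1, e2, E_prime, d_prime

e1, e2, E_prime, d_prime = random_pairing(0.05, 0.1, 0.5)
assert e1 < 0.05                      # pairing suppresses the first phase error rate
assert abs(E_prime - 0.18) < 1e-12
assert abs(d_prime - 0.75) < 1e-12    # equals 1 - (1 - delta_un)**2
```

Note that ∆'_un = ∆_un² + 2∆_un(1 − ∆_un) = 1 − (1 − ∆_un)², i.e., a pair is untagged whenever at least one of its two bits was untagged before pairing.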
**Appendix D: Error correction and privacy amplification**

In this section we introduce our simulation of error correction and privacy amplification in Table II. We use the simulated data of TP-TFKGP at a distance of 400 km, which can be calculated by Eqs. (C3), (C4), and (C19) in Appendix C. We implement our simulation on a desktop computer with an Intel i5-10400 CPU and 8 GB of RAM.

We use the improved Cascade protocol to perform error correction, correcting 300 (or 39830) errors among 1.267 × 10^6 (or 1.695 × 10^8) bits. The detailed procedure of the improved Cascade protocol can be seen in Ref. [65]: users first block their keys, then perform a binary process on each block to correct the errors, and start a trace-back section to check for errors, until there are no errors in each block. The results show that the time consumption is 3.62 and 930.98 seconds for data sizes 10^11 and 10^13, respectively.

In the privacy amplification step, Alice chooses a random universal₂ hash function and applies it to the n_Z-bit keys after error correction to obtain the l-bit final keys. The choice of function is communicated to Bob, who also uses it to obtain his l-bit final keys. In the simulation we utilize a random Toeplitz matrix as the universal₂ hash function. When the data size is 10^13, the matrix is too large for the storage of our computer, so in the algorithm we block the matrix into 10 × 10 (100) submatrices of the same size to accomplish the calculation. For the hash manipulation of every submatrix we follow the procedure in Ref. [73], where a fast Fourier transform is used to speed up the calculation. In the simulation it takes 2.98 and 2.057 × 10^4 seconds for data sizes 10^11 and 10^13, respectively.

[1] W. Diffie and M. Hellman, New directions in cryptography, IEEE Trans. Inf. Theory 22, 644 (1976).
[2] R. A. DeMillo, Foundations of secure computation, Tech. Rep. (Georgia Institute of Technology, 1978).
[3] R. L. Rivest, A. Shamir, and L. Adleman, A method for obtaining digital signatures and public-key cryptosystems, Commun. ACM 21 (1978).
[4] T. Elgamal, A public key cryptosystem and a signature scheme based on discrete logarithms, IEEE Trans. Inf. Theory 31, 469 (1985).
[5] M. Stevens, E. Bursztein, P. Karpman, A. Albertini, and Y. Markov, The first collision for full SHA-1, in Advances in Cryptology – CRYPTO 2017, LNCS, Vol. 10401 (Springer, 2017) pp. 570–596.
[6] F. Boudot, P. Gaudry, A. Guillevic, N. Heninger, E. Thomé, and P. Zimmermann, Comparing the difficulty of factorization and discrete logarithm: a 240-digit experiment, in Annual International Cryptology Conference (Springer, 2020) pp. 62–91.
[7] P. W. Shor, Algorithms for quantum computation: discrete logarithms and factoring, in Proceedings 35th Annual Symposium on Foundations of Computer Science (IEEE, 1994) pp. 124–134.
[8] D. Gottesman and I. Chuang, Quantum digital signatures, arXiv preprint quant-ph/0105032 (2001).
[9] P. J. Clarke, R. J. Collins, V. Dunjko, E. Andersson, J. Jeffers, and G. S. Buller, Experimental demonstration of quantum digital signatures using phase-encoded coherent states of light, Nat. Commun. 3, 1174 (2012).
[10] R. J. Collins, R. J. Donaldson, V. Dunjko, P. Wallden, P. J. Clarke, E. Andersson, J. Jeffers, and G. S. Buller, Realization of quantum digital signatures without the requirement of quantum memory, Phys. Rev. Lett. 113, 040502 (2014).
[11] V. Dunjko, P. Wallden, and E. Andersson, Quantum digital signatures without quantum memory, Phys. Rev. Lett. 112, 040502 (2014).
[12] H.-L. Yin, Y. Fu, and Z.-B. Chen, Practical quantum digital signature, Phys. Rev. A 93, 032316 (2016).
[13] R. Amiri, P. Wallden, A. Kent, and E. Andersson, Secure quantum signatures using insecure quantum channels, Phys. Rev. A 93, 032325 (2016).
[14] I. V. Puthoor, R. Amiri, P. Wallden, M. Curty, and
E. Andersson, Measurement-device-independent quantum digital signatures, Phys. Rev. A 94, 022328 (2016).
[15] T. Shang, Q. Lei, and J. Liu, Quantum random oracle model for quantum digital signature, Phys. Rev. A 94, 042314 (2016).
[16] Y.-G. Yang, Z.-C. Liu, J. Li, X.-B. Chen, H.-J. Zuo, Y.-H.
Zhou, and W.-M. Shi, Theoretically extensible quantum
digital signature with starlike cluster states, Quantum
Inf. Process. 16, 12 (2017).
[17] M. Thornton, H. Scott, C. Croal, and N. Korolkova,
Continuous-variable quantum digital signatures over insecure channels, Phys. Rev. A 99, 032341 (2019).
[18] W. Qu, Y. Zhang, H. Liu, T. Dou, J. Wang, Z. Li, S. Yang, and H. Ma, Multi-party ring quantum digital signatures, J. Opt. Soc. Am. B 36, 1335 (2019).
[19] C.-M. Zhang, Y. Zhu, J.-J. Chen, and Q. Wang, Practical
quantum digital signature with configurable decoy states,
Quantum Inf. Process. 19, 151 (2020).
[20] Y.-S. Lu, X.-Y. Cao, C.-X. Weng, J. Gu, Y.-M. Xie, M.G. Zhou, H.-L. Yin, and Z.-B. Chen, Efficient quantum
digital signatures without symmetrization step, Opt. Express 29, 10162 (2021).
[21] C.-H. Zhang, X. Zhou, C.-M. Zhang, J. Li, and Q. Wang,
Twin-field quantum digital signatures, Opt. Lett. 46,
3757 (2021).
[22] W. Zhao, R. Shi, J. Shi, P. Huang, Y. Guo, and D. Huang, Multibit quantum digital signature with continuous variables using basis encoding over insecure channels, Phys. Rev. A 103, 012410 (2021).
[23] C.-X. Weng, Y.-S. Lu, R.-Q. Gao, Y.-M. Xie, J. Gu, C.-L. Li, B.-H. Li, H.-L. Yin, and Z.-B. Chen, Secure and practical multiparty quantum digital signatures, Opt. Express 29, 27661 (2021).
[24] J.-Q. Qin, C. Jiang, Y.-L. Yu, and X.-B. Wang, Quantum digital signatures with random pairing, Phys. Rev. Applied 17, 044047 (2022).
[25] M.-H. Zhang, J.-H. Xie, J.-Y. Wu, L.-Y. Yue, C. He, Z.-W. Cao, and J.-Y. Peng, Practical long-distance twin-field quantum digital signatures, Quantum Inf. Process. 21, 150 (2022).
[26] H.-L. Yin, Y. Fu, H. Liu, Q.-J. Tang, J. Wang, L.-X.
You, W.-J. Zhang, S.-J. Chen, Z. Wang, Q. Zhang, T.-Y.
Chen, Z.-B. Chen, and J.-W. Pan, Experimental quantum digital signature over 102 km, Phys. Rev. A 95,
032334 (2017).
[27] R. J. Collins, R. Amiri, M. Fujiwara, T. Honjo,
K. Shimizu, K. Tamaki, M. Takeoka, M. Sasaki, E. Andersson, and G. S. Buller, Experimental demonstration
of quantum digital signatures over 43 db channel loss using differential phase shift quantum key distribution, Sci.
Rep. 7, 3235 (2017).
[28] H.-L. Yin, W.-L. Wang, Y.-L. Tang, Q. Zhao, H. Liu, X.-X. Sun, W.-J. Zhang, H. Li, I. V. Puthoor, L.-X. You, E. Andersson, Z. Wang, Y. Liu, X. Jiang, X. Ma, Q. Zhang, M. Curty, T.-Y. Chen, and J.-W. Pan, Experimental measurement-device-independent quantum digital signatures over a metropolitan network, Phys. Rev. A 95, 042338 (2017).
[29] G. Roberts, M. Lucamarini, Z. Yuan, J. Dynes, L. Comandar, A. Sharpe, A. Shields, M. Curty, I. Puthoor,
and E. Andersson, Experimental measurement-deviceindependent quantum digital signatures, Nat. Commun.
**8, 1098 (2017).**
[30] C.-H. Zhang, X.-Y. Zhou, H.-J. Ding, C.-M. Zhang, G.-C.
Guo, and Q. Wang, Proof-of-principle demonstration of
passive decoy-state quantum digital signatures over 200
km, Phys. Rev. Applied 10, 034033 (2018).
[31] X.-B. An, H. Zhang, C.-M. Zhang, W. Chen, S. Wang, Z.-Q. Yin, Q. Wang, D.-Y. He, P.-L. Hao, S.-F. Liu, X.-Y. Zhou, G.-C. Guo, and Z.-F. Han, Practical quantum digital signature with a gigahertz BB84 quantum key distribution system, Opt. Lett. 44, 139 (2019).
[32] H.-J. Ding, J.-J. Chen, L. Ji, X.-Y. Zhou, C.-H. Zhang,
C.-M. Zhang, and Q. Wang, 280-km experimental demonstration of a quantum digital signature with one decoy
state, Opt. Lett. 45, 1711 (2020).
[33] S. Richter, M. Thornton, I. Khan, H. Scott, K. Jaksch, U. Vogl, B. Stiller, G. Leuchs, C. Marquardt, and N. Korolkova, Agile and versatile quantum communication: Signatures and secrets, Phys. Rev. X 11, 011038 (2021).
[34] M.-C. Roehsner, J. A. Kettlewell, J. Fitzsimons, and
P. Walther, Probabilistic one-time programs using quantum entanglement, npj Quantum Inf. 7, 98 (2021).
[35] M.-C. Roehsner, J. A. Kettlewell, T. B. Batalh˜ao, J. F.
Fitzsimons, and P. Walther, Quantum advantage for
probabilistic one-time programs, Nat. Commun. 9, 5225
(2018).
[36] Y. Pelet, I. V. Puthoor, N. Venkatachalam,
S. Wengerovsky, M. Loncaric, S. P. Neumann, B. Liu,
Z. Samec, M. Stipˇcevi´c, R. Ursin, et al., Unconditionally
secure digital signatures implemented in an 8-user
quantum network, New J. Phys. (2022).
[37] T.-Y. Wang, X.-Q. Cai, Y.-L. Ren, and R.-L. Zhang, Security of quantum digital signatures for classical messages, Sci. Rep. 5, 9321 (2015).
[38] T.-Y. Wang, J.-F. Ma, and X.-Q. Cai, The postprocessing
of quantum digital signatures, Quantum Inf. Process. 16,
19 (2017).
[39] H. Zhang, X.-B. An, C.-H. Zhang, C.-M. Zhang, and
Q. Wang, High-efficiency quantum digital signature
scheme for signing long messages, Quantum Inf. Process.
**18, 3 (2019).**
[40] X.-Q. Cai, T.-Y. Wang, C.-Y. Wei, and F. Gao, Cryptanalysis of multiparty quantum digital signatures, Quantum Inf. Process. 18, 252 (2019).
[41] X. Yao, X. Liu, R. Xue, H. Wang, H. Li, Z. Wang, L. You,
Y. Huang, and W. Zhang, Multi-bit quantum digital signature based on quantum temporal ghost imaging, arXiv
preprint arXiv:1901.03004 (2019).
[42] H.-L. Yin, Y. Fu, C.-L. Li, C.-X. Weng, B.-H. Li, J. Gu,
Y.-S. Lu, S. Huang, and Z.-B. Chen, Experimental quantum secure network with digital signatures and encryption, Natl. Sci. Rev. 10, nwac228 (2023).
[43] Y.-M. Xie, C.-X. Weng, Y.-S. Lu, Y. Fu, Y. Wang, H.L. Yin, and Z.-B. Chen, Scalable high-rate twin-field
quantum key distribution networks without constraint
of probability and intensity, Phys. Rev. A 107, 042603
(2023).
[44] C. H. Bennett and G. Brassard, Quantum cryptography:
Public key distribution and coin tossing, Theor. Comput.
Sci. 560, 7 (2014).
[45] H.-K. Lo, M. Curty, and B. Qi, Measurement-deviceindependent quantum key distribution, Phy. Rev. Lett.
**108, 130503 (2012).**
[46] S. L. Braunstein and S. Pirandola, Side-channel-free
quantum key distribution, Phy. Rev. Lett. 108, 130502
(2012).
[47] Y.-H. Zhou, Z.-W. Yu, and X.-B. Wang, Making the
decoy-state measurement-device-independent quantum
key distribution practically useful, Phys. Rev. A 93,
042324 (2016).
[48] M. Lucamarini, Z. L. Yuan, J. F. Dynes, and A. J.
Shields, Overcoming the rate–distance limit of quantum
key distribution without quantum repeaters, Nature 557,
400 (2018).
[49] X. Ma, P. Zeng, and H. Zhou, Phase-matching quantum
key distribution, Phys. Rev. X 8, 031043 (2018).
[50] X.-B. Wang, Z.-W. Yu, and X.-L. Hu, Twin-field quantum key distribution with large misalignment error, Phy.
Rev. A 98, 062323 (2018).
[51] W.-B. Liu, C.-L. Li, Y.-M. Xie, C.-X. Weng, J. Gu,
X.-Y. Cao, Y.-S. Lu, B.-H. Li, H.-L. Yin, and Z.-B.
Chen, Homodyne detection quadrature phase shift keying
continuous-variable quantum key distribution with high
excess noise tolerance, PRX Quantum 2, 040334 (2021).
[52] Y.-M. Xie, Y.-S. Lu, C.-X. Weng, X.-Y. Cao, Z.-Y. Jia,
Y. Bao, Y. Wang, Y. Fu, H.-L. Yin, and Z.-B. Chen,
Breaking the rate-loss bound of quantum key distribution
with asynchronous two-photon interference, PRX Quantum 3, 020315 (2022).
[53] P. Zeng, H. Zhou, W. Wu, and X. Ma, Mode-pairing
quantum key distribution, Nat. Commun. 13, 3903
(2022).
[54] J. Gu, X.-Y. Cao, Y. Fu, Z.-W. He, Z.-J. Yin, H.-L.
Yin, and Z.-B. Chen, Experimental measurement-deviceindependent type quantum key distribution with flawed
and correlated sources, Sci. Bull. 67, 2167 (2022).
[55] Y. Fu, H.-L. Yin, T.-Y. Chen, and Z.-B. Chen,
Long-distance measurement-device-independent multiparty quantum communication, Phys. Rev. Lett. 114,
090501 (2015).
[56] J. Gu, X.-Y. Cao, H.-L. Yin, and Z.-B. Chen, Differential
phase shift quantum secret sharing using a twin field,
Opt. Express 29, 9165 (2021).
[57] A. Shen, X.-Y. Cao, Y. Wang, Y. Fu, J. Gu, W.-B. Liu, C.-X. Weng, H.-L. Yin, and Z.-B. Chen, Experimental quantum secret sharing based on phase encoding of coherent states, Sci. China-Phys. Mech. Astron. 66, 260311 (2023).
[58] Z. Li, X.-Y. Cao, C.-L. Li, C.-X. Weng, J. Gu, H.-L. Yin,
and Z.-B. Chen, Finite-key analysis for quantum conference key agreement with asymmetric channels, Quantum
Sci. Technol. 6, 045019 (2021).
[59] A. J. Menezes, P. C. Van Oorschot, and S. A. Vanstone,
_Handbook of applied cryptography (CRC press, 2018)._
[60] H. Krawczyk, Lfsr-based hashing and authentication, in
_Annual International Cryptology Conference (1994) pp._
129–139.
[61] J. L. Carter and M. N. Wegman, Universal classes of
hash functions, in Proceedings of the ninth annual ACM
_symposium on Theory of computing (1977) pp. 106–112._
[62] X.-B. Wang, Beating the photon-number-splitting attack
in practical quantum cryptography, Phys. Rev. Lett. 94,
230503 (2005).
[63] H.-K. Lo, X. Ma, and K. Chen, Decoy state quantum key
distribution, Phys. Rev. Lett. 94, 230504 (2005).
[64] G. Brassard and L. Salvail, Secret-key reconciliation by
public discussion, in Workshop on the Theory and Appli_cation of of Cryptographic Techniques (Springer, 1993)_
pp. 410–423.
[65] H. Yan, T. Ren, X. Peng, X. Lin, W. Jiang, T. Liu, and
H. Guo, Information reconciliation protocol in quantum
key distribution system, in 2008 Fourth International
_Conference on Natural Computation, Vol. 3 (IEEE, 2008)_
pp. 637–641.
[66] V. Shoup, On fast and provably secure message authentication based on universal hashing, in Annual Interna_tional Cryptology Conference (Springer, 1996) pp. 313–_
328.
[67] R. Konig, R. Renner, and C. Schaffner, The operational
meaning of min-and max-entropy, IEEE trans. Inform.
Theory 55, 4337 (2009).
[68] M. Hayashi, Exponential decreasing rate of leaked information in universal random privacy amplification, IEEE
trans. Inform. Theory 57, 3989 (2011).
[69] J. Massey, Shift-register synthesis and bch decoding,
IEEE trans. Inform. Theory 15, 122 (1969).
[70] H.-L. Yin, M.-G. Zhou, J. Gu, Y.-M. Xie, Y.-S. Lu, and
Z.-B. Chen, Tight security bounds for decoy-state quantum key distribution, Sci. Rep. 10, 14312 (2020).
[71] C. C. W. Lim, M. Curty, N. Walenta, F. Xu, and
H. Zbinden, Concise security bounds for practical decoystate quantum key distribution, Phys. Rev. A 89, 022307
(2014).
[72] C. Jiang, Z.-W. Yu, X.-L. Hu, and X.-B. Wang, Unconditional security of sending or not sending twin-field quantum key distribution with finite pulses, Phys. Rev. Applied 12, 024061 (2019).
[73] B.-Y. Tang, B. Liu, Y.-P. Zhai, C.-Q. Wu, and W.-R. Yu,
High-speed and large-scale privacy amplification scheme
for quantum key distribution, Sci. Rep. 9, 15733 (2019).
### 2019; 8(1): e14 Open Access
# BIG DATA ANALYSIS IN HEALTHCARE: APACHE HADOOP, APACHE SPARK AND APACHE FLINK
## Elham Nazari[1], Mohammad Hasan Shahriari[2], Hamed Tabesh[1*]
1Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran.
2Department of Electrical Engineering, Faculty of Computer Engineering, University of Isfahan, Isfahan, Iran.
**Article Info**
**_Article type:_** Review
**_Article history:_** Received: 2019-03-03; Revised: 2019-05-02; Accepted: 2019-06-02
**_* Corresponding author:_** Hamed Tabesh, Department of Medical Informatics, Faculty of Medicine, Mashhad University of Medical Sciences, Mashhad, Iran. Email: tabeshh@mums.ac.ir
**A B S T R A C T**
### Introduction:
Health care data is increasing, and the correct analysis of such data will improve the quality of care and reduce costs. This kind of data has certain features, such as high volume, variety and high-speed production, that make it impossible to analyze with ordinary hardware and software platforms, so choosing the right platform for managing it is very important. The purpose of this study is to introduce and compare the most popular and widely used platform for processing Big Data, Apache Hadoop MapReduce, with the Apache Spark and Apache Flink platforms, which have recently gained great prominence.
### Material and Methods:
This study is a survey based on a subject search of the ProQuest, PubMed, Google Scholar, Science Direct, Scopus, IranMedex, Irandoc, Magiran, ParsMedline and Scientific Information Database (SID) databases, as well as web reviews and specialized books, using related keywords. Finally, 80 articles related to the subject of the study were reviewed.
### Results:
The findings showed that each of the studied platforms has distinguishing features, such as its data processing model, language support, processing speed, computational model, memory management, optimization, latency, fault tolerance, scalability, performance, compatibility, security and so on. Overall, the Apache Hadoop environment offers simplicity, error detection, and cluster-based scalability management, but because it is based on batch processing it is slow for complex analyses and does not support stream processing. Apache Spark is a distributed computational platform that can process a Big Data set in memory with a very fast response time. Apache Flink allows users to keep data in memory and load it multiple times, and provides a sophisticated fault tolerance mechanism that continuously recovers the state of the data stream.
### Conclusion:
The choice of Big Data analysis and processing platform varies according to need. In other words, these technologies are complementary: each is applicable in a particular field and cannot be separated from the others, and, depending on the purpose and the expected results, either an existing platform must be selected for the analysis or custom tools must be designed on top of these platforms.
### Keywords:
_Big Data Analysis, Apache Hadoop, Apache Spark, Apache Flink, Healthcare._
### How to cite this paper
Nazari E, Shahriari MH, Tabesh H. Big Data Analysis in Healthcare: Apache Hadoop, Apache Spark and Apache Flink. Front [Health Inform. 2019; 8(1): e14. DOI: 10.30699/fhi.v8i1.180](http://dx.doi.org/10.30699/fhi.v8i1.180)
## INTRODUCTION
With the development of new technologies, health
care data is increasing. Estimated in 2012, the data is
about 200 petabyte (PB), estimated to reach 250000
PB by 2020 [1]. Analysis of these data is very
important for acquiring knowledge, for extracting
useful information and for discovering hidden data
patterns. And in the area of health care will improve
the quality of services, reduce costs and reduce
errors [2, 3]. This kind of data has many characteristics, including high volume, variety, scalability, complexity, high-speed production and uncertainty, which make it impossible to analyze with common data mining techniques and typical software and hardware [3-7].
A Big Data analysis is a process that organizes the
data collected from various sources and then
analyzes the data sets in order to discover the facts
and meaningful patterns [8].
Large-scale data analyses have many uses in the
field of health: for example, early diagnosis of
diseases such as breast cancer, in the processing of
medical images and medical signals for providing
high-quality diagnosis, monitoring patient
symptoms, tracking chronic diseases such as
diabetes, preventing incidence of contagious
diseases, education through social networks, genetic
data analysis and personalized (precision) medicine.
Examples of this type of data are omics data,
including genomics, transcriptomics, proteomics and
pharmacogenomics, biomedical data, web data and
data in various electronic health records (EHRs) and
hospital information systems (HISs). Data contained
in the EHR and HIS contain rich data including
demographic characteristics, test results, diagnosis
and information of each individual [9-17]. Therefore,
analysis of health data has been considered with
regard to its importance, so that it has led scholars
and scientists to create new structures, methodologies and approaches for managing, controlling and processing Big Data [18].
In recent years, many tools have been introduced for
Big Data analysis. We intend to introduce the tools
provided by the Apache Software Foundation and
then compare them with each other after defining
the Big Data and its features. Some of the tools
available for Big Data analysis are Apache Hadoop
[19], Spark [20], and Flink [21], the focus of these
tools is on batch processing or stream processing.
Mostly batch processing tools are based on the
Apache Hadoop Infrastructure such as Apache
Mahout. Data stream analysis programs are often
used for real-time analysis. Spark and Flink are
examples of data flow analysis platforms. The
interactive analysis process allows users to interact
directly in real time to conduct their analysis. For
example, "Hive and Drill" are cluster platforms that
support interactive analysis. These tools help researchers develop Big Data projects [22].
Big Data is a term used for data with a volume greater than 10^18 bytes (an exabyte); storage, management, sharing, analysis, and visualization of this type of data are difficult due to its characteristics [23, 24]. The analysis of this type of data includes these steps:
these steps:
- Acquisition
- Information extraction and cleaning
- Data integration
- Modeling and analysis
- Interpretation and deployment [25].
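As a toy illustration of these five steps, the following plain-Python sketch runs a miniature version of the pipeline on a few hypothetical patient records (the record fields and the fever threshold are invented for the example):

```python
raw_sources = [  # 1. Acquisition: records gathered from two hypothetical sources
    [{"id": 1, "temp_f": "101.3"}, {"id": 2, "temp_f": ""}],
    [{"id": 3, "temp_f": "98.4"}],
]

# 2. Information extraction and cleaning: parse values, drop unusable records
cleaned = []
for source in raw_sources:
    for rec in source:
        if rec["temp_f"]:
            cleaned.append({"id": rec["id"], "temp_f": float(rec["temp_f"])})

# 3. Data integration: here, simply merging both sources into one collection (done above)

# 4. Modeling and analysis: flag fevers with a simple threshold model
feverish = [rec["id"] for rec in cleaned if rec["temp_f"] > 100.0]

# 5. Interpretation and deployment: report the finding
print(f"Patients flagged for fever: {feverish}")  # Patients flagged for fever: [1]
```

In a real Big Data setting each of these stages would be distributed across a cluster by one of the platforms discussed below; the structure of the pipeline, however, stays the same.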
Big Data is defined by its attributes. These attributes are, in fact, the challenges that Big Data analysis has to address and manage. When this type of data first emerged, three characteristics were proposed; later studies listed 8 characteristics, in 2017 as many as 42 were proposed, and the number is expected to reach 120 by 2021 [26]. Some of the more important characteristics are described below:
- Volume: refers to the production of high-volume data that requires a lot of storage space.
- Velocity: data are produced at high, unpredictable rates.
- Variety: data come in many forms and formats. Data may be structured, semi-structured, or unstructured, and may include images, videos and audio.
- Veracity: refers to bias, noise and abnormality in big data. Extreme noise and incomplete, imprecise, inaccurate or redundant data, in other words data quality, fall under this characteristic.
- Vagueness: refers to the ambiguity of the information.
- Versatility: the same data can mean different things in different contexts [4-7].
Apache Hadoop is a suite of open source software that facilitates solving Big Data problems across a large number of computers. Many people regard Hadoop and MapReduce as the same thing, but this is not accurate [27]. In fact, Hadoop uses the MapReduce programming model to provide a framework for storing and processing Big Data.
While Hadoop was originally designed to use
computing on commodity (low- and mid-range) hardware, it has gradually come to be used on high-end hardware as well [28].
In recent years, many projects have been developed
to complete or modify the Hadoop, for this purpose
the term "Hadoop Ecosystem" is used to refer to
projects and related products. In Fig 1 the Hadoop
Ecosystem is shown [27].
To fully understand the Hadoop, you need to look at
both yourself and the ecosystem around it. The
Hadoop project consists of four main components
[27]:
1) Hadoop Distributed File System (HDFS): a file system for storing huge volumes of data across a cluster of machines. HDFS has a master-slave architecture and provides high throughput and fault tolerance, by default keeping three copies of each data block.
2) MapReduce data processing engine: a distributed programming and processing model.
3) Resource management layer, YARN (also known as MapReduce version 2): a distributed job scheduler that places jobs across the cluster and separates the infrastructure from the programming model.
4) Common libraries used in different parts of Hadoop and elsewhere. Some of these libraries, implemented in Java, include compression codecs, I/O utilities, and error detection [19-25].
The Hadoop Ecosystem consists of several projects
around the main components mentioned above.
These projects are designed to help researchers and
experts in all stages of the analysis and machine
learning workflow. The general structure of the
ecosystem consists of three layers: the storage layer,
the processing layer and the management layer [26-40].
**Fig 1: Hadoop Ecosystem [27]**
MapReduce is a programming model introduced by Google for Big Data processing, based on the “divide and conquer” method [41, 42]. The divide and conquer method is implemented in two steps: Map and Reduce. The steps in performing the MapReduce model are shown in Fig 2 [43].
**Fig 2: Steps to Perform the MapReduce Model [43]**
MapReduce programming enables a large amount of
data to be processed in parallel. Based on this model,
each program is a sequence of MapReduce operations, consisting of a Map stage and a Reduce stage, used to process large amounts of independent data. Both operations are applied to (key, value) pairs [44]:
Map step: the master node takes the input and divides it into smaller sub-problems, distributing them among worker nodes. A worker may do the same in turn, yielding a multi-level tree structure. Each worker processes its sub-problem and sends the result back to the master node.
Reduce step: the master node collects the answers and combines them to produce the output. Along the way, operations such as filtering, summarizing or transformation may be applied to the results.
The Map function takes a (key, value) pair and converts it into a list of pairs:
Map (k1, v1) -> list (k2, v2)
Then, the MapReduce framework collects all pairs
with the same key from all the lists and groups them
together. Then, for each key generated, a group is
created. Now the Reduce function applies to each
group:
Reduce (k2, list (v2)) -> list (v3)
The MapReduce framework thus converts a list of (key, value) pairs into a list of values. Each machine must be able to hold its list of (key, values) groups in main memory.
One of the important features of MapReduce is that failed nodes are detected and handled automatically, so the complexity of fault tolerance is hidden from the programmer [44].
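As a concrete illustration of the Map and Reduce signatures above, the following minimal Python sketch simulates the MapReduce dataflow for a word count on a single machine. It is only an illustration of the model (no Hadoop is involved), and the shuffle that groups intermediate pairs by key is emulated with an in-memory dictionary.

```python
from collections import defaultdict

# Map (k1, v1) -> list (k2, v2): emit (word, 1) for every word in a line.
def map_fn(_line_no, line):
    return [(word, 1) for word in line.split()]

# Reduce (k2, list(v2)) -> list(v3): sum the counts for one word.
def reduce_fn(word, counts):
    return [(word, sum(counts))]

def map_reduce(records, map_fn, reduce_fn):
    # Map phase: apply map_fn to every (key, value) input record.
    intermediate = []
    for k1, v1 in records:
        intermediate.extend(map_fn(k1, v1))
    # Shuffle phase: group all intermediate pairs by key.
    groups = defaultdict(list)
    for k2, v2 in intermediate:
        groups[k2].append(v2)
    # Reduce phase: apply reduce_fn to each (key, list-of-values) group.
    result = []
    for k2, values in groups.items():
        result.extend(reduce_fn(k2, values))
    return dict(result)

lines = enumerate(["big data big analysis", "big data platforms"])
print(map_reduce(lines, map_fn, reduce_fn))
# {'big': 3, 'data': 2, 'analysis': 1, 'platforms': 1}
```

In a real Hadoop job the map and reduce functions look the same, but the framework runs them in parallel across the cluster and performs the shuffle over the network.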
## RESULTS
Advantages of MapReduce are parallel processing, high scalability, low hardware cost, large-file processing, high throughput, and data locality [42, 44, 45].
Disadvantages of MapReduce are:
- Its rigid two-step dataflow: it does not directly support tasks with other kinds of data flows.
- The user must repeatedly hand-code joins, filtering and aggregation, which wastes time, introduces program errors, reduces readability and impedes optimization.
- Even for common operations such as filtering and projection, custom code must be written, which causes problems with reuse and maintenance.
- The opaque nature of the Map and Reduce functions hinders the system's ability to optimize.
- No support for streaming data [42, 44-51].
Apache Spark is an open source framework first presented at the University of California, Berkeley in 2009. Its many advantages have made it a powerful and useful engine for large-scale processing and distinguish it from other tools such as Hadoop and Storm [18, 20, 52-55].
All features provided by Apache Spark are built on top of the core [56]. Spark offers APIs in Java, Python and Scala. The Spark core API defines the resilient distributed dataset (RDD), Spark's original programming abstraction [57]. The core's key responsibilities are [56]:
1. The essential I/O capabilities.
2. Scheduling and monitoring of the Spark cluster.
3. Fault recovery; by recomputing in memory it avoids much of MapReduce's complexity.
Advantages of Apache Spark are:
- Easily installed.
- Major vendors such as Intel, IBM, Databricks, Cloudera and MapR have officially announced support for Apache Spark as a standard engine for Big Data analysis.
- Covers the needs of large-scale processing over varied data (text, graph data, etc.) and manages data sources properly.
- Roughly 10 to 100 times faster than Hadoop because of in-memory processing; it also outperforms MapReduce when executing programs on disk.
- Supports various programming languages, from Python to Scala and Java. The system has a set of over 80 high-level operators and can be used interactively for querying data within the shell.
- Suited to iterative processing, interactive processing and event processing.
- Can be integrated with the Hadoop Distributed File System (HDFS), HBase, Cassandra and other storage systems [18, 46, 50, 52, 57-59].
The main question is whether, with the arrival of Spark, we have to leave Hadoop behind, and how the two differ. In short, Spark and Hadoop cannot be considered completely separate from each other; with the advent of new tools, Spark has integrated with Hadoop and overcome many of its problems [50, 51]. Spark does not use MapReduce as its execution engine, but it is well integrated with Hadoop: it can run on YARN and work with the Hadoop HDFS data format. Spark has no security mechanism of its own and needs to be connected to the security mechanisms in YARN, such as Kerberos. As a result, Spark can be more powerful in combination with Hadoop [20, 54, 60].
Another top-level Apache project is Flink, which has the same mission as Spark in the Hadoop Ecosystem and has likewise been introduced as an alternative to Hadoop's MapReduce model. Apache Flink is an open source stream processor. Flink provides speed, efficiency, and precision for large-scale processing, and can handle both batch processing and stream processing.
Many of the concepts of the two tools are similar, but Flink is the more versatile and lighter-weight option. For example, the data stream engines of both Spark and Flink guarantee that each record is processed exactly once, so any duplicates that may occur are eliminated. Compared to other processing systems such as Storm, both offer very high throughput with low fault-tolerance overhead.
Like Spark, Flink can do structured query language (SQL), graph, machine learning and stream processing. It also works with NoSQL stores and relational database management systems (RDBMS) such as SQL Server and MongoDB. Flink combines aspects of disk-based MapReduce and in-memory Spark. Flink's advantages over Spark include the following:
Since Flink contains its own memory management system, its in-memory processing can be faster than Spark's [18, 28].
It has better performance for iterative processes: an iteration runs on a node independently of the rest of the cluster, which increases speed.
It can run classic MapReduce jobs and also integrates with Apache Tez [61].
Thanks to its micro-batching architecture, Spark provides near-real-time streaming, while Apache Flink provides true real-time streaming, owing to its pure stream architecture based on the Kappa architecture [48, 51, 59].
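The contrast between micro-batching and per-record streaming can be sketched in a few lines of plain Python. The example below is purely illustrative (the event values and threshold are invented): the micro-batch loop reacts only once a small batch has filled up, while the per-record loop reacts to every event as it arrives.

```python
events = [3, 7, 12, 5, 14, 2]  # hypothetical sensor readings
THRESHOLD = 10

def micro_batch_alerts(stream, batch_size=3):
    """Spark-style: buffer events into small batches, process one batch at a time."""
    alerts, batch = [], []
    for event in stream:
        batch.append(event)
        if len(batch) == batch_size:  # react only when a batch is full
            alerts.extend(e for e in batch if e > THRESHOLD)
            batch = []
    alerts.extend(e for e in batch if e > THRESHOLD)  # flush the last partial batch
    return alerts

def per_record_alerts(stream):
    """Flink-style: process each record the moment it arrives."""
    return [event for event in stream if event > THRESHOLD]

print(micro_batch_alerts(events))  # [12, 14] -- but only at batch boundaries
print(per_record_alerts(events))   # [12, 14] -- emitted immediately per record
```

Both loops produce the same alerts; the difference is latency. The micro-batch version cannot emit an alert for event 12 until the batch containing it is complete, which is why micro-batching is "near" real time rather than real time.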
Flink has a pipeline nature in processing data and
chooses the best method for doing it. In Fig 3, the
architecture and components of the Flink are shown
[62].
**Fig 3: Architecture and components of Apache Flink [42]**
Table 1 shows the differences between the Hadoop,
Spark, and Flink and they are examined more
precisely:
**Table 1: The Comparison of the features of Hadoop, Spark and Flink [2, 18, 20, 42, 43, 45-48, 51, 52, 57, 59, 62-66]**
|Attributes|Apache Flink|Apache Spark|Apache Hadoop|
|---|---|---|---|
|Data processing|Provides stream and batch processing|Batch processing, also supports stream processing|Batch processing|
|Language support|Java, Scala, Python and R|Java, Scala, Python and R|First of all Java, but other languages like Groovy, Ruby, C ++, C, Python, Perl are also supported|
|Extended language|Java, Scala|Scala|Java|
|Processing speed|Processes faster than Spark because of its underlying streaming engine|Processes up to 100 times faster than MapReduce because processing is done in memory|Data processing is much slower than in Spark and Flink|
|Computational model|Flink has adopted a continuous stream based on an operator-driven stream model. A continuous flow operator processes data quickly, without delay, in the collection. Flink can order that only some of the information that has actually been changed be processed. Therefore, job performance is significantly increased|Spark provides a near-real- time stream due to micro- batching architecture. Repeats your data batchwise. Each iteration should be planned and implemented separately|MapReduce selects the batch model. Batch essentially processes data in rest mode, captures a large amount of data, then processes and then writes in the output and does not support iterative processing.|
|Memory Management|Provides automatic storage management|Provides customizable memory management. In recent versions of Spark, automatic memory management is also possible|Provides customizable memory management. In other words, you can permanently or dynamically manage memory|
|Optimization|Flink comes with an optimizer that is independent of the programming interface. Flink Optimizer acts as a relational databases optimizer.|Job should be optimized manually. There is an optimizer named Catalyst, which is made in Scala language.|Job should be optimized manually|
|Delay|With the Apache Flink configuration, data execution is achieved with low latency and high speed. Flink can process data (at very high speeds and large volumes) in milliseconds|Spark is relatively fast because it keeps much of the input data in memory via RDDs, keeps intermediate data in memory, and writes to disk only when processing is completed or when necessary|The delay in Hadoop is greater than in Spark and Flink. The reason for this delay is support for various formats and structures and a huge amount of data.|
|Fault tolerance|The fault tolerance mechanism in Apache Flink is based on Chandy-Lamport distributed snapshots. The mechanism is lightweight, preserving high throughput while still providing strong recovery guarantees|Uses Resilient Distributed Datasets (RDDs), so the program recovers after a failure without additional code or configuration|It is highly fault tolerant; the program does not need to be restarted when errors occur|
|Scalability|Scalability is very high.|Scalability is very high.|MapReduce is highly scalable and has been used in production clusters of several thousand nodes|
|Performance|Flink's performance is superior; it uses iterative closed-loop operators to accelerate machine learning and graph processing|Stream processing is less efficient because Spark uses micro-batch processing|Hadoop's performance is slower than that of Spark and Flink|
|Duplicate elimination|Processes each record exactly once, eliminating duplicates|Processes each record exactly once, eliminating duplicates|Not available|
|Compatibility|Fully compatible with Hadoop: it can process data stored in Hadoop and supports all of its file formats and input formats|Hadoop and Spark are compatible with each other; Spark shares all of MapReduce's compatibility for data sources, file formats, and business intelligence tools through Java Database Connectivity (JDBC) and Open Database Connectivity (ODBC)|MapReduce and Spark are compatible with each other|
|Security|User authentication is supported through the Hadoop/Kerberos infrastructure. If Flink runs on YARN, it can use the user's Kerberos tokens to authenticate to YARN, HBase, and HDFS|Authentication is only through a shared-secret password. If Spark runs on Hadoop, it uses HDFS ACLs and file-level permissions; in addition, Spark can run on YARN and use Kerberos authentication|Supports Kerberos authentication, which is somewhat difficult to manage|
|Cost|Requires a large amount of Random Access Memory (RAM) to run in memory, so it is costly|Also requires a large amount of RAM, so its cost is higher|Runs on inexpensive commodity hardware|
|Abstraction|Dataset abstraction for batch processing and DataStreams abstraction for stream processing|Spark RDD abstraction for batch processing and DStream abstraction for stream processing|There is no abstraction.|
|Visualization|Provides a web interface for sending and executing jobs. An implementation plan can be displayed on this interface.|Provides a web interface for sending and executing jobs in which the implementation plan can be visualized. Flink and Spark are both connected to Apache zeppelin, which provides data analysis as well as discovery, visualization and collaboration.|The Zoom Data visualization tool can be connected directly to HDFS, as well as to SQL- on-Hadoop, Impala, Hive, Spark SQL, Presto, and more.|
|Easy to use|It is easy because it has high level operators.|It is easy because it has high level operators.|MapReduce developers need to code each operation, which makes it very difficult, but add- ons like Pig and Hive make it a bit easier to work with.|
|Real-time analysis|Basically used for real-time processing. However, it also provides quick batch processing.|Supports real-time processing.|MapReduce fails in the face of real-time data processing.|
|SQL Support|The Table Application Programming Interface (API) is a SQL-like declarative domain-specific language (DSL)|Executes SQL commands through Spark SQL|SQL commands can be executed through Apache Hive|
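To make the batch model described in the comparison above concrete, the following is a minimal pure-Python sketch of MapReduce's map, shuffle, and reduce phases applied to a word count. It illustrates the programming model only; a real Hadoop job would distribute these phases across a cluster, and the sample records are invented for the example.

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (word, 1) pair for every word in every input record.
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle_phase(pairs):
    # Shuffle: group all emitted values by key, as the framework
    # would do between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values for each key.
    return {key: sum(values) for key, values in groups.items()}

records = ["big data in health", "big data analytics"]
counts = reduce_phase(shuffle_phase(map_phase(records)))
# counts == {"big": 2, "data": 2, "in": 1, "health": 1, "analytics": 1}
```

Because each phase only consumes the output of the previous one, the map and reduce steps can be parallelized across nodes, which is the source of MapReduce's scalability noted in the table.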
## DISCUSSION
Considering the necessity of analyzing health data
that is increasing day by day [25] and the
importance of considering the appropriate software
platform, this study examined and compared the
three platforms Apache Hadoop, Apache Spark and
Apache Flink. The results showed that, depending on
the needs, the efficiency of the data analysis and
processing platforms varies, in other words, it can be
said that each technology is complementary and
each one is applicable in a particular field and
cannot be separated from one another. For example, when data volume was the primary concern, Hadoop's MapReduce was implemented first, offering processing speed that scales in parallel with the volume of data. Then, as the technology advanced, new requirements emerged and other tools were built to address different aspects. When near-real-time stream processing is needed, Spark is an appropriate option, but for processing each event individually it is not sufficient and Flink should be used instead. The use of each technology therefore depends on the need, and these tools can be combined to achieve the desired purpose. The results of this
study can help researchers and those who are
seeking Big Data analytics in the field of health and
medical care in choosing the appropriate platform.
Big Data analysis improves health care services and
reduces costs. The results of well-conducted studies
and projects in the field of health care in the context
of Big Data analysis illustrate this fact. According to one report, these analytics could yield $340 to $450 billion in value across various prevention, diagnosis, and treatment departments [67, 68].
One of the most famous recently implemented projects is IBM Watson. Watson helps physicians identify symptoms and factors associated with a patient's diagnosis and treatment and make better decisions.
In health care, about 80% of the data is complex and unstructured (MRI images, medical notes, etc.), and performing these analyses requires platforms suited to the need. Hadoop helps researchers and doctors gain insight into data at a scale that was never possible before.
It finds correlations in data with many variables, a task that is very difficult for humans, and this can be effective in discovering and preventing diseases and in treating chronic diseases.
One MapReduce demonstration shows how to write a program that removes duplicate CT scan images from a database of 100 million images. Wearable technologies, with an anticipated growth of 26.54% over the 2016 to 2020 period in elderly care and intensive care, will create a change in the field of health care. The collected data can be stored in Hadoop and analyzed using MapReduce and Spark, saving costs [69].
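The duplicate-removal demonstration mentioned above can be sketched in the map/reduce style in pure Python: map each image to a content digest, group by digest, and keep one representative per group. The file names, byte payloads, and the choice of SHA-256 are illustrative assumptions, not details from the cited demo.

```python
import hashlib
from collections import defaultdict

def dedup(images):
    """images: mapping of file name -> raw bytes. Returns kept file names."""
    groups = defaultdict(list)
    for name, data in images.items():
        # Map: key each image by a digest of its content, so byte-identical
        # images collide on the same key.
        digest = hashlib.sha256(data).hexdigest()
        groups[digest].append(name)
    # Reduce: keep one representative (the lexicographically first name)
    # for each distinct digest.
    return sorted(min(names) for names in groups.values())

scans = {
    "ct_001.dcm": b"\x00\x01scan-A",
    "ct_002.dcm": b"\x00\x01scan-A",   # byte-identical duplicate of ct_001
    "ct_003.dcm": b"\x00\x02scan-B",
}
kept = dedup(scans)
# kept == ["ct_001.dcm", "ct_003.dcm"]
```

Because the digest computation is independent per image and the grouping is a pure key/value shuffle, the same logic parallelizes naturally across a MapReduce cluster for a 100-million-image database.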
Hadoop is well placed to upgrade hospital services, especially where bedside sensors continuously monitor blood pressure, cholesterol, and similar indicators; with an RDBMS it is not feasible to keep and query this continuously produced data in the long run, whereas with Hadoop the data can be saved and analyzed.
More than 40 percent of people have acknowledged that high insurance costs are driven by the large number of fraudulent claims, which cost more than one billion dollars. To reduce this fraud, insurance companies analyze historical and real-time data on medical claims, wages, and so on.
At a Texas hospital, Hadoop was applied to electronic medical records (EMRs) to identify patients who would need additional care and treatment within their 30-day post-discharge period; with the help of the Hadoop platform, the readmission rate was reduced from 26% to 21%, that is, by 5 percentage points.
96% of US hospitals have electronic health records (EHRs), while in Asia and India adoption is much lower. The EHR is a rich source for data analysis and supports well-managed patient care planning. In India, a hospital called AIIMS uses Big Data analysis to improve the quality of its services [70].
In addition to using the platform's capabilities, it is
possible to add and use tools for the needs of these
platforms to carry out the analysis of the database.
In one study, Medoop, a Hadoop-based medical information platform, was proposed; it exploits Hadoop's scalability, high reliability, and high throughput [71]. Another study developed the Hadoop Image Processing Interface (HIPI) for image-based MapReduce tasks [72]. SparkSeq, a tool built on Spark, was proposed for fast, cloud-ready interactive analysis of genomic data, for example to discover transcribed sequences for a type of cancer; another Spark-based tool, Thunder, targets the analysis of large-scale neural data [73, 74]. Another study used Apache Spark to analyze functional MRI data while avoiding frequent writes to disk [75]. Flink has also been used to monitor
electrocardiogram (ECG), magnetic resonance
imaging (MRI) reading, wearable sensor monitoring,
and other cyber-physical systems, and is also useful
in analyzing genomic data and has been reported to
have a high fault tolerance [76, 77].
## CONCLUSION
It is suggested that future studies introduce other
platforms and make more comparisons with
different platforms and their capabilities for
managing Big Data in the field of health. It is also
suggested that, according to the target, custom tools
should be designed and incorporated into existing
platforms to be used.
## AUTHOR’S CONTRIBUTION
All the authors approved the final version of the
manuscript.
## CONFLICTS OF INTEREST
The authors declare no conflicts of interest
regarding the publication of this study.
## FINANCIAL DISCLOSURE
No financial interests related to the material of this
manuscript have been declared.
## REFERENCES
1. Hermon R, Williams PA. Big data in healthcare: What
is it used for? 3rd Australian eHealth Informatics and
Security Conference; 2014.
2. Chen M, Mao S, Liu Y. Big data: A survey. Mobile Netw
Appl. 2014; 19(2): 171-209.
3. Ristevski B, Chen M. Big data analytics in medicine
and healthcare. J Integr Bioinform. 2018; 15(3): 1-5.
PMID: 29746254 DOI: 10.1515/jib-2017-0030
[[PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/29746254)
4. Mooney SJ, Pejaver V. Big data in public health:
Terminology, machine learning, and privacy. Annu
Rev Public Health. 2018; 39: 95-112. PMID:
-----
29261408 DOI: 10.1146/annurev-publhealth
[040617-014208 [PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/29261408)
5. Jin X, Wah BW, Cheng X, Wang Y. Significance and
challenges of big data research. Big Data Research.
2015; 2(2): 59-64.
6. Bello-Orgaz G, Jung JJ, Camacho D. Social big data:
Recent achievements and new challenges.
Information Fusion. 2016; 28: 45-59.
7. Arockia Panimalar S, Varnekha Shree S, Veneshia
Kathrine A. The 17 V’s of big data. International
Research Journal of Engineering and Technology.
2017; 4(9): 329-33.
8. Goga K, Xhafa F, Terzo O. VM deployment methods
for DaaS model in clouds. In: Barolli L, Xhafa F, Javaid
N, Spaho E, Kolici V. (eds) Advances in internet, data
& web technologies. Lecture notes on data
engineering and communications technologies, vol
17. Springer, Cham; 2018.
9. Khan AS, Fleischauer A, Casani J, Groseclose SL. The
next public health revolution: Public health
information fusion and social networks. Am J Public
Health. 2010; 100(7): 1237-42. PMID: 20530760
[DOI: 10.2105/AJPH.2009.180489 [PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/20530760)
10. Velikova M, Lucas PJF, Samulski M, Karssemeijer N. A
probabilistic framework for image information
fusion with an application to mammographic
analysis. Medical Image Analysis. 2012; 16(4): 865-75.
11. Antink CH, Leonhardt S, Walter M. A synthesizer
framework for multimodal cardiorespiratory signals.
Biomedical Physics & Engineering Express. 2017;
3(3): 035028.
12. Sung W-T, Chang K-Y. Evidence-based multi-sensor
information fusion for remote health care systems.
Sensors and Actuators A: Physical. 2013; 204: 1-19.
13. Benke K, Benke G. Artificial intelligence and big data
in public health. Int J Environ Res Public Health.
2018; 15(12): 2796-805. PMID: 30544648 DOI:
[10.3390/ijerph15122796 [PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/30544648)
14. Kim W-J. Knowledge-based diagnosis and prediction
using big data and deep learning in precision
medicine. Investig Clin Urol. 2018; 59(2): 69–71.
PMID: 29520381 DOI: 10.4111/icu.2018.59.2.69
[[PubMed]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5840120/)
15. Lee CH, Yoon H-J. Medical big data: Promise and
challenges. Kidney Res Clin Pract. 2017; 36(1): 3–11.
PMID: 28392994 DOI: 10.23876/j.krcp.2017.36.1.3
[[PubMed]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5331970/)
16. Archenaa J, Anita EM. A survey of big data analytics
in healthcare and government. Procedia Computer
Science. 2015; 50: 408-13.
17. Andreu-Perez J, Poon CCY, Merrifield RD, Wong STC,
Yang G-Z. Big data for health. IEEE Journal of
Biomedical and Health Informatics. 2015; 19(4):
1193-208.
18. Verma A, Mansuri AH, Jain N. Big data management
processing with hadoop MapReduce and spark
technology: A comparison. Symposium on Colossal
Data Analysis and Networking. 2016; IEEE.
19. Taylor RC. An overview of the
hadoop/MapReduce/HBase framework and its
current applications in bioinformatics. BMC
Bioinformatics. 2010; 11(12): S1.
20. Zaharia M, Xin RS, Wendell P, Das T, Armbrust M,
Dave A, et al. Apache spark: A unified engine for big
data processing. Communications of the ACM. 2016;
59(11): 56-65.
21. Carbone P, Ewen S, Haridi S. Apache flink: Stream
and batch processing in a single engine. Bulletin of
the IEEE Computer Society Technical Committee on
Data Engineering. 2015.
22. García-Gil D, Ramírez-Gallego S, García S, Herrera F. A
comparison on scalability for batch big data
processing on Apache Spark and Apache Flink. Big
Data Analytics. 2017; 2(1): 1-11.
23. O’Driscoll A, Daugelaite J, Sleator RD. ‘Big data’,
hadoop and cloud computing in genomics. J Biomed
Inform. 2013; 46(5): 774-81. PMID: 23872175 DOI:
[10.1016/j.jbi.2013.07.001 [PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/23872175)
24. Sagiroglu S, Sinanc D, editors. Big data: A review.
International Conference on Collaboration
Technologies and Systems (CTS). 2013: IEEE.
25. Jagadish H, Gehrke J, Labrinidis A, Papakonstantinou
Y, Patel JM, Ramakrishnan R, et al. Big data and its
technical challenges. Communications of the ACM.
2014; 57(7): 86-94.
26. Shafer T. The 42 V’s of big data and data science
[Internet]. 2017. [cited: 15 May 2019] Available
from: https://www.kdnuggets.com/2017/04/42-vs-big-data-data-science.html
27. Landset S, Khoshgoftaar TM, Richter AN, Hasanin T. A
survey of open source tools for machine learning
with big data in the hadoop ecosystem. Journal of Big
Data. 2015; 2(1): 24-60.
28. Dunning T, Friedman E. Real world hadoop. O'Reilly
Media; USA: 2015.
29. Hoffman S. Apache Flume: distributed log collection
for hadoop. Packt Publishing Ltd; 2013.
30. Garg N. Apache kafka. Packt Publishing Ltd; 2013.
31. Ting K, Cecho JJ. Apache sqoop cookbook: Unlocking
hadoop for your relational database. O'Reilly Media;
USA: 2013.
32. White T. Hadoop: The definitive guide. O'Reilly
Media; USA: 2012.
33. Hausenblas M, Nadeau J. Apache drill: Interactive ad
hoc analysis at scale. Big Data. 2013; 1(2): 100-4.
PMID: 27442064 DOI: 10.1089/big.2013.0011
[[PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/27442064)
34. Fernández A, del Río S, López V, Bawakid A, del Jesus
MJ, Benítez JM, et al. Big data with cloud computing:
An insight on the computing environment,
MapReduce, and programming frameworks. Data
Mining and Knowledge Discovery. 2014; 4(5): 380-409.
35. Wu D, Sakr S, Zhu L. Big data programming models.
In: Zomaya A, Sakr S. (eds) Handbook of Big Data
Technologies. Springer; Cham: 2017.
36. Pol UR. Big data analysis: Comparison of hadoop
mapreduce, pig and hive. International Journal of
Innovative Research in Science, Engineering and
Technology. 2016; 5(6): 9687-93.
37. Oozie. Apache oozie workflow scheduler for hadoop.
[Internet] 2019. [cited: 1 Jul 2019]. Available from:
https://oozie.apache.org/
38. Olasz A, Thai BN, Kristóf D. A new initiative for tiling,
stitching and processing geospatial big data in
distributed computing environments. ISPRS Ann
Photogramm Remote Sens Spatial Inf Sci. 2016; 3(4):
111-8.
39. Masiane M, Warren L. CS5604 front-end user
interface team. [Internet] 2016 [cited: 1 Jul 2019]
Available from: https://vtechworks.lib.vt.edu/
handle/10919/70935
40. Shrivastava A, Deshpande T. Hadoop blueprints.
Packt Publishing; 2016.
41. Sinha S. What is a hadoop ecosystem? [Internet].
2017 [cited: 1 Jul 2019]. Available from:
https://www.quora.com/What-is-a-Hadoop-ecosystem
42. Dean J, Ghemawat S. MapReduce: Simplified data
processing on large clusters. Communications of the
ACM. 2008; 51(1): 107-13.
43. Kumar VN, Shindgikar P. Modern big data processing
with hadoop: Expert techniques for architecting end-to-end big data solutions to get valuable insights.
Packt Publishing; 2018.
44. Thomas L, Syama R. Survey on MapReduce
scheduling algorithms. International Journal of
Computer Applications. 2014; 95(23): 9-13.
45. Team D. Hadoop vs spark vs flink: Big data
frameworks comparison [Internet]. 2016 [cited: 1 Jul
2019]. Available from: https://data-flair.training/blogs/hadoop-vs-spark-vs-flink/
46. Carbone P, Katsifodimos A, Ewen S, Markl V, Haridi S,
Tzoumas K. Apache flink: Stream and batch
processing in a single engine. Bulletin of the IEEE
Computer Society Technical Committee on Data
Engineering. 2015; 36(4): 28-38.
47. Chintapalli S, Dagit D, Evans B, Farivar R, Graves T,
Holderbaugh M, et al. Benchmarking streaming
computation engines: Storm, flink and spark
streaming. International Parallel and Distributed
Processing Symposium Workshops, IEEE; 2016.
48. Frampton, M., Mastering Apache Spark. Packt
Publishing; 2015.
49. Monteith JY, McGregor JD, Ingram JE. Hadoop and its
evolving ecosystem. 5th International Workshop on
Software Ecosystems. Citeseer; 2013.
50. Parsian M. Data algorithms: Recipes for scaling up
with hadoop and spark. O'Reilly Media, USA; 2015.
51. Singh D, Reddy CK. A survey on platforms for big data
analytics. Journal of Big Data, 2015. 2(1): 8-28.
52. Estrada R, Ruiz I. Big data SMACK: A guide to apache
spark, mesos, akka, cassandra, and kafka. Apress;
2016.
53. Meng X. Mllib: Scalable machine learning on spark.
Spark Workshop; 2014.
54. Zaharia M, Chowdhury M, Das T, Dave A, Ma J,
McCauley M, et al. Fast and interactive analytics over
hadoop data with Spark. Usenix Login. 2012; 37(4):
45-51.
55. Zaharia M, Chowdhury M, Franklin MJ, Shenker S,
Stoica I, et al. Spark: Cluster computing with working
sets. Proceedings of the 2nd USENIX Conference on
Hot Topics in Cloud Computing; 2010.
56. Team, D. Apache spark ecosystem: Complete spark
components guide [Internet]. 2017 [cited: 1 Dec
2018]. Available from: https://data-flair.
training/blogs/apache-spark-ecosystem-components
57. Safabakhsh M. Apache spark [Internet]. 2018 [cited:
1 Jul 2019]. Available from:
http://myhadoop.ir/?page_id=131.
58. Penchikala S. Big data processing with apache spark–
Part 1: Introduction [Internet]. 2015 [cited: 1 Jul
2019]. Available from: https://www.infoq.com/
articles/apache-spark-introduction.
59. Shoro AG, Soomro TR. Big data analysis: Apache
spark perspective. Global Journal of Computer
Science and Technology. 2015; 15(1): 7-14.
60. Kupisz B, Unold O. Collaborative filtering
recommendation algorithm based on hadoop and
spark. International Conference on Industrial
Technology. IEEE; 2015.
61. Saha B, Shah H, Seth S, Vijayaraghavan G, Murthy A,
Curino C. Apache tez: A unifying framework for
modeling and building data processing applications.
International Conference on Management of Data.
ACM; 2015.
62. Team D. Flink tutorial: A comprehensive guide for
apache flink [Internet]. 2018 [cited: 1 Jan 2019].
Available from: https://data-flair.training/blogs/flink-tutorial/
63. Tsai C-W, Lai C-F, Chao H-C, Vasilakos AV. Big data
analytics: A survey. Journal of Big Data. 2015; 2(1):
21-53.
64. Oussous A, Benjelloun F-Z, Lahcen AA, Belfkih S. Big
data technologies: A survey. Journal of King Saud
University-Computer and Information Sciences.
2018; 30(4): 431-48.
65. Ramírez-Gallego S, Fernández A, García S, Chen M,
Herrera F. Big data: Tutorial and guidelines on
information and process fusion for analytics
algorithms with MapReduce. Information Fusion.
2018; 42: 51-61.
66. Ferranti A, Marcelloni F, Segatori A, Antonelli M,
Ducange P. A distributed approach to multi-objective
evolutionary generation of fuzzy rule-based
classifiers from big data. Information Sciences. 2017;
415: 319-40.
67. Nazari E, Pour R, Tabesh H. Comprehensive overview
of decision-fusion technique in healthcare: A scoping
review protocol. Iran J Med Inform. 2018; 7(1): e7.
68. Kayyali B, Knott D, Van Kuiken S. The big-data
revolution in US health care: Accelerating value and
innovation. Mc Kinsey & Company. 2013; 2(8): 1-13.
69. Poojary P. Big data in healthcare: How hadoop is
revolutionizing healthcare analytics [Internet]. 2019
[cited: 15 May 2019]. Available from:
[https://www.edureka.co/blog/hadoop-big-data-in-](https://www.edureka.co/blog/hadoop-big-data-in-healthcare)
[healthcare.](https://www.edureka.co/blog/hadoop-big-data-in-healthcare)
70. HDFS Tutorial Team. How big data is solving
healthcare problems successfully? [Internet]. 2016
[cited: 15 May 2019]. Available from:
https://www.hdfstutorial.com/blog/big-data-application-in-healthcare/
71. Lijun W, Yongfeng H, Ji C, Ke Z, Chunhua L. Medoop: A
medical information platform based on hadoop.
International Conference on e-Health Networking,
Applications and Services. IEEE; 2013.
72. Sweeney C, Liu L, Arietta S, Lawrence J. HIPI: A
hadoop image processing interface for image-based
mapreduce tasks. International Journal of Recent
Trends in Engineering & Research. 2011; 2(11): 557-62.
73. Wiewiórka MS, Messina A, Pacholewska A, Maffioletti
S, Gawrysiak P, Okoniewski MJ. SparkSeq: fast,
scalable and cloud-ready tool for the interactive
genomic data analysis with nucleotide precision.
Bioinformatics. 2014; 30(18): 2652-3. PMID:
24845651 DOI: 10.1093/bioinformatics/btu343
[[PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/24845651)
74. Freeman J, Vladimirov N, Kawashima T, Mu Y,
Sofroniew NJ, Bennett DV, et al. Mapping brain
activity at scale with cluster computing. Nat Methods.
2014; 11(9): 941-50. PMID: 25068736 DOI:
[10.1038/nmeth.3041 [PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/25068736)
75. Boubela RN, Kalcher K, Huf W, Našel C, Moser E. Big
data approaches for the analysis of large-scale fMRI
data using apache spark and GPU processing: a
demonstration on resting-state fMRI data from the
human connectome project. Front Neurosci. 2016; 9:
492. PMID: 26778951 DOI:
[10.3389/fnins.2015.00492 [PubMed]](https://www.ncbi.nlm.nih.gov/pubmed/26778951)
76. Versaci F, Pireddu L, Zanetti G. Scalable genomics:
From raw data to aligned reads on Apache YARN.
International Conference on Big Data. IEEE; 2016.
77. Harerimana G, Jang B, Kim JW, Park HK. Health big
data analytics: A technology survey. IEEE Access.
2018; 6: 65661-78.
# Decentralized Detection and Classification using Kernel Methods
### XuanLong Nguyen (Computer Science Division, University of California, Berkeley), xuanlong@cs.berkeley.edu
### Martin J. Wainwright (Electrical Engineering and Computer Science, University of California, Berkeley), wainwrig@eecs.berkeley.edu
### Michael I. Jordan (Computer Science Division and Department of Statistics, University of California, Berkeley), jordan@cs.berkeley.edu

April 30, 2004
Technical Report 658
Department of Statistics
University of California, Berkeley
**Abstract**
We consider the problem of decentralized detection under constraints on the number of bits that
can be transmitted by each sensor. In contrast to most previous work, in which the joint distribution
of sensor observations is assumed to be known, we address the problem when only a set of empirical
samples is available. We propose a novel algorithm using the framework of empirical risk minimization
and marginalized kernels, and analyze its computational and statistical properties both theoretically and
empirically. We provide an efficient implementation of the algorithm, and demonstrate its performance
on both simulated and real data sets.
## 1 Introduction
A decentralized detection system typically involves a set of sensors that receive observations from the environment, but are permitted to transmit only a summary message (as opposed to the full observation) back to
a fusion center. On the basis of its received messages, this fusion center then chooses a final decision from
some number of alternative hypotheses about the environment. The problem of decentralized detection is to
design the local decision rules at each sensor, which determine the messages that are relayed to the fusion
center, as well as a decision rule for the fusion center itself [28]. A key aspect of the problem is the presence
of communication constraints, meaning that the sizes of the messages sent by the sensors back to the fusion
center must be suitably “small” relative to the raw observations, whether measured in terms of either bits or
power. The decentralized nature of the system is to be contrasted with a centralized system, in which the
fusion center has access to the full collection of raw observations.
Such problems of decentralized decision-making have been the focus of considerable research in the past
two decades [e.g., 27, 28, 7, 8]. Indeed, decentralized systems arise in a variety of important applications,
ranging from sensor networks, in which each sensor operates under severe power or bandwidth constraints,
to the modeling of human decision-making, in which high-level executive decisions are frequently based
on lower-level summaries. The large majority of the literature is based on the assumption that the probability distributions of the sensor observations lie within some known parametric family (e.g., Gaussian and
conditionally independent), and seeks to characterize the structure of optimal decision rules. The probability
of error is the most common performance criterion, but there has also been a significant amount of work
devoted to other criteria, such as the Neyman-Pearson or minimax formulations. See Tsitsiklis [28] and
Blum et al. [7] for comprehensive surveys of the literature.
More concretely, let Y ∈{−1, +1} be a random variable, representing the two possible hypotheses
in a binary hypothesis-testing problem. Moreover, suppose that the system consists of S sensors, each
of which observes a single component of the S-dimensional vector X = (X^1, . . ., X^S). One starting
point is to assume that the joint distribution P (X, Y ) falls within some parametric family. Of course, such
an assumption raises the modeling issue of how to determine an appropriate parametric family, and how to
estimate parameters. Both of these problems are very challenging in contexts such as sensor networks, given
highly inhomogeneous distributions and a large number S of sensors. Our focus in this paper is on relaxing
this assumption, and developing a method in which no assumption about the joint distribution P (X, Y ) is
required. Instead, we posit that a number of empirical samples {(x_i, y_i)}_{i=1}^{n} are given.
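As a toy illustration of this setup (not the learned rules developed later in the paper), each of the S sensors can quantize its scalar observation to a one-bit message, with the fusion center deciding by majority vote. The thresholds, the fusion rule, and the sample observation vector here are arbitrary assumptions made for the example.

```python
def local_rule(x, threshold=0.0):
    # Each sensor sends a single bit z in {-1, +1} instead of its raw
    # observation x, respecting the communication constraint.
    return 1 if x > threshold else -1

def fusion_center(messages):
    # The fusion center sees only the S one-bit messages and decides
    # y_hat in {-1, +1} by majority vote.
    return 1 if sum(messages) > 0 else -1

# One observation vector x = (x^1, ..., x^S) from S = 5 sensors.
x = [0.8, -0.2, 1.5, 0.3, -0.7]
z = [local_rule(xs) for xs in x]     # messages sent to the fusion center
y_hat = fusion_center(z)             # final decision: +1
```

In the paper's formulation, both the per-sensor rules Q(Z|X) and the fusion rule are optimized jointly from the empirical samples rather than fixed in advance as here.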
In the context of centralized signal detection problems, there is an extensive line of research on nonparametric techniques, in which no specific parametric form for the joint distribution P (X, Y ) is assumed
(see, e.g., Kassam [19] for a survey). In the decentralized setting, however, it is only relatively recently that
nonparametric methods for detection have been explored. Several authors have taken classical nonparametric methods from the centralized setting, and shown how they can also be applied in a decentralized system.
Such methods include schemes based on Wilcoxon signed-rank test statistic [33, 23], as well as the sign
detector and its extensions [13, 1, 15]. These methods have been shown to be quite effective for certain
types of joint distributions.
Our approach to decentralized detection in this paper is based on a combination of ideas from reproducing-kernel Hilbert spaces [2, 25] and the framework of empirical risk minimization from nonparametric statistics. Methods based on reproducing-kernel Hilbert spaces (RKHSs) have figured prominently in the literature on centralized signal detection and estimation for several decades [e.g., 34, 17, 18]. More recent work
in statistical machine learning [e.g., 26] has demonstrated the power and versatility of kernel methods for
solving classification or regression problems on the basis of empirical data samples. Roughly speaking,
kernel-based algorithms in statistical machine learning involve choosing a function, which though linear
in the RKHS, induces a nonlinear function in the original space of observations. A key idea is to base
the choice of this function on the minimization of a regularized empirical risk functional. This functional
consists of the empirical expectation of a convex loss function φ, which represents an upper bound on the
0-1 loss (the 0-1 loss corresponds to the probability of error criterion), combined with a regularization term
that restricts the optimization to a convex subset of the RKHS. It has been shown that suitable choices of
margin-based convex loss functions lead to algorithms that are robust both computationally [26], as well as
statistically [35, 3]. The use of kernels in such empirical loss functions greatly increases their flexibility, so
that they can adapt to a wide range of underlying joint distributions.
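The claim that a margin-based convex loss upper-bounds the 0-1 loss can be checked numerically with the hinge loss phi(m) = max(0, 1 - m), where m = y * f(x) is the margin. This is a generic illustration of the surrogate-loss idea and is not tied to the particular phi analyzed in the paper.

```python
def zero_one_loss(margin):
    # 0-1 loss: 1 on a misclassification (margin <= 0), else 0.
    # Its empirical average is the probability-of-error criterion.
    return 1.0 if margin <= 0 else 0.0

def hinge_loss(margin):
    # Convex surrogate: max(0, 1 - margin), which dominates the 0-1
    # loss at every margin value and is tractable to minimize.
    return max(0.0, 1.0 - margin)

margins = [-2.0, -0.5, 0.0, 0.5, 1.0, 3.0]
bounds_hold = all(hinge_loss(m) >= zero_one_loss(m) for m in margins)
# bounds_hold is True: the surrogate upper-bounds the 0-1 loss everywhere.
```

Minimizing the convex surrogate therefore controls an upper bound on the probability of error, which is what makes the regularized empirical risk formulation both computationally and statistically tractable.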
In this paper, we show how kernel-based methods and empirical risk minimization are naturally suited
to the decentralized detection problem. More specifically, a key component of the methodology that we
propose involves the notion of a marginalized kernel, where the marginalization is induced by the transformation from the observations X to the local decisions Z. The decision rules at each sensor, which
can be either probabilistic or deterministic, are defined by conditional probability distributions of the form
Q(Z|X), while the decision at the fusion center is defined in terms of Q(Z|X) and a linear function over
the corresponding RKHS. We develop and analyze an algorithm for optimizing the design of these decision
rules. It is interesting to note that this algorithm is similar in spirit to a suite of locally optimum detectors
in the literature [e.g., 7], in the sense that one step consists of optimizing the decision rule at a given sensor
while fixing the decision rules of the rest, whereas another step involves optimizing the decision rule of the
fusion center while holding fixed the local decision rules at each sensor. Our development relies heavily on
the convexity of the loss function φ, which allows us to leverage results from convex analysis [24] so as to
derive an efficient optimization procedure. In addition, we analyze the statistical properties of our algorithm,
and provide probabilistic bounds on its performance.
While the thrust of this paper is to explore the utility of recently-developed ideas from statistical machine learning for distributed decision-making, our results also have implications for machine learning. In
particular, it is worth noting that most of the machine learning literature on classification is abstracted away
from considerations of an underlying communication-theoretic infrastructure. Such limitations may prevent
an algorithm from aggregating all relevant data at a central site. Therefore, the general approach described
in this paper suggests interesting research directions for machine learning—specifically, in designing and
analyzing algorithms for communication-constrained environments.
The remainder of the paper is organized as follows. In Section 2, we provide a formal statement of the
decentralized decision-making problem, and show how it can be cast as a learning problem. In Section 3, we
present a kernel-based algorithm for solving the problem, and we also derive bounds on the performance of
this algorithm. Section 4 is devoted to the results of experiments using our algorithm, in application to both
simulated and real data. Finally, we conclude the paper with a discussion of future directions in Section 5.
## 2 Problem formulation and a simple strategy
In this section, we begin by providing a precise formulation of the decentralized detection problem to be
investigated in this paper, and show how it can be formulated in terms of statistical learning. We then
describe a simple strategy for designing local decision rules, based on an optimization problem involving
the empirical risk. This strategy, though naive, provides intuition for our subsequent development based on
kernel methods.
### 2.1 Formulation of the decentralized detection problem
Suppose Y is a discrete-valued random variable, representing a hypothesis about the environment. Although
the methods that we describe are more generally applicable, the focus of this paper is the binary case, in
which the hypothesis variable Y takes values in 𝒴 := {−1, +1}. Our goal is to form an estimate Ŷ
of the true hypothesis, based on observations collected from a set of S sensors. More specifically, for each
t = 1, …, S, let X^t ∈ 𝒳 represent the observation at sensor t, where 𝒳 denotes the observation space. The
full set of observations corresponds to the S-dimensional random vector X = (X^1, …, X^S) ∈ 𝒳^S, drawn
from the conditional distribution P(X|Y).
We assume that the global estimate Ŷ is to be formed by a fusion center. In the centralized setting, this
fusion center is permitted access to the full vector X = (X^1, …, X^S) of observations. In this case, it is
well-known [31] that optimal decision rules, whether under the Bayes error or the Neyman-Pearson criteria,
can be formulated in terms of the likelihood ratio P (X|Y = 1)/P (X|Y = −1). In contrast, the defining
feature of the decentralized setting is that the fusion center has access only to some form of summary of each
observation X^t, t = 1, …, S. More specifically, we suppose that each sensor t = 1, …, S is permitted
to transmit a message Z^t, taking values in some space 𝒵. The fusion center, in turn, applies some decision
rule γ to compute an estimate Ŷ = γ(Z^1, …, Z^S) of Y based on its received messages.
In this paper, we focus on the case of a discrete observation space—say 𝒳 = {1, 2, …, M}. The
key constraint, giving rise to the decentralized nature of the problem, is that the corresponding message
space 𝒵 = {1, …, L} is considerably smaller than the observation space (i.e., L ≪ M). The problem is
to find, for each sensor t = 1, …, S, a decision rule γ^t : 𝒳 → 𝒵, as well as an overall decision rule
γ : 𝒵^S → {−1, +1} at the fusion center, so as to minimize the Bayes risk P(Y ≠ γ(Z)). We assume that
the joint distribution P(X, Y) is unknown, but that we are given n independent and identically distributed
(i.i.d.) data points (x_i, y_i), i = 1, …, n, sampled from P(X, Y).
**Figure 1.** Decentralized detection system with S sensors, in which Y is the unknown hypothesis,
X = (X^1, …, X^S) ∈ {1, …, M}^S is the vector of sensor observations, and Z = (Z^1, …, Z^S) ∈ {1, …, L}^S
are the quantized messages transmitted from the sensors to the fusion center.
Figure 1 provides a graphical representation of this decentralized detection problem. The single node at
the top of the figure represents the hypothesis variable Y, and the outgoing arrows point to the collection of
observations X = (X^1, …, X^S). The local decision rules γ^t lie on the edges between sensor observations
X^t and messages Z^t. Finally, the node at the bottom is the fusion center, which collects all the messages.
Although the Bayes-optimal risk can always be achieved by a deterministic decision rule [28], considering the larger space of stochastic decision rules confers some important advantages. First, such a space
can be compactly represented and parameterized, and prior knowledge can be incorporated. Second, the optimal deterministic rules are often very hard to compute, and a probabilistic rule may provide a reasonable
approximation in practice. Accordingly, we represent the rule for the sensors t = 1, . . ., S by a conditional
probability distribution Q(Z|X). The fusion center makes its decision by applying a deterministic function
γ(z) of z. The overall decision rule (Q, γ) consists of the individual sensor rules and the fusion center rule.
The decentralization requirement for our detection/classification system—i.e., that the decision rule for
sensor t must be a function only of the observation x^t—can be translated into the probabilistic statement
that Z^1, …, Z^S be conditionally independent given X:

Q(Z|X) = ∏_{t=1}^{S} Q^t(Z^t | X^t).   (1)
4
-----
In fact, this constraint turns out to be advantageous from a computational perspective, as will be clarified
in the sequel. We use Q to denote the space of all factorized conditional distributions Q(Z|X), and Q0 to
denote the subset of factorized conditional distributions that are also deterministic.
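To make the factorization (1) concrete, a rule Q ∈ Q for discrete observations can be stored as one conditional probability table per sensor. The sketch below (names such as `sensor_tables` and `q_joint` are illustrative, not from the paper) checks that the product of per-sensor rules defines a valid distribution over messages:

```python
import itertools

import numpy as np

S, M, L = 3, 4, 2  # sensors, observation levels, message levels

rng = np.random.default_rng(0)
# One table per sensor: sensor_tables[t][x, z] = Q^t(z | x); each row sums to 1.
sensor_tables = [rng.dirichlet(np.ones(L), size=M) for _ in range(S)]

def q_joint(z, x, tables):
    """Factorized rule of equation (1): Q(z|x) = prod_t Q^t(z^t | x^t)."""
    return float(np.prod([tables[t][x[t], z[t]] for t in range(len(tables))]))

x = (0, 2, 3)
# Because each factor is a conditional distribution, Q(.|x) sums to 1
# over all L^S candidate messages z.
total = sum(q_joint(z, x, sensor_tables)
            for z in itertools.product(range(L), repeat=S))
assert abs(total - 1.0) < 1e-12
```

A deterministic rule in Q₀ corresponds to the special case in which every row of every table is a one-hot vector.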
### 2.2 A simple strategy based on minimizing empirical risk
Suppose that we have as our training data n pairs (x_i, y_i) for i = 1, …, n. Note that each x_i, as a particular
realization of the random vector X, is an S-dimensional signal vector x_i = (x_i^1, …, x_i^S) ∈ 𝒳^S. Let P
be the unknown underlying probability distribution for (X, Y). The probabilistic set-up makes it simple to
estimate the Bayes risk, which is to be minimized.
Consider a collection of local decision rules made at the sensors, which we denote by Q(Z|X). For
each such set of rules, the associated Bayes risk is defined by:
R_opt := 1/2 − (1/2) E |P(Y = 1|Z) − P(Y = −1|Z)|.   (2)
Here the expectation E is with respect to the probability distribution P (X, Y, Z) := P (X, Y )Q(Z|X). It
is clear that no decision rule at the fusion center (i.e., having access only to z) has Bayes risk smaller than
Ropt. In addition, the Bayes risk Ropt can be achieved by using the decision function
γopt(z) = sign(P (Y = 1|z) − P (Y = −1|z)).
It is key to observe that this optimal decision rule cannot be computed, because P(X, Y) is not known, and
Q(Z|X) is to be determined. Thus, our goal is to determine the rule Q(Z|X) that minimizes an empirical
estimate of the Bayes risk based on the training data (x_i, y_i), i = 1, …, n. In Lemma 1 we show that the following is
one such unbiased estimate of the Bayes risk:
R_emp := 1/2 − (1/(2n)) ∑_z | ∑_{i=1}^{n} Q(z|x_i) y_i |.   (3)
In addition, γ_opt(z) can be estimated by the decision function γ_emp(z) = sign(∑_{i=1}^{n} Q(z|x_i) y_i). Since Z
is a discrete random vector, the optimal Bayes risk can be estimated easily, regardless of whether the input
signal X is discrete or continuous.
**Lemma 1.** _(a) Assume that P(z) > 0 for all z. Define_

κ(z) = ∑_{i=1}^{n} Q(z|x_i) I(y_i = 1) / ∑_{i=1}^{n} Q(z|x_i).

_Then lim_{n→∞} κ(z) = P(Y = 1|z)._
_(b) As n → ∞, R_emp and γ_emp(z) tend to R_opt and γ_opt(z), respectively._
_Proof. See Appendix 1._
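Because Z is discrete, the estimate (3) and the rule γ_emp can be computed by direct enumeration over the L^S messages. A minimal sketch under illustrative choices (random synthetic data and a random factorized Q; all names are ours, not the paper's):

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)
S, M, L, n = 2, 3, 2, 200

# Synthetic training pairs (x_i, y_i) and a random factorized rule Q.
xs = rng.integers(0, M, size=(n, S))
ys = rng.choice([-1, 1], size=n)
tables = [rng.dirichlet(np.ones(L), size=M) for _ in range(S)]

def q_joint(z, x):
    return float(np.prod([tables[t][x[t], z[t]] for t in range(S)]))

# Equation (3): R_emp = 1/2 - (1/2n) sum_z |sum_i Q(z|x_i) y_i|, and the
# estimated fusion rule gamma_emp(z) = sign(sum_i Q(z|x_i) y_i).
r_emp = 0.5
gamma_emp = {}
for z in itertools.product(range(L), repeat=S):
    vote = sum(q_joint(z, xs[i]) * ys[i] for i in range(n))
    r_emp -= abs(vote) / (2 * n)
    gamma_emp[z] = 1 if vote >= 0 else -1
```

By construction r_emp always lies in [0, 1/2], matching the range of a Bayes risk for a binary problem.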
The significance of Lemma 1 is in motivating the goal of finding decision rules Q(Z|X) to minimize
the empirical error R_emp. It is equivalent, using equation (3), to maximize

C(Q) = ∑_z | ∑_{i=1}^{n} Q(z|x_i) y_i |,   (4)
subject to the constraints that define a probability distribution:
Q(z|x) = ∏_{t=1}^{S} Q^t(z^t|x^t)  for all values of z and x,
∑_{z^t} Q^t(z^t|x^t) = 1  for t = 1, …, S,
Q^t(z^t|x^t) ∈ [0, 1]  for t = 1, …, S.   (5)
The major computational difficulty in the optimization problem defined by equations (4) and (5) lies in the
summation over all L^S possible values of z ∈ 𝒵^S. One way to avoid this obstacle is by maximizing instead
the following function:

C₂(Q) := ∑_z ( ∑_{i=1}^{n} Q(z|x_i) y_i )².

Expanding the square and using the conditional independence condition (1) leads to the following equivalent
form for C₂:

C₂(Q) = ∑_{i,j} y_i y_j ∏_{t=1}^{S} ∑_{z^t=1}^{L} Q^t(z^t|x_i^t) Q^t(z^t|x_j^t).   (6)

Note that the conditional independence condition (1) on Q allows us to compute C₂(Q) in O(SL) time, as
opposed to O(L^S).
While this simple strategy is based directly on the empirical risk, it does not exploit any prior knowledge
about the class of discriminant functions for γ(z). As we discuss in the following section, such knowledge
can be incorporated into the classifier using kernel methods. Moreover, the kernel-based decentralized
detection algorithm that we develop turns out to have an interesting connection to the simple approach
based on C2(Q).
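The equivalence between the brute-force definition of C₂(Q) and the factorized form (6) can be checked numerically on a small instance; the helper names below are illustrative:

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)
S, M, L, n = 3, 3, 2, 10
xs = rng.integers(0, M, size=(n, S))
ys = rng.choice([-1, 1], size=n)
tables = [rng.dirichlet(np.ones(L), size=M) for _ in range(S)]

def q_joint(z, x):
    return float(np.prod([tables[t][x[t], z[t]] for t in range(S)]))

# Brute force: C2(Q) = sum_z (sum_i Q(z|x_i) y_i)^2 over all L^S messages.
c2_brute = sum(sum(q_joint(z, xs[i]) * ys[i] for i in range(n)) ** 2
               for z in itertools.product(range(L), repeat=S))

# Factorized form (6):
# C2(Q) = sum_{i,j} y_i y_j prod_t sum_{z^t} Q^t(z^t|x_i^t) Q^t(z^t|x_j^t).
c2_fact = 0.0
for i in range(n):
    for j in range(n):
        prod = 1.0
        for t in range(S):
            prod *= tables[t][xs[i, t]] @ tables[t][xs[j, t]]
        c2_fact += ys[i] * ys[j] * prod

assert abs(c2_brute - c2_fact) < 1e-9
```

The brute-force evaluation touches all L^S messages, whereas the factorized form only touches per-sensor tables, reflecting the O(SL) versus O(L^S) discussion above.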
## 3 A kernel-based algorithm
In this section, we turn to methods for decentralized detection based on empirical risk minimization and
kernel methods [2, 25, 26]. We begin by introducing some background and definitions necessary for subsequent development. We then motivate and describe a central component of our decentralized detection
system—namely, the notion of a marginalized kernel. Our method for designing decision rules is based on
an optimization problem, which we show how to solve efficiently. Finally, we derive theoretical bounds on
the performance of our decentralized detection system.
### 3.1 Empirical risk minimization and kernel methods
In this section, we provide some background on empirical risk minimization and kernel methods. The
exposition given here is necessarily very brief; we refer the reader to the books [26, 25, 34] for more details.
Our starting point is to consider estimating Y with a rule of the form ŷ(x) = sign f(x), where f : 𝒳 → ℝ is
a discriminant function that lies within some function space to be specified. The ultimate goal is to choose
a discriminant function f to minimize the Bayes error P(Y ≠ Ŷ), or equivalently to minimize the expected
value of the following 0-1 loss:

φ₀(yf(x)) := I[y ≠ sign(f(x))].   (7)
This minimization is intractable, both because the function φ₀ is not well-behaved (i.e., non-convex and
non-differentiable), and because the joint distribution P is unknown. However, since we are given a set
of i.i.d. samples {(x_i, y_i)}_{i=1}^{n}, it is natural to consider minimizing a loss function based on an _empirical
expectation_, as motivated by our development in Section 2.2. Moreover, it turns out to be fruitful, for both
computational and statistical reasons, to design loss functions based on convex surrogates to the 0-1 loss.
Indeed, a variety of classification algorithms in statistical machine learning have been shown to involve
loss functions that can be viewed as convex upper bounds on the 0-1 loss. For example, the support vector
machine (SVM) algorithm [9, 26] uses a hinge loss function:
φ₁(yf(x)) := (1 − yf(x))₊ ≡ max{1 − yf(x), 0}.   (8)

On the other hand, the logistic regression algorithm [12] is based on the logistic loss function:

φ₂(yf(x)) := log(1 + exp(−yf(x))).   (9)

Finally, the standard form of the boosting classification algorithm [11] uses an exponential loss function:

φ₃(yf(x)) := exp(−yf(x)).   (10)
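The upper-bound property of these surrogates is easy to verify numerically as a function of the margin m = yf(x); note that the logistic loss (9) dominates the 0-1 loss only after rescaling by 1/log 2 (our normalization, added for the check, not taken from the paper):

```python
import numpy as np

margins = np.linspace(-3, 3, 601)            # m = y * f(x)

zero_one = (margins <= 0).astype(float)      # 0-1 loss, equation (7)
hinge = np.maximum(1.0 - margins, 0.0)       # hinge loss, equation (8)
logistic = np.log1p(np.exp(-margins))        # logistic loss, equation (9)
exponential = np.exp(-margins)               # exponential loss, equation (10)

# Hinge and exponential losses dominate the 0-1 loss pointwise; the
# logistic loss does so after rescaling by 1/log(2).
assert np.all(hinge >= zero_one)
assert np.all(exponential >= zero_one)
assert np.all(logistic / np.log(2) >= zero_one)
```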
Intuition suggests that a function f with small φ-risk Eφ(Y f(X)) should also have a small Bayes risk
P(Y ≠ sign(f(X))). In fact, it has been established rigorously that convex surrogates for the (non-convex)
0-1 loss function, such as the hinge (8) and logistic (9) loss functions, have favorable properties both computationally (i.e., algorithmic efficiency), and in a statistical sense (i.e., bounds on estimation error) [35, 3].
We now turn to consideration of the function class from which the discriminant function f is to be
chosen. Kernel-based methods for discrimination entail choosing f from within a function class defined by
a positive semidefinite kernel, defined as follows (see [25]):
**Definition 2.** A real-valued kernel function is a symmetric bilinear mapping K_x : 𝒳 × 𝒳 → ℝ. It is
_positive semidefinite_, which means that for any subset {x₁, …, x_n} drawn from 𝒳, the Gram matrix
K_{ij} = K_x(x_i, x_j) is positive semidefinite.
Given any such kernel, we first define a vector space of functions mapping 𝒳 to the real line ℝ through
all sums of the form

f(·) = ∑_{j=1}^{m} α_j K_x(·, x_j),   (11)

where {x_j}_{j=1}^{m} are arbitrary points from 𝒳, and α_j ∈ ℝ. We can equip this space with a _kernel-based inner
product_ by defining ⟨K_x(·, x_i), K_x(·, x_j)⟩ := K_x(x_i, x_j), and then extending this definition to the full
space by bilinearity. Note that this inner product induces, for any function of the form (11), the kernel-based
norm ‖f‖²_H = ∑_{i,j=1}^{m} α_i α_j K_x(x_i, x_j).
**Definition 3.** The reproducing kernel Hilbert space H associated with a given kernel K_x consists of the
kernel-based inner product, and the closure (in the kernel-based norm) of all functions of the form (11).
As an aside, the term “reproducing” stems from the fact that for any f ∈ H, we have ⟨f, K_x(·, x_i)⟩ = f(x_i),
showing that the kernel acts as the representer of evaluation [25].
In the framework of empirical risk minimization, the discriminant function f ∈ H is chosen by minimizing a cost function given by the sum of the empirical φ-risk Êφ(Y f(X)) and a suitable regularization
term:

min_{f∈H}  ∑_{i=1}^{n} φ(y_i f(x_i)) + (λ/2) ‖f‖²_H,   (12)

where λ > 0 is a regularization parameter. The Representer Theorem (Thm. 4.2; [26]) guarantees that the
optimal solution to problem (12) can be written in the form f̂(x) = ∑_{i=1}^{n} α_i y_i K_x(x, x_i), for a particular
vector α ∈ ℝⁿ. The key here is that the sum ranges only over the observed data points {(x_i, y_i)}_{i=1}^{n}.

For the sake of development in the sequel, it will be convenient to express functions f ∈ H as linear
discriminants involving the feature map Φ(x) := K_x(·, x). (Note that for each x ∈ 𝒳, the quantity
Φ(x) ≡ Φ(x)(·) is a function from 𝒳 to the real line ℝ.) Any function f in the Hilbert space can be written
as a linear discriminant of the form ⟨w, Φ(x)⟩ for some function w ∈ H. (In fact, by the reproducing
property, we have f(·) = w(·).) As a particular case, the Representer Theorem allows us to write the
optimal discriminant as f̂(x) = ⟨ŵ, Φ(x)⟩, where ŵ = ∑_{i=1}^{n} α_i y_i Φ(x_i).
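As an illustration of the regularized problem (12) and the representer expansion, the following sketch fits a kernel logistic-regression discriminant by plain gradient descent on the coefficient vector; the Gaussian kernel, toy data, and step sizes are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam, steps, lr = 40, 0.1, 2000, 0.05

# Toy 1-D data; a Gaussian kernel stands in for K_x (an illustrative choice).
x = rng.uniform(-2, 2, size=n)
y = np.sign(x + 0.1 * rng.standard_normal(n))
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)

# By the Representer Theorem, the minimizer of (12) has the form
# f = sum_j c_j K_x(., x_j); minimize
#   sum_i phi2(y_i (K c)_i) + (lam/2) c' K c   (logistic loss phi2)
# by gradient descent on the coefficient vector c.
c = np.zeros(n)
for _ in range(steps):
    m = y * (K @ c)                               # margins y_i f(x_i)
    g = -y / (1.0 + np.exp(m))                    # d phi2 / d f(x_i)
    c -= lr * (K @ g + lam * (K @ c)) / n

train_acc = float(np.mean(np.sign(K @ c) == y))
```

Even this naive solver recovers a discriminant that separates the toy data well, since the data are nearly linearly separable in one dimension.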
### 3.2 Fusion center and marginalized kernels
With this background, we first consider how to design the decision rule γ at the fusion center for a fixed setting Q(Z|X) of the sensor decision rules. Since the fusion center rule can only depend on z = (z^1, …, z^S),
our starting point is a feature space {Φ′(z)} with associated kernel K_z. Following the development in the
previous section, we consider fusion center rules defined by taking the sign of a linear discriminant of the
form γ(z) := ⟨w, Φ′(z)⟩. We then link the performance of γ to another kernel-based discriminant function f that acts directly on x = (x^1, …, x^S), where the new kernel K_Q associated with f is defined as a
_marginalized kernel_ in terms of Q(Z|X) and K_z.
The relevant optimization problem is to minimize (as a function of w) the following regularized form of
the empirical φ-risk associated with the discriminant γ:

min_w  ∑_{i=1}^{n} ∑_z φ(y_i γ(z)) Q(z|x_i) + (λ/2) ‖w‖²,   (13)
where λ > 0 is a regularization parameter. In its current form, the objective function (13) is intractable to
compute (because it involves summing over all L^S possible values of z of a loss function that is generally
non-decomposable). However, exploiting the convexity of φ allows us to perform the computation exactly
for deterministic rules in Q₀, and also leads to a natural relaxation for an arbitrary decision rule Q ∈ Q.
This idea is formalized in the following:
**Proposition 4.** Define the quantities

Φ_Q(x) := ∑_z Q(z|x) Φ′(z),  and  f(x; Q) := ⟨w, Φ_Q(x)⟩.   (14)

_For any convex φ, the optimal value of the following optimization problem is a lower bound on the optimal
value in problem (13):_

min_w  ∑_i φ(y_i f(x_i; Q)) + (λ/2) ‖w‖².   (15)

_Moreover, the relaxation is tight for any deterministic rule Q(Z|X)._
_Proof._ Applying Jensen's inequality to the function φ yields φ(y_i f(x_i; Q)) ≤ ∑_z φ(y_i γ(z)) Q(z|x_i) for
each i = 1, …, n, from which the lower bound follows. Equality for deterministic Q ∈ Q₀ is immediate.
A key point is that the modified optimization problem (15) involves an ordinary regularized empirical
φ-loss, but in terms of a linear discriminant function f(x; Q) = ⟨w, Φ_Q(x)⟩ in the transformed feature
space {Φ_Q(x)} defined in equation (14). Moreover, the corresponding marginalized kernel function takes
the form:

K_Q(x, x′) := ∑_{z,z′} Q(z|x) Q(z′|x′) K_z(z, z′),   (16)

where K_z(z, z′) := ⟨Φ′(z), Φ′(z′)⟩ is the kernel in {Φ′(z)}-space. It is straightforward to see that the
positive semidefiniteness of K_z implies that K_Q is also a positive semidefinite function.
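For a small discrete problem, the marginalized kernel (16) can be written in matrix form as K_Q = A K_z Aᵀ, where row i of A holds Q(·|x_i); the sketch below (an illustrative restatement with a random PSD base kernel, not the paper's notation) confirms that K_Q inherits positive semidefiniteness:

```python
import itertools

import numpy as np

rng = np.random.default_rng(4)
S, M, L, n = 2, 3, 2, 6
tables = [rng.dirichlet(np.ones(L), size=M) for _ in range(S)]
xs = rng.integers(0, M, size=(n, S))
msgs = list(itertools.product(range(L), repeat=S))

def q_joint(z, x):
    return float(np.prod([tables[t][x[t], z[t]] for t in range(S)]))

# A[i, k] = Q(z_k | x_i): each row is the message distribution induced by x_i.
A = np.array([[q_joint(z, xs[i]) for z in msgs] for i in range(n)])

# Random PSD base kernel K_z = Phi' Phi'^T on the message space.
Phi = rng.standard_normal((len(msgs), 3))
Kz = Phi @ Phi.T

# Equation (16) in matrix form: K_Q = A K_z A^T.
KQ = A @ Kz @ A.T
assert np.linalg.eigvalsh(KQ).min() > -1e-9   # PSD carries over to K_Q
```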
From a computational point of view, we have converted the marginalization over loss function values
to a marginalization over kernel functions. While the former is intractable, the latter marginalization can
be carried out in many cases by exploiting the structure of the conditional distributions Q(Z|X). (In Section 3.3, we provide several examples to illustrate.) From the modeling perspective, it is interesting to
note that marginalized kernels, like that of equation (16), underlie recent work that aims at combining the
advantages of graphical models and Mercer kernels [16, 29].
As a standard kernel-based formulation, the optimization problem (15) can be solved by the usual Lagrangian dual formulation [26], thereby yielding an optimal weight vector w. This weight vector defines the
decision rule for the fusion center by γ(z) := ⟨w, Φ′(z)⟩. By the Representer Theorem [26], the optimal
solution w to problem (15) has an expansion of the form

w = ∑_{i=1}^{n} α_i y_i Φ_Q(x_i) = ∑_{i=1}^{n} ∑_{z′} α_i y_i Q(z′|x_i) Φ′(z′),

where α is an optimal dual solution, and the second equality follows from the definition of Φ_Q(x) given in
equation (14). Substituting this decomposition of w into the definition of γ yields

γ(z) := ∑_{z′} ∑_{i=1}^{n} α_i y_i Q(z′|x_i) K_z(z, z′).   (17)
Note that there is an intuitive connection between the discriminant functions f and γ. In particular, using the
definitions of f and KQ, it can be seen that f (x) = E[γ(Z)|x], where the expectation is taken with respect
to Q(Z|X = x). The interpretation is quite natural: when conditioned on some x, the average behavior of
the discriminant function γ(Z), which does not observe x, is equivalent to the optimal discriminant f (x),
which does have access to x.
### 3.3 Design and computation of marginalized kernels
As seen in the previous section, the representation of the discriminant functions f and γ depends on the kernel
functions K_z(z, z′) and K_Q(x, x′), and not on the explicit representation of the underlying feature spaces
{Φ′(z)} and {Φ_Q(x)}. It is also shown in the next section that our algorithm for solving for f and γ requires
only knowledge of the kernel functions K_z and K_Q. Indeed, the effectiveness of a kernel-based algorithm
typically hinges heavily on the design and computation of its kernel function(s).
Accordingly, let us now consider the computational issues associated with the marginalized kernel K_Q,
assuming that K_z has already been chosen. In general, the computation of K_Q(x, x′) entails marginalizing
over the variable Z, which (at first glance) has computational complexity on the order of O(L^S). However,
this calculation fails to take advantage of any structure in the kernel function K_z. More specifically, it is
often the case that the kernel function K_z(z, z′) can be decomposed into local functions, in which case the
computational cost is considerably lower. Here we provide a few examples of computationally tractable
kernels.
**Computationally tractable kernels:**
(a) Perhaps the simplest example is the linear kernel K_z(z, z′) = ∑_{t=1}^{S} z^t z′^t, for which it is straightforward to derive K_Q(x, x′) = ∑_{t=1}^{S} E[z^t|x^t] E[z′^t|x′^t].

(b) A second example, natural for applications in which X^t and Z^t are discrete random variables, is
the count kernel. Let us represent each discrete value u ∈ {1, …, M} as an M-dimensional vector (0, …, 1, …, 0), whose u-th coordinate takes value 1. If we define the first-order count kernel
K_z(z, z′) := ∑_{t=1}^{S} I[z^t = z′^t], then the resulting marginalized kernel takes the form:

K_Q(x, x′) = ∑_{z,z′} Q(z|x) Q(z′|x′) ∑_{t=1}^{S} I[z^t = z′^t] = ∑_{t=1}^{S} Q(z^t = z′^t | x^t, x′^t).   (18)

(c) A natural generalization is the second-order count kernel K_z(z, z′) = ∑_{t,r=1}^{S} I[z^t = z′^t] I[z^r = z′^r], which accounts for the pairwise interaction between coordinates z^t and z^r. For this example, the
associated marginalized kernel K_Q(x, x′) takes the form:

K_Q(x, x′) = 2 ∑_{1≤t<r≤S} Q(z^t = z′^t | x^t, x′^t) Q(z^r = z′^r | x^r, x′^r).   (19)
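The identity (18) for the first-order count kernel can be verified by comparing the O(L^S × L^S) brute-force marginalization with the factorized right-hand side; the names below are illustrative:

```python
import itertools

import numpy as np

rng = np.random.default_rng(5)
S, M, L = 3, 4, 2
tables = [rng.dirichlet(np.ones(L), size=M) for _ in range(S)]
x = rng.integers(0, M, size=S)
xp = rng.integers(0, M, size=S)

def q_joint(z, obs):
    return float(np.prod([tables[t][obs[t], z[t]] for t in range(S)]))

# Left side of (18): marginalize the count kernel over all message pairs.
brute = sum(q_joint(z, x) * q_joint(zp, xp)
            * sum(z[t] == zp[t] for t in range(S))
            for z in itertools.product(range(L), repeat=S)
            for zp in itertools.product(range(L), repeat=S))

# Right side of (18): sum_t Q(z^t = z'^t | x^t, x'^t), where each term is
# sum_u Q^t(u|x^t) Q^t(u|x'^t), i.e., a dot product of two table rows.
fact = sum(float(tables[t][x[t]] @ tables[t][xp[t]]) for t in range(S))

assert abs(brute - fact) < 1e-10
```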
**Remarks:** First, note that even for a linear base kernel K_z, the kernel function K_Q inherits additional
(nonlinear) structure from the marginalization over Q(Z|X). As a consequence, the associated discriminant
functions (i.e., γ and f) are certainly not linear. Second, our formulation allows any available prior knowledge to be incorporated into K_Q in at least two possible ways: (i) the base kernel representing a similarity
measure in the quantized space of z can reflect the structure of the sensor network; or (ii) more structured
decision rules Q(Z|X) can be considered, such as chain- or tree-structured decision rules.
### 3.4 Joint optimization
Our next task is to perform joint optimization of both the fusion center rule, defined by w (or equivalently
α, as in equation (17)), and the sensor rules Q. Observe that the cost function (15) can be re-expressed as a
function of both w and Q as follows:

G(w; Q) := (1/λ) ∑_i φ( y_i ⟨w, ∑_z Q(z|x_i) Φ′(z)⟩ ) + (1/2) ‖w‖².   (20)
Of interest is the joint minimization of the function G in both w and Q. It can be seen easily that

(a) G is convex in w with Q fixed; and

(b) moreover, G is convex in Q^t, when both w and all other {Q^r, r ≠ t} are fixed.

These observations motivate the use of blockwise coordinate gradient descent to perform the joint minimization.
**Optimization of w:** As described in Section 3.2, when Q is fixed, then min_w G(w; Q) can be computed
efficiently by a dual reformulation. Specifically, as we establish in the following result using ideas from
convex duality [24], a dual reformulation of min_w G(w; Q) is given by

max_{α∈ℝⁿ}  −(1/λ) ∑_{i=1}^{n} φ*(−λα_i) − (1/2) αᵀ ((yyᵀ) ∘ K_Q) α,   (21)

where φ*(u) := sup_{v∈ℝ} {u·v − φ(v)} is the conjugate dual of φ, [K_Q]_{ij} := K_Q(x_i, x_j) is the empirical
kernel matrix, and ∘ denotes the Hadamard product.
**Proposition 5.** For each fixed Q ∈ Q, the value of the primal problem inf_w G(w; Q) is attained and equal to
its dual form (21). Furthermore, any optimal solution α to problem (21) defines the optimal primal solution
w(Q) to min_w G(w; Q) via w(Q) = ∑_{i=1}^{n} α_i y_i Φ_Q(x_i).
_Proof._ It suffices for our current purposes to restrict to the case where the functions w and Φ_Q(x) can be
viewed as vectors in some finite-dimensional space—say ℝ^m. However, it is possible to extend this approach
to the infinite-dimensional setting by using conjugacy in general normed spaces [21].

A remark on notation before proceeding: since Q is fixed, we drop Q from G for notational convenience
(i.e., we write G(w) ≡ G(w; Q)). First, we observe that G(w) is convex with respect to w and that G → ∞
as ‖w‖ → ∞. Consequently, the infimum defining the primal problem inf_{w∈ℝ^m} G(w) is attained. We now
re-write this primal problem as follows:

inf_{w∈ℝ^m} G(w) = inf_{w∈ℝ^m} {G(w) − ⟨w, 0⟩} = −G*(0),

where G* : ℝ^m → ℝ denotes the conjugate dual of G.

Using the notation g_i(w) := (1/λ) φ(⟨w, y_i Φ_Q(x_i)⟩) and Ω(w) := (1/2) ‖w‖², we can decompose G as the
sum G(w) = ∑_{i=1}^{n} g_i(w) + Ω(w). This decomposition allows us to compute the conjugate dual G* via the
inf-convolution theorem (Thm. 16.4; Rockafellar [24]) as follows:

G*(0) = inf_{u_i, i=1,…,n}  ∑_{i=1}^{n} g_i*(u_i) + Ω*(−∑_{i=1}^{n} u_i).   (22)

Applying calculus rules for conjugacy operations (Thm. 16.3; [24]), we obtain:

g_i*(u_i) = (1/λ) φ*(−λα_i)  if u_i = −α_i (y_i Φ_Q(x_i)) for some α_i ∈ ℝ, and +∞ otherwise.   (23)

A straightforward calculation yields Ω*(v) = sup_w {⟨v, w⟩ − (1/2)‖w‖²} = (1/2)‖v‖². Substituting these expressions into equation (22) leads to:

G*(0) = inf_{α∈ℝⁿ}  ∑_{i=1}^{n} (1/λ) φ*(−λα_i) + (1/2) ‖ ∑_i α_i y_i Φ_Q(x_i) ‖²,

from which it follows that

inf_w G(w) = −G*(0) = sup_{α∈ℝⁿ}  −(1/λ) ∑_{i=1}^{n} φ*(−λα_i) − (1/2) ∑_{1≤i,j≤n} α_i α_j y_i y_j K_Q(x_i, x_j).

Thus, we have derived the dual form (21). See Appendix 5 for the remainder of the proof, in which we
derive the link between w(Q) and the dual variables α.
This proposition is significant in that the dual problem involves only the kernel matrix (K_Q(x_i, x_j))_{1≤i,j≤n}.
Hence, one can solve for the optimal discriminant functions y = f(x) or y = γ(z) without requiring explicit
knowledge of the underlying feature spaces {Φ′(z)} and {Φ_Q(x)}. As a particular example, consider the
case of the hinge loss function (8), as used in the SVM algorithm [26]. A straightforward calculation yields:

φ*(u) = u  if u ∈ [−1, 0], and +∞ otherwise.

Substituting this formula into (21) yields, as a special case, the familiar dual formulation for the SVM:

max_{0≤α≤1/λ}  ∑_i α_i − (1/2) αᵀ ((yyᵀ) ∘ K_Q) α.
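For the hinge loss, the resulting box-constrained quadratic program can be solved by simple projected gradient ascent; the sketch below uses a toy Gaussian kernel matrix as a stand-in for the marginalized kernel matrix (all data and step-size choices are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)
n, lam, lr, steps = 30, 1.0, 0.01, 3000

# Toy problem: a Gaussian kernel matrix stands in for the marginalized
# kernel matrix K_Q (an illustrative choice).
x = rng.uniform(-2, 2, size=n)
y = np.sign(x + 1e-3)
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
H = np.outer(y, y) * K                 # (y y^T) o K_Q, Hadamard product

# Projected gradient ascent on the box-constrained dual:
#   max_{0 <= alpha <= 1/lam}  sum_i alpha_i - (1/2) alpha' H alpha.
alpha = np.zeros(n)
for _ in range(steps):
    alpha += lr * (1.0 - H @ alpha)    # gradient of the dual objective
    np.clip(alpha, 0.0, 1.0 / lam, out=alpha)

f = K @ (alpha * y)                    # discriminant values, cf. equation (17)
train_acc = float(np.mean(np.sign(f) == y))
```

Since H is positive semidefinite, the dual objective is concave and the projected ascent converges for a sufficiently small step size.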
**Optimization of Q:** The second step is to minimize G over Q^t, with w and all other {Q^r, r ≠ t} held
fixed. Our approach is to compute the derivative (or more generally, the subdifferential) with respect to Q^t,
and then apply a gradient-based method. A challenge to be confronted is that G is defined in terms of feature
vectors Φ′(z), which are typically high-dimensional quantities. Indeed, although it is intractable to evaluate
the gradient at an arbitrary w, the following result establishes that it can always be evaluated at the point
(w(Q), Q) for any Q ∈ Q.
**Lemma 6.** Let w(Q) be the optimizing argument of min_w G(w; Q), and let α be an optimal solution to the
dual problem (21). Then the following element

−λ ∑_{(i,j)} ∑_{(z,z′)} α_i α_j Q(z′|x_j) (Q(z|x_i) / Q^t(z^t|x_i^t)) K_z(z, z′) I[x_i^t = x̄^t] I[z^t = z̄^t]

_is an element of the subdifferential ∂_{Q^t(z̄^t|x̄^t)}G evaluated at (w(Q), Q)._ [1]

_Proof._ See Appendix 5.
Observe that this representation of the (sub)gradient involves marginalization over Q of the kernel function Kz, and therefore can be computed efficiently in many cases, as described in Section 3.3. Overall, the
blockwise coordinate descent algorithm for optimizing the choice of local decision rules takes the following
form:
1 A subgradient is a generalized counterpart of the gradient for non-differentiable convex functions. Briefly, a subgradient of a convex
function f : ℝ^m → ℝ at x is a vector s ∈ ℝ^m satisfying f(y) ≥ f(x) + ⟨s, y − x⟩ for all y ∈ ℝ^m. The subdifferential at a point
x is the set of all subgradients; hence, if f is differentiable at x, the subdifferential consists of the single vector {∇f(x)}. In our
case, G is non-differentiable when φ is the hinge loss (8), and differentiable when φ is the logistic loss (9) or exponential loss (10).
More details on convex analysis can be found in the books [24, 14].
**Kernel quantization (KQ) algorithm:**
(a) With Q fixed, compute the optimizing w(Q) by solving the dual problem (21).
(b) For some index t, fix w(Q) and {Q^r, r ≠ t} and take a gradient step in Q^t using Lemma 6.

Upon convergence, we define a deterministic decision rule for each sensor t via:

γ^t(x^t) := argmax_{z^t∈𝒵} Q(z^t|x^t).   (24)
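The two alternating steps can be sketched end to end on a toy discrete problem. This is a simplification, not the paper's implementation: the base kernel is the first-order count kernel, step (a) solves the hinge-loss dual (21) by projected gradient ascent, and step (b) replaces Lemma 6's closed-form subgradient with a finite-difference gradient followed by renormalization of each sensor's table (all names and constants are ours):

```python
import numpy as np

rng = np.random.default_rng(7)
S, M, L, n, lam = 2, 4, 2, 24, 0.1

xs = rng.integers(0, M, size=(n, S))
ys = np.where(xs.sum(axis=1) >= S * (M - 1) / 2, 1, -1)          # toy labels
tables = [rng.dirichlet(np.ones(L), size=M) for _ in range(S)]   # Q^t(z|x)

def kq(tabs):
    # Marginalized first-order count kernel, cf. equation (18).
    return sum(tabs[t][xs[:, t]] @ tabs[t][xs[:, t]].T for t in range(S))

def step_a(K):
    # Step (a): hinge-loss dual (21) via projected gradient ascent.
    H = np.outer(ys, ys) * K
    a = np.zeros(n)
    for _ in range(400):
        a = np.clip(a + 0.02 * (1.0 - H @ a), 0.0, 1.0 / lam)
    return a

def objective(tabs, a):
    # Relaxed primal objective (15) at the representer solution.
    K = kq(tabs)
    f = K @ (a * ys)
    return np.maximum(1.0 - ys * f, 0.0).sum() + 0.5 * lam * (a * ys) @ f

for it in range(3):
    alpha = step_a(kq(tables))                 # step (a)
    for t in range(S):                         # step (b), one sensor at a time
        eps, grad = 1e-5, np.zeros((M, L))
        base = objective(tables, alpha)
        for u in range(M):
            for v in range(L):
                pert = [tb.copy() for tb in tables]
                pert[t][u, v] += eps
                grad[u, v] = (objective(pert, alpha) - base) / eps
        tables[t] = np.clip(tables[t] - 0.1 * grad, 1e-6, None)
        tables[t] /= tables[t].sum(axis=1, keepdims=True)

# Equation (24): extract deterministic sensor rules on convergence.
gammas = [np.argmax(tb, axis=1) for tb in tables]
```

The extracted rules `gammas[t]` map each observation value in {0, …, M−1} to a message in {0, …, L−1}, which is exactly the quantizer form used in the experiments.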
**Remarks:** A number of comments about this algorithm are in order. At a high level, the updates consist
of alternately updating the decision rule for a sensor while fixing the decision rules for the remaining sensors and the fusion center, and updating the decision rule for the fusion center while fixing the decision rules
for all other sensors. In this sense, our approach is similar in spirit to a suite of practical algorithms [e.g.,
28] for decentralized detection under particular assumptions on the joint distribution P(X, Y).

Using standard results [5], it is straightforward to guarantee convergence of such coordinate-wise updates when the loss function φ is strictly convex and differentiable (e.g., the logistic loss (9) or exponential
loss (10)). In contrast, the case of non-differentiable φ (e.g., the hinge loss (8)) requires more care. We have,
however, obtained good results in practice even in the case of the hinge loss.

Finally, it is interesting to note the connection between the KQ algorithm and the naive approach considered in Section 2.2. More precisely, suppose that we fix w such that all α_i are equal to one, and let the
base kernel K_z be constant (and thus entirely uninformative). Under these conditions, the optimization of
G with respect to Q reduces to exactly the naive approach.
### 3.5 Estimation error bounds
This section is devoted to analysis of the statistical properties of the KQ algorithm. In particular, our goal
is to derive bounds on the performance of our classifier (Q, γ) when applied to new data, as opposed to the
i.i.d. samples on which it was trained. It is key to distinguish between two forms of φ-risk:
(a) the empirical φ-risk Êφ(Y γ(Z)) is defined by an expectation over P̂(X, Y) Q(Z|X), where P̂ is the
empirical distribution given by the i.i.d. samples {(x_i, y_i)}_{i=1}^{n}; and

(b) the true φ-risk Eφ(Y γ(Z)) is defined by taking an expectation over the joint distribution P(X, Y) Q(Z|X).
In designing our classifier, we made use of the empirical φ-risk as a proxy for the actual risk. On the
other hand, the appropriate metric for assessing performance of the designed classifier is the true φ-risk
Eφ(Y γ(Z)). At a high level, our procedure for obtaining performance bounds can be decomposed into the
following steps:
1. First, we relate the true φ-risk Eφ(Y γ(Z)) to the true φ-risk Eφ(Y f (X) for the functions f ∈F
(and f ∈F0) that are computed at intermediate stages of our algorithm. The latter quantities are
well-studied objects in statistical learning theory.
2. The second step is to relate the empirical φ-risk Êφ(Y f(X)) to the true φ-risk Eφ(Y f(X)). In general,
the true φ-risk for a function f in some class F is bounded by the empirical φ-risk plus a complexity
term that captures the “richness” of the function class F [35, 3]. In particular, we make use of the
_Rademacher complexity_ as a measure of this richness.
3. Third, we combine the first two steps so as to derive bounds on the true φ-risk Eφ(Y γ(Z)) in terms
of the empirical φ-risk of f and the Rademacher complexity.
4. Finally, we derive bounds on the Rademacher complexity in terms of the number of training samples
n, as well as the number of quantization levels L and M .
**Step 1: We begin by isolating the class of functions over which we optimize. Define, for a fixed Q ∈Q,**
the function space FQ as
  FQ := { f : x ↦ ⟨w, ΦQ(x)⟩ = Σ_i α_i y_i KQ(x, x_i)  such that  ‖w‖ ≤ B },   (25)
where B > 0 is a constant. Note that FQ is simply the class of functions associated with the marginalized kernel KQ. The function class over which our algorithm performs the optimization is defined by the
union F := ∪Q∈QFQ, where Q is the space of all factorized conditional distributions Q(Z|X). Lastly, we
define the function class F0 := ∪Q∈Q0FQ, corresponding to the union of the function spaces defined by
marginalized kernels with deterministic distributions Q.
Any discriminant function f ∈F (or F0), defined by a vector α, induces an associated discriminant
function γf via equation (17). Relevant to the performance of the classifier γf is the expected φ-loss
Eφ(Y γf (Z)), whereas the algorithm actually minimizes (the empirical version of) Eφ(Y f (X)). The relationship between these two quantities is expressed in the following proposition.
**Proposition 7.**

_(a) We have Eφ(Y γf(Z)) ≥ Eφ(Y f(X)), with equality when Q(Z|X) is deterministic._

_(b) Moreover, there holds_

  inf_{f∈F} Eφ(Y γf(Z)) ≤ inf_{f∈F0} Eφ(Y f(X)),   (26a)
  inf_{f∈F} Eφ(Y γf(Z)) ≥ inf_{f∈F} Eφ(Y f(X)).   (26b)

_The same statement also holds for empirical expectations._
_Proof. Applying Jensen’s inequality to the convex function φ yields_
Eφ(Y γf (Z)) = EXY E[φ(Y γf (Z))|X, Y ] ≥ EXY φ(E[Y γf (Z)|X, Y ]) = Eφ(Y f (X)),
where we have used the conditional independence of Z and Y given X. This establishes part (a), and
the lower bound (26b) follows directly. Moreover, part (a) also implies that inf f ∈F0 Eφ(Y γf (Z)) =
inf f ∈F0 Eφ(Y f (X)), and the upper bound (26a) follows since F0 ⊂F.
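The Jensen step in this proof is easy to check numerically. The following sketch (an illustration of the inequality, not part of the paper's algorithm) draws a random margin V and confirms that E[φ(V)] ≥ φ(E[V]) for the convex hinge loss.

```python
import random

random.seed(0)

def hinge(v):
    # Hinge loss phi(v) = max(0, 1 - v), a convex surrogate (equation (8)).
    return max(0.0, 1.0 - v)

# Sample a random margin V and compare E[phi(V)] against phi(E[V]).
samples = [random.gauss(0.5, 1.0) for _ in range(100_000)]
lhs = sum(hinge(v) for v in samples) / len(samples)  # approximates E[phi(V)]
rhs = hinge(sum(samples) / len(samples))             # approximates phi(E[V])
assert lhs >= rhs  # Jensen: moving the average inside a convex loss can only decrease it
```

In the proposition, the inner average is the conditional expectation over Z given (X, Y ), which is exactly the step that turns γf into f.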
**Step 2:** The next step is to relate the empirical φ-risk for f (i.e., Êφ(Y f(X))) to the true φ-risk (i.e.,
Eφ(Y f(X))). Recall that the Rademacher complexity of the function class F is defined [30] as

  Rn(F) = E sup_{f∈F} (2/n) Σ_{i=1}^n σ_i f(X_i),

where the Rademacher variables σ_1, . . ., σ_n are independent and uniform on {−1, +1}, and X_1, . . ., X_n
are i.i.d. samples selected according to the distribution P. In the case that φ is Lipschitz with constant ℓ, the
empirical and true risk can be related via the Rademacher complexity as follows [20]: with probability at
least 1 − δ with respect to the training samples (X_i, Y_i)_{i=1}^n drawn according to Pⁿ, there holds

  sup_{f∈F} | Êφ(Y f(X)) − Eφ(Y f(X)) | ≤ 2ℓ Rn(F) + √( ln(2/δ) / (2n) ).   (27)

Moreover, the same bound applies to F0.
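For intuition, the Rademacher complexity of a small finite class can be approximated directly by Monte Carlo over the sign vectors σ. The class below (simple threshold functions on [0, 1]) is a hypothetical stand-in for F, chosen only to make the computation concrete.

```python
import random

random.seed(1)

# Toy finite function class: thresholds f_c(x) = sign(x - c) over a grid of cuts.
cuts = [i / 10 for i in range(11)]

def f(c, x):
    return 1.0 if x >= c else -1.0

n = 200
xs = [random.random() for _ in range(n)]

# Monte Carlo estimate of E_sigma sup_f (2/n) sum_i sigma_i f(x_i).
trials = 500
total = 0.0
for _ in range(trials):
    sigma = [random.choice((-1, 1)) for _ in range(n)]
    total += max((2 / n) * sum(s * f(c, x) for s, x in zip(sigma, xs))
                 for c in cuts)
rad = total / trials
```

For a finite class of ±1-valued functions, Massart's lemma (the same tool invoked later in the proof of Proposition 9) bounds this quantity by 2·√(2 log 11 / n) ≈ 0.31 here, and the estimate falls below that bound.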
**Step 3: Combining the bound (27) with Proposition 7 leads to the following theorem, which provides**
generalization error bounds for the optimal φ-risk of the decision function learned by our algorithm in terms
of the Rademacher complexities Rn(F0) and Rn(F):
**Theorem 8. Given n i.i.d. labeled data points (x_i, y_i)_{i=1}^n, with probability at least 1 − 2δ,**

  inf_{f∈F} (1/n) Σ_{i=1}^n φ(y_i f(x_i)) − 2ℓ Rn(F) − √( ln(2/δ) / (2n) )
    ≤ inf_{f∈F} Eφ(Y γf(Z))
    ≤ inf_{f∈F0} (1/n) Σ_{i=1}^n φ(y_i f(x_i)) + 2ℓ Rn(F0) + √( ln(2/δ) / (2n) ).
_Proof._ Using bound (27), with probability at least 1 − δ, for any f ∈ F,

  Eφ(Y f(X)) ≥ (1/n) Σ_{i=1}^n φ(y_i f(x_i)) − 2ℓ Rn(F) − √( ln(2/δ) / (2n) ).

Combining with (26b), we have, with probability 1 − δ,

  inf_{f∈F} Eφ(Y γf(Z)) ≥ inf_{f∈F} Eφ(Y f(X)) ≥ inf_{f∈F} (1/n) Σ_{i=1}^n φ(y_i f(x_i)) − 2ℓ Rn(F) − √( ln(2/δ) / (2n) ),

which proves the lower bound of the theorem with probability at least 1 − δ. The upper bound holds similarly
with probability at least 1 − δ. Hence both hold with probability at least 1 − 2δ, by the union bound.
**Step 4: So that Theorem 8 has practical meaning, we need to derive upper bounds on the Rademacher**
complexity of the function classes F and F0. Of particular interest is the growth in the complexity of F
and F0 with respect to the number of training samples n, as well as the number of discrete signals L and
M. The following proposition derives such bounds, exploiting the fact that the number of 0-1 conditional
probability distributions Q(Z|X) is finite, namely L^{MS}.
**Proposition 9.**

  Rn(F0) ≤ (2B/n) [ E sup_{Q∈Q0} Σ_{i=1}^n KQ(X_i, X_i) + 2(n − 1) ( (n/2) sup_{z,z′} Kz(z, z′) )^{1/2} ( 2MS log L )^{1/2} ]^{1/2}.   (28)

_Proof. See Appendix 5._
Although the rate given in equation (28) is not tight in terms of the number of data samples n, the bound is
nontrivial and is relatively simple. (In particular, it depends directly on the kernel function K, the number
of samples n, quantization levels L, number of sensors S, and size of observation space M .)
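To get a feel for the scaling, one can plug representative values into the right-hand side of (28). With a bounded kernel (so that KQ(x, x) ≤ κ), the bound decays only like n^{−1/4}, consistent with the remark that the rate is not tight in n. The constants B and κ below are illustrative choices, not values from the paper.

```python
import math

# Illustrative constants: weight-norm bound B and kernel bound kappa, so that
# K_Q(x, x) <= kappa; L, M, S as in the simulations of Section 4.
B, kappa = 1.0, 1.0
L, M, S = 2, 8, 10

def bound_28(n):
    diag = n * kappa  # E sup_Q sum_i K_Q(X_i, X_i) <= n * kappa
    cross = 2 * (n - 1) * math.sqrt(n / 2 * kappa) * math.sqrt(2 * M * S * math.log(L))
    return (2 * B / n) * math.sqrt(diag + cross)

bounds = {n: bound_28(n) for n in (100, 1000, 10000)}
```

The dominant cross term grows like n^{3/2}, so after the square root and the 1/n prefactor the bound shrinks only as n^{−1/4}.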
We can also provide a more general and possibly tighter upper bound on the Rademacher complexity
based on the concept of entropy number [30]. Indeed, an important property of the Rademacher complexity is that it can be estimated reliably from a single sample (x_1, . . ., x_n). Specifically, if we define

  R̂n(F) := E sup_{f∈F} (2/n) Σ_{i=1}^n σ_i f(x_i)

(where the expectation is with respect to the Rademacher variables {σ_i} only), then it can be shown using
McDiarmid’s inequality that R̂n(F) is tightly concentrated around Rn(F) with high probability [4].
Concretely, for any η > 0, there holds:

  P( |Rn(F) − R̂n(F)| ≥ η ) ≤ 2 e^{−η²n/8}.   (29)
Hence, the Rademacher complexity is closely related to its empirical version R̂n(F), which can in turn be
related to the concept of entropy number. In general, define the covering number N(ϵ, S, ρ) of a set S to be
the minimum number of balls of diameter ϵ that completely cover S (according to a metric ρ). The ϵ-entropy
number of S is then defined as log N(ϵ, S, ρ). In our context, consider in particular the L2(Pn) metric
defined on an empirical sample (x_1, . . ., x_n) as:

  ‖f_1 − f_2‖_{L2(Pn)} := ( (1/n) Σ_{i=1}^n (f_1(x_i) − f_2(x_i))² )^{1/2}.
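The empirical metric itself is straightforward to compute; as a quick illustration (toy functions and sample, not from the paper):

```python
import math

def l2_pn(f1, f2, xs):
    # Empirical L2(P_n) distance between two functions on the sample xs.
    n = len(xs)
    return math.sqrt(sum((f1(x) - f2(x)) ** 2 for x in xs) / n)

xs = [i / 10 for i in range(10)]
# Two functions differing by a constant 0.5 are at L2(P_n) distance exactly 0.5.
d = l2_pn(lambda x: x, lambda x: x + 0.5, xs)
```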
Then, it is well known [30] that for some absolute constant C, there holds:

  R̂n(F) ≤ C ∫_0^∞ √( log N(ϵ, F, L2(Pn)) / n ) dϵ.   (30)
The following result relates the entropy number for F to the supremum of the entropy number taken over a
restricted function class FQ.

**Proposition 10. The entropy number log N(ϵ, F, L2(Pn)) of F is bounded above by**

  sup_{Q∈Q} log N(ϵ/2, FQ, L2(Pn)) + (L − 1)MS log( 2L^S sup ‖α‖₁ sup_{z,z′} Kz(z, z′) / ϵ ).   (31)

_Moreover, the same bound holds for F0._

_Proof. See Appendix 5._
This proposition guarantees that the increase in the entropy number in moving from some FQ to the
larger class F is only O((L − 1)MS log(L^S/ϵ)). Consequently, we incur at most an O([MS²(L − 1) log L / n]^{1/2})
increase in the upper bound (30) for Rn(F) (as well as Rn(F0)). In particular, as a function of the number
of quantization levels L, the Rademacher complexity grows at most as the square root of (L − 1) log L.
## 4 Experimental Results
We evaluated our algorithm using both data from simulated sensor networks and real-world data sets. We
consider three types of sensor network configurations:
**Conditionally independent observations: In this example, the observations X^1, . . ., X^S are independent conditional on Y, as illustrated in Figure 1. We consider networks with 10 sensors (S = 10), each of**
which receives signals with 8 levels (M = 8). We applied the algorithm to compute decision rules for L = 2.
In all cases, we generate n = 200 training samples, and the same number for testing. We performed 20 trials
on each of 20 randomly generated models P (X, Y ).
**Chain-structured dependency: A conditional independence assumption for the observations, though**
widely employed in most work on decentralized detection, may be unrealistic in many settings. For instance,
consider the problem of detecting a random signal in noise [31], in which Y = 1 represents the hypothesis
that a certain random signal is present in the environment, whereas Y = −1 represents the hypothesis that
only i.i.d. noise is present. Under these assumptions X [1], . . ., X [S] will be conditionally independent given
Y = −1, since all sensors receive i.i.d. noise. However, conditioned on Y = +1 (i.e., in the presence of
the random signal), the observations at spatially adjacent sensors will be dependent, with the dependence
decaying with distance.
In a 1-D setting, these conditions can be modeled with a chain-structured dependency, and the use of a
count kernel to account for the interaction among sensors. More precisely, we consider a set-up in which
five sensors are located in a line such that only adjacent sensors interact with each other: the variables
X_{t−1} and X_{t+1} are independent given X_t and Y, as illustrated in Figure 2.
the kernel-based quantization algorithm using either first- or second-order count kernels, and the hinge loss
function (8), as in the SVM algorithm. The second-order kernel is specified in equation (19) but with the
sum taken over only t, r such that |t − r| = 1.
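Since equation (19) is not reproduced here, the following is a hedged sketch of what first- and second-order count kernels on quantized messages z, z′ ∈ {0, . . ., L−1}^S might look like, with the second-order sum restricted to adjacent sensors |t − r| = 1 as described above. Whether the second-order kernel also retains the first-order terms is a design choice of this sketch, not something stated in this section.

```python
def count_kernel_1(z, zp):
    # First-order count kernel: number of coordinates on which z and zp agree.
    return sum(a == b for a, b in zip(z, zp))

def count_kernel_2_chain(z, zp):
    # Adds second-order terms over adjacent sensor pairs |t - r| = 1 only
    # (keeping the first-order terms is a choice made in this sketch).
    s = count_kernel_1(z, zp)
    s += sum(z[t] == zp[t] and z[t + 1] == zp[t + 1]
             for t in range(len(z) - 1))
    return s

k1 = count_kernel_1((0, 1, 1, 0, 1), (0, 1, 0, 0, 1))        # 4 agreements
k2 = count_kernel_2_chain((0, 1, 1, 0, 1), (0, 1, 0, 0, 1))  # 4 + 2 adjacent pairs
```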
**Figure 2. Examples of graphical models P(X, Y ) of our simulated sensor networks. (a) Chain-structured dependency. (b) Fully connected (not all connections shown).**
**Spatially-dependent sensors: As a third example, we consider a 2-D layout in which, conditional on**
the random target being present (Y = +1), all sensors interact but with the strength of interaction decaying
with distance. Thus P (X|Y = 1) is of the form:
  P(X | Y = 1) ∝ exp( Σ_{t;u} h_{t;u} I_u(X^t) + Σ_{t≠r; u,v} θ_{tr;uv} I_u(X^t) I_v(X^r) ).
Here the parameter h represents observations at individual sensors, whereas θ controls the dependence
among sensors. The distribution P(X|Y = −1) can be modeled in the same way with observations h′,
and setting θ′ = 0 so that the sensors are conditionally independent. In simulations, we generate
θ_{tr;uv} ∼ N(1/d_{tr}, 0.1), where d_{tr} is the distance between sensors t and r, and the observations h and h′ are
randomly chosen in [0, 1]^S. We consider a sensor network with 9 nodes (i.e., S = 9), arrayed in the 3 × 3 lattice
illustrated in Figure 2(b). Since computation of this density is intractable for moderate-sized networks, we
generated an empirical data set (x_i, y_i) by Gibbs sampling.
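A minimal Gibbs sampler for this kind of pairwise model can be sketched as follows. For brevity the sketch uses binary sensor values and a symmetric coupling matrix drawn i.i.d. (rather than the M-ary indicator parameterization and distance-dependent couplings above), so treat it as illustrative only.

```python
import math
import random

random.seed(2)

S = 9                                   # 3 x 3 lattice of sensors
h = [random.uniform(0, 1) for _ in range(S)]

# Symmetric couplings theta[t][r]; drawn i.i.d. here instead of ~ N(1/d_tr, 0.1).
theta = [[0.0] * S for _ in range(S)]
for t in range(S):
    for r in range(t + 1, S):
        theta[t][r] = theta[r][t] = random.gauss(0.2, 0.1)

def gibbs_sweep(x):
    # Resample each x_t from its conditional given the other coordinates.
    for t in range(S):
        field = h[t] + sum(theta[t][r] * x[r] for r in range(S) if r != t)
        p1 = 1.0 / (1.0 + math.exp(-field))
        x[t] = 1 if random.random() < p1 else 0
    return x

x = [random.randint(0, 1) for _ in range(S)]
for _ in range(200):                    # burn-in sweeps before collecting samples
    gibbs_sweep(x)
```

After burn-in, successive sweeps yield (correlated) samples from the joint distribution, which is how an empirical data set can be built when the density itself is intractable.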
**Figure 3. Scatter plots of the test error of the LR versus KQ methods. (a) Conditionally independent network. (b) Chain model with first-order kernel. (c) Chain model with second-order kernel. (d) Fully connected model.**
We compare the results of our algorithm to an alternative decentralized classifier based on performing
a likelihood-ratio (LR) test at each sensor. Specifically, for each sensor t, the estimates
P(X^t = u | Y = 1) / P(X^t = u | Y = −1) of the likelihood ratio, for u = 1, . . ., M, are sorted and grouped
evenly into L bins. Given the quantized input signal and label Y, we then construct a naive Bayes classifier
at the fusion center. This choice of decision rule provides a reasonable comparison, since thresholded
likelihood-ratio tests are optimal in many cases [28].
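The per-sensor LR quantizer can be sketched as follows; the even-grouping rule below is one reasonable reading of "grouped evenly", and the toy ratio estimates are hypothetical.

```python
def lr_quantizer(ratios, L):
    # ratios[u] estimates P(X^t = u | Y = 1) / P(X^t = u | Y = -1), u = 0..M-1.
    # Sort signal levels by likelihood ratio, then assign them evenly to L bins.
    M = len(ratios)
    order = sorted(range(M), key=lambda u: ratios[u])
    return {u: rank * L // M for rank, u in enumerate(order)}

ratios = [0.2, 3.0, 0.8, 1.5, 0.1, 2.2, 0.5, 1.1]   # toy estimates, M = 8
q = lr_quantizer(ratios, L=2)    # maps each signal level to one of 2 bins
```

Levels with the smallest ratios land in the low bin and the largest in the high bin, mirroring a thresholded likelihood-ratio rule.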
The KQ algorithm generally yields more accurate classification performance than the likelihood-ratio
based algorithm (LR). Figure 3 provides scatter plots of the test error of the KQ versus LR methods for four
different set-ups, using L = 2 levels of quantization. Panel (a) shows the naive Bayes setting and the KQ
method using the first-order count kernel. Note that the KQ test error is below the LR test error on the large
majority of examples. Panels (b) and (c) show the case of chain-structured dependency, as illustrated in
Figure 2(a), using a first- and second-order count kernel respectively. Again, the performance of KQ in both
cases is superior to that of LR in most cases. Finally, panel (d) shows the fully-connected case of Figure 2(b)
with a first-order kernel. The performance of KQ is somewhat better than LR, although by a lesser amount
than the other cases.
**UCI repository data sets:**
We also applied our algorithm to several data sets from the machine learning data repository at the
University of California Irvine [6]. In contrast to the sensor network detection problem, in which communication constraints must be respected, the problem here can be viewed as that of finding a good quantization
scheme that retains information about the class label. Thus, the problem is similar in spirit to work on discretization schemes for classification [10]. The difference is that we assume that the data have already been
crudely quantized (we use m = 8 levels in our experiments), and that we retain no topological information concerning the relative magnitudes of these values that could be used to drive classical discretization
algorithms. Overall, the problem can be viewed as hierarchical decision-making, in which a second-level
classification decision follows a first-level set of decisions concerning the features.
| Data  | L = 2 | L = 4 | L = 6 | NB    | CK    |
|-------|-------|-------|-------|-------|-------|
| Pima  | 0.212 | 0.217 | 0.212 | 0.223 | 0.212 |
| Iono  | 0.091 | 0.034 | 0.079 | 0.056 | 0.125 |
| Bupa  | 0.368 | 0.322 | 0.345 | 0.322 | 0.345 |
| Ecoli | 0.082 | 0.176 | 0.176 | 0.235 | 0.188 |
| Yeast | 0.312 | 0.312 | 0.312 | 0.303 | 0.317 |
| Wdbc  | 0.083 | 0.097 | 0.111 | 0.083 | 0.083 |

**Table 1: Experimental results for the UCI data sets.**
We used 75% of the data set for training and the remainder for testing. The results for our algorithm with
L = 2, 4, and 6 quantization levels are shown in Table 1. Note that in several cases the quantized algorithm
actually outperforms a naive Bayes algorithm (NB) with access to the real-valued features. This result may
be due in part to the fact that our quantizer is based on a discriminative classifier, but it is worth noting
that similar improvements over naive Bayes have been reported in earlier empirical work using classical
discretization algorithms [10].
## 5 Conclusions
We have presented a new approach to the problem of decentralized decision-making under constraints on
the number of bits that can be transmitted by each of a distributed set of sensors. In contrast to most
previous work in an extensive line of research on this problem, we assume that the joint distribution of
sensor observations is unknown, and that a set of data samples is available. We have proposed a novel
algorithm based on kernel methods, and shown that it is quite effective on both simulated and real-world
data sets.
The line of work described here can be extended in a number of directions. First, although we have
focused on discrete observations X, it is natural to consider continuous signal observations. Doing so would
require considering parameterized distributions Q(Z|X). Second, our kernel design so far makes use of
only rudimentary information from the sensor observation model, and could be improved by exploiting such
knowledge more thoroughly. Third, we have considered only the so-called parallel configuration of the
sensors, which amounts to the conditional independence of Q(Z|X). One direction to explore is the use
of kernel-based methods for richer configurations, such as tree-structured and tandem configurations [28].
Finally, the work described here falls within the area of fixed-sample-size detectors. An alternative type of
decentralized detection procedure is the sequential detector, in which there is usually a large (possibly infinite)
number of observations that can be taken in sequence (e.g., [32]). It is also interesting to consider extensions
of our method to this sequential setting.
## Acknowledgement
We are grateful to Peter Bartlett for very helpful discussions related to this work. We wish to acknowledge
support from ONR MURI N00014-00-1-0637 and ARO MURI DAA19-02-1-0383.
**Proof of Lemma 1: (a) Since x_1, . . ., x_n are independent realizations of the random vector X, the quantities**
Q(z|x_1), . . ., Q(z|x_n) are independent realizations of the random variable Q(z|X). (This statement holds
for each fixed z ∈ Z^S.) By the strong law of large numbers, there holds

  (1/n) Σ_{i=1}^n Q(z|x_i)  →a.s.  E Q(z|X) = P(z)

as n → +∞. Similarly, we have (1/n) Σ_{i=1}^n Q(z|x_i) I(y_i = 1) →a.s. E Q(z|X) I(Y = 1). Therefore, as n → ∞,

  κ(z)  →a.s.  E Q(z|X) I(Y = 1) / P(z) = Σ_x Q(z|X = x) P(X = x, Y = 1) / P(z) = P(Y = 1|z),

where we have exploited the fact that Z is independent of Y given X.

(b) For each z ∈ Z^S, we have

  sign( Σ_i Q(z|x_i) I(y_i = 1) / Σ_i Q(z|x_i) − Σ_i Q(z|x_i) I(y_i = −1) / Σ_i Q(z|x_i) )
  = sign( Σ_i Q(z|x_i) y_i / Σ_i Q(z|x_i) ) = γ_emp(z).

Thus, part (a) implies γ_emp(z) → γ_opt(z) for each z. Similarly, R_emp → R_opt.
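Lemma 1(a) is easy to verify by simulation. The toy model below (one sensor, binary X and Z, a stochastic quantizer Q) is a hypothetical example: the weighted empirical fraction κ(z) indeed approaches the exact posterior P(Y = 1 | z).

```python
import random

random.seed(3)

# Toy model: Y uniform on {-1, +1}; P(X=1|Y=1) = 0.8, P(X=1|Y=-1) = 0.3;
# stochastic quantizer Q(Z=1|X=1) = 0.9, Q(Z=1|X=0) = 0.2.
def q1(x):
    return 0.9 if x == 1 else 0.2

n = 200_000
num = den = 0.0
for _ in range(n):
    y = random.choice((-1, 1))
    x = 1 if random.random() < (0.8 if y == 1 else 0.3) else 0
    w = q1(x)                    # Q(z=1 | x_i)
    den += w
    num += w * (y == 1)
kappa = num / den                # empirical kappa(z) for z = 1

# Exact posterior P(Y=1 | Z=1) for uniform Y.
pz_y1 = 0.8 * 0.9 + 0.2 * 0.2    # P(Z=1 | Y=+1)
pz_ym = 0.3 * 0.9 + 0.7 * 0.2    # P(Z=1 | Y=-1)
exact = pz_y1 / (pz_y1 + pz_ym)
assert abs(kappa - exact) < 0.01
```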
**Proof of Proposition 5: Here we complete the proof of Proposition 5. It remains to show that the optimum**
w(Q) of the primal problem is related to the optimal α̂ of the dual problem via w(Q) = Σ_{i=1}^n α̂_i y_i ΦQ(x_i).
Indeed, since G(w) is a convex function with respect to w, w(Q) is an optimum solution for min_w G(w; Q)
if and only if 0 ∈ ∂_w G(w(Q)). By definition of the conjugate dual, this condition is equivalent to w(Q) ∈ ∂G*(0).

Recall that G* is an inf-convolution of the n functions g_1*, . . ., g_n* and Ω*. Let α̂ := (α̂_1, . . ., α̂_n) be an
optimum solution to the dual problem, and û := (û_1, . . ., û_n) be the corresponding value at which the
infimum operation in the definition of G* is attained. Applying the subdifferential rule for an inf-convolution (Cor. 4.5.5, [14]):

  ∂G*(0) = ∂g_1*(û_1) ∩ . . . ∩ ∂g_n*(û_n) ∩ ∂Ω*(−Σ_{i=1}^n û_i).

But Ω*(v) = (1/2)‖v‖², and so ∂Ω*(−Σ_{i=1}^n û_i) reduces to the singleton −Σ_{i=1}^n û_i = Σ_{i=1}^n α̂_i y_i ΦQ(x_i). This
implies that w(Q) = Σ_{i=1}^n α̂_i y_i ΦQ(x_i) is the optimum solution to the primal problem.

To conclude, it will be useful for the proof of Lemma 6 to calculate ∂g_i*(û_i), and to derive several additional
properties relating w(Q) and α̂. The expression for g_i* in equation (23) shows that it is the image of the
function (1/λ)φ* under the linear mapping α_i ↦ (1/λ) α_i y_i ΦQ(x_i). Consequently, by Theorem 4.5.1 of
Hiriart-Urruty and Lemaréchal [14], we have ∂g_i*(û_i) = {w : ⟨w, y_i ΦQ(x_i)⟩ ∈ ∂φ*(−λα̂_i)}, which implies that b_i :=
⟨w(Q), y_i ΦQ(x_i)⟩ ∈ ∂φ*(−λα̂_i) for each i = 1, . . ., n. By convex duality, this also implies that −λα̂_i ∈
∂φ(b_i) for i = 1, . . ., n.
**Proof of Lemma 6: We shall show that the subdifferential ∂_{Q^t(z̄^t|x̄^t)}G can be computed directly in terms of**
the optimal solution α of the dual optimization problem (21) and the kernel function K_z. Our approach is
to first derive a formula for ∂_{Q(z̄|x̄)}G, and then to compute ∂_{Q^t(z̄^t|x̄^t)}G by applying the chain rule.

Define b_i := ⟨w(Q), y_i ΦQ(x_i)⟩. Using Theorem 23.8 of Rockafellar [24], the subdifferential ∂_{Q(z̄|x̄)}G
evaluated at (w(Q); Q) can be expressed as

  ∂_{Q(z̄|x̄)}G = Σ_{i=1}^n ∂_{Q(z̄|x̄)} g_i = Σ_{i=1}^n ∂φ(b_i) y_i ⟨w, Φ′(z̄)⟩ I[x_i = x̄].

Earlier we proved that −λα_i ∈ ∂φ(b_i) for each i = 1, . . ., n, where α is the optimal solution of (21).
Therefore, ∂_{Q(z̄|x̄)}G evaluated at (w(Q); Q) contains the following element:

  Σ_{i=1}^n −λα_i y_i ⟨w(Q), Φ′(z̄)⟩ I[x_i = x̄]
  = Σ_{i=1}^n −λα_i y_i ⟨ Σ_{j=1}^n α_j y_j ΦQ(x_j), Φ′(z̄) ⟩ I[x_i = x̄]
  = −λ Σ_{i,j} α_i α_j y_i y_j I[x_i = x̄] Σ_z K_z(z, z̄) Q(z|x_j).

For each t = 1, . . ., S, ∂_{Q^t(z̄^t|x̄^t)}G is related to ∂_{Q(z̄|x̄)}G by the chain rule, noting that
Q(z̄|x̄) = Π_{t=1}^S Q^t(z̄^t|x̄^t):

  ∂_{Q^t(z̄^t|x̄^t)}G = Σ_{z,x} ∂_{Q^t(z̄^t|x̄^t)} Q(z|x) · ∂_{Q(z|x)}G
  = Σ_{z,x} ( Q(z|x) / Q^t(z̄^t|x̄^t) ) I[x^t = x̄^t] I[z^t = z̄^t] ∂_{Q(z|x)}G,

which contains the following element as one of its subgradients:

  Σ_{z,x} ( Q(z|x) / Q^t(z̄^t|x̄^t) ) I[x^t = x̄^t] I[z^t = z̄^t] ( −λ Σ_{i,j} α_i α_j y_i y_j I[x_i = x] Σ_{z′} K_z(z′, z) Q(z′|x_j) )
  = −λ Σ_{i,j,z,z′} α_i α_j y_i y_j I[x_i^t = x̄^t] I[z^t = z̄^t] ( Q(z|x_i) / Q^t(z̄^t|x̄_i^t) ) Q(z′|x_j) K_z(z′, z).
This completes the proof of the lemma.
**Proof of Proposition 9: By definition of the Rademacher complexity [30], we have**

  Rn(F0) = E sup_{f∈F0} (2/n) Σ_{i=1}^n σ_i f(X_i)
         = E sup_{‖w‖≤B, Q∈Q0} (2/n) Σ_{i=1}^n σ_i ⟨w, ΦQ(X_i)⟩
         = (2B/n) E sup_{Q∈Q0} ‖ Σ_{i=1}^n σ_i ΦQ(X_i) ‖.

Applying the Cauchy-Schwarz inequality yields

  Rn(F0) ≤ (2B/n) ( E sup_{Q∈Q0} ‖ Σ_{i=1}^n σ_i ΦQ(X_i) ‖² )^{1/2}
         ≤ (2B/n) ( E sup_{Q∈Q0} Σ_{i=1}^n KQ(X_i, X_i) + 2 E sup_{Q∈Q0} Σ_{1≤i<j≤n} σ_i σ_j KQ(X_i, X_j) )^{1/2}.

It remains to upper bound the second term inside the square root on the right-hand side. The trick is to partition the
n(n − 1)/2 pairs (i, j) into n − 1 subsets, each consisting of n/2 pairs with distinct indices (assuming n is
even for simplicity); the existence of such a partition can be shown by induction on n. For each i =
1, . . ., n − 1, denote the i-th subset by the n/2 pairs (π_i(j), π_i′(j))_{j=1}^{n/2}, where {π_i(1), . . ., π_i(n/2)} ∩
{π_i′(1), . . ., π_i′(n/2)} = ∅. Therefore,

  E sup_{Q∈Q0} Σ_{1≤i<j≤n} σ_i σ_j KQ(X_i, X_j)
  = E sup_{Q∈Q0} Σ_{i=1}^{n−1} Σ_{j=1}^{n/2} σ_{π_i(j)} σ_{π_i′(j)} KQ(X_{π_i(j)}, X_{π_i′(j)})
  ≤ Σ_{i=1}^{n−1} E sup_{Q∈Q0} Σ_{j=1}^{n/2} σ_{π_i(j)} σ_{π_i′(j)} KQ(X_{π_i(j)}, X_{π_i′(j)}).

Our final step is to bound the terms inside the summation over i by invoking Massart’s lemma [22] for
bounding Rademacher averages over a finite set A ⊂ R^d:

  E sup_{a∈A} Σ_{i=1}^d σ_i a_i ≤ max_{a∈A} ‖a‖₂ √(2 log |A|).   (32)

Now, for each i and each realization of X_1, . . ., X_n, treat σ_{π_i(j)} σ_{π_i′(j)} for j = 1, . . ., n/2 as n/2 Rademacher
variables, and note that the (n/2)-dimensional vector ( KQ(X_{π_i(j)}, X_{π_i′(j)}) )_{j=1}^{n/2} takes on only L^{MS} possible values
(since there are L^{MS} possible choices of Q ∈ Q0). Then we have, for each i = 1, . . ., n − 1:

  E sup_{Q∈Q0} Σ_{j=1}^{n/2} σ_{π_i(j)} σ_{π_i′(j)} KQ(X_{π_i(j)}, X_{π_i′(j)}) ≤ ( (n/2) sup_{z,z′} K_z(z, z′) )^{1/2} √( 2 log(L^{MS}) ),

from which the proposition follows.
**Proof of Proposition 10: We treat each Q(Z|X) ∈ Q as a function over all possible values (z, x). Recall**
that X is an S-dimensional vector X = (X^1, . . ., X^S). For each fixed realization x^t of X^t, for t = 1, . . ., S,
the set of all discrete conditional probability distributions Q(Z^t|x^t) forms an (L − 1)-simplex ∆_L. Since each
X^t takes on M possible values, and X has S dimensions, we have:

  N(ϵ, Q, L∞) ≤ N(ϵ, ∆_L, l∞)^{MS} ≤ (1/ϵ)^{(L−1)MS}.

Recall that each f ∈ F can be written as:

  f(x) = Σ_{i=1}^n α_i Σ_{z,z_i} Q(z|x) Q(z_i|x_i) K_z(z, z_i).   (33)

We now define ϵ_0 := ϵ [ 2L^S sup ‖α‖₁ sup_{z,z′} K_z(z, z′) ]^{−1}. Given each fixed conditional distribution Q in
an ϵ_0-covering G(ϵ_0, Q, L∞) of Q, we can construct an ϵ/2-covering in L2(Pn) for FQ. It is straightforward to
verify that the union of all coverings for FQ indexed by Q ∈ G(ϵ_0, Q, L∞) forms an ϵ-covering
for F. Indeed, given any function f ∈ F expressed in the form (33) with a corresponding Q ∈ Q,
there exists some Q* ∈ G(ϵ_0, Q, L∞) such that ‖Q − Q*‖∞ ≤ ϵ_0. Let f_1 be the function in FQ* using the
same coefficients α as those of f. Given Q*, there exists some f_2 ∈ FQ* such that ‖f_1 − f_2‖_{L2(Pn)} ≤ ϵ/2.
Applying the triangle inequality yields

  ‖f − f_2‖_{L2(Pn)} ≤ ‖f − f_1‖_{L2(Pn)} + ‖f_1 − f_2‖_{L2(Pn)}
                    ≤ ‖f − f_1‖∞ + ϵ/2
                    ≤ L^S sup ‖α‖₁ sup_{z,z′} K_z(z, z′) ‖Q − Q*‖∞ + ϵ/2,

which is bounded above by ϵ. In summary, we have constructed an ϵ-covering in L2(Pn) for F whose
cardinality is no more than N(ϵ_0, Q, L∞) · sup_Q N(ϵ/2, FQ, L2(Pn)). This implies that

  log N(ϵ, F, L2(Pn)) ≤ log ( N(ϵ_0, Q, L∞) sup_Q N(ϵ/2, FQ, L2(Pn)) )
    ≤ log ( ( 2L^S sup ‖α‖₁ sup_{z,z′} K_z(z, z′) / ϵ )^{(L−1)MS} sup_Q N(ϵ/2, FQ, L2(Pn)) )
    = sup_{Q∈Q} log N(ϵ/2, FQ, L2(Pn)) + (L − 1)MS log ( 2L^S sup ‖α‖₁ sup_{z,z′} K_z(z, z′) / ϵ ),

which completes the proof.
## References
[1] M. M. Al-Ibrahim and P. K. Varshney. Nonparametric sequential detection based on multisensor data.
In Proc. 23rd Annu. Conf. on Inform. Sci. and Syst., pages 157–162, 1989.
[2] N. Aronszajn. Theory of reproducing kernels. Transactions of the American Mathematical Society,
68:337–404, 1950.
[3] P. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification and risk bounds. Technical
Report 638, Department of Statistics, University of California at Berkeley, April 2003.
[4] P. Bartlett and S. Mendelson. Gaussian and Rademacher complexities: Risk bounds and structural
results. Journal of Machine Learning Research, 3:463–482, 2002.
[5] D.P. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1995.
[6] C.L. Blake and C.J. Merz. UCI repository of machine learning databases, 1998.
[7] R. S. Blum, S. A. Kassam, and H. V. Poor. Distributed detection with multiple sensors: Part II —
advanced topics. Proceedings of the IEEE, 85:64–79, 1997.
[8] J. F. Chamberland and V. V. Veeravalli. Decentralized detection in sensor networks. IEEE Transactions
_on Signal Processing, 51(2):407–416, 2003._
[9] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273–297, 1995.
[10] J. Dougherty, R. Kohavi, and M. Sahami. Supervised and unsupervised discretization of continuous
features. In Proceedings of the ICML, 1995.
[11] Y. Freund and R. Schapire. A decision-theoretic generalization of on-line learning and an application
to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.
[12] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting.
_Annals of Statistics, 28:337–374, 2000._
[13] J. Han, P. K. Varshney, and V. C. Vannicola. Some results on distributed nonparametric detection. In
_Proc. 29th Conf. on Decision and Control, pages 2698–2703, 1990._
[14] J. Hiriart-Urruty and C. Lemaréchal. Fundamentals of Convex Analysis. Springer, 2001.
[15] E. K. Hussaini, A. A. M. Al-Bassiouni, and Y. A. El-Far. Decentralized CFAR signal detection. Signal
_Processing, 44:299–307, 1995._
[16] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances
_in Neural Information Processing Systems 11, Cambridge, MA, 1999. MIT Press._
[17] T. Kailath. RKHS approach to detection and estimation problems—Part I: Deterministic signals in
Gaussian noise. IEEE Trans. Info. Theory., 17:530–549, 1971.
[18] T. Kailath and H. V. Poor. Detection of stochastic processes. IEEE Trans. Info. Theory., 44:2230–2259,
1998.
[19] S. A. Kassam. Nonparametric signal detection. In Advances in Statistical Signal Processing. JAI Press,
1993.
[20] V. Koltchinskii and D. Panchenko. Empirical margin distributions and bounding the generalization
error of combined classifiers. Annals of Statistics, 30:1–50, 2002.
[21] D. G. Luenberger. Optimization by Vector Space Methods. Wiley, New York, 1969.
[22] P. Massart. Some applications of concentration inequalities to statistics. Annales de la Faculté des
_Sciences de Toulouse, IX:245–303, 2000._
[23] A. Nasipuri and S. Tantaratana. Nonparametric distributed detection using Wilcoxon statistics. Signal
_Processing, 57(2):139–146, 1997._
[24] G. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.
[25] S. Saitoh. Theory of Reproducing Kernels and its Applications. Longman Scientific & Technical,
Harlow, UK, 1988.
[26] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[27] R. R. Tenney and N. R. Sandell Jr. Detection with distributed sensors. IEEE Trans. Aero. Electron.
_Sys., 17:501–510, 1981._
[28] J. N. Tsitsiklis. Decentralized detection. In Advances in Statistical Signal Processing, pages 297–344.
JAI Press, 1993.
[29] K. Tsuda, T. Kin, and K. Asai. Marginalized kernels for biological sequences. Bioinformatics, 18:268–
275, 2002.
[30] A. W. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes. Springer-Verlag,
New York, NY, 1996.
[31] H. L. van Trees. Detection, Estimation and Modulation Theory. Krieger Publishing Co., Melbourne,
FL, 1990.
[32] V. V. Veeravalli, T. Basar, and H. V. Poor. Decentralized sequential detection with a fusion center
performing the sequential test. IEEE Trans. Info. Theory, 39(2):433–442, 1993.
[33] R. Viswanathan and A. Ansari. Distributed detection of a signal in generalized Gaussian noise. IEEE
_Trans. Acoust., Speech, and Signal Process., 37:775–778, 1989._
[34] H. L. Weinert, editor. Reproducing Kernel Hilbert Spaces: Applications in Statistical Signal Processing. Hutchinson Ross Publishing Co., Stroudsburg, PA, 1982.
[35] T. Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. Annals of Statistics, 53:56–134, 2003.
and the physical activity (PA) guidelines for Americans provide recommendations on PA and healthy diets. To promote these
habits, we suggest using a blockchain-based platform, using the PA Messaging Framework to deliver messages and rewards to
users. Blockchain is a decentralized secure platform for data management, which can be used for value-added controls and services
such as smart contracts (SCs), oracles, and decentralized applications (dApps). Of note, there is a substantial penetration of
blockchain technologies in the field of PA, but there is a need for more implementations of dApps to take advantage of features
such as nonfungible tokens.
**Objective:** This study aimed to create a comprehensive platform for promoting healthy habits, using scientific evidence and
blockchain technology. The platform will use gamification to encourage healthy PA and eating habits; in addition, it will monitor
the activities through noninvasive means, evaluate them using open-source software, and follow up through blockchain messages.
**Methods:** A literature search was conducted on the use of blockchain technology in the field of PA and healthy eating. On the
basis of the results of this search, it is possible to define an innovative platform for promoting and monitoring healthy habits
through health-related challenges on a dApp. Contact with the user will be maintained through messages following a proposed
model in the literature to improve adherence to the challenges.
**Results:** The proposed strategy is based on a dApp that relies on blockchain technology. The challenges include PA and healthy
eating habits based on the recommendations of the WHO and the Food and Agriculture Organization. The system is constituted
of a blockchain network where challenge-related achievements are stored and verified using SCs. The user interacts with the
system through a dApp that runs on their local device, monitors the challenge, and self-authenticates by providing their public
and private keys. The SC verifies challenge fulfillment and generates messages, and the information stored in the network can
be used to encourage competition among participants. The ultimate goal is to create a habit of healthy activities through rewards
and peer competition.
**Conclusions:** The use of blockchain technology has the potential to improve people’s quality of life through the development
of relevant services. In this work, strategies using gamification and blockchain are proposed for monitoring healthy activities,
with a focus on transparency and reward allocation. The results are promising, but compliance with the General Data Protection
Regulation is still a concern. Personal data are stored on personal devices, whereas challenge data are recorded on the blockchain.
**_(Interact J Med Res 2023;12:e44135)_** [doi: 10.2196/44135](http://dx.doi.org/10.2196/44135)
**KEYWORDS**
blockchain; exercise; gamification; habits; healthy lifestyle; physical fitness
### Introduction
##### Background
In modern societies, many deaths and diseases could be avoided if people adopted healthy lifestyle habits [1-4]. Therefore, the governments of the Organization for Economic Co-operation and Development (OECD) countries are especially interested in promoting healthy lifestyle habits among their citizens and have enacted relevant policies. The problem observed with these policies, however, is the low adherence to these habits among the general population. It seems, therefore, that the difficulty lies not in defining these habits but in generating a culture based on them.
On the one hand, recommendations for the practice of physical activity (PA) and its benefits for people's health, based on scientific evidence, can be found in the PA guidelines of the World Health Organization (WHO) [1] and the PA guidelines
for all populations, some exercise is better than none. If people
who do not practice any PA just start doing so, they will obtain
health benefits. It is recommended that people with sedentary
habits should perform PA following the principles of load
progression [5]. People who perform moderate-intensity PA
can gradually begin performing vigorous PA. In addition to the
practice of PA, a healthy diet is recommended, which involves
reducing sugar, fat, and salt consumption and limiting the
consumption of processed foods and foods containing saturated
fats.
On the other hand, the guidelines recommend the consumption
of fruits, vegetables, legumes, nuts, and whole grains, as well
as the consumption of at least 5 servings of fruits and vegetables
daily [6-12]. The guidelines also emphasize that poor dietary
habits together with a lack of PA greatly increase the risk of
contracting noncommunicable diseases [6,11].
This is why it is of great social value to provide tools that promote these healthy habits in the population and monitor adherence. To this end, this paper explores the use of a platform that takes advantage of blockchain (BC) technology to support gamification techniques. To carry out this gamification, the Physical Activity Messaging Framework [13] is used to organize the delivery of the most appropriate messages and rewards to the user to encourage their participation. These messages are categorized as generic, targeted, tailored, tailored personalized, and generic personalized; as we progress through these categories, the messages become more relevant and more personal to the user.
Generic messages are those that apply to any person in general,
regardless of their particularities, and that inform about the
benefits of PA practice (eg, “Performing PA is good for your
health”) [14]. By contrast, targeted messages are relevant to a
specific group [15], in this case, the general population of adults,
and specifically highlight the benefits of PA practice in this
group (eg, “Adults should perform 30 minutes of moderate PA
per day to improve their cardiovascular health”). To engage the
user in a more personal way, tailored messages are used. These
messages use specific data about each individual user (eg, their
goals) to make the message more relevant [15] (eg, “You are
only 10 minutes away from reaching your weekly PA goal.
Achieve it and improve your cardiovascular health!”).
A personalization layer can be added to these messages, which
consists of adding data that are not related to PA, such as the
name of the user, to increase the salience and proximity of the
message [15]. Thus, this feature can be added to generic
messages (eg, “Hi Manuel! Doing PA is good for your health”)
and to tailored messages (eg, “Hi Rosa! You are only 10 minutes
away from reaching your weekly PA practice goal. Achieve it
and improve your cardiovascular health!”).
In addition, messages can be classified according to whether
they are framed to highlight the benefit of meeting the proposed
objectives (gain framed; eg, “PA practice reduces the risk of
heart disease, hypertension, and type 2 diabetes”) or to point
out the harms of not meeting them (loss framed; eg, “Not
performing PA increases the risk of heart disease, hypertension,
and type 2 diabetes”) [16]. Messages aimed at highlighting the
benefits of performing PA are generally recommended to
promote PA practice [14,16]. By contrast, messages emphasizing
the harms of not performing PA may also be recommended in
certain cases, such as back injuries, where it may be beneficial
to increase the perceived risk of not performing PA to engage
users [17,18].
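As an illustration of how this message taxonomy could be operationalized, the sketch below selects a template by category and framing and applies the personalization layer. The function name and template strings are illustrative only; they are not part of the cited framework or of the platform's implementation.

```python
# Illustrative sketch of the message taxonomy described above.
# Categories and framings follow the text; templates are examples only.

def build_message(category, framing="gain", name=None):
    """Return a PA message for the given category and framing;
    prepend the user's name to apply the personalization layer."""
    templates = {
        ("generic", "gain"): "Performing PA is good for your health",
        ("generic", "loss"): "Not performing PA increases the risk of heart disease",
        ("targeted", "gain"): "Adults should perform 30 minutes of moderate PA per day",
        ("tailored", "gain"): "You are only 10 minutes away from reaching your weekly PA goal",
    }
    message = templates[(category, framing)]
    if name:  # personalization layer: add non-PA data (the user's name)
        message = f"Hi {name}! {message}"
    return message
```

A personalized generic message is then simply `build_message("generic", "gain", name="Manuel")`.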
BC is a technology that provides features such as
decentralization, transparency, open source, autonomy,
immutability, and anonymity [19]; it can be conceptualized as
a new model for the externalization of trust in information
management in a distributed environment [20]. It consists of
generating a general ledger, in which, using accounting
terminology, the information is stored. This information, by the
nature of the system itself, becomes immutable. To this end, it
relies on a peer-to-peer structure in which the nodes or members
participating in the system collaborate with each other to
guarantee the inviolability of the data and their high availability,
subject neither to the failure of a server nor to the management
of a third party. This latter aspect is what allows it to become
the appropriate tool when it is not desirable to rely on third
parties. The nodes within the network themselves validate the
records and add them to a chain of blocks (hence the name of
the technology), which constitutes the aforementioned ledger
of records [21].
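The hash-linked structure just described can be sketched in a few lines. This toy model (the function names and block layout are our own, not the platform's implementation) shows why tampering with any stored record breaks every later link in the chain.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, record):
    """Link a new block to the ledger via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})

def chain_is_valid(chain):
    """Recompute every link; altering one record invalidates all later ones."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True
```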
When an agent wishes to enter a new record in this ledger, the
agreement of all members of the ledger’s host network is needed
before the record can be validated. This is done by using a
specific protocol called a consensus algorithm, which establishes
the criteria for the acceptance of an element in the chain of
records. The 2 most common consensus algorithms are _proof of work_ (PoW) [22], used in the bitcoin network, whereby miners must solve a complex mathematical problem to justify the inclusion of the new block, and _proof of authority_ (PoA)
[22], which allows the inclusion of new records based on the
relevance of the miner making the proposal. These algorithms
are only a small sample of the plethora of proposals in the
literature.
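A minimal PoW sketch follows, purely for illustration: the "complex mathematical problem" amounts to finding a nonce whose block hash meets a difficulty target. A real network uses a vastly higher difficulty and a binary-level target; the tiny difficulty here only makes the idea executable.

```python
import hashlib

def proof_of_work(block_data, difficulty=2):
    """Find a nonce whose SHA-256 hash of block_data+nonce starts
    with `difficulty` leading zero hex digits (toy difficulty)."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # anyone can re-hash to verify the work cheaply
        nonce += 1
```

Verification is asymmetric: finding the nonce takes many attempts, but checking it takes a single hash.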
This technology is achieving high market penetration as a
solution for information storage and verification in a wide
variety of domains, ranging from cryptocurrencies to the
traceability of food and pharmaceutical products [23,24]. As
shown in a recent systematic review [25], a significant
penetration is observed in the field of PA but with a poor
leveraging of the special features that BC offers to implement
value-added controls and services such as smart contracts (SCs),
oracles, and decentralized applications (dApps), each of which
is described in Textbox 1.
**Textbox 1. Descriptions of smart contracts (SCs), oracles, and decentralized applications (dApps).**
As reported by Cai et al [28], the implementation of dApps is
required to exploit another important feature existing within
this environment, namely nonfungible tokens (NFTs). An NFT
is an encrypted digital asset, a special type of cryptographic
token that represents something unique. NFTs serve to prove
that a certain user is in possession of a token that is unique,
traceable, and exchangeable; they are very useful in certain
gamification contexts to reward users for achieving their goals
[29].
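The NFT properties listed above (uniqueness, traceability, exchangeability) can be illustrated with a minimal in-memory registry. This is a conceptual sketch, not an implementation of any on-chain token standard; all names are our own.

```python
import itertools

class TokenRegistry:
    """Minimal sketch of NFT-style reward tokens: each token is
    unique, traceable, and transferable. Conceptual only."""

    def __init__(self):
        self._ids = itertools.count(1)   # monotonic counter guarantees uniqueness
        self.owner = {}                  # token_id -> current owner
        self.meta = {}                   # token_id -> challenge it rewards
        self.history = {}                # token_id -> full chain of owners (traceability)

    def mint(self, to, challenge):
        token_id = next(self._ids)
        self.owner[token_id] = to
        self.meta[token_id] = challenge
        self.history[token_id] = [to]
        return token_id

    def transfer(self, token_id, sender, to):
        # only the current owner may exchange the token
        if self.owner[token_id] != sender:
            raise PermissionError("only the current owner may transfer")
        self.owner[token_id] = to
        self.history[token_id].append(to)
```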
##### Objectives
On the basis of the points outlined so far, this work proposes
the creation of a platform to facilitate the inculcation of healthy
lifestyle habits and practices through a gamification strategy.
The objective was to engage users—society as a whole—in
activities that can become healthy habits. These healthy habits
will be both sporting and nutritional.
One of the highlights of this platform is its great potential for gamification: it provides functional support for monitoring the data hosted on it without human intervention (eg, the validation of the challenges presented to the users).
In addition, this platform contemplates the possibility of defining challenges in a highly parameterized way so that different public or private agencies can, in due course, propose their own challenges and make them available to users.
We can say, therefore, that the objective of this work is to
present a holistic platform for the support of health-related
challenges among the population, using scientific evidence with
the support of BC technology. It is intended to provide a
mechanism to encourage and set healthy PA and eating habits
among the general population by using gamification techniques
through challenges.
This platform will integrate the noninvasive monitoring of the
proposed activities, evaluation through open-source software,
and follow-up using BC through messages addressed to the end
user that will allow them to adhere to this activity.
### Methods
As a first step, the state of the art was surveyed through a literature search to assess the use of BC technology in the field of PA and healthy eating [25]. On the
basis of the results of this search, it is possible to define an
innovative platform for the promotion and monitoring of healthy
lifestyle habits based on scientific evidence. The goal is to
introduce healthy habits concretized in activities defined within
different health-related challenges through the use of a dApp
for the general population.
One of the key aspects with regard to improving users’
adherence to the training program embedded in the challenges
is to maintain contact with the user. To organize this information
delivery to the user, messages will be used following the model
proposed in the study by Williamson et al [13].
### Results
The aforementioned review of the current literature shows a
significant increase in scientific production related to BC
technology in recent years, as is also indicated in 2 bibliometric
reviews [24,30]. Among the large number of existing works
that take advantage of this technology, the following are worth
mentioning.
##### BC and PA and Health Care
In the literature, it is possible to find several works that combine
BC technology with PA practice and health care. Among them
is the study by Alsalamah et al [31], who proposed a platform
to incentivize PA practice and encourage a healthy lifestyle
through gamification and rewarding of users for meeting their
goals using cryptocurrencies. Another noteworthy study is the
one by Frikha et al [32], who stored users’ health data in
electronic health records to diagnose and treat patients more
easily and cost-effectively. Other notable works are those by
Jamil et al [33] and Jamil et al [34]; the authors assigned training
and diet programs to each user based on their anthropometric
and body composition data. Furthermore, in the study by Jamil
et al [34], the authors allowed the transfer of the user profile
among different sports centers.
##### BC and Sport
Other trending works have made contributions that are restricted
to the sporting field, ranging from data capture and management
to predictions of sporting performance. This is the case with
the study by Cao et al [35], who developed a model to predict
performance and improve training; the study by Hong and Park
[36], who captured players’ performance data to make tactical
decisions in situ and in real time; and the study by Yu [37], who
developed a model to improve and guide training using athletes’
physiological data. Moreover, we have the study by Ma [38] in
which the author filtered data from users’ gait patterns; as well
as the study by Shan and Mai [39], who proposed a system to
capture and manage athletes’ fitness data in real time. Finally,
there is the study by Mulyati et al [40] in which a model was
developed to store data regarding belt promotions and grades
in taekwondo, bringing transparency and immutability to the
scores.
##### BC and Active Aging
There are also very diverse contributions related to the
incorporation of BC into active aging. Khezr et al [41] developed
a system that provides alerts when normal behavioral patterns
change. Rahman et al [42] assigned therapies based on users’
treatment needs. Rahman et al [43] developed a system to
control smart home devices using gestural recognition tools.
Rupasinghe et al [44] determined the risk factors for falls and
developed a model to predict them. Silva et al [45] captured
physiological data of patients and made them secure and
interoperable through BC. Spinsante et al [46] proposed an app
to promote active aging and assess the level of PA practice and
quality of life. Finally, Velmovitsky et al [47] proposed a system
for users to control informed consent for their participation in
studies at all times.
Table 1 shows a synthesis of the state of the art in different
technological aspects, such as the use of SCs and oracles,
support for cryptocurrencies and NFTs, training and dietary
programs based on scientific evidence, and end-user delivery
support.
**Table 1.** Analysis of studies related to blockchain and physical activity and health care, sport, and active aging.
| Domain and reference | SC[a] | Oracle | Cryptocurrencies | NFT[b] | Evidence based | End-user delivery support |
| --- | --- | --- | --- | --- | --- | --- |
| **Physical activity and health care** | | | | | | |
| Alsalamah et al [31] | Yes | No | Yes | No | No | Web dApp[c] and mobile app |
| Frikha et al [32] | Yes | No | No | No | No | Web application and mobile app |
| Jamil et al [33] | Yes | No | No | No | No | Web application |
| Jamil et al [34] | Yes | No | No | No | No | Web application |
| **Sport** | | | | | | |
| Cao et al [35] | No | No | No | No | No | Not described |
| Hong and Park [36] | No | No | No | No | No | Not described |
| Ma [38] | No | No | No | No | No | Not described |
| Mulyati et al [40] | No | No | No | No | No | Web application and mobile dApp |
| Shan and Mai [39] | No | No | No | No | No | Not described |
| Yu [37] | No | No | No | No | No | Not described |
| **Active aging** | | | | | | |
| Khezr et al [41] | Yes | No | No | No | No | Not described |
| Rahman et al [42] | Yes | No | No | No | No | Not described |
| Rahman et al [43] | Yes | No | No | No | No | Web application and mobile dApp |
| Rupasinghe et al [44] | Yes | No | No | No | No | Not described |
| Silva et al [45] | No | No | No | No | No | Web application and mobile app |
| Spinsante et al [46] | No | No | No | No | No | Web application and mobile app |
| Velmovitsky et al [47] | Yes | No | No | No | No | Not described |
aSC: smart contract.
bNFT: nonfungible token.
cdApp: decentralized application.
The studies cited in Table 1 deal with the introduction of BC in the fields of PA and health care, sport, and active aging. However, most of them (14/17, 82%) show very limited development, which reflects the early stage of this technology in these fields. Only 9 (53%) of the 17 studies make use of SCs [31-34,41-44,47]. Among those
describing the access policy, most (10/17, 59%) use private and
authorized networks; of the 17 studies, only 1 (6%) uses a public
network, and 1 (6%) uses an authorized consortium. In addition,
only the study by Alsalamah et al [31] incentivizes using
cryptocurrencies as a reward. None of the cited works make use
of NFTs, and none base their training or dietary plans on
scientific evidence. Regarding the delivery medium, most used
web applications or mobile apps, and only 3 (18%) of the 17
studies leveraged dApps [31,40,43].
On the basis of this review of the state of the art and relying on
the Physical Activity Messaging Framework and BC technology,
challenges will be proposed to the general population and
monitored through the use of a dApp that relies on the
information and SCs stored in the BC.
These challenges are composed of (1) a series of PAs and
specific healthy eating habits that generate benefits for the user
when performed with the proposed sequencing and periodicity
and (2) the messages corresponding to each challenge.
The PAs included in these challenges are based on scientific
evidence following the recommendations for the general adult
population found in the PA guidelines of the WHO [1] and the
PAG [2], whereas the proposed healthy eating habits are based
on the recommendations of the WHO and the Food and
Agriculture Organization (FAO) [7,11]. We list here in a
concrete and clear way the PA practice recommendations for
the general adult population that will be the basis for the
subsequent creation of the different challenges that users will
have to complete to obtain their rewards (for additional health benefits, it is recommended to exceed the upper limit of moderate or vigorous PA or to perform an equivalent combination of both):
- Moderate PA per week: 150 to 300 minutes
- Vigorous PA per week: 75 to 150 minutes
- Strength PA per week: ≥2 sessions
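The recommendations above can be expressed as a simple weekly check. The equivalence of 1 vigorous minute to 2 moderate minutes follows the usual convention when the guidelines combine both intensities; the function itself is an illustrative sketch, not part of the platform.

```python
def meets_who_pa_guidelines(moderate_min, vigorous_min, strength_sessions):
    """Check a week of activity against the adult recommendations above.
    One vigorous minute counts as two moderate minutes when combining
    intensities (the usual guideline equivalence)."""
    aerobic_ok = moderate_min + 2 * vigorous_min >= 150
    strength_ok = strength_sessions >= 2
    return aerobic_ok and strength_ok
```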
Among the aforementioned recommendations, we find different
PA modalities such as aerobic exercise (muscle movement in
a rhythmic way and maintained over time), muscle strengthening
(strength training and weight lifting), bone strengthening
(produces a force in the bones that promotes their growth and
strength), balance training (improves the ability to resist internal
or external forces of the body that cause falls), and
multicomponent training (a combination of aerobic PA, balance
training, and muscle strengthening), which will bring some
benefit to the user when performed [2]. Of note, Momma et al
[48], in their recent systematic review and meta-analysis of
cohort studies on muscle-strengthening activities, highlighted
the reduction in the risk of all-cause mortality, cardiovascular
disease, cancer, and diabetes in participants by 10% to 17%. Regarding healthy eating habits, the WHO recommends
restricting sugar consumption to <10% of total daily calories,
fat consumption to <30% of total daily calories, and salt
consumption to <5 g daily, as well as limiting the consumption
of processed foods and foods containing saturated fats to <10%
of total calorie intake and foods containing trans fats to <1% of
total calorie intake. By contrast, the guidelines recommend the
consumption of fruits, vegetables, legumes, nuts, and whole
grains, as well as the consumption of at least 5 servings of fruits
and vegetables daily [6-12]. Aune et al [6] report a 31% decrease
in the risk of contracting diseases with a daily intake of 800 g
of fruits and vegetables, a 19% decrease with a daily intake of
600 g of fruits, and a 25% decrease with a daily intake of 600
g of vegetables; Leenders et al [49] suggest an increase in
longevity with fruit and vegetable consumption; and Chowdhury
et al [50] report that individuals consuming a well-balanced diet
are healthier with a strong immune system and have a reduced
risk of contracting infectious diseases such as COVID-19.
On the basis of the aforementioned recommendations for healthy
habits and the scientific evidence that supports each activity, 4
challenges are generated (summarized schematically in Textbox
2).
**Textbox 2. Explanatory summary of the 4 proposed challenges.**
##### Challenge 1: High-Intensity Interval Training 7-Minute Workout
This challenge consists of the user performing the _high-intensity interval training (HIIT) 7-minute workout_ 4 days per week. The basis for this challenge comes from the WHO and PAG recommendation to combine aerobic PA and muscle-strengthening PA and from the training proposed in the study by Klika and Jordan [51], in which PA training is performed only with body weight, combining aerobic PA and muscle strengthening [2,52]. The training consists of repeating 2 or 3 sets of the _HIIT 7-minute workout_ [51]. On the basis of the WHO and PAG vigorous PA practice recommendations, in this challenge, the user will be asked to perform 3 sets daily of the _HIIT 7-minute workout_ at least 4 times per week.
If no workout has been performed after 2 days from the start of
the challenge, the user will receive “PA practice improves your
physical and mental health” as a generic message to highlight
the benefit of meeting their goals.
After 3 days from the start of the challenge without performing
any training, the user will receive “Not performing your strength
training sessions will worsen your health” as a targeted message
framed to highlight the harms of not meeting the PA and strength
training goals.
When the user has completed 2 training sessions, they will
receive “Cheer up! You have been strength training this week,
keep it up to improve your quality of life” as a tailored message
framed around the benefit.
Finally, when the user reaches their goal of 4 strength workouts
per week, they will receive “Great job, [name of user]! You’ve
completed this challenge, keep it up—you’re decreasing your
chance of getting heart disease by more than 10%!” as a
personalized tailored message based on the virtues of performing
PA.
##### Challenge 2: Walk More Than 10,000 Steps Every Day
This challenge consists of the user walking >10,000 steps daily
all 7 days of the week, based on the results of the recent
systematic review and meta-analysis conducted by Jayedi et al
[52], in which a clear decrease in the risk of all-cause mortality
is observed when walking >10,000 steps daily, in addition to a
12% decrease in the risk of all-cause mortality with each
increment of 1000 steps per day. The user will be asked to
replace sedentary time with PA practice and walk >10,000 steps
every day for 7 consecutive days.
After 1 day from the start of the challenge, if the user has not
walked 10,000 steps, they will receive “The practice of PA
reduces the risk of heart disease, hypertension, and type 2
diabetes” as a generic message to highlight the benefit of
meeting their goals.
After 3 days from the start of the challenge without walking
10,000 steps, the user will receive “Not reaching your daily step
goals will worsen your quality of life” as a targeted message
framed to highlight the harms of not meeting daily step goals.
When the user has walked >10,000 steps for 4 consecutive days,
they will receive “Cheer up! You have reached your daily goal
again, keep it up to improve your cardiovascular health” as a
tailored message framed around the benefit.
Finally, when the user manages to reach their daily step goal
for 7 consecutive days, they will receive “Congratulations,
[name of user]! You have completed this challenge, keep it up,
you have just improved your physical and mental health!” as a
personalized tailored message based on the virtues of performing
PA.
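The message rules for this challenge can be sketched as a selection function. The streak and miss counting below is a simplified reading of the rules above, and the message strings are truncated for brevity; none of this is the platform's actual code.

```python
def challenge2_message(steps_by_day, name):
    """Select the challenge 2 message from the rules in the text.
    `steps_by_day` lists the daily step counts logged so far."""
    hits = [s > 10_000 for s in steps_by_day]
    streak = 0
    for hit in hits:  # length of the success streak ending today
        streak = streak + 1 if hit else 0
    misses = len(hits) - sum(hits)
    if streak >= 7:   # challenge completed: personalized tailored message
        return f"Congratulations, {name}! You have completed this challenge"
    if streak >= 4:   # 4 consecutive days reached: tailored gain-framed message
        return "Cheer up! You have reached your daily goal again"
    if misses >= 3:   # 3 days missed: targeted loss-framed message
        return "Not reaching your daily step goals will worsen your quality of life"
    if misses >= 1:   # first miss: generic gain-framed message
        return "The practice of PA reduces the risk of heart disease"
    return None       # on track, no message due
```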
##### Challenge 3: Balance Training
This challenge consists of the user performing balance training ≥2 days per week. To meet this goal, the user will be asked to perform the eccentric training exercises using sliding disks [53] at least twice a week.
After 2 days from the start of the challenge without performing
any training, the user will receive “PA practice increases your
longevity” as a generic message to highlight the benefit of
meeting their goals.
After 4 days from the start of the challenge without performing
any training, the user will receive “By not performing your
balance training you are increasing the probability of falling”
as a targeted message framed to highlight the harms of not
meeting their weekly training goal.
When the user has completed 1 workout, they will receive
“Cheer up! You’ve had a workout this week, keep it up to
improve your balance” as a tailored message framed around the
benefit.
Finally, when the user reaches their goal of 2 weekly workouts, they will receive "Great job, [name of user]! You've completed this challenge, keep it up, you've just improved your balance and bone health!" as a personalized tailored message based on the virtues of performing balance training.
##### Challenge 4: Eat at Least 5 Servings of Fruits and Vegetables per Day
This challenge consists of the user eating at least 5 servings of
fruits and vegetables per day, based on the results of the recent
systematic review and meta-analysis conducted by Aune et al
[6] as well as the recommendations of the WHO [11] and the
FAO [7]. Both organizations recommend the consumption of
at least 5 servings of fruits and vegetables per day, and Aune
et al [6] report a 31% decrease in the risk of contracting diseases
with a daily intake of 800 g of fruits and vegetables, a 19%
decrease with a daily intake of 600 g of fruits, and a 25%
decrease with a daily intake of 600 g of vegetables. To perform
this challenge, the user will be asked to consume at least 5
servings of fruits and vegetables per day (1 serving is
approximately 150 g) on all 7 days of the week.
After 1 day from the start of the challenge, if the user has not
consumed at least 5 servings of fruits and vegetables, they will
receive “WHO recommends the consumption of fruits and
vegetables to reduce the risk of heart disease, hypertension, and
type 2 diabetes” as a generic message to highlight the benefit
of meeting their goals.
After 3 days from the start of the challenge without consuming
the 5 daily portions, the user will receive “If you don’t eat at
least five servings of fruits and vegetables a day, you increase
your risk of disease” as a targeted message framed to highlight
the harms of not meeting the daily goals.
When the user has reached the goal of eating at least 5 servings
of fruits and vegetables a day for 4 consecutive days, they will
receive “Cheers! You have reached your daily goal again, keep
it up to increase your life expectancy” as a tailored gain-framed
message.
Finally, when the user manages to reach their daily goal of
eating at least 5 servings of fruits and vegetables for 7
consecutive days, they will receive “Congratulations, [name of
user]! You have completed this challenge, keep it up, you are
reducing the probability of being diagnosed with cancer by more
than 10%!” as a personalized tailored message based on the
virtues of consuming fruits and vegetables.
##### Architectural Perspective
From an architectural perspective, the system is fundamentally
constituted through a BC network. In this network,
challenge-related achievements are stored, and their verification
is performed using SCs. As mentioned in the previous sections,
the objective of the system is to provide a motivating user
experience so that participants feel engaged in the proposed
activity and thus adhere to the challenges introduced in the
system. By using this registration and verification tool, users
can be assured of the veracity of their achievements.
To operate within the system, the user must use the dApp
provided for this purpose. This application runs on the user's
local device and is responsible for managing the user's identity
and for sending for publication on the BC network the data
registered for the event in which the user is taking part. This
monitoring of challenge-related activities should be carried out
in the least invasive way possible.
The BC network used for this purpose was hosted on an external
service provider that runs the Hyperledger nodes with support
for Web3 applications. In particular, tests were performed using
support from Kaleido [54].
According to the proposed model, the SC defined for each
challenge automates challenge-specific decision-making,
performing tasks such as the following:
- Verifying the fulfillment of the conditions for each
challenge: once the conditions for the challenge in question
have been met, the established rewards are assigned.
- Generating the established messages: these messages
correspond to certain challenge conditions that are analyzed
by the SC. Thus, when a user does not perform the walking
PA on a particular day, a corresponding alert is generated
and sent to the user.
For the interaction with the BC network, the deployed nodes
offer a representational state transfer application programming
interface that allows the invocation of remote services in a
simple way. A description of the most relevant procedures can
be found in Table 2. The user self-authenticates when sending
data by providing their public key and creating an encrypted
field with their private key to validate the information sent. In
other words, the access credentials are managed only by the
client device. It is also worth noting that any user or node can
obtain a complete list of records in the chain and retrieve the
messages that have not yet been delivered.

**Table 2.** Description of the most relevant procedures.

| Action | Resource | Purpose | Input parameters |
|---|---|---|---|
| POST | /v1/records | Add a challenge record | Challenge ID; user log-in; public key; challenge facts; encrypted hash with private key |
| GET | /v1/records | Obtain the complete data string or the data referring to a log-in or challenge | User log-in (optional); challenge (optional); initial date (optional) |
| GET | /v1/awards | Obtain the rewards associated with a log-in | User log-in |
| GET | /v1/messages | Obtain a user's pending messages | User log-in (optional); initial date (optional) |
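As an illustration of the POST /v1/records call in Table 2, the following sketch assembles a record and its integrity field. It is an assumption-laden mock-up: the field names are ours, and a plain SHA-256 digest plus a pluggable `sign` callback stand in for the "encrypted hash with private key" that the real dApp would compute with the user's key material; it is not the authors' implementation.

```python
import hashlib
import json

def build_record_payload(challenge_id: str, login: str, public_key: str,
                         facts: dict, sign) -> dict:
    """Assemble the body of a POST /v1/records call (field names assumed).

    `sign` stands in for the client's private-key operation: it receives the
    hash of the record and returns the "encrypted hash" field. The access
    credentials never leave the user's device; only `sign`'s output is sent.
    """
    record = {
        "challenge_id": challenge_id,
        "login": login,
        "public_key": public_key,
        "facts": facts,
    }
    # Canonical serialization so client and nodes hash identical bytes.
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    record["encrypted_hash"] = sign(digest)
    return record

# A toy `sign` for demonstration only; a real dApp would produce a
# signature with the user's private key instead.
payload = build_record_payload(
    "HIIT-7MIN", "user01", "pk-user01",
    {"sets": 3, "evidence": "hr_intervals.jpg"},
    sign=lambda h: f"sig({h[:8]})",
)
```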
As an example, in the case of challenge 1, _HIIT 7-minute
workout_, the user must perform 3 sets of the 7-minute HIIT
workout per day on at least 4 days per week. The user must
manually record the sets performed each day using the dApp
provided for this purpose. In addition, the user must attach a
JPEG file demonstrating the completion of the training (eg, a
screenshot of the heart rate variation intervals during the HIIT
execution). Subsequently, the SC corresponding to this
challenge, illustrated in Algorithm 1 within Textbox 3, verifies
the conditions established for this particular challenge. This
mechanism supports the handling of messages sent to users as
well as the allocation of a reward in the form of a transfer of
the network's own cryptocurrency.
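The verification rule just described (3 HIIT sets per day, with attached evidence, on at least 4 days of the week) can be sketched as follows. This is an illustrative Python rendering of the logic attributed to Algorithm 1, not the deployed chaincode; the names, thresholds as constants, and the reward amount are our assumptions:

```python
# Sketch of the weekly verification rule for the HIIT challenge described
# above. Names and the reward amount are illustrative, not the authors' code.

REQUIRED_SETS_PER_DAY = 3
REQUIRED_DAYS_PER_WEEK = 4

def verify_hiit_week(daily_records: dict) -> dict:
    """daily_records maps a day key to (sets_completed, has_evidence)."""
    compliant_days = sum(
        1 for sets, evidence in daily_records.values()
        if sets >= REQUIRED_SETS_PER_DAY and evidence
    )
    if compliant_days >= REQUIRED_DAYS_PER_WEEK:
        # On success the contract assigns the reward (a transfer of the
        # network's own cryptocurrency) and a congratulation message.
        return {"fulfilled": True, "reward": 1,
                "message": "Challenge conditions met"}
    return {"fulfilled": False, "reward": 0,
            "message": "Keep going: record 3 HIIT sets on more days"}
```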
The information stored in the network can also be used to
encourage competition among participants. To this end,
dashboards can be generated from these data showing the most
involved user in each activity (the one who has walked the most
steps, performed the most sets of the _HIIT 7-minute workout_,
or consumed the most servings of fruits and vegetables) or any
other parameter that may be of interest and can be used to
encourage participation. The idea is to reach a critical mass of
users among whom a habit of healthy activities is inculcated
through this system of rewards and competition among peers.
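Such a dashboard ranking can be derived directly from the records retrievable via GET /v1/records. A minimal sketch, assuming each record carries the user's log-in and the challenge facts (our simplified shape, not the network's actual schema):

```python
from collections import defaultdict

def leaderboard(records, metric="steps", top=3):
    """Aggregate per-user totals from chain records and rank them.

    `records` is assumed (our simplification) to be a list of entries such
    as {"login": "u1", "facts": {"steps": 9000}}, as returned by the
    GET /v1/records procedure.
    """
    totals = defaultdict(int)
    for rec in records:
        totals[rec["login"]] += rec["facts"].get(metric, 0)
    # Highest total first; slice to the requested dashboard size.
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top]
```

The same aggregation works for any recorded parameter (HIIT sets, servings of fruits and vegetables) by changing `metric`.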
Of note, laboratory tests carried out after the implementation
of the BC network showed that it functioned satisfactorily.
**Textbox 3. Algorithm 1: smart contract snippet for checking the high-intensity interval training challenge.**
### Discussion
With regard to the objectives and hypotheses set out in this
work, we have created a tool that encourages healthy lifestyle
habits in the population through challenges. BC technology is
key to the implementation of these habits and to monitoring
compliance in the least intrusive way and without the need to
rely on trusted third parties.
##### Limitations and Future Work
The limitations of this work include the limited consideration
of General Data Protection Regulation (GDPR) implications
and the need for manual information upload. To overcome the
latter limitation, in future work we propose the introduction of
artificial intelligence techniques and the use of wearables
connected to the dApp, similar to the method used in the study
by Santos-Gago et al [55].
##### Comparison With Prior Work
Regarding the characteristics considered relevant in the 17
articles cited in the Results section, the following aspects are
worth highlighting in comparison with our proposal.
Regarding the access policy, our platform is formed by a
permissioned network; therefore, only authorized nodes can
participate in the platform, as is the case in 3 (18%) of the 17
studies [33,34,36]. Other approaches [31,32,42,43,47] used a
permissioned private BC, Rupasinghe et al [44] used a
permissioned consortium BC, and Mulyati et al [40] used a
public BC network. The remaining studies (7/17, 41%) do not
indicate the type of network used.
Concerning SCs, 8 (47%) of the 17 studies [35-40,45,46] did
not report on their implementation. In contrast, our approach,
like that of the other 9 (53%) studies [31-34,41-44,47], makes
use of this special BC feature.
Regarding the use of cryptocurrencies to incentivize users, only
Alsalamah et al [31] take advantage of this feature; the other
cited works (16/17, 94%) do not indicate the use of
cryptocurrencies, and our work does not exploit this feature
either.
Concerning end-user delivery support, we implemented a dApp
to be able to offer all the features that only BC can provide,
similar to the approach used in 3 (18%) of the 17 studies
[31,40,43].
Finally, regarding the use of oracles, NFTs, and challenges
based on PA and scientific evidence-based eating, none of the
aforementioned works indicate the use of these features. On the
one hand, our proposal does not involve the use of oracles
either; nevertheless, under the suggested model, oracles can be
included as actors with minor updates at the low level of the
designed system. On the other hand, we use rewards to achieve
a gamification experience and engage users in healthy lifestyle
habits through challenges. We have based these challenges,
composed of PA and healthy eating habits, on scientific
evidence supported by relevant organizations such as the WHO
and the FAO.
##### Conclusions
The emergence of disruptive technologies such as BC has
opened the door to new possibilities in the provision of services
to society. This work explores the potential of this technology
in the development of services that improve people’s quality of
life. To this end, strategies have been developed that allow,
using gamification, the monitoring of adherence to new healthy
habits in a simple way and, consequently, help to increase
adherence.
The use of BC technology has been fundamental for meeting
these objectives. The review of previous works shows that the
potential of BC has not been fully exploited; our model aims
to show how to use this technology to its full extent.
Consequently, the following aspects of the platform are worth
highlighting:
- The verification of challenge completion is carried out
autonomously, without the need for supervision by a human
agent and without the possibility of blockage. In the same
process, the reward allocation is carried out, which cannot
be interfered with by any system agent.
- All participants in the system can verify the status of
challenges at all times, thus increasing system transparency.
- As trust in the data resides in the network itself, there is no
need to rely on a third party; this removes distrust even
when the promoter of the challenge is not known at first
hand.
By contrast, when using this technology, certain deployment
aspects must be taken into account. Chief among them, future
practitioners must be aware that once an SC is deployed, it
cannot be modified, unlike other technologies where the
software is easily updatable. It is therefore very important to
test the system adequately during development before deploying
it in production.
This technology offers other elements that can improve users'
adherence to the system but have not yet been implemented in
this prototype: the use of NFTs to reward the fulfillment of
certain challenges or meta-challenges, and the use of oracles
for the unsupervised acquisition of information, which would
eliminate the need for user input and improve SC
decision-making.
Although the system is still pending functional validation in a
realistic environment, the experimental result has been
satisfactory. A simple tool has been generated for the user, with
a scalable and inexpensive deployment for service providers
and with great potential for improving people’s health. In this
aspect, the adequate generation of training plans has played a
fundamental role. These have been obtained from validated
medical sources (eg, the WHO and the PAG) and therefore offer
a high level of confidence.
A negative aspect of the system, pending more rigorous
treatment, is compliance with the GDPR. This legal framework
establishes a series of conditions, such as the elimination of
user information when the user demands it. However, it should
be noted that, in our proposal, no personal or medical data are
recorded directly on the BC. The personal data are stored in the
personal device, and the data regarding the completion of the
different challenges are recorded on the BC. In fact, there are
already critical voices regarding these aspects, and they are
calling for a revision of the legal framework to facilitate the
adoption of these new technologies [56].
In conclusion, a tool has been created through which healthy
lifestyle habits can be inculcated in terms of both PA and healthy
eating. Furthermore, it has been automated in the most
transparent, safe, and least intrusive way possible using BC
technology. Thereby, a tool to reduce the risk of all-cause
mortality and to increase the well-being of society has been
developed.
##### Acknowledgments
This research was cofunded by grant PID2020-115137RB-I00, funded by the Ministry of Science and Innovation
(MCIN/AEI/10.13039/501100011033), and by the _Predoctoral grant program of the Xunta de Galicia_ (Regional Ministry of
Culture, Education, and University Organization).
##### Authors' Contributions
JL-B and LA-S were responsible for the conceptualization of the study as well as the software. All authors were responsible for
the methodology, formal analysis, and data curation. The original draft was prepared by JL-B and LA-S. All authors reviewed
and edited the draft and have read and approved the published version of the manuscript.
##### Conflicts of Interest
None declared.
##### References
1. Bull FC, Al-Ansari SS, Biddle S, Borodulin K, Buman MP, Cardon G, et al. World Health Organization 2020 guidelines on physical activity and sedentary behaviour. Br J Sports Med 2020 Dec 25;54(24):1451-1462. doi: 10.1136/bjsports-2020-102955. Medline: 33239350
2. Piercy KL, Troiano RP, Ballard RM, Carlson SA, Fulton JE, Galuska DA, et al. The physical activity guidelines for Americans. JAMA 2018 Nov 20;320(19):2020-2028. doi: 10.1001/jama.2018.14854. Medline: 30418471
3. World Health Organization. WHO Guidelines on Physical Activity and Sedentary Behaviour: Web Annex: Evidence Profiles. Geneva: World Health Organization; 2020.
4. Yang Y, Dixon-Suen S, Dugué PA, Hodge A, Lynch B, English D. Physical activity and sedentary behaviour over adulthood in relation to all-cause and cause-specific mortality: a systematic review of analytic strategies and study findings. Int J Epidemiol 2022 May 09;51(2):641-667. doi: 10.1093/ije/dyab181. Medline: 34480556
5. Barrera J. Actividad física como estilo de vida saludable: criterios básicos. Revista Médica de Risaralda 2003 Nov;9(2):43.
6. Aune D, Giovannucci E, Boffetta P, Fadnes LT, Keum N, Norat T, et al. Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality-a systematic review and dose-response meta-analysis of prospective studies. Int J Epidemiol 2017 Jun 01;46(3):1029-1056. doi: 10.1093/ije/dyw319. Medline: 28338764
7. Food and Agriculture Organization. Frutas y verduras – esenciales en tu dieta. Año Internacional de las Frutas y Verduras, 2021. Documento de antecedentes. Rome, Italy: Food and Agriculture Organization; 2020.
8. Hooper L, Abdelhamid A, Bunn D, Brown T, Summerbell C, Skeaff C. Effects of total fat intake on body weight. Cochrane Database Syst Rev 2015 Aug 07(8):CD011834. doi: 10.1002/14651858.CD011834. Medline: 26250104
9. Nishida C, Uauy R. WHO Scientific Update on health consequences of trans fatty acids: introduction. Eur J Clin Nutr 2009 May 06;63 Suppl 2(S2):S1-S4. doi: 10.1038/ejcn.2009.13. Medline: 19424215
10. World Health Organization. Diet, Nutrition and the Prevention of Chronic Diseases: Report of a Joint WHO/FAO Expert Consultation. Geneva: World Health Organization; 2003.
11. World Health Organization. Food-based dietary guidelines in the WHO European Region. WHO Regional Office for Europe. 2003. URL: https://apps.who.int/iris/handle/10665/107490 [accessed 2022-05-23]
12. World Health Organization. Guideline: Sugars Intake for Adults and Children. Geneva: World Health Organization; 2015.
13. Williamson C, Kelly P, Tomasone JR, Bauman A, Mutrie N, Niven A, et al. A modified Delphi study to enhance and gain international consensus on the Physical Activity Messaging Framework (PAMF) and Checklist (PAMC). Int J Behav Nutr Phys Act 2021 Aug 19;18(1):108. doi: 10.1186/s12966-021-01182-z. Medline: 34412638
14. Williamson C, Baker G, Mutrie N, Niven A, Kelly P. Get the message? A scoping review of physical activity messaging. Int J Behav Nutr Phys Act 2020 Apr 15;17(1):51. doi: 10.1186/s12966-020-00954-3. Medline: 32295613
15. Conroy DE, Hojjatinia S, Lagoa CM, Yang C, Lanza ST, Smyth JM. Personalized models of physical activity responses to text message micro-interventions: a proof-of-concept application of control systems engineering methods. Psychol Sport Exerc 2019 Mar;41:172-180. doi: 10.1016/j.psychsport.2018.06.011. Medline: 30853855
16. Latimer AE, Brawley LR, Bassett RL. A systematic review of three approaches for constructing physical activity messages: what messages work and what improvements are needed? Int J Behav Nutr Phys Act 2010 May 11;7(1):36. doi: 10.1186/1479-5868-7-36. Medline: 20459779
17. Bassett RL, Ginis KAM. Risky business: the effects of an individualized health information intervention on health risk perceptions and leisure time physical activity among people with spinal cord injury. Disabil Health J 2011 Jul;4(3):165-176. doi: 10.1016/j.dhjo.2010.12.001. Medline: 21723523
18. Bassett-Gunter RL, Martin Ginis KA, Latimer-Cheung AE. Do you want the good news or the bad news? Gain- versus loss-framed messages following health risk information: the effects on leisure time physical activity beliefs and cognitions. Health Psychol 2013 Dec;32(12):1188-1198. doi: 10.1037/a0030126. Medline: 23088175
19. Lin I, Liao T. A survey of blockchain security issues and challenges. Int J Netw Secur 2017;19(5):653-659. doi: 10.6633/IJNS.201709.19(5).01
20. Kim JW. Analysis of blockchain ecosystem and suggestions for improvement. J Inform Commun Convergence Eng 2021 Mar 31;19(1):8-15. doi: 10.6109/jicce.2021.19.1.8
21. Monrat AA, Schelen O, Andersson K. A survey of blockchain from the perspectives of applications, challenges, and opportunities. IEEE Access 2019;7:117134-117151. doi: 10.1109/access.2019.2936094
22. Oyinloye DP, Teh JS, Jamil N, Alawida M. Blockchain consensus: an overview of alternative protocols. Symmetry 2021 Jul 27;13(8):1363. doi: 10.3390/sym13081363
23. Hussien HM, Yasin SM, Udzir NI, Ninggal MI, Salman S. Blockchain technology in the healthcare industry: trends and opportunities. J Industrial Inform Integration 2021 Jun;22:100217. doi: 10.1016/j.jii.2021.100217
24. Bukhari D. Blockchain technology: a bibliometric analysis. In: HCI International 2020 - Posters. Cham: Springer; 2020.
25. Lopez-Barreiro J, Alvarez-Sabucedo L, Garcia-Soidan JL, Santos-Gago JM. Use of blockchain technology in the domain of physical exercise, physical activity, sport, and active ageing: a systematic review. Int J Environ Res Public Health 2022 Jul 02;19(13):8129. doi: 10.3390/ijerph19138129. Medline: 35805788
26. Buterin V. Ethereum White Paper. 2013. URL: https://bibbase.org/network/publication/buterin-ethereumwhitepaperanextgenerationsmartcontractdecentralizedapplicationplatform-2013 [accessed 2022-05-15]
27. Beniiche A. A study of blockchain oracles. arXiv 2020. URL: https://arxiv.org/abs/2004.07140
28. Cai W, Wang Z, Ernst JB, Hong Z, Feng C, Leung VC. Decentralized applications: the blockchain-empowered software system. IEEE Access 2018;6:53019-53033. doi: 10.1109/access.2018.2870644
29. Gómez-Díaz R, García-Rodríguez A. Bibliotecas, juegos y gamificación: una tendencia de presente con mucho futuro. ThinKEPI 2018 Apr 25;12:125. doi: 10.3145/thinkepi.2018.13
30. Miau S, Yang J. Bibliometrics-based evaluation of the Blockchain research trend: 2008 – March 2017. Technol Analysis Strategic Manage 2018 Jan 31;30(9):1029-1045. doi: 10.1080/09537325.2018.1434138
31. Alsalamah HA, Nasser S, Alsalamah S, Almohana AI, Alanazi A, Alrrshaid F. Wholesome coin: a pHealth solution to reduce high obesity rates in gulf cooperation council countries using cryptocurrency. Front Blockchain 2021 Jul 12;4. doi: 10.3389/fbloc.2021.654539
32. Frikha T, Chaari A, Chaabane F, Cheikhrouhou O, Zaguia A. Healthcare and fitness data management using the IoT-based blockchain platform. J Healthc Eng 2021 Jul 9;2021:9978863. doi: 10.1155/2021/9978863. Medline: 34336176
33. Jamil F, Kahng HK, Kim S, Kim D. Towards secure fitness framework based on IoT-enabled blockchain network integrated with machine learning algorithms. Sensors (Basel) 2021 Feb 26;21(5):1640. doi: 10.3390/s21051640. Medline: 33652773
34. Jamil F, Qayyum F, Alhelaly S, Javed F, Muthanna A. Intelligent microservice based on blockchain for healthcare applications. Comput Material Continua 2021;69(2):2513-2530. doi: 10.32604/cmc.2021.018809
35. Cao P, Zhu G, Zhang Q, Wang F, Liu Y, Mo R. Blockchain-enabled HMM model for sports performance prediction. IEEE Access 2021;9:40255-40262. doi: 10.1109/access.2021.3064261
36. Hong Y, Park DW. Big data and blockchain to improve performance of professional sports teams. ASM Sci J 2020;13(1):19-27.
37. Yu S. Application of blockchain-based sports health data collection system in the development of sports industry. Mobile Inform Syst 2021 Jun 10;2021:1-6. doi: 10.1155/2021/4663147
38. Ma F. Design of running training assistance system based on blockchain technology in wireless network. J Wireless Com Network 2021 Jan 31;2021(1). doi: 10.1186/s13638-021-01897-4
39. Shan Y, Mai Y. Research on sports fitness management based on blockchain and Internet of Things. J Wireless Com Network 2020 Oct 15;2020(1). doi: 10.1186/s13638-020-01821-2
40. Mulyati, Rahardja U, Hardini M, Al Nasir AL, Aini Q. Taekwondo sports test and training data management using blockchain. In: Proceedings of the 2020 Fifth International Conference on Informatics and Computing (ICIC); Nov 03-04, 2020; Gorontalo, Indonesia. doi: 10.1109/icic50835.2020.9288598
41. Khezr S, Benlamri R, Yassine A. Blockchain-based model for sharing activities of daily living in healthcare applications. In: Proceedings of the 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC/PiCom/CBDCom/CyberSciTech); Aug 17-22, 2020; Calgary, AB, Canada. doi: 10.1109/dasc-picom-cbdcom-cyberscitech49142.2020.00109
42. Rahman MA, Hossain MS, Loukas G, Hassanain E, Rahman SS, Alhamid MF, et al. Blockchain-based mobile edge computing framework for secure therapy applications. IEEE Access 2018;6:72469-72478. doi: 10.1109/access.2018.2881246
43. Rahman M, Abualsaud K, Barnes S, Rashid M, Abdullah S. A natural user interface and blockchain-based in-home smart health monitoring system. In: Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT); Feb 02-05, 2020; Doha, Qatar. doi: 10.1109/iciot48696.2020.9089613
44. Rupasinghe T, Burstein F, Rudolph C, Strange S. Towards a blockchain based fall prediction model for aged care. In: Proceedings of the Australasian Computer Science Week Multiconference (ACSW 2019); Jan 29-31, 2019; Sydney, NSW, Australia. doi: 10.1145/3290688.3290736
45. Silva CA, Aquino GS, Melo SR, Egídio DJ. A fog computing-based architecture for medical records management. Wireless Commun Mobile Computing 2019 Feb 27;2019:1-16. doi: 10.1155/2019/1968960
46. Spinsante S, Poli A, Mongay Batalla J, Krawiec P, Dobre C, Bǎjenaru L, et al. Clinically-validated technologies for assisted living. J Ambient Intell Human Comput 2021 Aug 16;14(3):2095-2116. doi: 10.1007/s12652-021-03419-y
47. Velmovitsky PE, Miranda PA, Vaillancourt H, Donovska T, Teague J, Morita PP. A blockchain-based consent platform for active assisted living: modeling study and conceptual framework. J Med Internet Res 2020 Dec 04;22(12):e20832. doi: 10.2196/20832. Medline: 33275111
48. Momma H, Kawakami R, Honda T, Sawada SS. Muscle-strengthening activities are associated with lower risk and mortality in major non-communicable diseases: a systematic review and meta-analysis of cohort studies. Br J Sports Med 2022 Jul 28;56(13):755-763. doi: 10.1136/bjsports-2021-105061. Medline: 35228201
49. Leenders M, Sluijs I, Ros MM, Boshuizen HC, Siersema PD, Ferrari P, et al. Fruit and vegetable consumption and mortality: European prospective investigation into cancer and nutrition. Am J Epidemiol 2013 Aug 15;178(4):590-602. doi: 10.1093/aje/kwt006. Medline: 23599238
50. Chowdhury MA, Hossain N, Kashem MA, Shahid MA, Alam A. Immune response in COVID-19: a review. J Infect Public Health 2020 Nov;13(11):1619-1629. doi: 10.1016/j.jiph.2020.07.001. Medline: 32718895
51. Klika B, Jordan C. High-intensity circuit training using body weight: maximum results with minimal investment. ACSM's Health Fitness J 2013;17(3):8-13. doi: 10.1249/FIT.0b013e31828cb1e8
52. Jayedi A, Gohari A, Shab-Bidar S. Daily step count and all-cause mortality: a dose-response meta-analysis of prospective cohort studies. Sports Med 2022 Jan 21;52(1):89-99. doi: 10.1007/s40279-021-01536-4. Medline: 34417979
53. Lopez-Barreiro J, Hernandez-Lucas P, Garcia-Soidan JL, Romo-Perez V. Effects of an eccentric training protocol using gliding discs on balance and lower body strength in healthy adults. J Clin Med 2021 Dec 19;10(24):5965. doi: 10.3390/jcm10245965. Medline: 34945261
54. Kaleido. URL: https://www.kaleido.io/ [accessed 2023-04-10]
55. Santos-Gago JM, Ramos-Merino M, Alvarez-Sabucedo LM. Identification of free and WHO-compliant handwashing moments using low cost wrist-worn wearables. IEEE Access 2021;9:133574-133593. doi: 10.1109/access.2021.3115434
56. Noh JH, Kwon HY. A study on smart city security policy based on blockchain in 5G Age. In: Proceedings of the 2019 International Conference on Platform Technology and Service (PlatCon); Jan 28-30, 2019; Jeju, Korea (South). doi: 10.1109/platcon.2019.8669406
##### Abbreviations
**BC:** blockchain
**dApp:** decentralized application
**FAO:** Food and Agriculture Organization
**GDPR:** General Data Protection Regulation
**HIIT:** high-intensity interval training
**NFT:** nonfungible token
**OECD:** Organization for Economic Co-operation and Development
**PA:** physical activity
**PAG:** physical activity guidelines for Americans
**POA:** proof of authority
**POW:** proof of work
**SC:** smart contract
**WHO:** World Health Organization
©Juan Lopez-Barreiro, Luis Alvarez-Sabucedo, Jose-Luis Garcia-Soidan, Juan M Santos-Gago. Originally published in the
Interactive Journal of Medical Research (https://www.i-jmr.org/), 19.04.2023. This is an open-access article distributed under
the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted
use, distribution, and reproduction in any medium, provided the original work, first published in the Interactive Journal of Medical
Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.i-jmr.org/,
as well as this copyright and license information must be included.
},
{
"paperId": "7688a610b12dfa63009badf74a5664497e5eaa6c",
"title": "A Study on smart city security policy based on blockchain in 5G Age"
},
{
"paperId": "38edb34e714ab132556087f89f449c525b2b6e1f",
"title": "Personalized models of physical activity responses to text message micro‐interventions: A proof‐of‐concept application of control systems engineering methods"
},
{
"paperId": "a52644fb3bfc1c0141c2c0173e27ad4145c169eb",
"title": "A Fog Computing-Based Architecture for Medical Records Management"
},
{
"paperId": "0ecf9afc84bf0d1d81a35cd7123a19fc03766542",
"title": "Towards a Blockchain based Fall Prediction Model for Aged Care"
},
{
"paperId": "e882106e4f095938b3b3dd515d06d7c192dd4db6",
"title": "The Physical Activity Guidelines for Americans"
},
{
"paperId": "a4b2837509af0c33ac182a5b84bd47e2e38a61f7",
"title": "Blockchain-Based Mobile Edge Computing Framework for Secure Therapy Applications"
},
{
"paperId": "acea0aada3b7c58b3711507f0935cfb4606eab72",
"title": "Decentralized Applications: The Blockchain-Empowered Software System"
},
{
"paperId": "3936100e571ecab027fc9497a3e38d3333136e3d",
"title": "Bibliotecas, juegos y gamificación: una tendencia de presente con mucho futuro"
},
{
"paperId": "012e43def9df8256593cebdbbecf21ae0915f870",
"title": "Bibliometrics-based evaluation of the Blockchain research trend: 2008 – March 2017"
},
{
"paperId": "48e9fa12cbc0b653f762ca4cf5b0ca44e33ef5ae",
"title": "Fruit and vegetable intake and the risk of cardiovascular disease, total cancer and all-cause mortality—a systematic review and dose-response meta-analysis of prospective studies"
},
{
"paperId": "59df356ab46fd214f9dafc8f0a24e1d582a60c05",
"title": "Effects of total fat intake on body weight."
},
{
"paperId": "3586abc9af0ec9393552c695d6874ce71052a405",
"title": "Do you want the good news or the bad news? Gain- versus loss-framed messages following health risk information: The effects on leisure time physical activity beliefs and cognitions."
},
{
"paperId": "7d61b70167ae785df4a823076fdaa8f15b0082a7",
"title": "Fruit and vegetable consumption and mortality: European prospective investigation into cancer and nutrition."
},
{
"paperId": "6865eed3b4ad7ef54c53d591c698b1911f978f64",
"title": "HIGH-INTENSITY CIRCUIT TRAINING USING BODY WEIGHT: Maximum Results With Minimal Investment"
},
{
"paperId": "0cfbad99da61e471c36cb418e0ea224c281f2dda",
"title": "Actividad física como estilo de vida saludable: criterios básicos"
},
{
"paperId": "c6c1bbc9a9ba847266471b0793d15fa8bd35a927",
"title": "Risky business: the effects of an individualized health information intervention on health risk perceptions and leisure time physical activity among people with spinal cord injury."
},
{
"paperId": "a79c2bbc61995e3533ed19daa57e2c42a0d73127",
"title": "A systematic review of three approaches for constructing physical activity messages: What messages work and what improvements are needed?"
},
{
"paperId": "d7d85296126470e31df8a731a9c42ee6712cde2d",
"title": "WHO Scientific Update on health consequences of trans fatty acids: introduction"
},
{
"paperId": "f7ddf5129b392cc23411c2ca28ece3bc0ad682d6",
"title": "A Survey Of Blockchain Security Issues And Challenges"
},
{
"paperId": "ce0377f7ad2ff95f05a6c9469ef639b90d27d5c1",
"title": "Intelligent Microservice Based on Blockchain for Healthcare Applications"
},
{
"paperId": "00238985ff4816a79a13c1c6a7fccfe3661e658a",
"title": "Identification of Free and WHO-Compliant Handwashing Moments Using Low Cost Wrist-Worn Wearables"
},
{
"paperId": "2ed5afb298dfe363c9fc56993ade4067454cf491",
"title": "Application of Blockchain-Based Sports Health Data Collection System in the Development of Sports Industry"
},
{
"paperId": "de353d7dbe8c04a56439e52f1c7e9ef20d4866ed",
"title": "Blockchain-Enabled HMM Model for Sports Performance Prediction"
},
{
"paperId": "9af89eaf0a1091f01f1a3abe36377597265aa22e",
"title": "Analysis of Blockchain Ecosystem and Suggestions for Improvement"
},
{
"paperId": "c89ebf3801428fc8f1b45ca05c7e411dafbec7f7",
"title": "Frutas y verduras – esenciales en tu dieta"
},
{
"paperId": null,
"title": "Big data and blockchain to improve performance of professional sports teams"
},
{
"paperId": null,
"title": "WHO Guidelines on Physical Activity and Sedentary Behaviour: Web Annex: Evidence Profiles"
},
{
"paperId": null,
"title": "Guideline: Sugars Intake for Adults and Children"
},
{
"paperId": null,
"title": "Ethereum White Paper"
},
{
"paperId": "36971037afe88abc84046f432b2755cc6b1c00d3",
"title": "Diet, nutrition and the prevention of chronic diseases : report of a Joint WHO/FAO Expert Consultation"
},
{
"paperId": null,
"title": "Food-based dietary guidelines in the WHO European Region"
},
{
"paperId": null,
"title": "Lopez-Barreiro INTERACTIVE"
}
] | 16,027
|
en
|
[
{
"category": "Computer Science",
"source": "s2-fos-model"
},
{
"category": "Engineering",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/034422b48471f5430ea18c868c84cc1e7c3e828a
|
[] | 0.907517
|
A Blind Load-Balancing Algorithm (BLBA) for Distributing Tasks in Fog Nodes
|
034422b48471f5430ea18c868c84cc1e7c3e828a
|
Wireless Communications and Mobile Computing
|
[
{
"authorId": "2181351259",
"name": "Niloofar Tahmasebi-Pouya"
},
{
"authorId": "1695542",
"name": "M. Sarram"
},
{
"authorId": "38120993",
"name": "S. Mostafavi"
}
] |
{
"alternate_issns": null,
"alternate_names": [
"Wirel Commun Mob Comput"
],
"alternate_urls": [
"https://onlinelibrary.wiley.com/journal/15308677",
"http://www.interscience.wiley.com/jpages/1530-8669/"
],
"id": "501c1070-b5d2-4ff0-ad6f-8769a0a1e13f",
"issn": "1530-8669",
"name": "Wireless Communications and Mobile Computing",
"type": "journal",
"url": "https://www.hindawi.com/journals/wcmc/"
}
|
In the distributed infrastructure of fog computing, fog nodes (FNs) can process user requests locally. In order to reduce the delay and response time of a user’s requests, incoming requests must be evenly distributed among FNs. For this purpose, in this paper, we propose a blind load-balancing algorithm (BLBA) to improve the load distribution in the fog environment. In the proposed algorithm, the mobile device sends a task to a FN. Then, the FN decides to process that task using the Double-Q-learning algorithm. One of the critical advantages of BLBA is that decision-making on tasks is done without any knowledge of the state of neighbor nodes. The proposed system consists of four layers: (i) IoT layer, (ii) fog layer, (iii) proxy server layer, and (iv) cloud layer. The experimental results show that the proposed algorithm with proper distribution of tasks between nodes significantly reduces the delay and user response time compared to the existing methods.
|
Hindawi
Wireless Communications and Mobile Computing
Volume 2022, Article ID 1533949, 11 pages
[https://doi.org/10.1155/2022/1533949](https://doi.org/10.1155/2022/1533949)
# Research Article A Blind Load-Balancing Algorithm (BLBA) for Distributing Tasks in Fog Nodes
## Niloofar Tahmasebi-Pouya, Mehdi-Agha Sarram, and Seyedakbar Mostafavi
Computer Engineering Department, Yazd University, Yazd, Iran
Correspondence should be addressed to Seyedakbar Mostafavi; a.mostafavi@yazd.ac.ir
Received 8 March 2022; Revised 28 June 2022; Accepted 20 July 2022; Published 11 August 2022
Academic Editor: Andrea Marin
[Copyright © 2022 Niloofar Tahmasebi-Pouya et al. This is an open access article distributed under the Creative Commons](https://creativecommons.org/licenses/by/4.0/)
[Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work](https://creativecommons.org/licenses/by/4.0/)
is properly cited.
In the distributed infrastructure of fog computing, fog nodes (FNs) can process user requests locally. In order to reduce the delay
and response time of a user’s requests, incoming requests must be evenly distributed among FNs. For this purpose, in this paper,
we propose a blind load-balancing algorithm (BLBA) to improve the load distribution in the fog environment. In the proposed
algorithm, the mobile device sends a task to a FN. Then, the FN decides to process that task using the Double-Q-learning
algorithm. One of the critical advantages of BLBA is that decision-making on tasks is done without any knowledge of the state
of neighbor nodes. The proposed system consists of four layers: (i) IoT layer, (ii) fog layer, (iii) proxy server layer, and (iv)
cloud layer. The experimental results show that the proposed algorithm with proper distribution of tasks between nodes
significantly reduces the delay and user response time compared to the existing methods.
## 1. Introduction
Fog computing is a distributed computing model that
extends cloud services to the edge of the network to facilitate
the management and scheduling of computing, networking,
and storage services between data centers and end devices.
Both fog computing and cloud computing provide computing, storage, and networking services to end-users, but fog is
closer to the end-user, thus providing minimal delay for
Internet of Things (IoT) applications. FNs are located in a
layer between IoT and the cloud data center. FNs can process data streams and user requests in real time, reducing network delay and congestion [1–3].
IoT devices typically assign processing tasks to the nearest neighbor node. In this case, some FNs may receive more
tasks than other FNs and be overloaded over time. To avoid
this situation, load-balancing methods are suggested to distribute loads over the nodes. Load-balancing in FNs refers
to the even distribution of input tasks across a group of processing FNs so that the capacity of FNs is fairly utilized and
task processing speed is increased [4–8]. FNs can allocate
their tasks to underloaded neighbor nodes or the cloud
through the load-balancing approach and reduce overload
and processing delay as much as possible. The load-balancing approaches in the fog environment can be categorized as static and dynamic. Static load-balancing algorithms
perform load-balancing and apply fixed rules to distribute
task requests. On the other side, in dynamic load-balancing,
the tasks are assigned dynamically to the FNs based on a
long-term knowledge of load distribution. In other words,
dynamic load-balancing approaches update their load-balancing rules frequently based on the new knowledge of
traffic loads [9, 10]. Dynamic load-balancing algorithms
can be divided into two categories: (1) sender-initiated techniques where congested nodes look for lightly loaded nodes
and offload their tasks to them and (2) receiver-initiated
strategies where underloaded nodes search for overloaded
nodes and steal their tasks [11, 12]. In this paper, we propose
a dynamic load-balancing method based on sender-initiated
strategies for task distribution over FNs.
The nature-inspired load-balancing algorithms can be
classified into three different types: heuristic, metaheuristic,
and hybrid. The purpose of designing heuristics is to achieve
the optimal response in a specified period [13–15]. Metaheuristic algorithms require more execution time to achieve
the optimal response, and these algorithms have a more
extensive response space than heuristics [16–19]. Hybrid
algorithms combine heuristic or metaheuristic algorithms
that reduce execution time and cost and provide more efficient results than other algorithms [20–24]. The use of typical load-balancing methods improves resource utilization and savings, and reduces delay and response
time. However, these algorithms may lose their efficiency
because of the time-varying dynamics of traffic load in fog
computing. Therefore, we need an algorithm that can adapt
to dynamics in environmental conditions. For this purpose,
we introduce a decision-making process based on the Double-Q-learning algorithm to evenly distribute processing
tasks among FNs. The main contributions of the proposed
approach are summarized below:
(i) Architecture. The proposed system considers a four-layer architecture to handle load-balancing problems in the fog environment. Because of this architecture, tasks are processed locally at FNs, and
there is no need to transfer data to the cloud.
(ii) Algorithm. This work proposes a decision-making
process based on the Double-Q-learning algorithm
to find a low-load FN. The FN using the Double-Q-learning algorithm selects an available neighbor
FN or cloud to assign the task. This algorithm can
be trained to maximize the long-term reward. In
this algorithm, the agent makes decisions about
the task processing without knowledge of the fog
environment and only based on the observations
and rewards. The results show that the load-balancing method based on the Double-Q-learning
algorithm has significantly reduced the delay and
response time than the compared approaches.
(iii) Mechanism. This work proposes a load-balancing
method based on the Double-Q-learning algorithm
to distribute the tasks evenly among FNs with the
goal of reducing processing time. To our knowledge,
most of the methods presented in previous research
for decision-making require knowledge of the
capacity of neighbor FNs and the cloud, which creates a traffic load on the network and also delays the
decision-making process. In the BLBA, the FN just
decides based on the information obtained during
the learning period from delay and rewards based
on its own condition and has no knowledge of the
status of neighbor nodes. In this method, the FN
learns to assign the received task to a low-load node
for faster processing.
Our algorithm operates in such a way that the nodes
have no initial knowledge of the position of other nodes in
the fog environment, and during the learning period, they
act only on the basis of experiences related to their conditions and have no knowledge of other neighbor nodes. We
refer to such an algorithm as the blind algorithm for the
proper distribution of tasks between FNs to stress the fact
that there is no prior knowledge of the status of neighbor
nodes. This algorithm can be implemented on other networks and run on the fly.
We organized this paper as follows: In Section 2, we offer
the related works. In Section 3, we describe the proposed
system architecture, the reinforcement learning algorithm,
and how to compute the delay in this system. In Section 4,
we introduce the proposed load-balancing method. In Section 5, the simulation results and our analysis of these results
are presented. Finally, Section 6 offers a conclusion and suggestions for future work.
## 2. Related Work
In this section, we review the previous works on load-balancing using reinforcement learning. To minimize overload and reduce delay, it is critical to use an optimal load-balancing algorithm. In fog computing, IoT devices and
mobile users typically assign their tasks to the nearest FN.
Since these devices are often mobile, different FNs may have
different loads depending on their position in the network.
This causes an imbalance in the distribution of tasks
between FNs, and some FNs may be overloaded, while other
FNs are idle or low-load. In distributed environments, we
can use reinforcement learning to design load-balancing
algorithms that learn traffic patterns and automatically distribute the load evenly among the nodes. Some authors have
used the benefits of reinforcement learning algorithms to
solve the load-balancing problem. The existing studies can
be classified from many perspectives. Here, we review several previous works that are aware of the capacity and load
of the nodes.
2.1. Literature Review. Many of the previous studies are
founded on the assumption of knowledge of node
capacity. Baek et al. [10] proposed a decision-making process based on reinforcement learning to find the optimal offloading decision with unknown reward and transition
functions. In this method, FNs can send some tasks to an
available neighbor FN. The purpose of this is to minimize
overload probability and processing time. Xu et al. [25]
introduced a dynamic resource allocation method for load-balancing in the fog environment. Technically, they presented a system framework for fog computing and the
load-balancing analysis for various types of computing
nodes. Then, they designed a corresponding resource allocation method in the fog environment through static resource
allocation and dynamic service migration to achieve load-balancing for fog computing systems. Moon et al. [26]
defined a computational task migration problem for balancing loads of vehicular edge computing servers (VECSs)
and minimizing migration costs. To solve this problem, they
adopt a reinforcement learning algorithm in a cooperative
VECS group environment that can collaborate with VECSs
in the group. The objective of this study is to optimize
load-balancing and migration cost while satisfying the delay
constraints of the computation task of vehicles. Wu et al. [27]
proposed a reinforcement learning-based metadata dynamic
load-balancing mechanism. This method can control the
load dynamically according to the performance of the metadata servers, and it has good adaptability in the case of a sudden change in data volume.
Other studies are based on the assumption of knowledge
of node loads or future load predictions. Razaq et al. [28]
proposed a Q-learning-based algorithm for load-balancing
in the fog environment, in which a task is divided into several pieces based on security requirements to help in privacy
preservation. In this algorithm, the agent assigns a task piece
to a node with an equal or higher security reputation than
the security level of a task piece that can provide service to
avoid overload on the nodes. Xu et al. [29] proposed a work
donation algorithm based on reinforcement learning for
distributed-memory systems to optimize load-balancing
with minimized communication costs and dynamically
adapt to flow behaviors and available network bandwidth.
Then, they designed a high-order load estimation model to
predict blockwise particle advection loads and used a linear
transmission model to estimate interprocess communications’ costs. Mai et al. [30] suggested a reinforcement
learning-based method that uses evolution strategies to
assign tasks between fog servers to minimize processing
latency in the long term. Talaat et al. [31] introduced a
load-balancing and optimization strategy using a dynamic
resource allocation method based on reinforcement learning
and genetic algorithm. This method collects the load
information for each server, handles the incoming requests,
and distributes them between the servers evenly. Divya and
Sri [32] proposed a reinforcement learning-based load-balancing method by combining software-defined networks
and fog computing. The proposed method understands the
network behavior and balances the loads to provide the
maximum possible availability of the resources. Lu et al.
[33] used improved deep reinforcement learning based on
LSTM and candidate networks to solve tasks offloading in
mobile edge computing. Li et al. [34] suggested a load-balancing method using an online reinforcement learning
algorithm for load distribution in vehicular networks. This
algorithm achieves a suitable association solution through
continuous learning from the dynamic vehicular environment. Lin et al. [35] introduced a reinforcement learning-based approach aimed at load-balancing for data center networks. This approach employs reinforcement learning to
learn a network and control it based on the learned experience. Li et al. [36] suggested an algorithm based on machine
learning which is aimed at generating intelligent adaptive
strategies related to load-balancing of collaborative servers
and dynamic scheduling of sequential tasks. Based on the
proposed algorithm and software-defined networking technology, the tasks can be executed cooperatively by the user
device and the servers in the mobile fog computing network.
Rikhtegar et al. [37] proposed a load-balancing method
based on deep reinforcement learning for software-defined
networking-based data center networks. This method uses
the deep deterministic policy gradient algorithm to adaptively learn the link-weight values by observing the traffic
flow characteristics. Kim and Kim [38] proposed an agent
that uses a deep reinforcement learning algorithm to distribute requests between gaming servers. The agent has done
this by measuring network loads and analyzing a large
amount of user data.
2.2. Research Gap and Motivation. To our knowledge, most
of the methods presented in previous research for decision-making require knowledge of the capacity of neighbor FNs
and the cloud (e.g., [10, 26, 27]), which creates a traffic load
on the network and also delays the decision-making process.
Our work in this paper differs from previous works as in our
method, the FN just decides based on the information
obtained during the learning period from delay and reward
based on its own condition and has no knowledge of the status of neighbor nodes. Our work in this paper enables load-balancing in a dynamic fog environment where the nodes
have no information about each other. The application of
the proposed scheme is not limited to a specific scenario,
but it addresses a broader subclass of load-distribution problems.
## 3. System Model
In this section, we describe the proposed system architecture, the reinforcement learning algorithm, and how to compute the delay in this system.
3.1. Proposed System Architecture. In this paper, as shown in
Figure 1, a four-layer architecture is considered for the proposed system. The first layer includes IoT devices that connect directly to FNs and send data to these nodes locally.
The second layer is the fog layer. Fog servers can be located
in different geographical locations and process data received
from IoT devices in real time. The third layer comprises a
proxy server that receives data from FNs and then sends this
data to the cloud. The last layer in this structure is the cloud
data center layer, which includes several servers and data
centers. Because of this structure, data and information are
processed locally at FNs, and there is no need to transfer
data to the cloud.
FNs can allocate their tasks to low-load neighbor nodes
or the cloud through the load-balancing method provided
for the dynamic fog environment in this paper and reduce
overload and processing delay as much as possible. Because
of the dynamics of the fog environment, a variable number
of mobile devices may be connected to each FN at any
moment. The FN to which more mobile devices are connected receives more tasks than other FNs and will be
overloaded. Load-balancing methods are used to evenly
distribute tasks among FNs to avoid overload. The primary
purpose of the load-balancing algorithm in the fog computing environment is to improve the response time so that it
operates optimally even in dynamic conditions of the system.
For this purpose, in this paper, the Double-Q-learning algorithm is applied in FNs to improve delay, response time,
and resource loss in the network. After receiving a task, each FN uses the Double-Q-learning algorithm to decide whether to process the task itself or send it to a neighbor FN or the cloud for faster processing.
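The forwarding decision described above can be sketched as follows; the simple overload test below merely stands in for the learned Double-Q-learning policy, and the layer encoding and function names are assumptions for illustration:

```python
from enum import Enum

class Layer(Enum):
    IOT = 1      # mobile/IoT devices generating tasks
    FOG = 2      # fog nodes (FNs) processing tasks locally
    PROXY = 3    # proxy server relaying traffic toward the cloud
    CLOUD = 4    # cloud data center

def route_task(fn_overloaded: bool, neighbor_available: bool) -> Layer:
    """Illustrative forwarding rule: process locally when possible,
    otherwise offload to a neighbor FN, falling back to the cloud
    (which is reached through the proxy layer)."""
    if not fn_overloaded:
        return Layer.FOG       # process at the receiving FN
    if neighbor_available:
        return Layer.FOG       # offload to a neighbor FN
    return Layer.CLOUD         # offload via the proxy server

print(route_task(fn_overloaded=True, neighbor_available=False))  # Layer.CLOUD
```

In the paper's design this decision is learned from rewards rather than fixed by a rule; the sketch only fixes the set of possible targets.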
3.2. Preliminaries on the Reinforcement Learning Algorithm.
In this part, we review the background on reinforcement
learning and Double-Q-learning algorithm:
Figure 1: The four-layer architecture of fog computing (cloud, proxy server, fog, and IoT layers).
(i) Reinforcement Learning Algorithm. In this paper, we
formulate the load-balancing approach with the
reinforcement learning algorithm. Specifically, the
reinforcement learning algorithm maximizes the
cumulative reward by selecting optimal action in
each state of the environment [39, 40]. The proposed method formulates the load-balancing problem as a Markov decision process (MDP) for the
dynamic fog environment. The MDP comprises a
decision-making agent that continuously observes
the current state s of the system, selects an action
a from the allowed actions in that state (a ∈ A(s)), and then transitions to a new state s′ and receives
a reward r for that action, which influences future
decisions [41].
(ii) Off-Policy Learning. The policy is a mapping from
one state to one action, which determines how to
deal with each action and how to make a decision
in each of the different situations and is defined in
the two forms of On-policy and Off-policy. In Onpolicy, the same policy is used for both optimization
and action selection purposes. However, in Off-policy, two separate policies are used for optimization
and action selection [42]. In this paper, among the reinforcement learning algorithms, the Double-Q-learning algorithm is used. The Double-Q-learning algorithm is an Off-policy algorithm.
(iii) Value Function. In the reinforcement learning algorithm, the value function is defined as the received
long-term expected cumulative rewards, which has
a long-term view, and for each state, a value is determined as follows:
$$v^{*}(s) = \max_{a} \sum_{s', r} p(s', r \mid s, a)\left[r + \gamma v^{*}(s')\right], \quad (1)$$
where 0 < γ < 1 is called the discount factor, which determines the importance of future rewards and shows that the
current decision has more value than future decisions.
(iv) Model. The model of the Double-Q-learning algorithm is random, and its states are indefinite.
In a reinforcement learning problem, the agent explores
the environment and learns to select the optimal action to
maximize long-term reward. Hence, reinforcement learning
in dynamic environments has many applications for optimization. In addition, it can be an excellent way to evenly distribute tasks between FNs.
3.3. Problem Formulation. We have formulated the proposed
load-balancing problem as the MDP to achieve the desired
performance. An MDP usually consists of ⟨S, A, P, R⟩,
which are defined for the proposed load-balancing problem
as follows:
(i) S = {s = (C, Q, N)}. It is the state space, where C represents the capacity of the FN, Q represents the forward queue size in the FN, and N represents the number of mobile devices connected to the FN. Decision-making in the Double-Q-learning algorithm is made
based on the current state of the system. Most of the
previous methods to define the state space require
knowledge about the capacity of neighbor FNs.
However, in the BLBA, the state of the system is
defined only based on the status of the decision-maker FN, and this causes the decision-making to
be done without any knowledge of the status of
the neighbor nodes.
(ii) A = {a = (n)}. It is the action space, where n represents the selected FN or cloud to assign the task.
(iii) P. The transition probability is a value in [0, 1]; P(s′ | s, a) is the probability of transitioning to the next state s′ by selecting action a in the state s.
(iv) R. It is the reward for selecting the action a in the
current state. The primary goal is to select the optimal action in each system so that the long-term
value is maximized and the processing time and
overload probability are minimized.
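The ⟨S, A, P, R⟩ elements above can be captured with small containers; the field names and the encoding of the cloud as node −1 are assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    """State s = (C, Q, N) observed locally at the deciding FN."""
    capacity: float    # C: processing capacity of the FN
    queue_size: int    # Q: forward-queue length at the FN
    num_devices: int   # N: mobile devices currently connected

@dataclass(frozen=True)
class Action:
    """Action a = (n): the FN (or cloud) chosen to receive the task."""
    target_node: int   # e.g. 0 = process locally, -1 = cloud (illustrative)

def reward(processing_delay: float) -> float:
    """R(s, a) = -theta: a longer processing delay yields less reward."""
    return -processing_delay

s = State(capacity=0.6, queue_size=4, num_devices=3)
a = Action(target_node=-1)
print(reward(2.5))  # -2.5
```

Note that the state is built only from the deciding FN's own observations, which is what makes the algorithm "blind" to neighbor status.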
The task processing time is equal to the sum of the transmission delay and processing delay of that task in different
devices. These delays are calculated as follows.
3.3.1. Task Transmission Delay. The transmission delay
between two nodes is obtained from the sum of the waiting
delay in the forward queue of the source node and the send
delay on the communication channel between the two
nodes, which is calculated as follows:
$$\delta = t_W + t_S, \quad (2)$$
where t_W is the waiting delay in the forward queue of the source node and is calculated as follows:

$$t_W = t_{out} - t_{in}, \quad (3)$$
where t_in represents the arrival time of the task m in the queue of a node and t_out represents the exit time of the task m from that node. The parameters needed to calculate the delay are given in Table 1.
In (2), t_S is the send delay on the communication channel between the two nodes and is calculated as

$$t_S = \frac{L_{Task}}{BW_N \cdot \log_2\left(1 + \beta_1 D_{i,j}^{-\beta_2} \cdot P_t / (\sigma \cdot BW_N)\right)}, \quad (4)$$

where D_{i,j} represents the distance between the two nodes.
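Equations (2)–(4) compose as follows; all parameter values below are invented purely for illustration:

```python
import math

def send_delay(l_task, bw_n, beta1, beta2, d_ij, p_t, sigma):
    """Eq. (4): channel send delay for a task of L_Task bits over a link
    of bandwidth BW_N; the log2 term is a Shannon-style spectral
    efficiency with path loss beta1 * D_ij^(-beta2)."""
    snr = beta1 * d_ij ** (-beta2) * p_t / (sigma * bw_n)
    return l_task / (bw_n * math.log2(1 + snr))

def transmission_delay(t_in, t_out, t_s):
    """Eqs. (2)-(3): delta = t_W + t_S, with t_W = t_out - t_in."""
    return (t_out - t_in) + t_s

# Illustrative parameter values only.
t_s = send_delay(l_task=1e6, bw_n=1e6, beta1=1.0, beta2=2.0,
                 d_ij=10.0, p_t=0.5, sigma=1e-9)
delta = transmission_delay(t_in=0.0, t_out=0.2, t_s=t_s)
```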
3.3.2. Reward Function. In the BLBA, the Double-Q-learning
algorithm runs on FNs, in which we defined the reward
function R(s, a) as the negative of the processing delay. If the task processing delay is longer, less reward is received. In this paper, R(s, a) is calculated as follows:

$$R(s, a) = -\theta, \quad (5)$$
where θ represents the processing delay of the task assigned
to the FN. θ is calculated in one of the following two ways:
(i) If the node itself (FN-I) that received the task from
the mobile device processes it, θ is calculated as
$$\theta = t_{E_i}, \quad (6)$$

where t_{E_i} represents the task execution time in FN-I.
(ii) If the task is assigned to the neighbor node (FN-J) or
the cloud for processing, θ is calculated as
$$\theta = \delta_{ij} + t_{E_j} + \delta_{ji}, \quad (7)$$

where δ_{ij} represents the task transmission delay from FN-I to FN-J or the cloud, t_{E_j} represents the task execution time in FN-J or the cloud, and δ_{ji} represents the transmission delay of the result of the task from FN-J or the cloud back to FN-I.
3.3.3. Task Execution Time. After the node receives the task,
that node allocates part of its capacity to execute this task.
The task execution time in the FN-I, FN-J, or cloud is calculated as follows, where I represents the number of task
instructions:
$$t_E = \frac{I \cdot C_{CPU}}{f_{CPU}}. \quad (8)$$
3.3.4. Total Delay. Depending on which node will process
the task, the task processing time is calculated as follows:
$$T_{task} = \delta_{mi} + \theta + \delta_{im}, \quad (9)$$
where δ_{mi} represents the task transmission delay from the mobile device to FN-I and δ_{im} represents the transmission delay of the result of the task from FN-I to the mobile device.
Because the proxy server only sends the task to the cloud
and does not perform any processing, it is assumed that the
send delay on the communication channel from FN-I to the
cloud and vice versa is calculated directly and without considering the proxy server.
Each FN is an agent learning in the network. Each new task in the system causes the FN to perform an action in the environment and select a node to which to assign the task. The reward of the selected action is determined when the state of the environment is updated: if the current state of the system is closer to load balance and the tasks are processed faster, a reward is given to the agent; otherwise, no reward is awarded. Through the rewards received, each node learns to make the best decision for processing a task.
## 4. Blind Load-Balancing Algorithm (BLBA)
In this section, a Double-Q-learning-based load-balancing
algorithm for proper distribution of the load between the
FNs is presented to solve the problems of previous methods.
The Double-Q-learning algorithm is used to find the optimal state-action pair at the least computational cost, gathering the necessary information through experience. The model of the Double-Q-learning algorithm is stochastic, and its states are uncertain. In a learning problem, the agent explores the environment and learns to select the optimal action that maximizes the long-term reward. Hence, the Double-Q-learning algorithm has many applications for optimization in dynamic environments, and it can also be an excellent way to distribute tasks evenly between FNs.
The Double-Q-learning algorithm uses two estimation functions, Q1 and Q2, instead of one; that is, it maintains two Q-tables storing the values of all actions. The difference between the two tables is that when the value of one table is updated, the maximum value is taken from the other table. Assume that a* = arg max_a Q1(s′, a) is the most valuable action in state s′ according to the value function Q1; then the value Q2(s′, a*) is used to update Q1. Similarly, b* = arg max_a Q2(s′, a) is the most valuable action in state s′ according to the value function Q2, and b* together with Q1 is used to update Q2. Each time an update is performed, it is decided with equal probability which table is updated and which table provides the maximum value. In this algorithm, the agent performs an action a after receiving the state s of the environment, transitions to the next state s′, and receives a reward R(s, a) from the environment in return.
6 Wireless Communications and Mobile Computing
Table 1: Parameter values to calculate the delay.

| Parameter | Definition | Value |
|---|---|---|
| BW_N | Bandwidth per node | 10000 kB |
| | Cloud bandwidth | 2000 kB |
| L_Task | Task data length | 3800 |
| β₁ | The path loss constant | 10⁻³ |
| β₂ | The path loss exponent | 4 |
| P_t | The transmission power of node | 20 dBm |
| σ | The noise power spectral density | 174 dBm/Hz |
| C_CPU | The number of CPU cycles required to compute any instruction | 5 |
| f_CPU | The CPU speed of the FN | 2800 |
| | The CPU speed of the cloud | 44800 |
The value function for state s and action a in the Double-Q-learning algorithm is estimated as follows:

Q1(s, a) = Q1(s, a) + α[R(s, a) + γ · Q2(s′, arg max_a Q1(s′, a)) − Q1(s, a)],  (10)

Q2(s, a) = Q2(s, a) + α[R(s, a) + γ · Q1(s′, arg max_a Q2(s′, a)) − Q2(s, a)],  (11)
where α (0 < α < 1) is the learning rate, which balances new observations against what has already been learned. The Double-Q-learning algorithm uses the ε-greedy policy to maximize long-term value: the next action is selected at random with a constant probability ε (0 ≤ ε ≤ 1), or from among the best actions in the table with probability 1 − ε. At first, the algorithm has no information about the network, so it explores the network; once enough information has been gathered, load-balancing is performed optimally. In the ε-greedy algorithm, if each action is observed infinitely often, Q(s, a) is guaranteed to converge to the optimal value. Therefore, through the Double-Q-learning algorithm, the FN learns to select the most suitable node to assign the task to.
By applying load-balancing on the FNs, the load is evenly distributed between these nodes. The FN is considered an agent that learns in the network. After the FN receives a new task from a mobile device, it observes the current state of the environment. Then, to maximize the long-term reward, it decides how to process the task based on the experience and rewards it has accumulated so far, without any knowledge of the capacity of the other nodes and only according to its own capacity. If the FN has enough capacity, it processes the task itself; otherwise, it assigns the task to a neighbor FN or the cloud. If the task processing delay in the FN itself and in the neighbor FN is greater than the processing delay in the cloud, the FN assigns the task to the cloud for faster processing, which also reduces the load on the other nodes. Figure 2 shows the flowchart of the proposed load-balancing method.
Figure 2: Flowchart of the proposed load-balancing method.
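The decision procedure just described can be sketched as follows. The `Node` class, its capacity field, and the delay estimates are hypothetical stand-ins for the quantities an FN would track; they are not from the paper.

```python
class Node:
    """Minimal stand-in for an FN or the cloud: a free-capacity figure and an
    estimated processing delay for the task (both hypothetical placeholders)."""
    def __init__(self, name, free_capacity, est_delay):
        self.name = name
        self.free_capacity = free_capacity
        self.est_delay = est_delay

def assign_task(fn, neighbor, cloud, task_size):
    """Sketch of the BLBA decision: process locally when capacity allows;
    otherwise offload to whichever of the neighbor FN or the cloud offers
    the lower estimated processing delay."""
    if fn.free_capacity >= task_size:
        return fn  # FN-I processes the task itself
    # When both the FN and its neighbor would be slower than the cloud,
    # this comparison sends the task to the cloud automatically.
    return min((neighbor, cloud), key=lambda node: node.est_delay)
```

In the actual algorithm the delay estimates come from the learned Q-values rather than being known in advance.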
The state space consists of the capacity of the FN, the size of its forward queue, and the number of mobile devices connected to it; the action is the selection of an FN or the cloud to assign the task to; and the reward is a function that minimizes the task processing delay.
## 5. Performance Evaluation
The performance of the proposed load-balancing problem
based on the Double-Q-learning algorithm is evaluated
using the iFogSim simulation environment [43]. We ran this
program on an Asus computer with an Intel Core i7 processor and 8 GB RAM. The proposed system includes N FNs
Table 2: Parameter values in the simulation.

| Parameter | Definition | Value |
|---|---|---|
| N | Number of FNs | 4, 10 |
| n | The number of times a state has been observed | — |
| α | Learning rate | 1/n |
| γ | Discount factor | 0.9 |
and a variable number of mobile devices. Mobile devices randomly connect to their neighbor FNs and assign their task processing to these nodes. Initially, in the Double-Q-learning algorithm, all Q-table values are zero and the FN has no information about the network. For learning, the ε-greedy method is used: ε is initially set to 1, so the algorithm explores the network; once the FN's confidence in its Q-value estimates increases, ε is reduced to 0.3.
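The learning schedule just described, together with the α = 1/n learning rate from Table 2, can be sketched as follows; the confidence criterion that triggers the switch from ε = 1 to ε = 0.3 is a hypothetical placeholder, since the paper does not specify it.

```python
def learning_rate(n):
    """α = 1/n, where n is how many times the current state has been
    observed (Table 2); guarded so the first visit gives α = 1."""
    return 1.0 / max(n, 1)

def exploration_rate(confident):
    """ε starts at 1 (pure exploration) and drops to 0.3 once the FN is
    confident in its Q-estimates; the confidence test itself is a
    placeholder left to the implementation."""
    return 0.3 if confident else 1.0
```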
The amount of reward received is equal to the negative
of the task processing delay in one of the FNs or the cloud.
In the simulation, mobile devices send tasks to the FNs, and the Double-Q-learning algorithm is executed concurrently as the FNs receive tasks; this prevents overload in the nodes as much as possible. Using the Double-Q-learning algorithm, each node examines the current state of the environment, selects a node to assign the task to, and receives a reward from the environment in return. Over time, the FNs' experience of the network grows, and each node learns to assign the task to a lightly loaded node that can process it faster. In the other methods, by contrast, the load-balancing algorithm is executed only after an overload has occurred in the FN, which reduces performance and increases delay in those systems. The parameters used for the system evaluation are given in Table 2.
In the following, the performance of the BLBA is compared with the SSLB [9], random, and proportional [44] load-balancing methods. In the random method, a node offloads tasks to a randomly picked neighbor: when the FN overloads, it randomly selects a neighbor node and sends its load there for faster processing. In the proportional method, the FN receives the capacity information of its neighbors and selects the optimal one to offload the task to. In the SSLB method, after an FN is overloaded, it compares the capacities of the neighbor nodes and sends the task to the node that has at least 40% of its capacity free and has the highest capacity.
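The three baseline selection rules above might be sketched as follows; the dict representation of a neighbor (total `capacity` and currently `free` capacity) is an illustrative assumption.

```python
import random

def random_policy(neighbors):
    """Random baseline: offload to an arbitrary neighbor."""
    return random.choice(neighbors)

def proportional_policy(neighbors):
    """Proportional baseline: offload to the neighbor with the largest
    fraction of free capacity."""
    return max(neighbors, key=lambda n: n["free"] / n["capacity"])

def sslb_policy(neighbors):
    """SSLB-style rule: among neighbors with at least 40% of their capacity
    free, pick the one with the highest total capacity (None if none qualify)."""
    eligible = [n for n in neighbors if n["free"] / n["capacity"] >= 0.4]
    return max(eligible, key=lambda n: n["capacity"]) if eligible else None
```

Unlike the BLBA, all three rules run only after an overload and (except for the random rule) require capacity information from the neighbors.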
In this section, we first set the number of FNs to 4; later, we increase it to 10 and examine the performance of the proposed algorithm under both conditions. Figure 3 shows the increase in cumulative reward at each time iteration of the proposed algorithm. In this paper, the reward equals the negative of the processing delay of the task assigned to the FN, so whichever node processes a task, reducing its processing delay increases the reward received for processing it. Increasing the number of tasks assigned to the nodes reduces the cumulative reward, because less processing capacity is allocated to each task and each one therefore incurs a longer delay. As shown in the figure, task assignment based on the Double-Q-learning algorithm, through a suitable distribution of tasks between the nodes, gradually increases the cumulative reward.

Figure 3: The cumulative reward for each time iteration of the proposed algorithm.

Figure 4: Average processing time.
It is expected that using the Double-Q-learning algorithm significantly improves load-balancing performance in the network. As shown in Figure 4, the SSLB method somewhat reduces the average processing time compared with the random and proportional methods. However, task assignment based on the Double-Q-learning algorithm, with its proper distribution of tasks among the FNs, enables the nodes to process the tasks faster, and as can be seen, the average processing time of the BLBA improves dramatically over the compared methods.

Figure 5: The run time of all tasks.

Figure 6: Total delay.

In Figure 5, the run time of all the tasks that enter the system during the simulation is compared between the BLBA and the other methods. As shown in this figure, the proposed BLBA significantly reduces the run time of these tasks compared to the other methods, so the proposed system performs better than the other compared methods.
Then, Figure 6 reviews the total delay of all four methods. The total delay is obtained from the average processing time of the input tasks from the first to the last iteration of the algorithm. As can be seen, the SSLB method has a lower total delay than the random and proportional methods, but the proposed BLBA achieves a lower total delay than all three, meaning that the proposed method works better.
Finally, the standard deviation of the load on the nodes is compared across all four methods. According to Figure 7, the SSLB method initially has a lower standard deviation of load than the other methods. However, as the agent learns, the standard deviation of load in the proposed BLBA drops significantly. This indicates that in the proposed BLBA the tasks are evenly distributed in the network, reducing the possibility of overload and underload in the nodes. In addition, the aim of this paper is to improve both load-balancing and delay, and the proposed method optimizes both; in other methods, load-balancing may improve, but that does not mean the delay is minimized.
Next, we set the number of nodes to 10 and evaluate the algorithms under these conditions. As shown in Figure 8, as the number of FNs and mobile devices increases, the proposed algorithm spends more time learning. Even so, task assignment based on the Double-Q-learning algorithm, with its proper distribution of tasks among the nodes, enables the nodes to process tasks faster, and the average processing time of the proposed method remains better than that of the other methods.
Figure 7: The standard deviation of load on nodes.

Figure 8: Average processing time.
In Figure 9, we can see that as the number of nodes increases, although the BLBA spends more time learning, the run time of all tasks that enter the system during the simulation is still significantly lower under the Double-Q-learning-based load-balancing method than under the other methods. Then, the standard deviation of the load on the nodes in all four methods is compared for the case of 10 nodes. According to Figure 10, as agent learning increases, the standard deviation of load on the nodes in the proposed method is significantly reduced, indicating that the proposed method distributes the tasks evenly among the nodes.

Figure 9: The run time of all tasks.

Figure 10: The standard deviation of load on nodes.

Finally, Figure 11 shows the approximate number of iterations required for convergence to the optimal run time for different numbers of nodes. As can be seen, as the number of FNs increases, the number of possible actions grows and the response space becomes larger; therefore, the time required to learn and converge to the optimal policy also increases. Nevertheless, in all implementations the proposed algorithm eventually converges to the minimum point.

Figure 11: Approximate number of iterations required for convergence of the proposed algorithm.

The results show that when a node decides where to assign the load using the Double-Q-learning algorithm, it considers only its own forward queue state, capacity, and number of connected mobile devices, with no information from other nodes. From the above evaluation, we conclude that the proposed BLBA is more stable than the other load-balancing methods and significantly reduces network delay and response time. In addition, although the nodes spend more time learning as the number of FNs increases, the proposed method still outperforms the other methods over time, and we can be confident that the algorithm works well in any situation.

## 6. Conclusion and Future Work

The purpose of this paper is to provide a method to improve load-balancing among FNs. The Double-Q-learning algorithm is used for load-balancing in the fog environment; it achieves an optimal policy using the experience the agent gains from interacting with the environment. In the proposed BLBA, each FN, as the agent, explores the fog environment and seeks a low-load node for assigning tasks, in order to minimize the processing time and the possibility of overload. In this paper, the system state is defined only by the state of the decision-making FN, and decisions are made without any knowledge of the state of the neighbor FNs. The BLBA has been tested for different numbers of FNs and mobile devices in the network and has shown good efficiency. The simulation results show that our proposed method reduces processing time and response time significantly compared with existing methods. Depending on the network structure, applying the Double-Q-learning algorithm in any IoT device to further improve load-balancing and reduce delay is one of the future research directions this paper opens up. In addition, we intend to examine the performance of the proposed load-balancing algorithm in mobile edge computing in the future.

## Data Availability

The data used to support the findings of this article can be accessed on request.
## Disclosure
A preliminary version of this manuscript has been published in the proceedings of the 2021 11th International Conference on Computer and Knowledge Engineering (ICCKE), https://ieeexplore.ieee.org/document/9721449.
## Conflicts of Interest
The authors declare that they have no conflicts of interest.
## Authors’ Contributions
(i) Niloofar Tahmasebi Pouya worked on validation, formal analysis, software, data curation, and writing—original
draft. (ii) Seyedakbar Mostafavi worked on methodology,
proofing—original draft, and project administration. (iii)
Mehdi Agha Sarram worked on review and editing, data
curation, and supervision.
## References
[1] D. Baburao, T. Pavankumar, and C. S. R. Prabhu, “A novel
application framework for resource optimization, service
migration, and load balancing in fog computing environment,” Nano, pp. 1–14, 2022.
[2] P. H. Vilela, J. J. Rodrigues, R. D. Righi, S. Kozlov, and V. F.
Rodrigues, “Looking at fog computing for E-health through
the lens of deployment challenges and applications,” Sensors,
vol. 20, no. 9, p. 2553, 2020.
[3] R. Beraldi, C. Canali, R. Lancellotti, and G. P. Mattia, “Distributed load balancing for heterogeneous fog computing infrastructures in smart cities,” Pervasive and Mobile Computing,
vol. 67, article 101221, 2020.
[4] Z. Duan, C. Tian, N. Zhang et al., “A novel load balancing
scheme for mobile edge computing,” Journal of Systems and
Software, vol. 186, article 111195, 2022.
[5] M. H. Kashani and E. Mahdipour, “Load balancing algorithms
in fog computing: a systematic review,” IEEE Transactions on
Services Computing, 2022.
[6] A. R. Hameed, S. ul Islam, I. Ahmad, and K. Munir, “Energy- and performance-aware load-balancing in vehicular fog computing,” Sustainable Computing: Informatics and Systems,
vol. 30, article 100454, 2021.
[7] M. Kaur and R. Aron, “A systematic study of load balancing
approaches in the fog computing environment,” The Journal
of Supercomputing, vol. 77, no. 8, pp. 9202–9247, 2021.
[8] T. Hellemans and B. Van Houdt, “Performance analysis of
load balancing policies with memory,” Performance Evaluation, vol. 153, article 102259, 2022.
[9] D. Puthal, R. Ranjan, A. Nanda, P. Nanda, P. P. Jayaraman,
and A. Y. Zomaya, “Secure authentication and load balancing
of distributed edge datacenters,” Journal of Parallel and Distributed Computing, vol. 124, pp. 60–69, 2019.
[10] J. Y. Baek, G. Kaddoum, S. Garg, K. Kaur, and V. Gravel,
“Managing fog networks using reinforcement learning based
load balancing algorithm,” in 2019 IEEE Wireless Communications and Networking Conference (WCNC), Marrakesh,
Morocco, 2019.
[11] A. Marin, S. Balsamo, and J. M. Fourneau, “LB-networks: a
model for dynamic load balancing in queueing networks,” Performance Evaluation, vol. 115, pp. 38–53, 2017.
[12] N. Sonenberg, G. Kielanski, and B. Van Houdt, “Performance analysis of work stealing in large-scale multithreaded
computing,” ACM Transactions on Modeling and Performance Evaluation of Computing Systems, vol. 6, no. 2,
pp. 1–28, 2021.
[13] N. Khattar, J. Sidhu, and J. Singh, “Toward energy-efficient
cloud computing: a survey of dynamic power management
and heuristics-based optimization techniques,” The Journal
of Supercomputing, vol. 75, no. 8, pp. 4750–4810, 2019.
[14] M. Adhikari and T. Amgoth, “Heuristic-based load-balancing
algorithm for IaaS cloud,” Future Generation Computer Systems, vol. 81, pp. 156–165, 2018.
[15] S. Rasheed, N. Javaid, S. Rehman, K. Hassan, F. Zafar, and
M. Naeem, “A cloud-fog based smart grid model using max-min scheduling algorithm for efficient resource allocation,”
in International Conference on Network-Based Information
Systems, Cham, 2019.
[16] L. Abualigah, A. H. Gandomi, M. A. Elaziz et al., “Advances in
meta-heuristic optimization algorithms in big data text clustering,” Electronics, vol. 10, no. 2, p. 101, 2021.
[17] M. Adhikari, S. Nandy, and T. Amgoth, “Meta heuristic-based
task deployment mechanism for load balancing in IaaS cloud,”
Journal of Network and Computer Applications, vol. 128,
pp. 64–77, 2019.
[18] S. T. Milan, L. Rajabion, H. Ranjbar, and N. J. Navimipour,
“Nature inspired meta-heuristic algorithms for solving the
load-balancing problem in cloud environments,” Computers
& Operations Research, vol. 110, pp. 159–187, 2019.
[19] S. Sefati, M. Mousavinasab, and R. Zareh Farkhady, “Load balancing in cloud computing environment using the grey wolf
optimization algorithm based on the reliability: performance
evaluation,” The Journal of Supercomputing, vol. 78, no. 1,
pp. 18–42, 2022.
[20] B. Kruekaew and W. Kimpan, “Multi-objective task scheduling
optimization for load balancing in cloud computing environment using hybrid artificial bee colony algorithm with reinforcement learning,” IEEE Access, vol. 10, pp. 17803–17818,
2022.
[21] S. R. Deshmukh, S. K. Yadav, and D. N. Kyatanvar, “Load balancing in cloud environs: optimal task scheduling via hybrid
algorithm,” International Journal of Modeling, Simulation,
and Scientific Computing, vol. 12, no. 2, p. 2150008, 2021.
[22] M. M. Golchi, S. Saraeian, and M. Heydari, “A hybrid of firefly
and improved particle swarm optimization algorithms for load
balancing in cloud environments: performance evaluation,”
Computer Networks, vol. 162, article 106860, 2019.
[23] A. M. Manasrah and H. B. Ali, “Workflow scheduling using
hybrid GA-PSO algorithm in cloud computing,” Wireless
Communications and Mobile Computing, vol. 2018, 16 pages,
2018.
[24] A. Thakur and M. S. Goraya, “RAFL: a hybrid metaheuristic
based resource allocation framework for load balancing in
cloud computing environment,” Simulation Modelling Practice and Theory, vol. 116, article 102485, 2022.
[25] X. Xu, S. Fu, Q. Cai et al., “Dynamic resource allocation for
load balancing in fog environment,” Wireless Communications
and Mobile Computing, vol. 2018, Article ID 6421607, 15
pages, 2018.
[26] S. Moon, J. Park, and Y. Lim, “Task migration based on reinforcement learning in vehicular edge computing,” Wireless
Communications and Mobile Computing, vol. 2021, 10 pages,
2021.
[27] Z. Wu, J. Wei, F. Zhang, W. Guo, and G. Xie, “MDLB: a metadata dynamic load balancing mechanism based on reinforcement learning,” Journal of Zhejiang University Science C,
vol. 21, no. 7, pp. 1034–1046, 2020.
[28] M. M. Razaq, S. Rahim, B. Tak, and L. Peng, “Fragmented task
scheduling for load-balanced fog computing based on Q-learning,” Wireless Communications and Mobile Computing,
vol. 2022, 9 pages, 2022.
[29] J. Xu, H. Guo, H. W. Shen, M. Raj, S. W. Wurster, and
T. Peterka, “Reinforcement learning for load-balanced parallel
particle tracing,” IEEE Transactions on Visualization and
Computer Graphics, vol. PP, p. 1, 2022.
[30] L. Mai, N. N. Dao, and M. Park, “Real-time task assignment
approach leveraging reinforcement learning with evolution
strategies for long-term latency minimization in fog computing,” Sensors, vol. 18, no. 9, p. 2830, 2018.
[31] F. M. Talaat, M. S. Saraya, A. I. Saleh, H. A. Ali, and S. H. Ali,
“A load balancing and optimization strategy (LBOS) using
reinforcement learning in fog computing environment,” Journal of Ambient Intelligence and Humanized Computing,
vol. 11, no. 11, pp. 4951–4966, 2020.
[32] V. Divya and R. L. Sri, “ReTra: reinforcement based traffic load
balancer in fog based network,” in 2019 10th International
Conference on Computing, Communication and Networking
Technologies (ICCCNT), Kanpur, India, 2019.
[33] H. Lu, C. Gu, F. Luo, W. Ding, and X. Liu, “Optimization of
lightweight task offloading strategy for mobile edge computing
based on deep reinforcement learning,” Future Generation
Computer Systems, vol. 102, pp. 847–861, 2020.
[34] Z. Li, C. Wang, and C. J. Jiang, “User association for load
balancing in vehicular networks: an online reinforcement
learning approach,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 8, pp. 2217–2228, 2017.
[35] Q. Lin, Z. Gong, Q. Wang, and J. Li, “RILNET: a reinforcement
learning based load balancing approach for datacenter networks,” in International Conference on Machine Learning for
Networking, pp. 44–55, Cham, 2019.
[36] X. Li, Y. Qin, H. Zhou, D. Chen, S. Yang, and Z. Zhang, “An
intelligent adaptive algorithm for servers balancing and tasks
scheduling over mobile fog computing networks,” Wireless
Communications and Mobile Computing, vol. 2020, 16 pages,
2020.
[37] N. Rikhtegar, O. Bushehrian, and M. Keshtgari, “DeepRLB: a
deep reinforcement learning-based load balancing in data center networks,” International Journal of Communication Systems, vol. 34, no. 15, 2021.
[38] H. Y. Kim and J. Kim, “A load balancing scheme for gaming
server applying reinforcement learning in IoT,” Computer Science and Information Systems, vol. 17, no. 3, pp. 891–906,
2020.
[39] H. Ye, L. Liang, G. Y. Li, J. Kim, L. Lu, and M. Wu, “Machine
learning for vehicular networks: recent advances and application examples,” IEEE Vehicular Technology Magazine,
vol. 13, no. 2, pp. 94–101, 2018.
[40] A. Mebrek, M. Esseghir, and L. Merghem-Boulahia, “Energy-efficient solution based on reinforcement learning approach
in fog networks,” in 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), Tangier,
Morocco, 2019.
[41] N. C. Luong, D. T. Hoang, S. Gong et al., “Applications of deep
reinforcement learning in communications and networking: a
survey,” IEEE Communications Surveys and Tutorials, vol. 21,
no. 4, pp. 3133–3174, 2019.
[42] Y. Xu, W. Xu, Z. Wang, J. Lin, and S. Cui, “Load balancing for
ultradense networks: a deep reinforcement learning-based
approach,” IEEE Internet of Things Journal, vol. 6, no. 6,
pp. 9399–9412, 2019.
[43] R. Mahmud and R. Buyya, “Modeling and simulation of fog
and edge computing environments using iFogSim toolkit,”
Fog and edge computing: Principles and paradigms, pp. 433–
465, 2019.
[44] I. Tellioglu and H. A. Mantar, “A proportional load balancing
for wireless sensor networks,” in 2009 Third International
Conference on Sensor Technologies and Applications, pp. 514–
519, Athens, Greece, 2009.
},
{
"paperId": "a512b8bd694cc127aae78409e98d0fe2ea8a37e2",
"title": "A load balancing scheme for gaming server applying reinforcement learning in IoT"
},
{
"paperId": "4f37a0fac9478623c6a7c61ec507d5a60647280d",
"title": "Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing"
},
{
"paperId": null,
"title": "Dynamic resource allocation for load balancing in fog environment,”Wireless"
},
{
"paperId": null,
"title": "Real - time task assignment approach leveraging reinforcement learning with evolution strategies for long - term latency minimization in fog comput"
}
] | 12,753
|
en
|
[
{
"category": "Economics",
"source": "external"
},
{
"category": "Political Science",
"source": "s2-fos-model"
},
{
"category": "Economics",
"source": "s2-fos-model"
}
] |
https://www.semanticscholar.org/paper/03444cd094adf78fdd992d62d986285640518762
|
[
"Economics"
] | 0.857065
|
Cryptocurrencies and Political Finance
|
03444cd094adf78fdd992d62d986285640518762
|
[
{
"authorId": "2131970073",
"name": "Catalina Uribe Burcher"
}
] |
{
"alternate_issns": null,
"alternate_names": null,
"alternate_urls": null,
"id": null,
"issn": null,
"name": null,
"type": null,
"url": null
}
| null |
# Cryptocurrencies and Political Finance
International IDEA Discussion Paper 2/2019
Catalina Uribe Burcher
© 2019 International Institute for Democracy and Electoral Assistance
This paper is independent of specific national or political interests. Views expressed in this paper do not necessarily
represent the views of International IDEA, its Board or its Council members.
References to the names of countries and regions in this publication do not represent the official position of
International IDEA with regard to the legal status or policy of the entities mentioned.
The electronic version of this publication is available under a Creative Commons Attribution-NonCommercial-ShareAlike
3.0 (CC BY-NC-SA 3.0) licence. You are free to copy, distribute and transmit the publication as well as to remix and
adapt it, provided it is only for non-commercial purposes, that you appropriately attribute the publication, and that you
distribute it under an identical licence. For more information on this licence visit the Creative Commons website:
<http://creativecommons.org/licenses/by-nc-sa/3.0/>.
International IDEA
Strömsborg
SE–103 34 Stockholm
Sweden
Telephone: +46 8 698 37 00
Email: info@idea.int
Website: <http://www.idea.int>
Design and layout: International IDEA
DOI: <https://doi.org/10.31752/idea.2019.7>
Created with Booktype: <https://www.booktype.pro>
Acknowledgements
1. Introduction
2. Using cryptocurrencies to finance politics
3. Conclusions and recommendations
References
About the author
About International IDEA
# Acknowledgements
This discussion paper was developed by International IDEA in close cooperation
with the United Nations Office on Drugs and Crime (UNODC). The paper
benefited from valuable input and feedback from a number of colleagues at
International IDEA and UNODC. Special thanks to William Sjöstedt who
contributed an important part of the original research that informed this paper. Our
appreciation also goes to Sahra Daar, Oleksiy Feshchenko, Yukihiko Hamada,
Rumbidzai Kandawasvika-Nhundu, Keboitse Machangana, Coline Mechinaud and
Sam Van der Staak. Importantly, the editorial guidance and input we received from
Kelley Friel, Lisa Hagman and David Prater helped us improve the paper during the
entire production process.
# 1. Introduction
Cryptocurrencies present several potential challenges and benefits to legislators and
oversight agencies working on political finance around the world. There are now
more than 1,000 such currencies in the market, which is dominated by bitcoin
(Trading View 2018). Their use is increasing in all realms, including political
activities such as campaign finance. The main policy concerns regarding their use are
anonymity, volatility and a lack of oversight (Bloomberg 2018).
**What are cryptocurrencies?**
Cryptocurrencies are based on a series of cryptographic protocols that digitally verify financial transactions. The
degree of anonymity can vary, however. The origin and destination of transactions (their addresses) are stored in a
public ledger that is non-erasable: a full record of all transactions is maintained in perpetuity (CryptoCurrency
Facts n.d.). The ledger is a database that contains every transaction and mining action; it is continuously updated
and synchronized across a network (OECD 2018). Therefore all activities and transactions can be checked and
confirmed at any time. The addresses are typically traceable, although some cryptocurrencies are designed to
disguise them. The identity of the people involved in the transactions may not be traceable, depending on the type
of cryptocurrency (CryptoCurrency Facts n.d.). Therefore it is critical to understand how each currency is designed
in order to assess the threats and opportunities they may present with regard to anonymity, transparency and
trackability.
Cryptocurrency transactions are secure and, to varying degrees, anonymous (see box above). Yet the
anonymity of transactions could facilitate the use of such currencies to fund illicit
activities. However, blockchain technology can be designed to provide some
transparency (see box below). Some cryptocurrencies might have identification
requirements that facilitate greater monitoring and auditing than transactions using
traditional currencies; while others do not even disclose the transactions’ addresses,
leaving little evidence of who was involved or where it took place.
Thus, there are several questions regarding the risks and opportunities associated
with cryptocurrencies’ potential use in financing politics, as well as the transparency
and oversight of such transactions. For example, since they are currently (relatively)
unregulated, it is unclear in many countries whether political finance transactions
using cryptocurrencies are allowed. Another question is whether they should be
considered a currency like the dollar or euro, or if they should be treated as an asset,
or something else altogether. This is an important distinction for political finance
purposes, since regulations often differ for in-cash vs. in-kind donations.
Furthermore, cryptocurrencies can facilitate the violation of political finance
regulations, for example by channelling foreign or anonymous donations to countries
where these are banned.
Perhaps more importantly, it is unclear whether cryptocurrencies merit special
regulations, as they have yet to become truly mainstream; many remain sceptical
about their future in the legal economy (The Economist 2018a). Even political
finance experts and practitioners have little understanding of cryptocurrencies in
general, and their implications for the financing of politics in particular
(International IDEA 2018b). However, they are already used in a host of areas, and
oversight agencies tasked with controlling the flow of resources in and out of politics
should understand the current and potential future implications of these
technologies.
This discussion paper clarifies some of the basic concepts related to
cryptocurrencies and their current regulatory state, with their respective virtues and
pitfalls. Focusing on the use of cryptocurrencies to finance politics, it also considers
their interplay with foreign contributions, anonymous donations, and donation
limits from corporations and single sources, as well as reporting, monitoring and
oversight systems. It concludes with a series of recommendations directed at
legislators and oversight agencies.
This research is based primarily on media sources, and to a lesser extent on
academic and policy research; few academic studies have analysed the relatively new
phenomenon of cryptocurrencies, and fewer still have done so in the context of
political finance. The paper was also informed by an online survey of 46 individuals
and institutions working primarily in political finance oversight in 28 countries.
# 2. Using cryptocurrencies to finance politics
Some practitioners and scholars involved in political finance are confident that
cryptocurrencies will soon be more widely used in political finance (International
IDEA 2018b), which might pose further challenges to transparency and oversight.
However, some countries like Mexico are discussing how issuing their own
cryptocurrency and mandating that all political finance transactions use it may help
trace and monitor transactions (García 2017). Similarly, Brazil used blockchain
technology in recent elections to register donations in real time as part of the ‘Voto
Legal’ project (Voto Legal 2018). The core concern is how candidates and parties use
cryptocurrencies to finance campaigns and other political activities.
With or without regulations, candidates and parties are beginning to embrace
cryptocurrencies to raise campaign funds. Iceland’s Pirate Party, for example,
welcomes cryptocurrencies, and famously succeeded in the most recent national
election (Dueweke 2018: 2). In Sweden, Mathias Sundin ran for (and eventually
won) a seat in Parliament in 2014 accepting only bitcoin donations (del Castillo
2017; Eleftheriou-Smith 2014).
The US case is perhaps the most visible. The country’s political finance oversight
agency, the Federal Election Commission (FEC), released guidelines on
cryptocurrency campaign donations in 2014, according to which federal campaigns
can buy and accept bitcoins, albeit under certain restrictions (Orcutt 2017). While
the Identity and Payments Association explained during a Senate hearing that ‘every state
except for Kansas allows Bitcoin contributions’ (Dueweke 2018: 2), the situation at
the state level seems to be more complicated than that. For instance, Wisconsin,
North Carolina and South Carolina have been reviewing whether crypto-donations
are legal under their state law. The first two reviews are still ongoing, while in the
latter officials concluded that such donations are not permissible under South
Carolina law (Bonn 2018; Bryanov 2018). In Oregon, campaign finance regulations
prevent campaigns from accepting cryptocurrencies over concerns about their
anonymity, although discussions are ongoing (Voorhees 2018; AP 2018). Likewise,
legislators in Colorado are considering allowing crypto-donations up to the same
limits as cash donations (Frank 2018). Other states remain more sceptical.
California’s Fair Political Practices Commission has taken a cautious approach by
recommending that campaigns abstain from accepting cryptocurrencies due to the
difficulties associated with tracing their origins (Dueweke 2018: 3).
Cryptocurrencies began creeping into the US electoral finance system in 2014
when New Hampshire gubernatorial candidate Andrew Hemingway accepted
bitcoins to finance his (unsuccessful) campaign, closely followed later the same year
by Jared Polis, who also accepted bitcoins during his re-election campaign to the
House of Representatives (Nolan 2018). A number of candidates in the country were
quick to catch onto the trend. Dan Elder, for example, funded his 2016 campaign for
the Missouri House of Representatives entirely in bitcoin (Bryanov 2018). During
the 2018 mid-term US elections, US House candidate for California’s 45th
congressional district Brian Forde made headlines for accepting bitcoin donations
(Wilhelm 2018a); he had reportedly raised about 16 per cent of his campaign
contributions through bitcoin donations by the end of March 2018 (Dueweke 2018:
3). Likewise, Austin Petersen, who was running in the Republican primary in the
state of Missouri, also made waves when he had to return USD 130,000 in bitcoins
because the amount surpassed campaign limits (CCN 2018c). The most notorious
case is that of presidential candidate Rand Paul, which during his 2016 campaign
became the first presidential candidate to accept cryptocurrency donations (Keneally
2018).
Regulators around the world have an opportunity to mould these currencies to
better serve the interests of the electorate and maximize the potential benefits of
fighting corruption and illicit political finance. Well-designed regulations could
incentivize an ethical approach to money in politics through enhanced transparency
and accountability (Lapointe and Fishbane 2018: 23).
### Private donations: monetary or in-kind?
Private donations are treated differently depending on their type. Yet classifying
cryptocurrency transactions as in-cash or in-kind contributions can be problematic.
In-kind donations are typically restricted due to the difficulties associated with
tracing and appraising them (Uribe Burcher and Casal Bertoa forthcoming). Yet
tracing cryptocurrencies is not necessarily difficult; in fact, it may be easier, as the
public ledger records all transactions. Appraising them may not be particularly
difficult either, given that numerous exchanges provide up-to-date information on
the value of most cryptocurrencies in real time. But this requires clearly establishing
the date when the value of the crypto-donation is to be set (e.g. when the donation
was received) and what exchanges are recognized to establish the value of the
donation on that date.
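The appraisal rule sketched above could be implemented mechanically once those two choices are made. A hypothetical sketch follows; the rate table, currency pair and function names are invented, and a real system would draw historical rates from whichever exchanges the regulator designates as recognized:

```python
from datetime import date

# Hypothetical daily closing rates (EUR per BTC) from a designated exchange.
DAILY_RATES_EUR = {
    date(2019, 3, 1): 3400.0,
    date(2019, 3, 2): 3450.0,
}

def appraise_donation(amount_btc: float, received_on: date) -> float:
    """Fix the fiat value of a crypto-donation at the recognized rate on
    the day it was received."""
    rate = DAILY_RATES_EUR[received_on]  # fails loudly if no rate is designated
    return round(amount_btc * rate, 2)

value = appraise_donation(0.25, date(2019, 3, 2))
# 0.25 BTC at 3450.0 EUR/BTC -> 862.5 EUR
```

The design point is that the reported value depends entirely on which date and which exchange the regulation fixes; without both, two campaigns could report the same donation at different values.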
**Cryptocurrency exchanges and custodial wallet providers**
A vital aspect of the current discussion on regulating cryptocurrencies centres on how to deal with exchanges and
custodial wallet providers. These are companies and service providers that have emerged to facilitate storing and
exchanging cryptocurrencies to fiat currency or to other cryptocurrencies. Their regulation is still a matter of
debate. The emerging consensus is that, as part of the financial system, they are subject to AML regulations and
must comply with the same standards as any other financial service, like ‘know your customer’ (Demertzis and
Wolf 2018: 9; Eich forthcoming: 25). South Korea has taken an alternative approach by requiring wallets to be tied
to traditional bank accounts (Kim 2018).
Classifying crypto-donations as in-kind contributions may cause some confusion
among political parties and candidates, and among the oversight agencies. In-kind
donations are usually associated with assets or services (Falguera, Jones and Ohman
2014: 392). However, people most likely associate cryptocurrencies with money
(given their name); where they are not classified as such, the authorities would need
to clarify with political parties and candidates to facilitate their correct reporting, and
with oversight agencies to facilitate their audits.
### Foreign contributions
Almost 68 per cent of countries ban foreign contributions, or foreign donations to
political parties, and almost 56 per cent prohibit foreign donations to candidates, as
Figure 1 illustrates.
Figure 1: Ban on donations from foreign interests to political parties and
candidates
*Source: International IDEA, Political Finance Database (Stockholm: International IDEA, 2018), <https://www.idea.int/data-tools/question-view/528>, accessed 14 September 2018*
Most countries view foreign donations negatively and therefore choose to outlaw
them in order to prevent external influence and protect the principle of self-determination (Uribe Burcher and Casal Bertoa forthcoming). This also means that
there is great interest in finding ways to dodge these restrictions (Uribe Burcher and
Perdomo 2017). Can cryptocurrencies facilitate undisclosed foreign donations or in-kind contributions?
Cryptocurrencies indeed appear to have facilitated foreign influence over the 2016
US elections (Shi 2018; Popper and Rosenberg 2018), in part due to gaps in the
financial transparency and political finance systems (Krumholz 2018; Murray 2018:
1). According to a grand jury indictment, some Russian funds for this purpose were
apparently transferred using cryptocurrency exchange accounts (Murray 2018: 2–3).
Testimony before the US Senate Judiciary Committee cautioned that
cryptocurrencies may open the floodgates for foreign and illicit sources:
Political committees may not knowingly accept contributions from foreign
nationals … they are required only to take ‘minimally intrusive’ steps to verify a
contributor’s true nationality. As long as a contributor provides a donor
attestation and uses a U.S. address, the contribution would appear legitimate and
not prompt any additional due diligence requirements on the part of the
recipient. Foreign-source donations are particularly difficult to detect when a
nonintermediated payment method such as a virtual currency is used
(Murray 2018: 6).
Moreover, the floodgates are widened when a cryptocurrency permits high levels of
anonymity. Most worrisome is that the current US guidelines do not clearly target
cryptocurrency donations from super Political Action Committees (super PACs)
(Dueweke 2018: 2). These entities ‘may raise unlimited sums of money from
corporations, unions, associations and individuals, then spend unlimited sums to
overtly advocate for or against political candidates’, and must report their donors to
the Federal Election Commission (FEC) (OpenSecrets 2018). Without knowing who
is donating, or where the donation is coming from, cryptocurrencies that allow
increased anonymity may allow an unlimited flow of illicit or undue donations to
enter the political finance system. While cryptocurrencies may not necessarily be the
source of this problem, they can exacerbate it.
### Anonymous donations and donation limits
Anonymous donations are almost as unpopular as foreign donations. More than 56
per cent of countries have banned them, and more than 10 per cent limit them with
respect to political parties, while almost 44 per cent ban them and 9 per cent limit
them with respect to candidates, as Figure 2 illustrates.
Figure 2: Ban on donations from anonymous donations to political parties
and candidates
*Source: International IDEA, Political Finance Database (Stockholm: International IDEA, 2018), <https://www.idea.int/data-tools/question-view/539>, accessed 14 September 2018*
These measures are designed to ensure transparency of party funding and improve
compliance monitoring of political finance regulations as a whole; small anonymous
donations are sometimes allowed to protect the privacy of small donors (Falguera,
Jones and Ohman 2014). Anonymous donations are also usually forbidden because
they may allow organized crime to inject resources into political campaigns, parties
and elections (Briscoe, Perdomo and Uribe Burcher 2014; Villaveces-Izquierdo and
Uribe Burcher 2013; Briscoe and Goff 2016a; Briscoe and Goff 2016b).
Against this backdrop, the presence of cryptocurrencies may be a reason for
concern, as they were created to be anonymous (Peterson 2018; Ross and Beyoud
2018). In countries that forbid anonymous donations, crypto-donations that do not
provide information about the identity of the person originating the transaction are,
by default, illegal. The same principle would apply to donation limits from
corporations and single sources. If a donor is anonymous, it is impossible to tell
whether it is a corporation or a human, and whether one entity is the source of
multiple transactions to the same party or candidate that surpass the individual limit.
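Conversely, where sender addresses are visible on the ledger, aggregating multiple transactions from one source against a donation limit is mechanical. A hypothetical sketch (the limit, addresses and amounts are invented for illustration; the point is that the check exists only when a sender field exists at all):

```python
from collections import defaultdict

SINGLE_SOURCE_LIMIT = 1000.0  # hypothetical fiat-equivalent cap per donor

# Donations already converted to fiat value; senders are ledger addresses.
donations = [
    {"sender": "addr_1a2b", "fiat_value": 600.0},
    {"sender": "addr_1a2b", "fiat_value": 550.0},  # same source, second gift
    {"sender": "addr_7e7e", "fiat_value": 200.0},
]

def over_limit(donations, limit):
    """Return the addresses whose combined donations exceed the limit."""
    totals = defaultdict(float)
    for d in donations:
        totals[d["sender"]] += d["fiat_value"]
    return {addr: total for addr, total in totals.items() if total > limit}

flagged = over_limit(donations, SINGLE_SOURCE_LIMIT)
# addr_1a2b totals 1150.0, exceeding the 1000.0 cap
```

With an anonymity-focused currency the sender field is simply unavailable, so no such aggregation, and hence no limit enforcement, is possible.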
By contrast, and as mentioned above, cryptocurrencies that prioritize transparency
could create great opportunities to track political finance income and expenditure.
With that in mind, regulators in the future can exploit blockchain’s transparency
features to require financial transactions involving political parties, candidates and
public officials to be registered in an open or closed ledger, to be monitored by the
state’s oversight agency or to be made public.
### Reporting, monitoring and oversight
Reporting, monitoring and oversight are arguably the most important components of
any political finance system. Limits, bans and public funding are aspirational at best
if data are not reported, if compliance is not monitored, and if violations do not
trigger investigations or punishment. But despite their importance, many countries
lack adequate mechanisms to enforce their political finance regulations, or robust
oversight agencies with clear mandates and sufficient resources and political
independence (Uribe Burcher and Perdomo 2017: 138–39). Nor does every country
require regular political party finance reporting; those reports are supposed to be
made public in only 60 per cent of countries that require them, and only 52 per cent
require the reports to reveal the identity of donors (International IDEA 2018a).
Depending on their level of anonymity or transparency, cryptocurrencies may
further hamper or facilitate reporting, monitoring and oversight (Ross and Beyoud
2018). When cryptocurrencies allow for increased anonymity, they complicate the
work of oversight actors that need to track the flow of resources in and out of politics.
For example, such concerns were raised when Mauricio Toro, then a candidate for
Colombia’s House of Representatives, accepted bitcoin and ether contributions,
although ‘his cryptocurrency wallet addresses are not available publicly’ (Dueweke
2018: 3).
Yet cryptocurrencies that favour transparency over anonymity may support the role
of oversight agencies. In this respect, the potential use of a public ledger may help
them track the sources of funds. For this reason, some organizations in Mexico
advocate channelling all political finance transactions through cryptocurrencies, or
otherwise promoting identification through a public ledger (García 2017). Another
way of tracking the identity of donors involved in cryptocurrency transactions is to
require their online wallet to be tied to a traditional bank account. In addition, the
role of virtual currency exchangers is decisive in effectively enforcing political finance
reporting requirements since, as financial intermediaries, they provide a location data
point for campaigns to identify foreign donors. Witnesses in a hearing before the US
Senate Judiciary Committee on the matter noted that in the USA these exchanges are
considered ‘money services businesses subject to the Bank Secrecy Act … even
foreign-located virtual currency exchangers when they serve U.S. customers’ (Murray
2018: 5–6). As such, they are subject to AML requirements.
Finally, it is important to underline that the role of oversight agencies usually goes
beyond simple monitoring and includes guidance to parties and candidates on how
to comply with the law. It is therefore paramount for oversight agencies to play an
active role in guiding political parties and candidates on how to report on the crypto-donations they receive. The US FEC has taken active steps in this direction, as
explained above regarding Advisory Opinion 2014-02 (FEC 2014). The FEC
reporting guidelines clearly state that ‘holding bitcoins in a bitcoin wallet does not
relieve the committee of its obligations to return or refund a bitcoin contribution
that is from a prohibited source, exceeds the contributor’s contribution limit, or is
otherwise not legal’ (FEC n.d.). Most importantly, the fact that the committee
received bitcoins, the FEC explained, does not exempt it from disclosing the receipt
of the contribution, including ‘the contributor’s mailing address, employer and
occupation’ (FEC n.d.).
### Asset declarations for elected and public officials
Understanding the influence that cryptocurrencies have, and may potentially have, in
the financing of politics must go beyond looking at the resources that donors pour
into political parties and election campaigns. It also involves understanding how
much money elected officials and civil servants receive once in office through these
methods. Asset declarations, where available, provide a valuable source of information
with regards to the extent to which digital currencies are becoming an important
trading commodity for politicians and public officials.
In Ukraine, for example, a study on public officials’ asset declarations ‘reveals that
57 officials have declared over 21,000 bitcoins with the majority of cryptocurrency
holders in the Odessa regional council and the country’s Parliament. A second study
shows that in 2017 the largest amount of cryptocurrency declared by a Ukrainian
official was in bitcoin cash’ (Crypto News Monitor 2018). The fact that Ukrainian
officials are required to declare assets through an official electronic platform provides
much-needed transparency, especially considering that cryptocurrencies are not yet
regulated in the country.
Requiring elected officials to submit asset declarations is an important first step,
which only 53 per cent of countries have taken (see Figure 3). But oversight agencies
must also be equipped to verify that public officials are accurately declaring
(particularly anonymous) cryptocurrency donations, and to investigate and limit
money laundering.
Figure 3: Asset declarations from elected officials
*Source: International IDEA, Political Finance Database (Stockholm: International IDEA, 2018), <https://www.idea.int/data-tools/question-view/284667>, accessed 11 December 2018*
### Additional political finance considerations: stability and a level playing field
While the value of cryptocurrencies has increased over time, and some analysts have
noted their potential to increase transparency and accountability (Roberts 2018;
Huberman, Leshno and Moallemi 2017), they also face high levels of volatility
(CCN 2018b) that may create risks for political campaign transactions, as parties and
candidates face additional financial instability. In addition, there are still challenges
associated with transforming cryptocurrencies into traditional (fiat) currencies—
especially when these involve large sums of money, which may create problems for
candidates and parties that are unable to use donations to pay for campaign activities
where cryptocurrencies are not yet accepted.
It is also important to remember that a lack of access to campaign finance is one of
the main obstacles preventing more women from running for and obtaining political
office, given that they are typically excluded from existing fundraising networks
(International IDEA and NIMD 2017). Minority and indigenous groups experience
similar barriers (IPU and UNDP 2010: 16–7). Thus, regulating the use of
cryptocurrencies in political finance should take into account the fact that they
remain relatively user unfriendly: ‘All participants have to download specialist
software, and getting traditional money into and out of bitcoin’s ecosystem is
fiddly’ (The Economist 2018c). Marginalized groups may have problems accessing
the technology required for these transactions, which may exacerbate the already
unbalanced playing field. Efforts to diversify the crypto and blockchain industries
should therefore consider female politicians and their capacity to leverage the
advantages the industry can provide (Bowles 2018).
# 3. Conclusions and recommendations
The impact of cryptocurrencies in the field of political finance is still a matter of
debate, and most oversight agencies are yet to develop clear guidelines on their use.
Yet they are already being used to finance politics, even though it is still largely
unclear what they are, how they are (or should be) regulated, and how parties and
candidates need to report on them.
Understanding the implications of cryptocurrencies in the field of political finance
requires taking stock of their most salient features, and what these mean for the way
resources flow in and out of politics:
- Cryptocurrencies’ volatility and limited purchasing value mean that parties and
candidates should be mindful when using them, avoid speculators, and remain
focused on political competition rather than the ups and downs of the crypto-market.
- Cryptocurrencies’ increasing usage means that policymakers should consider
how to best regulate them and guide all parties involved on how to follow the
new regulations.
- Cryptocurrencies’ capacity to bypass the banking system means the
intermediaries that provide gateways to the regulated financial system
(cryptocurrency exchangers in particular) should be subject to AML/CFT
regulation. Regulators should then consider if (and how) to require
transactions to go through intermediaries.
- Cryptocurrencies’ immutability means that transaction records cannot be
tampered with, which offers an important layer of security that cash
transactions, for example, lack. These records can help oversight agencies
follow the trail of resources and donors behind political campaigns.
- Cryptocurrencies’ anonymity, when they are designed to prioritize privacy
rather than transparency, may hamper the work of oversight agencies and allow
illicit donations to enter the system—whether it is foreign, anonymous or
other types of donations banned in a given country—as well as undeclared
-----
Cryptocurrencies and Political Finance
assets of public and elected officials. These cryptocurrencies should therefore
be limited, if not outright forbidden, for parties and candidates to use.
- Cryptocurrencies’ increased transparency, when they are designed with an
open ledger and allow the identity of people involved in the transactions to be
tracked, could facilitate the work of oversight agencies.
- Cryptocurrencies could present an additional obstacle for marginalized groups
seeking to access funds on an equal basis, which merits mechanisms to ensure
that these groups can benefit from this technology.
- Cryptocurrencies’ sparse (and sometimes contradictory) regulation means that
candidates and parties need adequate guidelines on how to apply existing
political finance requirements to cryptocurrency transactions, including the
basic premise of whether cryptocurrencies should be considered assets or money.
Most importantly, potential regulations should ideally be debated at both the
national and international levels, involving all stakeholders (e.g. political parties,
oversight agencies and the tech industry), as well as women and other marginalized groups.
These discussions should aim to ensure that these technologies open, rather than
limit, the avenues for political fundraising and enhance transparency in the discussion
of money in politics.
-----
Aldaz-Carroll, E. and Aldaz-Carroll, E., ‘Can cryptocurrencies and blockchain help fight corruption?’, The Brookings Institution, 1 February 2018, <https://www.brookings.edu/blog/future-development/2018/02/01/can-cryptocurrencies-and-blockchain-help-fight-corruption/>, accessed 29 August 2018
The Associated Press (AP), ‘Ethics Commission asks Legislature to decide bitcoins’, 4 May 2018, <https://www.apnews.com/f6b7b095f3bb40b2a15751eed9eb4836/Ethics-Commission-asks-Legislature-to-decide-bitcoins>, accessed 7 September 2018
Bambrough, B., ‘Blow to bitcoin as Coinbase CEO makes stark warning’, Forbes, 15 August 2018, <https://www.forbes.com/sites/billybambrough/2018/08/15/blow-to-bitcoin-as-coinbase-ceo-makes-stark-warning/#28a5f8c92d55>, accessed 4 September 2018
Betfair, ‘Cryptocurrency latest: Colombia embraces while China cracks down’, 3 September 2018, <https://betting.betfair.com/financial-betting/market-briefing/cryptocurrency-latest-colombia-is-embracing-while-china-is-cracking-down-030918-699.html>, accessed 4 September 2018
Blackstone, B., ‘Switzerland wants to be the world capital of cryptocurrency’, The Wall Street Journal, 28 April 2018, <https://www.wsj.com/articles/switzerland-wants-to-be-the-world-capital-of-cryptocurrency-1524942058>, accessed 13 September 2018
Bloomberg, ‘Coinbase CEO Armstrong on the future of crypto’ [video], 15 August 2018, <https://www.bloomberg.com/news/videos/2018-08-15/coinbase-ceo-armstrong-on-the-future-of-crypto-video>, accessed 4 September 2018
Bonn, T., ‘Politicians are getting in on the cryptocurrency craze to fund campaigns’, CNBC, 2 March 2018, <https://www.cnbc.com/2018/03/01/cryptocurrency-candidates-politicians-embrace-bitcoin.html>, accessed 29 August 2018
Bowles, N., ‘Women in cryptocurrencies push back against “blockchain bros”’, The New York Times, 25 February 2018, <https://www.nytimes.com/2018/02/25/business/cryptocurrency-women-blockchain-bros.html>, accessed 6 September 2018
Briscoe, I., Perdomo, C. and Uribe Burcher, C., Illicit Networks and Politics in Latin America (Stockholm and The Hague: International IDEA, NIMD and the Clingendael Institute, 2014), <https://www.idea.int/publications/catalogue/illicit-networks-and-politics-latin-america>, accessed 6 September 2018
Briscoe, I. and Goff, D., Protecting Politics: Deterring the Influence of Organized Crime on Elections (Stockholm and The Hague: International IDEA and the Clingendael Institute, 2016a), <https://www.idea.int/publications/catalogue/protecting-politics-deterring-influence-organized-crime-elections>, accessed 6 September 2018
—, Protecting Politics: Deterring the Influence of Organized Crime on Political Parties (Stockholm and The Hague: International IDEA and the Clingendael Institute, 2016b), <https://www.idea.int/publications/catalogue/protecting-politics-deterring-influence-organized-crime-political-parties>, accessed 6 September 2018
Browne, R., ‘Estonia says it won’t issue a national cryptocurrency and never planned to’, CNBC, 4 June 2018, <https://www.cnbc.com/2018/06/04/estonia-wont-issue-national-cryptocurrency-estcoin-never-planned-to.html>, accessed 13 September 2018
Bryanov, K., ‘Bitcoin for America: cryptocurrencies in campaign finance’, Coin Telegraph, 31 May 2018, <https://cointelegraph.com/news/bitcoin-for-america-cryptocurrencies-in-campaign-finance>, accessed 11 September 2018
CCN, ‘“We need an international discussion on cryptocurrencies”: OECD’s Medcraft’, 21 March 2018b, <https://www.ccn.com/we-need-an-international-discussion-on-cryptocurrencies-oecds-medcraft/>, accessed 31 August 2018
—, ‘Federal Reserve governor: we’re monitoring “extreme volatility” of cryptocurrencies’, 4 April 2018a, <https://www.ccn.com/federal-reserve-is-monitoring-extreme-volatility-of-bitcoin-prices/>, accessed 29 August 2018
—, ‘Crypto-loving US Senate candidate forced to return $130,000 bitcoin donation’, 18 June 2018c, <https://www.ccn.com/crypto-loving-us-senate-candidate-forced-to-return-130000-bitcoin-donation/>, accessed 7 September 2018
—, ‘Uzbekistan opens door to cryptocurrency exchanges, offers tax benefits’, 8 September 2018d, <https://www.ccn.com/uzbekistan-opens-doors-for-cryptocurrency-exchanges-with-tax-benefits/>, accessed 10 December 2018
Chen, Q., ‘Next stop in the cryptocurrency craze: a government-backed coin’, CNBC, 30 November 2017, <https://www.cnbc.com/2017/11/30/cryptocurrency-craze-springboards-government-backed-coin.html>, accessed 29 August 2018
Cheng, E., ‘China clamps down on cryptocurrency speculation, but not blockchain development’, CNBC, 3 September 2018, <https://www.cnbc.com/2018/09/03/china-clamps-down-on-cryptocurrency-speculation.html>, accessed 3 September 2018
Coinbase, ‘The rise of crypto in higher education’, 28 August 2018, <https://blog.coinbase.com/the-rise-of-crypto-in-higher-education-81b648c2466f>, accessed 4 September 2018
Corbet, S., Lucey, B. and Yarovaya, L., ‘Datestamping the Bitcoin and Ethereum bubbles’, Finance Research Letters, 26 (September 2018), pp. 81–88, <https://doi.org/10.1016/j.frl.2017.12.006>, accessed 25 February 2019
CryptoCurrency Facts, ‘How does cryptocurrency work?’, n.d., <https://cryptocurrencyfacts.com/how-does-cryptocurrency-work-2/>, accessed 30 August 2018
Crypto News Monitor, ‘57 Ukrainian officials declared over 21,000 bitcoins’, 25 January 2018, <https://cryptonewsmonitor.com/2018/01/25/57-ukrainian-officials-declared-over-21000-bitcoins/>, accessed 11 December 2018
Del Castillo, M., ‘Why a Swedish MP is joining Bitcoin Exchange BTCX’, Coindesk, 5 June 2017, <https://www.coindesk.com/why-a-swedish-mp-is-joining-bitcoin-exchange-btcx/>, accessed 12 September 2018
Demertzis, M. and Wolf, G. B., The Economic Potential and Risks of Crypto Assets: Is a Regulatory Framework Needed? (Brussels: Bruegel, 2018), <http://bruegel.org/wp-content/uploads/2018/09/PC-14_2018.pdf>, accessed 12 September 2018
Dow, S., ‘What’s the future of cryptocurrencies?’, World Economic Forum, 15 August 2018, <https://www.weforum.org/agenda/2018/08/cryptocurrencies-are-useful-but-will-not-save-us>, accessed 7 September 2018
Dueweke, S., ‘Witness testimony’, Protecting Our Elections: Examining Shell Companies and Virtual Currencies as Avenues for Foreign Interference, US Senate Judiciary Committee Hearing (Washington, DC: Identity And Secure Transactions DarkTower, 2018), <https://www.judiciary.senate.gov/imo/media/doc/06-26-18%20Dueweke%20Testimony.pdf>, accessed 11 September 2018
The Economist, ‘Bitcoin and other cryptocurrencies are useless’, 30 August 2018a, <https://www.economist.com/leaders/2018/08/30/bitcoin-and-other-cryptocurrencies-are-useless>, accessed 5 September 2018
—, ‘What to make of cryptocurrencies and blockchains’, 30 August 2018b, <https://www.economist.com/technology-quarterly/2018/08/30/what-to-make-of-cryptocurrencies-and-blockchains>, accessed 6 September 2018
—, ‘How to put bitcoin into perspective’, 1 September 2018c, <https://www.economist.com/technology-quarterly/2018/09/01/how-to-put-bitcoin-into-perspective>, accessed 6 September 2018
—, ‘Mining cryptocurrencies is using up eye-watering amounts of power’, 1 September 2018d, <https://www.economist.com/technology-quarterly/2018/09/01/mining-cryptocurrencies-is-using-up-eye-watering-amounts-of-power>, accessed 6 September 2018
—, ‘Initial coin offerings have become big business’, 1 September 2018e, <https://www.economist.com/technology-quarterly/2018/09/01/initial-coin-offerings-have-become-big-business>, accessed 6 September 2018
—, ‘From one cryptocurrency to thousands’, 1 September 2018f, <https://www.economist.com/technology-quarterly/2018/09/01/from-one-cryptocurrency-to-thousands>, accessed 6 September 2018
Eich, S., ‘Old utopias, new tax havens: the politics of bitcoin in historical perspective’, in P. Hacker et al. (eds), Regulating Blockchain: Political and Legal Challenges (Oxford: Oxford University Press, forthcoming), <http://www.academia.edu/37248021/Old_Utopias_New_Tax_Havens_The_Politics_of_Bitcoin_in_Historical_Perspective_in_Regulating_Blockchain_OUP_2019_>, accessed 12 September 2018
Eleftheriou-Smith, L., ‘Sweden’s Mathias Sundin: “The world’s first political Bitcoin-only candidate”’, Independent, 12 July 2014, <https://www.independent.co.uk/news/world/europe/swedens-mathias-sundin-the-world-s-first-political-bitcoin-only-candidate-9602190.html>, accessed 12 September 2018
European Commission, ‘Strengthened EU rules to prevent money laundering and terrorism financing’, 9 July 2018, <http://ec.europa.eu/newsroom/just/document.cfm?action=display&doc_id=48935>, accessed 4 September 2018
Falguera, E., Jones, S. and Ohman, M., Funding of Political Parties and Election Campaigns: A Handbook on Political Finance (Stockholm: International IDEA, 2014), <https://www.idea.int/sites/default/files/publications/funding-of-political-parties-and-election-campaigns.pdf>, accessed 14 September 2018
Fatás, A. and di Mauro, B. W., ‘Here’s why central banks shouldn’t play cryptocurrencies at their own game’, World Economic Forum, 17 May 2018, <https://www.weforum.org/agenda/2018/05/this-is-why-central-banks-shouldnt-play-cryptocurrencies-at-their-own-game>, accessed 6 September 2018
Financial Action Task Force (FATF), Guidance for a Risk-Based Approach: Virtual Currencies (Paris: FATF, 2015), <http://www.fatf-gafi.org/media/fatf/documents/reports/Guidance-RBA-Virtual-Currencies.pdf>, accessed 13 September 2018
—, ‘Regulation of virtual assets’, 19 October 2018, <http://www.fatf-gafi.org/publications/fatfrecommendations/documents/regulation-virtual-assets.html>, accessed 10 December 2018
Frank, J., ‘Colorado wants to allow political donations in bitcoin and other cryptocurrency’, The Denver Post, 17 May 2018, <https://www.denverpost.com/2018/05/17/colorado-cryptocurrency-bitcoin-political-contributions/>, accessed 11 September 2018
Galeon, D., ‘This city now has its very own cryptocurrency’, World Economic Forum, 4 October 2017, <https://www.weforum.org/agenda/2017/10/dubai-has-its-very-own-official-cryptocurrency>, accessed 7 September 2018
García, C., ‘Plantean “bitcoin” para elecciones’ [Proposal for ‘bitcoin’ for elections], El Universal, 5 December 2017, <http://www.eluniversal.com.mx/elecciones-2018/plantean-bitcoin-para-elecciones>, accessed 12 September 2018
Gensler, G., ‘Keynote address’ [video], Workshop on Digital Financial Assets: Opportunities and Challenges, OECD, Paris, 15 May 2018, <https://youtu.be/LBad6zU1ok4>, accessed 7 September 2018
He, D. et al., Fintech and Financial Services: Initial Considerations (Washington, DC: IMF, 2017), <https://www.imf.org/en/Publications/Staff-Discussion-Notes/Issues/2017/06/16/Fintech-and-Financial-Services-Initial-Considerations-44985>, accessed 7 September 2018
Henning, P. J., ‘Policing cryptocurrencies has become a game of whack-a-mole for regulators’, The New York Times, 31 May 2018, <https://www.nytimes.com/2018/05/31/business/dealbook/bitcoin-cryptocurrencies-regulation.html>, accessed 6 September 2018
Huberman, G., Leshno, J. D. and Moallemi, C., Monopoly Without a Monopolist: An Economic Analysis of the Bitcoin Payment System (Helsinki: Bank of Finland, 2017), <https://helda.helsinki.fi/bof/bitstream/handle/123456789/14912/BoF_DP_1727.pdf>, accessed 7 September 2018
International IDEA, Political Finance Database (Stockholm: International IDEA, 2018a), <https://www.idea.int/data-tools/data/political-finance-database>, accessed 11 December 2018
—, Survey on the Use of Cryptocurrencies to Finance Politics, conducted online in Arabic, English, French and Spanish among 46 academics and political finance experts, November to December 2018 (Stockholm: International IDEA, 2018b), unpublished
International IDEA and the Netherlands Institute for Multiparty Democracy (NIMD), Women’s Access to Political Finance: Insights from Colombia, Kenya and Tunisia (Stockholm and The Hague: International IDEA and NIMD, 2017), <https://www.idea.int/publications/catalogue/womens-access-political-finance-insights-colombia-kenya-and-tunisia>, accessed 21 August 2018
Inter-Parliamentary Union (IPU) and United Nations Development Programme (UNDP), The Representation of Minorities and Indigenous Peoples in Parliament: A Global Overview (Geneva and New York: IPU and UNDP, 2010), <https://www.ipu.org/resources/publications/reports/2016-07/representation-minorities-and-indigenous-peoples-in-parliament-global-overview>, accessed 18 December 2018
Investopedia, ‘Initial Coin Offering (ICO)’, 2018, <https://www.investopedia.com/terms/s/security.asp>, accessed 31 August 2018
Jackson, R., ‘Bitcoin 101: everything you need to know about investing, buying, and mining digital currency’, Big Think, 2 September 2018, <https://bigthink.com/reuben-jackson/bitcoin-101-everything-you-need-to-know-about-investing-buying-and-mining-digital-currency>, accessed 4 September 2018
Jones, B., ‘The US is no longer the world’s largest bitcoin market’, Business Insider, 18 September 2017, <https://www.businessinsider.com/japan-worlds-largest-bitcoin-market-2017-9?r=US&IR=T&IR=T>, accessed 13 September 2018
Kadyrov, R. E. and Prokhorov, I. V., ‘Regulating cryptocurrencies: new challenges to economic security and problems created by individuals involved in the schemes of laundering cryptocurrencies-generated profits’, Knowledge E, 13 February 2018, <https://knepublishing.com/index.php/Kne-Social/article/view/1568/3703>, accessed 29 August 2018
Kappos, G. et al., ‘An empirical analysis of anonymity in Zcash’, 27th USENIX Security Symposium (2018), <https://arxiv.org/abs/1805.03180>, accessed 6 September 2018
Keneally, M., ‘Bitcoin is gaining currency in political campaign donations’, ABC News, 7 February 2018, <http://abcnews.go.com/Politics/bitcoin-popular-political-campaign-donations/story?id=52873921>, accessed 29 August 2018
Kim, C., ‘South Korea to ban cryptocurrency traders from using anonymous bank accounts’, Reuters, 23 January 2018, <https://uk.reuters.com/article/us-southkorea-bitcoin/south-korea-to-ban-cryptocurrency-traders-from-using-anonymous-bank-accounts-idUKKBN1FC069>, accessed 13 September 2018
Konash, M., ‘Cryptocurrencies need regulation to survive, Mistertango survey reveals’, Coinspeaker, 3 September 2018, <https://www.coinspeaker.com/2018/09/03/cryptocurrencies-need-regulation-to-survive/>, accessed 4 September 2018
Krumholz, S., ‘Witness testimony’, Protecting Our Elections: Examining Shell Companies and Virtual Currencies as Avenues for Foreign Interference, US Senate Judiciary Committee Hearing (Washington, DC: Center for Responsive Politics, 2018), <https://www.judiciary.senate.gov/imo/media/doc/06-26-18%20Krumholz%20Testimony.pdf>, accessed 11 September 2018
Lapointe, C. and Fishbane, L., The Blockchain Ethical Design Framework (Washington, DC: Georgetown University, 2018), <http://beeckcenter.georgetown.edu/wp-content/uploads/2018/06/The-Blockchain-Ethical-Design-Framework.pdf>, accessed 7 September 2018
The Local Client Studio, ‘Mining the future: why Sweden is leading the cryptocurrency revolution’, Business Sweden, n.d., <https://www.business-sweden.se/en/Invest/industries/Data-Centers-By-Sweden/news-and-downloads/investment-news/nyhet-13-okt/>, accessed 11 September 2018
Mandeng, O. J. and Nagy-Mohacsi, P., ‘This is how cryptocurrencies challenge the status quo’, World Economic Forum, 19 March 2018, <https://www.weforum.org/agenda/2018/03/cryptocurrencies-challenge-the-status-quo>, accessed 6 September 2018
Mersch, Y., ‘Virtual currencies ante portas’, speech by Yves Mersch, Member of the Executive Board of the ECB, at the 39th meeting of the Governor’s Club, Bodrum, Turkey, 14 May 2018, <https://www.ecb.europa.eu/press/key/date/2018/html/ecb.sp180514.en.html>, accessed 7 September 2018
Mizrahi, A., ‘G20 leaders declare commitment to regulate crypto assets’, Bitcoin.com News, 2 December 2018, <https://news.bitcoin.com/g20-leaders-declare-commitment-to-regulate-crypto-assets/>, accessed 10 December 2018
Murray, D., ‘Witness testimony’, Protecting Our Elections: Examining Shell Companies and Virtual Currencies as Avenues for Foreign Interference, US Senate Judiciary Committee Hearing (Washington, DC: Financial Integrity Network, 2018), <https://www.judiciary.senate.gov/imo/media/doc/06-26-18%20Murray%20Testimony.pdf>, accessed 11 September 2018
Nelson, A., ‘Cryptocurrency regulation in 2018: where the world stands right now’, Bitcoin Magazine, 1 February 2018, <https://bitcoinmagazine.com/articles/cryptocurrency-regulation-2018-where-world-stands-right-now/>, accessed 29 August 2018
Nolan, L., ‘Congress investigating influence of cryptocurrencies on elections’, Bitsonline, 24 June 2018, <https://bitsonline.com/congress-investigating-influence-of-cryptocurrencies-on-elections/>, accessed 11 September 2018
OpenSecrets, Super PACs (Washington, DC: Center for Responsive Politics, 2018), <https://www.opensecrets.org/pacs/superpacs.php>, accessed 14 September 2018
Organisation for Economic Co-operation and Development (OECD), ‘Blockchain Policy Forum’ [video], 4–5 September 2018, <https://ocde.streamakaci.com/blockchain/>, accessed 4 September 2018
Orcutt, M., ‘Should politicians accept campaign contributions in bitcoin?’, The Download, 31 October 2017, <https://www.technologyreview.com/the-download/609285/should-politicians-accept-campaign-contributions-in-bitcoin/>, accessed 29 August 2018
Peterson, B., ‘This is what Bill Gates thinks about cryptocurrencies’, World Economic Forum, 28 February 2018, <https://www.weforum.org/agenda/2018/02/bill-gates-says-cryptocurrency-is-a-rare-technology-that-has-caused-deaths-in-a-fairly-direct-way>, accessed 6 September 2018
Petersen, C., ‘Almost half of the world’s top universities offer cryptocurrency or blockchain courses’, News BTC, 28 August 2018, <https://www.newsbtc.com/2018/08/28/almost-half-of-the-worlds-top-universities-offer-cryptocurrency-or-blockchain-courses/>, accessed 4 September 2018
Popper, N., ‘Have a cryptocurrency company? Bermuda, Malta or Gibraltar wants you’, The New York Times, 29 July 2018, <https://www.nytimes.com/2018/07/29/technology/cryptocurrency-bermuda-malta-gibraltar.html>, accessed 6 September 2018
Popper, N. and Rosenberg, M., ‘How Russian spies hid behind bitcoin in hacking campaign’, The New York Times, 13 July 2018, <https://www.nytimes.com/2018/07/13/technology/bitcoin-russian-hacking.html>, accessed 6 September 2018
Psaila, S., Cryptocurrency Security Standard (CCSS) (Malta: Deloitte, 2018), <https://www2.deloitte.com/content/dam/Deloitte/mt/Documents/technology/dt_mt_article_Cryptocurrency_Security_Standard-sandro-psaila.pdf>, accessed 18 December 2018
Reuters, ‘Cryptocurrency storage firm Kingdom Trust obtains insurance through Lloyd’s’, The New York Times, 28 August 2018a, <https://www.nytimes.com/reuters/2018/08/28/business/28reuters-cryptocurrency-lloyds-of-london-insurance.html>, accessed 6 September 2018
—, ‘Special report: in Venezuela, new cryptocurrency is nowhere to be found’, The New York Times, 30 August 2018b
Roberts, B., ‘The top 3 cryptocurrencies (and why they cost so much)’, Forbes, 4 April 2018, <https://www.forbes.com/sites/brianroberts/2018/04/04/the-top-3-cryptocurrencies-and-why-they-cost-so-much/#32876f573ceb>, accessed 30 August 2018
Rooney, K., ‘Your guide to cryptocurrency regulations around the world and where they are headed’, CNBC, 27 March 2018, <https://www.cnbc.com/2018/03/27/a-complete-guide-to-cyprocurrency-regulations-around-the-world.html>, accessed 31 August 2018
Ross, M. and Beyoud, L., ‘Bitcoin campaign donations pose potential fraud risks’, Bloomberg News, 28 March 2018, <https://www.bna.com/bitcoin-campaign-donations-n57982090524/>, accessed 7 September 2018
Shi, M. M., ‘US Senate hearing will look at crypto’s impact on elections’, Coindesk, 22 June 2018, <https://www.coindesk.com/us-senate-hearing-will-look-cryptos-impact-elections/>, accessed 7 September 2018
Skingsley, C., Should the Riksbank Issue e-krona? (Stockholm: FinTech, 2016), <http://archive.riksbank.se/Documents/Tal/Skingsley/2016/tal_skingsley_161116_eng.pdf>, accessed 7 September 2018
Smith, K. A., ‘13 types of cryptocurrency that aren’t bitcoin’, Bankrate, 30 August 2018, <https://www.bankrate.com/investing/types-of-cryptocurrency/>, accessed 5 September 2018
Statista, ‘Average number of Monero transactions daily worldwide from 1st quarter 2016 to 1st quarter 2017’, n.d., <https://www.statista.com/statistics/730827/average-number-of-monero-transactions/>, accessed 10 December 2018
Thomson Reuters, ‘Cryptocurrencies by country’, 25 October 2017, <https://blogs.thomsonreuters.com/answerson/world-cryptocurrencies-country/>, accessed 12 December 2018
Timmer, H., Cryptocurrencies and Blockchain: Europe and Central Asia Economic Update [video] (Tbilisi: World Bank, May 2018), <http://www.worldbank.org/en/news/video/2018/05/07/eca-update-video>, accessed 7 September 2018
Trading View, ‘Cryptocurrency market’, n.d., <https://www.tradingview.com/markets/cryptocurrencies/prices-all/>, accessed 3 September 2018
Treasury Committee, ‘Economic crime’ [video], Parliament Live, 4 July 2018, <https://www.parliamentlive.tv/Event/Index/d90d3055-4105-4f2b-bf62-18fa0a1336b9>, accessed 6 September 2018
Uribe Burcher, C. and Casal Bertoa, F., Best Political Finance Design Tool (Stockholm: International IDEA, forthcoming)
Uribe Burcher, C. and Perdomo, C., ‘Money, influence, corruption and capture: can democracy be protected?’, The Global State of Democracy 2017: Exploring Democracy’s Resilience (Stockholm: International IDEA, 2017), <https://www.idea.int/gsod>, accessed 14 September 2018
US Federal Election Commission (FEC), ‘How to Report Bitcoin Contributions’,
n.d., <https://www.fec.gov/help-candidates-and-committees/filing-reports/
bitcoin-contributions/>, accessed 7 September 2018
—, Advisory Opinion 2014-02 (Washington, DC: FEC, 2014), <https://
www.fec.gov/files/legal/aos/2014-02/2014-02.pdf>, accessed 7 September 2018
US Securities and Exchange Commission (SEC), ‘Initial coin offerings (ICOs)’,
21 August 2018, <https://www.sec.gov/ICO>, accessed 10 December 2018
Villas-Boas, A., ‘This US city is the first to ban the mining of cryptocurrencies’,
World Economic Forum, 19 March 2018, <https://www.weforum.org/agenda/
2018/03/for-the-first-time-a-us-city-has-banned-cryptocurrency-mining>,
accessed 7 September 2018
Villaveces-Izquierdo, S. and Uribe Burcher, C., Illicit Networks and Politics in the
_Baltic States (Stockholm: International IDEA, 2013), <https://www.idea.int/_
publications/catalogue/illicit-networks-and-politics-baltic-states?lang=en>,
accessed 6 September 2018
Voorhees, C., ‘Campaign contributions by bitcoin? Oregon elections chief favors
“yes”’, The Oregonian, 20 June 2018, <https://www.oregonlive.com/politics/
index.ssf/2018/06/campaign_contributions_by_bitc.html>, accessed
7 September 2018
Voto Legal, ‘Confiança via blockchain’ [Confidence via blockchain] (Sao Paulo: Voto
Legal, 2018), <https://blockchain.votolegal.com.br/>, accessed 11 December
2018
Wilhelm, C., ‘Trump sanctions Venezuelan cryptocurrency’, Politico, 19 March
2018a, <https://www.politico.com/story/2018/03/19/trump-sanctions-venezuelan-cryptocurrency-424550>, accessed 6 September 2018
—, ‘“Bitcoin’s candidate” takes heat for cryptocurrency donations’, Politico, 29 May
2018b, <https://www.politico.com/story/2018/05/29/bitcoin-candidate-cryptocurrency-donations-566833>, accessed 6 September 2018
—, ‘In surprise move, cryptocurrency exchanges embrace regulation’, Politico,
6 August 2018c, <https://www.politico.com/story/2018/06/08/in-surprise-move-cryptocurrency-exchanges-embrace-regulation-607184>, accessed 6 September 2018
Wired, ‘Blockchain expert explains one concept in 5 levels of difficulty’,
28 November 2017, <https://www.youtube.com/watch?v=hYip_Vuv8J0>,
accessed 29 November 2018
World Bank, Cryptocurrencies and Blockchain: Europe and Central Asia Economic
Update (Washington, DC: World Bank, 2018), <https://
openknowledge.worldbank.org/bitstream/handle/10986/29763/
9781464812996.pdf?sequence=2&isAllowed=y>, accessed 7 September 2018
Yakubowski, M., ‘Belarus: high tech park releases “complete legal regulations” for
cryptocurrencies’, Cointelegraph, 30 November 2018, <https://
cointelegraph.com/news/belarus-high-tech-park-releases-complete-legal-regulations-for-cryptocurrencies>, accessed 18 December 2018
Cryptocurrencies and Political Finance
**Catalina Uribe Burcher** is a Senior Programme Officer in International IDEA’s
Political Participation and Representation Programme. Uribe Burcher focuses on
money in politics, integrity, conflict and the threats that transnational illicit networks
pose to democratic processes. She has also worked as an independent consultant for
the Colombian Ministry of Foreign Affairs and as coordinator of a programme caring
for victims of the armed conflict in Colombia. She is a Colombian and Swedish
lawyer with a specialty in criminal law, and holds a master’s degree in international
and comparative law from Uppsala University, Sweden.
The International Institute for Democracy and Electoral Assistance (International
IDEA) is an intergovernmental organization with the mission to advance democracy
worldwide, as a universal human aspiration and enabler of sustainable development.
We do this by supporting the building, strengthening and safeguarding of democratic
political institutions and processes at all levels. Our vision is a world in which
democratic processes, actors and institutions are inclusive and accountable and
deliver sustainable development to all.
### What do we do?
In our work we focus on three main impact areas: electoral processes; constitution-building processes; and political participation and representation. The themes of
gender and inclusion, conflict sensitivity and sustainable development are
mainstreamed across all our areas of work.
International IDEA provides analyses of global and regional democratic trends;
produces comparative knowledge on good international democratic practices; offers
technical assistance and capacity-building on democratic reform to actors engaged in
democratic processes; and convenes dialogue on issues relevant to the public debate
on democracy and democracy building.
### Where do we work?
Our headquarters is located in Stockholm, and we have regional and country offices
in Africa, the Asia-Pacific, Europe, and Latin America and the Caribbean.
International IDEA is a Permanent Observer to the United Nations and is accredited
to European Union institutions.
<http://idea.int>
Cryptocurrencies are a new form of digital money or asset
that could drastically change the flow of resources around
the world. As their number keeps growing, so do the
questions about their implications for the financing of
politics and for integrity more broadly. Do their anonymity,
volatility and lack of oversight create a potential for abuse?
Or does the technology behind cryptocurrencies—
blockchain—create a breeding ground for innovations in the
anti-corruption realm?
This discussion paper presents some of the basic notions
behind cryptocurrencies and their regulation, especially
targeting their use in the financing of political parties and
election campaigns. The author pays special attention to how
cryptocurrencies can allow foreign contributions and
anonymous donations to enter politics unnoticed, while
analyzing their capacity to improve political finance
reporting, disclosure and oversight.
International IDEA
Strömsborg
SE–103 34 Stockholm
Sweden
Telephone: +46 8 698 37 00
Email: info@idea.int
Website: <http://www.idea.int>